© Hortonworks Inc. 2011–2016. All Rights Reserved
Apache Spark and Object Stores — [email protected], @steveloughran
October 2016

Steve Loughran, Hadoop committer, PMC member, …
Chris Nauroth, Apache Hadoop committer & PMC, ASF member
Rajesh Balamohan, Tez committer, PMC member
[Diagram: Elastic ETL: inbound data from external ORC/Parquet datasets flowing into HDFS]

[Diagram: Notebooks working with external datasets via a library]

[Diagram: Streaming]
A Filesystem: Directories, Files → Data

[Diagram: directory tree with /work/pending/part-00 and /work/pending/part-01, an empty /work/complete, and file blocks replicated across datanodes]

rename("/work/pending/part-01", "/work/complete")
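The commit-by-rename pattern above works because rename on a real filesystem is a single atomic metadata operation: no data is copied. A minimal Python sketch of the pattern (paths illustrative):

```python
import os
import tempfile

# Commit-by-rename: tasks write under "pending"; the job commit
# atomically moves completed output into "complete".
root = tempfile.mkdtemp()
pending = os.path.join(root, "work", "pending")
complete = os.path.join(root, "work", "complete")
os.makedirs(pending)
os.makedirs(complete)

src = os.path.join(pending, "part-01")
with open(src, "w") as f:
    f.write("task output\n")

# A single metadata operation: O(1), independent of file size,
# and readers see either the old path or the new one, never both.
os.rename(src, os.path.join(complete, "part-01"))

print(sorted(os.listdir(complete)))  # ['part-01']
print(os.path.exists(src))           # False
```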
Object Store: hash(name) -> blob

[Diagram: object data sharded across servers s01–s04]

hash("/work/pending/part-01") -> ["s02", "s03", "s04"]
hash("/work/pending/part-00") -> ["s01", "s02", "s04"]

copy("/work/pending/part-01", "/work/complete/part-01")
delete("/work/pending/part-01")
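Because an object store is just a flat map from name to blob, "rename" has to be emulated as a byte-by-byte copy followed by a delete. A toy Python model (not any real store's API) of why that costs O(data) rather than O(1):

```python
# Toy object store: a flat key -> blob map. No directories exist,
# so "renaming" an object means copying every byte, then deleting.
store = {}

def put(key, data):
    store[key] = bytes(data)

def rename(src, dest):
    store[dest] = store[src][:]  # copy: cost proportional to blob size
    del store[src]               # a second, separate operation: not atomic

put("/work/pending/part-01", b"x" * 1024)
rename("/work/pending/part-01", "/work/complete/part-01")

print("/work/complete/part-01" in store)  # True
print("/work/pending/part-01" in store)   # False
```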
REST APIs

[Diagram: object data sharded across servers s01–s04]

HEAD /work/complete/part-01

PUT /work/complete/part-01
x-amz-copy-source: /work/pending/part-01

DELETE /work/pending/part-01

PUT /work/pending/part-01
... DATA ...

GET /work/pending/part-01
Content-Length: 1-8192

GET /?prefix=/work&delimiter=/
Often: Eventually Consistent

[Diagram: after DELETE /work/pending/part-00, repeated GET /work/pending/part-00 requests may still return 200 from replicas that have not yet seen the delete]
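A toy model (purely illustrative, not any real store's replication protocol) of the stale reads shown above: a delete that has not yet reached every replica leaves a window where a GET can still return 200.

```python
import random

random.seed(0)

# Toy model of eventual consistency: three replicas of an object;
# a DELETE initially lands on only two of them, so a GET served by
# the stale replica still returns 200.
replicas = [
    {"/work/pending/part-00": b"data"},
    {"/work/pending/part-00": b"data"},
    {"/work/pending/part-00": b"data"},
]

def delete(key, propagated=2):
    for r in replicas[:propagated]:  # delete has not reached replica 3 yet
        r.pop(key, None)

def get(key):
    r = random.choice(replicas)      # any replica may serve the read
    return 200 if key in r else 404

delete("/work/pending/part-00")
statuses = {get("/work/pending/part-00") for _ in range(100)}
print(sorted(statuses))
```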
org.apache.hadoop.fs.FileSystem

hdfs, s3a, wasb, adl, swift, gs
History of Object Storage Support

2006: s3:// — “inode on S3”
2008: s3n:// — “Native” S3 (s3:// lives on as Amazon EMR S3)
2013: s3a:// replaces s3n; swift:// OpenStack
2014: wasb:// Azure WASB; s3a:// stabilize
2015: s3a:// speed and consistency; adl:// Azure Data Lake
2016: oss:// Aliyun; gs:// Google Cloud
Cloud Storage Connectors

Azure WASB
● Strongly consistent
● Good performance
● Well-tested on applications (incl. HBase)

Azure ADL
● Strongly consistent
● Tuned for big data analytics workloads

Amazon Web Services S3A
● Eventually consistent; consistency work in progress by Hortonworks
● Performance improvements in progress
● Active development in Apache

Amazon EMRFS
● Proprietary connector used in EMR
● Optional strong consistency for a cost

Google Cloud Platform GCS
● Multiple configurable consistency policies
● Currently Google open source
● Good performance
● Could improve test coverage
Four Challenges

1. Classpath
2. Credentials
3. Code
4. Commitment

Let's look at S3 and Azure
Use S3A to work with S3 (EMR: use Amazon's s3://)
Classpath: fix “No FileSystem for scheme: s3a”

hadoop-aws-2.7.x.jar
aws-java-sdk-1.7.4.jar
joda-time-2.9.3.jar
(jackson-*-2.6.5.jar)

See SPARK-7481

Get Spark with Hadoop 2.7+ JARs
Credentials

In core-site.xml or spark-default.conf:

spark.hadoop.fs.s3a.access.key MY_ACCESS_KEY
spark.hadoop.fs.s3a.secret.key MY_SECRET_KEY

spark-submit automatically propagates environment variables:

export AWS_ACCESS_KEY=MY_ACCESS_KEY
export AWS_SECRET_KEY=MY_SECRET_KEY

NEVER: share, check in to SCM, paste in bug reports…
Authentication Failure: 403

com.amazonaws.services.s3.model.AmazonS3Exception:
The request signature we calculated does not match the signature you provided.
Check your key and signing method.

1. Check joda-time.jar & JVM version
2. Credentials wrong
3. Credentials not propagating
4. Local system clock (more likely on VMs)
Code: Basic IO

// Read in a public dataset
val lines = sc.textFile("s3a://landsat-pds/scene_list.gz")
val lineCount = lines.count()

// generate and write data
val numbers = sc.parallelize(1 to 10000)
numbers.saveAsTextFile("s3a://hwdev-stevel-demo/counts")

All you need is the URL
Code: just use the URL of the object store

val csvdata = spark.read.options(Map(
    "header" -> "true",
    "inferSchema" -> "true",
    "mode" -> "FAILFAST"))
  .csv("s3a://landsat-pds/scene_list.gz")

...read time O(distance)
DataFrames

val landsat = "s3a://stevel-demo/landsat"
csvData.write.parquet(landsat)

val landsatOrc = "s3a://stevel-demo/landsatOrc"
csvData.write.orc(landsatOrc)

val df = spark.read.parquet(landsat)
val orcDf = spark.read.orc(landsatOrc)
Finding dirty data with Spark SQL

val sqlDF = spark.sql("SELECT id, acquisitionDate, cloudCover"
  + s" FROM parquet.`${landsat}`")
val negativeClouds = sqlDF.filter("cloudCover < 0")
negativeClouds.show()

* filter columns and data early
* whether/when to cache()?
* copy popular data to HDFS
spark-default.conf

spark.sql.parquet.filterPushdown true
spark.sql.parquet.mergeSchema false
spark.hadoop.parquet.enable.summary-metadata false

spark.sql.orc.filterPushdown true
spark.sql.orc.splits.include.file.footer true
spark.sql.orc.cache.stripe.details.size 10000

spark.sql.hive.metastorePartitionPruning true
Notebooks? Classpath & Credentials
The Commitment Problem

⬢ rename() used for atomic commitment transaction
⬢ time to copy() + delete() proportional to data * files
⬢ S3: 6+ MB/s
⬢ Azure: a lot faster—usually

spark.speculation false
spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version 2
spark.hadoop.mapreduce.fileoutputcommitter.cleanup.skipped true
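At the 6 MB/s copy rate quoted above, the cost of a rename-based commit is easy to estimate. A back-of-envelope sketch (the 10 GB workload is an illustrative assumption):

```python
# Back-of-envelope: rename-as-copy at ~6 MB/s (the S3 figure above)
# makes commit time proportional to the amount of data committed.
def commit_seconds(total_mb, copy_mb_per_s=6.0):
    return total_mb / copy_mb_per_s

# Committing 10 GB of job output:
print(round(commit_seconds(10 * 1024)))  # 1707 seconds: almost half an hour
```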
What about Direct Output Committers?
Recent S3A Performance (Hadoop 2.8, HDP 2.5, CDH 5.9(?))

// forward seek by skipping stream
spark.hadoop.fs.s3a.readahead.range 157810688

// faster backward seek for ORC and Parquet input
spark.hadoop.fs.s3a.experimental.input.fadvise random

// PUT blocks in separate threads
spark.hadoop.fs.s3a.fast.output.enabled true
Azure Storage: wasb://

A full substitute for HDFS
Classpath: fix “No FileSystem for scheme: wasb”

wasb:// is consistent, with very fast rename (hence: commits)

hadoop-azure-2.7.x.jar
azure-storage-2.2.0.jar
+ (jackson-core; http-components, hadoop-common)
Credentials: core-site.xml / spark-default.conf

<property>
  <name>fs.azure.account.key.example.blob.core.windows.net</name>
  <value>0c0d44ac83ad7f94b0997b36e6e9a25b49a1394c</value>
</property>

spark.hadoop.fs.azure.account.key.example.blob.core.windows.net 0c0d44ac83ad7f94b0997b36e6e9a25b49a1394c

wasb://[email protected]
Example: Azure Storage and Streaming

val streaming = new StreamingContext(sparkConf, Seconds(10))
val azure = "wasb://[email protected]/in"
val lines = streaming.textFileStream(azure)
val matches = lines.map(line => {
  println(line)
  line
})
matches.print()
streaming.start()

* PUT into the streaming directory
* keep the dir clean
* size window for slow scans
Not Covered

⬢ Partitioning/directory layout
⬢ Infrastructure throttling
⬢ Optimal path names
⬢ Error handling
⬢ Metrics
Summary

⬢ Object stores look just like any other URL
⬢ …but do need classpath and configuration
⬢ Issues: performance, commitment
⬢ Use Hadoop 2.7+ JARs
⬢ Tune to reduce I/O
⬢ Keep those credentials secret!
BackupSlides
Dependencies in Hadoop 2.8

hadoop-aws-2.8.x.jar
aws-java-sdk-core-1.10.6.jar
aws-java-sdk-kms-1.10.6.jar
aws-java-sdk-s3-1.10.6.jar
joda-time-2.9.3.jar
(jackson-*-2.6.5.jar)

hadoop-azure-2.8.x.jar
azure-storage-4.2.0.jar
S3 Server-Side Encryption

⬢ Encryption of data at rest in S3
⬢ Supports the SSE-S3 option: each object encrypted by a unique key using the AES-256 cipher
⬢ Now covered in S3A automated test suites
⬢ Support for additional options under development (SSE-KMS and SSE-C)
Advanced authentication

<property>
  <name>fs.s3a.aws.credentials.provider</name>
  <value>
    org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider,
    org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider,
    com.amazonaws.auth.EnvironmentVariableCredentialsProvider,
    com.amazonaws.auth.InstanceProfileCredentialsProvider,
    org.apache.hadoop.fs.s3a.AnonymousAWSCredentialsProvider
  </value>
</property>

+ encrypted credentials in JCEKS files on HDFS
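The providers listed in fs.s3a.aws.credentials.provider are consulted in order until one yields credentials. A small Python sketch of that chain-of-providers pattern (the function names here are illustrative, not the real Hadoop or AWS classes; the key values are the deck's own placeholders):

```python
import os

# Each "provider" returns (key, secret) or None; resolution walks the
# chain in order and uses the first hit, as S3A does with its list.
def from_config(conf):
    def provider():
        key, secret = conf.get("access.key"), conf.get("secret.key")
        return (key, secret) if key and secret else None
    return provider

def from_env():
    key = os.environ.get("AWS_ACCESS_KEY")
    secret = os.environ.get("AWS_SECRET_KEY")
    return (key, secret) if key and secret else None

def resolve(providers):
    for provider in providers:
        creds = provider()
        if creds:
            return creds
    raise RuntimeError("no credentials found in provider chain")

# With an empty config, resolution falls through to the environment.
os.environ["AWS_ACCESS_KEY"] = "MY_ACCESS_KEY"
os.environ["AWS_SECRET_KEY"] = "MY_SECRET_KEY"
print(resolve([from_config({}), from_env]))  # ('MY_ACCESS_KEY', 'MY_SECRET_KEY')
```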
What Next? Performance and integration
Next Steps for all Object Stores

⬢ Output committers
  – Logical commit operation decoupled from rename (non-atomic and costly in object stores)
⬢ Object store abstraction layer
  – Avoid impedance mismatch with the FileSystem API
  – Provide specific APIs for better integration with object stores: saving, listing, copying
⬢ Ongoing performance improvement
⬢ Consistency