Cloud databases in Amazon Web Services Roman Gomolko [email protected] October 2015 Ciklum Speakers Corner
Let’s get acquainted
UserReport
●Developing products that let you learn about your audience
●Started using AWS more than 5 years ago
●Fully migrated to AWS more than 1.5 years ago
●Processing 3 billion requests monthly
●Generating batched reports based on 8 billion requests
●Online reports on 300 million records
●Used ~50% of the services provided by AWS
●Totally happy with AWS
A database is an organized collection of data
RDS
Relational Databases hosted and maintained by Amazon
Different Engines & Editions & Versions
Captain Obvious’s notes
●RDS doesn’t host a particular DB; it hosts an RDBMS
●Create your root user, then create separate users for each database/application
●Your instance is firewalled with security groups
●Advanced configuration is available through parameter groups
Multi-AZ deployments for production workloads
●SLA: 99.95% monthly uptime
●Doubles the price
●Allows you to maintain your database without downtime
○ Minor updates
○ Major updates
○ Disk resize
○ EC2 upgrade
●No support for MS SQL Web, Express, Standard
Pricing
RDS price = EC2 + EBS + license
On-Demand or Reserved purchases with up-front payment
Backups
●Automated backups with automatic rotation
●Restore to a point in time
●A restore creates a new instance and deploys the desired version; it takes a while
●Manual backups via snapshots
Advanced optimizations
●Read replicas
○ You can create highly available read-only copies of your data on the fly
●ElastiCache for a performance boost
○ Using Memcached will massively speed up your queries
Downsides
●No control over EC2 for very advanced optimizations
●Backups work at the instance level
○ One RDS instance per DB
○ Or custom backups
●No Active Directory integration
●No cross-region replication
Aurora
A MySQL-compatible database by Amazon, designed with the cloud in mind
Aurora
Available and Durable
Amazon Aurora is designed to offer greater than 99.99% availability, replicating 6 copies of data across 3 Availability Zones and backing up data continuously to Amazon S3. Recovery from physical storage failures is transparent, and instance restarts typically require less than a minute.
Aurora
Highly Scalable
You can use Amazon RDS to scale your Amazon Aurora database instance up to 32 vCPUs and 244 GiB of memory. You can also add up to 15 Amazon Aurora Replicas across three Availability Zones to further scale read capacity. Amazon Aurora automatically grows storage as needed, from 10 GB up to 64 TB.
DynamoDB
Document database with biscuits by Amazon
DynamoDB overview
●Operates with tables
●A table definition consists of
○ hash key (required)
○ sort (range) key (optional)
○ indexes (optional)
●A table contains items
●Items are described by
○ hash key
○ sort (range) key (if defined for the table)
○ attributes
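The table and item structure above can be pictured as the parameter shapes you would hand to DynamoDB's CreateTable and PutItem APIs. A sketch with made-up table and attribute names (only the structure is the point):

```python
# Illustrative CreateTable parameters: a hash key plus an optional
# sort (range) key. Names ("Visits", "site_id", ...) are invented.
table_definition = {
    "TableName": "Visits",
    "KeySchema": [
        {"AttributeName": "site_id", "KeyType": "HASH"},      # required hash key
        {"AttributeName": "visited_at", "KeyType": "RANGE"},  # optional sort key
    ],
    "AttributeDefinitions": [
        {"AttributeName": "site_id", "AttributeType": "S"},
        {"AttributeName": "visited_at", "AttributeType": "N"},
    ],
    "ProvisionedThroughput": {"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
}

# An item carries the hash key, the sort key (if the table defines one),
# and any number of additional attributes.
item = {
    "site_id": {"S": "example.com"},
    "visited_at": {"N": "1443657600"},
    "browser": {"S": "Firefox"},
}
```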
DynamoDB item overview
●Max item size: 64 KB
●Unlimited number of attributes
●Attribute types
○ string
○ string set
○ number
○ number set
○ binary
○ binary set
○ JSON (maps and lists)
DynamoDB operations
●Put - insert or update
●Get
●Delete
●Scan
●Query
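A rough mental model of these operations: Put/Get/Delete address one item by its full key, Query reads the items under one hash key, and Scan walks the whole table. A toy in-memory stand-in (not the real API, just the semantics):

```python
class ToyTable:
    """In-memory stand-in for a DynamoDB table with hash + range key."""

    def __init__(self):
        self.items = {}  # (hash_key, range_key) -> attributes

    def put(self, h, r, attrs):
        # Put: insert or update an item addressed by its full key.
        self.items[(h, r)] = attrs

    def get(self, h, r):
        # Get: fetch a single item by its full key.
        return self.items.get((h, r))

    def delete(self, h, r):
        self.items.pop((h, r), None)

    def query(self, h):
        # Query: all items under one hash key, in range-key order.
        return [a for (hk, _), a in sorted(self.items.items()) if hk == h]

    def scan(self):
        # Scan: full table walk (the expensive one).
        return list(self.items.values())

t = ToyTable()
t.put("user1", 1, {"page": "/home"})
t.put("user1", 2, {"page": "/cart"})
t.put("user2", 1, {"page": "/home"})
```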
Demo time
DynamoDB show-case
DynamoDB performance
●You provision read and write capacity
●DynamoDB is divided into partitions. Each partition has the following limits:
○ 10 GB of data
○ 3,000 Read Capacity Units
○ 1,000 Write Capacity Units
●Your requests can be throttled (the SDK handles retry logic in most cases)
●You can set up auto-scaling for DynamoDB
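Capacity units map to item size: one read capacity unit covers a strongly consistent read of up to 4 KB per second (eventually consistent reads cost half), and one write capacity unit covers a write of up to 1 KB per second. A quick sizing calculation:

```python
import math

def read_capacity_units(item_kb, strongly_consistent=True):
    # 1 RCU = one strongly consistent read of up to 4 KB per second;
    # eventually consistent reads cost half as much.
    units = math.ceil(item_kb / 4)
    return units if strongly_consistent else math.ceil(units / 2)

def write_capacity_units(item_kb):
    # 1 WCU = one write of up to 1 KB per second.
    return math.ceil(item_kb / 1)

# 100 strongly consistent reads/sec of 6 KB items:
rcu = 100 * read_capacity_units(6)   # 6 KB -> 2 units each -> 200 RCU
# 50 writes/sec of 3 KB items:
wcu = 50 * write_capacity_units(3)   # 3 KB -> 3 units each -> 150 WCU
```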
DynamoDB Streams
●Triggers on data changes
●Cross-region replication
●Elasticsearch integration to make your data searchable
https://aws.amazon.com/blogs/aws/new-logstash-plugin-search-dynamodb-content-using-elasticsearch/
Backups and maintenance
●All data is replicated across three nodes, so no separate backup is required
●Changing provisioned throughput does not degrade performance
●You can set up auto-scaling for DynamoDB
https://github.com/sebdah/dynamic-dynamodb
*hit happens
DynamoDB had a massive outage (high error rates on API requests) in N. Virginia that affected:
●SQS
●CloudWatch
●Auto Scaling Groups
●SNS
https://aws.amazon.com/message/5467D2/
Application design best practices
ElastiCache
A key-value store is also a database
Redis
●Extremely fast in-memory database
●Different data structures
○ Sets
○ Lists
○ Sorted sets
○ HyperLogLog
○ Hashes
○ Geo data
○ Pub/Sub
Redis hosted in AWS
●Different versions supported
●Multi-AZ master/slave configuration maintained by Amazon
●Automated backups
●Monitoring with CloudWatch
●No chance to patch Redis for your needs (geeks like custom operations)
Example 1. Calculating unique visitors
PFADD visitors.20151001 xxx
PFCOUNT visitors.20151001
INCR pageviews.20151001
GET pageviews.20151001
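The same bookkeeping in plain Python, with an exact set standing in for the HyperLogLog (PFADD/PFCOUNT give only an approximate count, but in a few KB of memory regardless of cardinality, which is why you use them):

```python
# Exact stand-in for the Redis commands above, keyed per day.
visitors = {}    # date -> set of visitor ids  (PFADD/PFCOUNT analogue)
pageviews = {}   # date -> counter             (INCR/GET analogue)

def track(date, visitor_id):
    visitors.setdefault(date, set()).add(visitor_id)  # PFADD
    pageviews[date] = pageviews.get(date, 0) + 1      # INCR

track("20151001", "xxx")
track("20151001", "xxx")   # repeat visit: counted once as unique
track("20151001", "yyy")

unique = len(visitors["20151001"])  # PFCOUNT -> 2 unique visitors
views = pageviews["20151001"]       # GET     -> 3 pageviews
```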
Example 2. Working with sets
# users 1 and 2 add an item to the cart
SADD added_item_to_cart id1
SADD added_item_to_cart id2
SADD begin_checkout id1
# users who haven’t begun checkout
SDIFFSTORE no_checkout added_item_to_cart begin_checkout
# users with a known email who haven’t started checkout
SINTER known_email no_checkout
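Python's built-in sets have the same algebra, which makes it easy to check what each command computes (the `known_email` membership is illustrative):

```python
added_item_to_cart = {"id1", "id2"}
begin_checkout = {"id1"}
known_email = {"id2", "id3"}   # assumed: users whose email we know

# SDIFFSTORE no_checkout added_item_to_cart begin_checkout
no_checkout = added_item_to_cart - begin_checkout  # set difference

# SINTER known_email no_checkout
to_remind = known_email & no_checkout              # set intersection
```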
Example 3. Top scored users
ZADD gamescore 1 user1
ZADD gamescore 4 user2
ZADD gamescore 2 user3
ZREVRANGE gamescore 0 9

user2
user3
user1
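A sorted set keeps members ordered by score, and ZREVRANGE 0 9 returns the top ten, highest score first. The equivalent logic in Python:

```python
gamescore = {}  # member -> score (sorted-set analogue)

def zadd(member, score):
    gamescore[member] = score

def zrevrange(start, stop):
    # Members sorted by score, highest first (ZREVRANGE is inclusive).
    ranked = sorted(gamescore, key=gamescore.get, reverse=True)
    return ranked[start:stop + 1]

zadd("user1", 1)
zadd("user2", 4)
zadd("user3", 2)
top10 = zrevrange(0, 9)  # ["user2", "user3", "user1"]
```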
Learn more
Redshift
It’s like PostgreSQL, but for petabytes
Redshift
●Multi-node cluster deployment that scales up to petabytes
●$1,000/TB/year
●Good for data mining
●Query execution takes minutes or hours
Table design
●DistKey - how data will be distributed across nodes
●SortKey - how data will be sorted within a node
●Primary keys, foreign keys, constraints - these are only hints to the query optimizer
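The distribution key works roughly like a hash partitioner: rows with the same key value land on the same node, which keeps joins and aggregations on that key local. A simplified model (not Redshift's actual hash function):

```python
import hashlib

def node_for(distkey_value, num_nodes):
    # Simplified model: hash the DistKey value and take it modulo the
    # number of nodes. All rows sharing a DistKey land on one node.
    digest = hashlib.md5(str(distkey_value).encode()).hexdigest()
    return int(digest, 16) % num_nodes

# Same customer_id always maps to the same node:
a = node_for("customer-42", 8)
b = node_for("customer-42", 8)
```

A skewed DistKey (one value carrying most rows) therefore piles data onto a single node, which is why a high-cardinality, evenly distributed column is the usual choice.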
Uploading data
●From CSV
●From DynamoDB
●From EMR
●Bulk insert
http://docs.aws.amazon.com/redshift/latest/dg/r_COPY_command_examples.html
Loading data from S3
copy table
from 's3://mybucket/data/table.txt'
credentials 'aws_access_key_id=<access-key-id>;aws_secret_access_key=<secret-access-key>'
csv [gzip] [delimiter '|'];
Query Execution
●PostgreSQL-compatible syntax with many disabled features
●No materialized views
●No stored procedures
●Recently added scalar user-defined functions
●10 parallel queries
Getting query results
unload ('select * from mytable')
to 's3://mybucket/unload/result/'
credentials 'aws_access_key_id=<access-key-id>;aws_secret_access_key=<secret-access-key>';
S3 + EMR
Why not query the files directly?
EMR
EMR can launch an Elastic MapReduce cluster running:
●Hadoop
●Spark
●Hive
●Presto
Distributed SQL Query Engine for Big Data
Demo time
A one-size-fits-all approach does not work here