AWS Brown bag session
11 Nov 2016
UK Sales Conference
Pathum Fernando
Image by Leah Juarez
What does AWS stand for?
Agenda
DynamoDB
Simple Queue Service (SQS)
Simple Storage Service (S3)
DynamoDB
What is DynamoDB?
● Amazon DynamoDB is a fast and flexible NoSQL database service for any application
● Single-digit millisecond latency
● It is fully managed by AWS
● Supports the key-value data model
Quick facts about DynamoDB
● Stored on SSD storage
● Spread across three geographically distinct data centers
● Eventually consistent reads
● Strongly consistent reads
Read More…
1. Why is Amazon DynamoDB built on Solid State Drives?
2. What does read consistency mean? Why should I care?
3. What is the consistency model of Amazon DynamoDB?
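The difference between the two read models can be sketched with a toy replicated store. This is purely an illustration of the trade-off, not DynamoDB's actual replication protocol: a write lands on one replica first, an eventually consistent read may hit a replica the write has not reached yet, and a strongly consistent read always sees the latest write.

```python
class ToyReplicatedStore:
    """Toy model of data spread across three 'facilities'.

    Illustration only; this is not how DynamoDB really replicates.
    """

    def __init__(self):
        self.replicas = [{}, {}, {}]   # three copies of the data
        self.pending = []              # writes not yet propagated

    def put(self, key, value):
        # A write is acknowledged once the first replica has it.
        self.replicas[0][key] = value
        self.pending.append((key, value))

    def propagate(self):
        # Background replication: copy pending writes everywhere.
        for key, value in self.pending:
            for replica in self.replicas[1:]:
                replica[key] = value
        self.pending.clear()

    def eventually_consistent_read(self, key):
        # May hit a replica the write has not reached yet (stale).
        return self.replicas[-1].get(key)

    def strongly_consistent_read(self, key):
        # Always answered from an up-to-date copy.
        return self.replicas[0].get(key)


store = ToyReplicatedStore()
store.put("user#1", {"name": "Pathum"})
stale = store.eventually_consistent_read("user#1")   # still None here
fresh = store.strongly_consistent_read("user#1")
store.propagate()
caught_up = store.eventually_consistent_read("user#1")
```

Eventually consistent reads are cheaper (see the throughput slides below) precisely because they are allowed to return the stale answer for a short window.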
The Basics
● Table
● Items (think of a row of data in a table)
● Attributes (think of a column of data in a table)
Indexing and Keys
● Two types of primary key available:
- Partition key: a single attribute (think unique ID)
- Partition key and sort key: composite, composed of two attributes (think unique ID plus a date range)
● Local secondary indexes
● Global secondary indexes
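As a sketch of how a composite primary key behaves, take a hypothetical Orders table (the table name, attribute names, and values here are invented for illustration) where customer_id is the partition key and order_date is the sort key. A query on one partition key value returns only that customer's items, ordered by the sort key:

```python
# Hypothetical items from an "Orders" table:
# partition key = customer_id, sort key = order_date.
orders = [
    {"customer_id": "C1", "order_date": "2016-11-05", "total": 25},
    {"customer_id": "C2", "order_date": "2016-11-02", "total": 10},
    {"customer_id": "C1", "order_date": "2016-11-01", "total": 40},
]

# A query for partition key "C1" returns only that customer's
# items, sorted by the sort key (order_date).
c1_orders = sorted(
    (item for item in orders if item["customer_id"] == "C1"),
    key=lambda item: item["order_date"],
)
```

The composite key is what makes the "unique ID and a date range" pattern work: the partition key groups related items, and the sort key orders them within the group.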
Provisioned throughput
1. Unit of read provisioned throughput
A. All reads are rounded up to increments of 4 KB
B. One unit gives 2 eventually consistent reads (the default) per second
C. One unit gives 1 strongly consistent read per second
2. Unit of write provisioned throughput
A. All writes are rounded up to increments of 1 KB
B. One unit gives 1 write per second
Provisioned throughput (magic formula)

You have an application that needs to read 25 items of 13 KB in size per second. Your application uses strongly consistent reads. What should you set the read throughput to?

Read throughput = (size of read, rounded up to the nearest 4 KB chunk, / 4 KB) × number of items
Write throughput = (size of write, rounded up to the nearest 1 KB chunk, / 1 KB) × number of items
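Applying the formula to the question above, as a quick sketch (the function names are invented; the arithmetic is exactly the slide's formula):

```python
import math

def read_throughput(item_size_kb, items_per_second, strongly_consistent=True):
    # Each read is rounded up to the nearest 4 KB chunk.
    units = math.ceil(item_size_kb / 4) * items_per_second
    # An eventually consistent read costs half as much.
    return units if strongly_consistent else math.ceil(units / 2)

def write_throughput(item_size_kb, items_per_second):
    # Each write is rounded up to the nearest 1 KB chunk.
    return math.ceil(item_size_kb / 1) * items_per_second

# 25 strongly consistent reads/second of 13 KB items:
# 13 KB rounds up to 16 KB = 4 chunks, so 4 * 25 = 100 read units.
answer = read_throughput(13, 25, strongly_consistent=True)
```

So the answer to the quiz question is 100 read units; with eventually consistent reads it would drop to 50.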
Simple Queue Service (SQS)
What is SQS?
● Amazon SQS is a web service that gives you access to a message queue
● Amazon SQS is a distributed queue system
● It was the very first service that AWS launched
● Using SQS, you can decouple the components of an application
● Amazon SQS ensures delivery of each message at least once, and supports multiple readers and writers
● SQS does not guarantee first-in, first-out delivery of messages
SQS illustrated

To illustrate, suppose you have a number of image files to encode. In an Amazon SQS worker queue, you create an Amazon SQS message for each file, specifying the command (jpeg-encode) and the location of the file in Amazon S3. A pool of Amazon EC2 instances running the needed image-processing software does the encoding.
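The worker-queue pattern above can be sketched with Python's standard-library queue as a stand-in for SQS. The message fields (cmd, s3_location) are invented for illustration; real SQS access would go through an AWS SDK, and real workers would run on separate EC2 instances:

```python
import queue

# Stand-in for an SQS worker queue.
work_queue = queue.Queue()

# Producer side: one message per image file to encode.
for key in ["photos/a.raw", "photos/b.raw"]:
    work_queue.put({"cmd": "jpeg-encode", "s3_location": key})

# Consumer side: a worker pulls messages and processes them.
encoded = []
while not work_queue.empty():
    msg = work_queue.get()
    # ... run the image-processing software on msg["s3_location"] ...
    encoded.append(msg["s3_location"])
    work_queue.task_done()  # analogous to deleting the SQS message
```

The point of the pattern is the decoupling: the producer never talks to a worker directly, so workers can be added or removed (e.g. by Auto Scaling on queue depth) without the producer changing at all.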
SQS - Autoscaling
Simple Storage Service (S3)
What is S3?
● S3 is a safe place to store your files
● It is object-based storage: it allows you to upload files
● The data is spread across multiple devices and facilities
● Files can be from 1 byte to 5 TB
● S3 offers unlimited storage
● Files are stored in buckets
● Bucket names must be globally unique
● It is a key-value store
● Amazon guarantees 99.99% availability for the S3 platform and 99.999999999% durability for S3 information
● It can be used to host static websites
S3 consistency model
● Read-after-write consistency for PUTs of new objects
● Eventual consistency for overwrite PUTs and DELETEs (can take some time to propagate)
S3 Storage Tiers/Classes
● S3 (Standard) - 99.99% availability and 99.999999999% durability; stored redundantly across multiple devices in multiple facilities
● S3-IA (Infrequent Access) - for data that is accessed less frequently but requires rapid access when needed; lower fee than S3
● Reduced Redundancy Storage - designed to provide 99.99% durability and 99.99% availability
● Glacier - very cheap, but used for archival only; it takes 3-5 hours to restore from Glacier
S3 Version Control
● Stores all versions of an object
● Great backup tool
● Integrates with Lifecycle rules
● Versioning has MFA Delete capability
S3 Lifecycle Management
You manage an object's lifecycle by using a lifecycle configuration,
which defines how Amazon S3 manages objects during their lifetime.
Lifecycle configuration enables you to simplify the lifecycle
management of your objects, such as automated transition of
less-frequently accessed objects to low-cost storage alternatives and
scheduled deletions. You can configure as many as 1000 lifecycle rules
per bucket.
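As a sketch, a lifecycle rule in the shape the S3 API accepts might look like the following. The prefix, rule ID, day counts, and the two-step transition are all invented for illustration; check the S3 documentation for the exact fields your SDK version expects:

```json
{
  "Rules": [
    {
      "ID": "archive-then-expire-logs",
      "Filter": { "Prefix": "logs/" },
      "Status": "Enabled",
      "Transitions": [
        { "Days": 30, "StorageClass": "STANDARD_IA" },
        { "Days": 90, "StorageClass": "GLACIER" }
      ],
      "Expiration": { "Days": 365 }
    }
  ]
}
```

Read as: objects under logs/ move to S3-IA after 30 days, to Glacier after 90 days, and are deleted after a year.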
Are you looking for an AWS certification?