Architecting cloud
Posted on 03-Sep-2014
Architecting your application for the cloud
Traditional solution
1) Buy servers
2) Buy storage
3) Sign a CDN (Content Delivery Network) contract
4) Launch website/application
5) Manage scaling and provisioning
Cloud solution
Benefits of cloud computing:
1) No need to buy IT infrastructure
2) Deploy worldwide
3) Scale up/down when needed
4) Save time
5) Focus on your business
Stage 1 – The Beginning
• Simple architecture.
• Low complexity and overhead means quick development and lots of features, fast.
• No redundancy, low operational costs – great for startups.
Stage 2 – More of the same, just bigger
• Business is becoming successful – risk tolerance is low.
• Add redundant firewalls and load balancers.
• Add more web servers for high performance.
• Scale up the database.
• Add database redundancy.
• Still simple.
Stage 3 – The pain begins.
• Publicity hits.
• Add a Squid or Varnish reverse proxy, or high-end load balancers.
• Add even more web servers. Managing content becomes painful.
• A single database can't cut it anymore. Split reads and writes: all writes go to a single master server, with read-only slaves.
• May require some re-coding of the apps.
Stage 4 – The pain intensifies
• Replication doesn't work for everything: a single write database, too many writes, and replication takes too long.
• Database partitioning starts to make sense; certain features get their own database.
• Shared storage makes sense for content.
• Requires significant re-architecting of the app and DB.
Stage 5 – This Really Hurts !!
• Panic sets in. Re-thinking the entire application. Now we want to go for scale?
• Can't just partition on features – what else can we use? Geography, last name, user ID, etc. Create user clusters.
• All features are available on each cluster.
• Use a hashing scheme or a master DB to locate which user belongs to which cluster.
Stage 6 – Getting a little less painful
• Scalable application and database architecture.
• Acceptable performance.
• Starting to add new features again.
• Optimizing some of the code.
• Still growing, but manageable.
Stage 7 – Entering the unknown...
• Where are the remaining bottlenecks?
– Power, space
– Bandwidth, CDN – is the hosting provider big enough?
– Firewall and load balancer bottlenecks?
– Storage
– Database technology limits – key/value store, anyone?
Amazon services used
Servers: Amazon EC2
Storage: Amazon S3
Database: Amazon RDS
Content delivery: Amazon CloudFront
Extra: Auto Scaling, Elastic Load Balancing
What is in step 1
• Launched a Linux server (EC2)
• Installed a web server
• Downloaded the website
• Opened the website
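For reference, launching a comparable Linux server can be sketched with today's AWS CLI (which postdates this deck; the AMI ID and key pair name below are hypothetical placeholders):

```shell
# Sketch: launch one Linux server on EC2.
# ami-12345678 and my-keypair are hypothetical placeholders.
aws ec2 run-instances \
    --image-id ami-12345678 \
    --instance-type t2.micro \
    --key-name my-keypair \
    --count 1
```

The command returns a JSON description of the instance, including the public DNS name the web server will be reached on.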
Now, our traffic goes up...
To reach fans worldwide, we need a CDN.
Changes in HTML code

images/stirling1.jpg

becomes

d135c2250.cloudfront.net/stirling1.jpg
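That rewrite can be automated across the site's pages; a minimal sketch (assuming a Linux sed; page.html is a hypothetical file created here just for illustration):

```shell
# Create a sample page with a local image reference (hypothetical file)
printf '<img src="images/stirling1.jpg">\n' > page.html

# Rewrite local image paths to the CloudFront distribution domain
sed -i 's|images/|http://d135c2250.cloudfront.net/|g' page.html

cat page.html
```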
What is in step 2
• Uploaded files to Amazon S3
• Enabled a CloudFront distribution
• Updated our picture locations
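These two steps can be sketched with the AWS CLI (again postdating the deck; the bucket name my-site-assets is a hypothetical placeholder):

```shell
# Upload the site's images to S3 (my-site-assets is hypothetical)
aws s3 cp images/ s3://my-site-assets/ --recursive

# Put a CloudFront distribution in front of the bucket;
# the command prints the d*.cloudfront.net domain to use in the HTML
aws cloudfront create-distribution \
    --origin-domain-name my-site-assets.s3.amazonaws.com
```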
Our IT architecture needs an update
What is in step 3
• Added Auto Scaling, and watched it grow the number of servers
• Added an Elastic Load Balancer
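A sketch of the same step with the AWS CLI (all names, the launch configuration, and the availability zone are hypothetical placeholders):

```shell
# Classic Elastic Load Balancer listening on port 80
aws elb create-load-balancer \
    --load-balancer-name my-elb \
    --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80" \
    --availability-zones us-east-1a

# Auto Scaling group that grows from 1 to 4 web servers behind it
aws autoscaling create-auto-scaling-group \
    --auto-scaling-group-name my-asg \
    --launch-configuration-name my-launch-config \
    --min-size 1 --max-size 4 \
    --availability-zones us-east-1a \
    --load-balancer-names my-elb
```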
What is in step 4
• Launched a database instance
• Pointed the web servers to RDS
• Created a read replica
• Created a snapshot
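The RDS pieces of this step can be sketched with the AWS CLI (instance identifiers, class, and credentials below are hypothetical placeholders):

```shell
# Launch a MySQL database instance
aws rds create-db-instance \
    --db-instance-identifier mydb \
    --db-instance-class db.t2.micro \
    --engine mysql \
    --allocated-storage 20 \
    --master-username admin \
    --master-user-password '<some_password>'

# Add a read replica and take a snapshot
aws rds create-db-instance-read-replica \
    --db-instance-identifier mydb-replica \
    --source-db-instance-identifier mydb

aws rds create-db-snapshot \
    --db-snapshot-identifier mydb-snap \
    --db-instance-identifier mydb
```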
What is difficult about databases?
Availability Patterns
• Fail-over IP
• Replication
– Master-slave
– Master-master
– Tree replication
– Buddy replication
Master-Slave Replication
Assume both master and slave are running on Ubuntu Natty (11.04) with MySQL installed.

Configure the master: MySQL must listen on all IP addresses, so in /etc/mysql/my.cnf we comment out these lines:

#skip-networking
#bind-address = 127.0.0.1

Set the MySQL log file and the database to replicate, and mark this server as the master:

log-bin = /var/log/mysql/mysql-bin.log
binlog-do-db=exampledb
server-id=1

Then restart MySQL:

/etc/init.d/mysql restart
Master-Slave Replication

Now we enter MySQL on the master server:

mysql -u root -p
Enter password:

We grant replication privileges to the slave user for this database:

GRANT REPLICATION SLAVE ON *.* TO 'slave_user'@'%' IDENTIFIED BY '<some_password>';
FLUSH PRIVILEGES;

Then we run the following commands:

USE exampledb;
FLUSH TABLES WITH READ LOCK;

This will show the master log file name and the read position:

SHOW MASTER STATUS;
Master-Slave Replication

We make a dump of the database on the master server:

mysqldump -u root -p<password> --opt exampledb > exampledb.sql

Or we can run this command on the slave to fetch the data from the master:

LOAD DATA FROM MASTER;

Now we unlock the tables:

mysql -u root -p
Enter password:
UNLOCK TABLES;
quit;
Master-Slave Replication: Configure the Slave

First we enter MySQL on the slave and create the database:

mysql -u root -p
Enter password:
CREATE DATABASE exampledb;
quit;

We import the database using the MySQL dump:

mysql -u root -p<password> exampledb < /path/to/exampledb.sql

Now we configure the slave server in /etc/mysql/my.cnf with the information below:

server-id=2
master-host=192.168.0.100
master-user=slave_user
master-password=secret
master-connect-retry=60
replicate-do-db=exampledb

Then we restart MySQL:

/etc/init.d/mysql restart
Master-Slave Replication: Configure the Slave (continued)

We can also load the database using the command below:

mysql -u root -p
Enter password:
LOAD DATA FROM MASTER;
quit;

Then we stop the slave:

mysql -u root -p
Enter password:
STOP SLAVE;

And we run the command below to set the master information:

CHANGE MASTER TO MASTER_HOST='192.168.0.100', MASTER_USER='slave_user', MASTER_PASSWORD='<some_password>', MASTER_LOG_FILE='mysql-bin.006', MASTER_LOG_POS=183;

And then we start the slave:

START SLAVE;
quit;
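Once the slave is started, a quick health check (a sketch; it assumes shell access to the slave and a running MySQL) is to confirm that both replication threads report Yes:

```shell
# Both values should read "Yes" on a healthy slave
mysql -u root -p -e "SHOW SLAVE STATUS\G" | grep -E 'Slave_(IO|SQL)_Running'
```

If Slave_SQL_Running is No, the Last_Error field of the same output usually explains why.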
Master-Master Replication
Master-Master Replication: master1 configuration

We will call system1 "master1/slave2" and system2 "master2/slave1". We go to the master MySQL configuration file /etc/mysql/my.cnf and add the block below, which sets the data and socket paths and the log file for the database to replicate:

[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
old_passwords=1
log-bin
binlog-do-db=<database name>
binlog-ignore-db=mysql
binlog-ignore-db=test
server-id=1

[mysql.server]
user=mysql
basedir=/var/lib

[mysqld_safe]
err-log=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

Then we create the replication account:

mysql> grant replication slave on *.* to 'replication'@192.168.16.5 identified by 'slave';
Master-Master Replication: slave2 configuration

Now we edit the slave2 MySQL configuration file:

[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
old_passwords=1
server-id=2
master-host = 192.168.16.4
master-user = replication
master-password = slave
master-port = 3306

[mysql.server]
user=mysql
basedir=/var/lib

[mysqld_safe]
err-log=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
Master-Master Replication: start the master1/slave1 server

We start the slave:

mysql> start slave;
mysql> show slave status\G

Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.16.4
Master_User: replica
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: MASTERMYSQL01-bin.000009
Read_Master_Log_Pos: 4
Relay_Log_File: MASTERMYSQL02-relay-bin.000015
Relay_Log_Pos: 3630
Relay_Master_Log_File: MASTERMYSQL01-bin.000009
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 4
Relay_Log_Space: 3630
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 1519187
Master-Master Replication: creating master2/slave1

On master2/slave1, edit my.cnf and add the master entries into it:

[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
old_passwords=1
server-id=2
master-host = 192.168.16.4
master-user = replication
master-password = slave
master-port = 3306
log-bin
binlog-do-db=adam

[mysql.server]
user=mysql
basedir=/var/lib

[mysqld_safe]
err-log=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

Create a replication slave account on master2 for master1:

mysql> grant replication slave on *.* to 'replication'@192.168.16.4 identified by 'slave2';
Master-Master Replication: configuring master1 as a slave

Edit my.cnf on master1 with the information of its master:

[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
old_passwords=1
log-bin
binlog-do-db=adam
binlog-ignore-db=mysql
binlog-ignore-db=test
server-id=1
# information for becoming a slave
master-host = 192.168.16.5
master-user = replication
master-password = slave2
master-port = 3306

[mysql.server]
user=mysql
basedir=/var/lib
Master-Master Replication: finishing up

• Restart MySQL on both master1 and master2.
• On master1: mysql> start slave;
• On master2: mysql> show master status;
• On master1: mysql> show slave status\G
Managing overload
Load Balancing Algorithms
• Random allocation
• Round robin allocation
• Weighted allocation
• Dynamic load balancing:
– Least connections
– Least server CPU
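Round robin, the simplest of these, can be sketched in a few lines of bash (the backend IPs are hypothetical; real balancers track the rotation per listener, but the core is just a counter modulo the server count):

```shell
#!/bin/bash
# Round robin allocation sketch: hand out backend servers in rotation
servers=(10.0.0.1 10.0.0.2 10.0.0.3)
i=0
next_server() {
    echo "${servers[$(( i % ${#servers[@]} ))]}"
    i=$(( i + 1 ))
}

next_server   # 10.0.0.1
next_server   # 10.0.0.2
next_server   # 10.0.0.3
next_server   # wraps back to 10.0.0.1
```

Least-connections and least-CPU schemes replace the counter with a lookup of live per-server state, which is why they are grouped under dynamic load balancing above.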
Load Balancer in Rackspace
1. Add a cloud load balancer. If you already have a Rackspace Cloud account, use the “Create Load Balancer” API operation.
2. Configure cloud load balancer. Then we select name, protocol, port, algorithm, and which servers we need load balanced.
3. Enjoy the cloud load balancer which will be online in just a few minutes. each cloud load balancer can be customized or removed as our needs change.
Security
• Firewalls – iptables. The iptables program lets slice admins configure the Linux kernel firewall.
• Log rotation. "Log rotation" refers to the practice of archiving an application's current log, starting a fresh log, and deleting older logs.
Iptables
Configuring iptables:

sudo /sbin/iptables -F
sudo /sbin/iptables -A INPUT -i eth0 -p tcp -m tcp --dport 30000 -j ACCEPT
sudo /sbin/iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
sudo /sbin/iptables -A INPUT -j REJECT
sudo /sbin/iptables -A FORWARD -j REJECT
sudo /sbin/iptables -A OUTPUT -j ACCEPT
sudo /sbin/iptables -I INPUT -i lo -j ACCEPT
sudo /sbin/iptables -I INPUT 5 -p tcp --dport 80 -j ACCEPT
sudo /sbin/iptables -I INPUT 5 -p tcp --dport 443 -j ACCEPT
Secure??
DDoS attack: Distributed Denial of Service attack.

Wikileaks.com – is it alive?
Log Rotate

/etc/logrotate.conf
ls /etc/logrotate.d

/var/log/apache2/*.log {
    weekly
    missingok
    rotate 52
    compress
    delaycompress
    notifempty
    create 640 root adm
    sharedscripts
    postrotate
        if [ -f "`. /etc/apache2/envvars ; echo ${APACHE_PID_FILE:-/var/run/apache2.pid}`" ]; then
            /etc/init.d/apache2 reload > /dev/null
        fi
    endscript
}
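A configuration like the one above can be checked without touching the real logs using logrotate's debug flag, which does a dry run; a sketch with a throwaway config (all paths here are hypothetical):

```shell
# Build a throwaway config and dry-run it; -d prints the planned
# actions without actually rotating anything
cat > /tmp/test-logrotate.conf <<'EOF'
/tmp/test.log {
    weekly
    rotate 4
    compress
    missingok
}
EOF
touch /tmp/test.log
# Guarded so the sketch is a no-op where logrotate is not installed
command -v logrotate >/dev/null && logrotate -d /tmp/test-logrotate.conf
```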
Failover IP
• You can 'share' an IP between two servers, so that when one server is not available the other takes over the IP address.
• For this you need two servers. Let's keep it simple and call one the 'Master' and one the 'Slave'.
• What this comes down to is creating a high-availability network with your Slices. Your site won't go down.
Heartbeat
• The failover system is not automatic. You need to install an application to allow the failover to occur.
• Heartbeat runs on both the Master and Slave servers. They chat away and keep an eye on each other. If the Master goes down, the Slave notices this and brings up the same IP address that the Master was using.
How to Configure Heartbeat
First update the package list:

sudo aptitude update

Once you have done that, check whether anything needs upgrading on the server:

sudo aptitude safe-upgrade

Then install Heartbeat; its configuration files live in /etc/heartbeat/:

sudo aptitude install heartbeat
Configuring Heartbeat
sudo nano /etc/heartbeat/authkeys

The contents are as simple as this:

auth 1
1 sha1 YourSecretPassPhrase

Then restrict the file's permissions:

sudo chmod 600 /etc/heartbeat/authkeys
Configuring Heartbeat
sudo nano /etc/heartbeat/haresources

master 123.45.67.890/24

The name 'master' is the hostname of the MASTER server, and the IP address (123.45.67.890) is the IP address of the MASTER server.

To drive this home: this file needs to be the same on BOTH servers.
Master ha.cf file

sudo nano /etc/heartbeat/ha.cf

The contents would be as follows:

logfacility daemon
keepalive 2
deadtime 15
warntime 5
initdead 120
udpport 694
ucast eth1 172.0.0.0 # The private IP address of your SLAVE server.
auto_failback on
node master # The hostname of your MASTER server.
node slave # The hostname of your SLAVE server.
respawn hacluster /usr/lib/heartbeat/ipfail
use_logd yes
Creating the Slave ha.cf

Let's open the file on the Slave server:

sudo nano /etc/heartbeat/ha.cf

The contents will need to be:

logfacility daemon
keepalive 2
deadtime 15
warntime 5
initdead 120
udpport 694
ucast eth1 172.0.0.1 # The private IP address of your MASTER server.
auto_failback on
node master
node slave
respawn hacluster /usr/lib/heartbeat/ipfail
use_logd yes

Once done, save the file and restart Heartbeat on the Slave Slice:

sudo /etc/init.d/heartbeat restart
Testing the failover IP
Start off with both servers running and ping the main IP (the IP we have set to be the failover) on the Master server:

ping -c2 123.45.67.890

The '-c2' option simply tells ping to 'ping' twice. Now shut down the Master Slice:

sudo shutdown -h now

Without the failover IP, there would be no response from the ping request, as the server is down. Instead, we will notice that the IP is still responding to pings.
Who Am I?
Tahsin Hasan
Senior Software Engineer, Tasawr Interactive

Author of two books, 'Joomla Mobile Web Development Beginner's Guide' and 'Opencart 1.4 Template Design Cookbook', with Packt Publishing, UK.

tahsin352@gmail.com
http://newdailyblog.blogspot.com (tahSin's gaRage)
Questions?