Monthly Archives: January 2014

Which Amazon EC2 pricing is right for you? (Don’t be fooled)

Amazon offers different ways of paying for the same thing.
Let’s suppose you want an m1.large EC2 instance, which gives you 7.5GB of memory, 2 compute units, yada yada yada…

You: Ok, how much does it cost?
Me: Well, it depends on how long you need it for.
You: Ok, let’s say I need it running continuously for 200 days. How much does it cost?
Me: It still depends.
You: on what?

You can launch your m1.large EC2 instance and Amazon will happily charge you $0.240/hr. In 200 days (4,800 hours), that will cost you $1,152. (This is their On-Demand pricing.)

But you could also purchase a Reserved Instance.
There are 3 types:

  • Light Utilization, with an upfront cost of $243 plus $0.136/hr
  • Medium Utilization, with an upfront cost of $554 plus $0.084/hr
  • Heavy Utilization, with an upfront cost of $676 plus $0.056/hr

If you went with Light Utilization, you’d end up paying about $896 over those same 200 days.
If you went with Medium Utilization, you’d end up paying about $957.
If you went with Heavy Utilization, you’d end up paying about $945.

Clearly, Light Utilization is the way to go.
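A quick sanity check of those numbers (200 days of continuous use is 200 × 24 = 4,800 hours; each total is the upfront cost plus the hourly rate times the hours):

```shell
# Total m1.large cost over 200 days (4,800 hours) under each plan,
# using the rates quoted above: upfront + rate * hours.
awk 'BEGIN {
  hours = 200 * 24
  printf "On-Demand: $%.0f\n", 0.240 * hours
  printf "Light:     $%.0f\n", 243 + 0.136 * hours
  printf "Medium:    $%.0f\n", 554 + 0.084 * hours
  printf "Heavy:     $%.0f\n", 676 + 0.056 * hours
}'
```

This prints $1152, $896, $957 and $945, matching the totals above.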

Well, no one really knows exactly how many days or hours they’ll run a server, so you approximate. But you’d still like to know when it makes sense to get On-Demand vs Light Reserved vs Medium Reserved vs Heavy Reserved.

So what I did was plot the four pricing models as lines on a graph. An online graphing tool can be found here.

Here’s an example of the m1.large pricing model.
[Graph: the four m1.large pricing models]
The x-axis is the number of hours of EC2 usage. The y-axis is the cost.
The red line represents the On-Demand pricing, the green line the Heavy Reserved Instance pricing, the blue line the Medium Reserved Instance pricing, and the yellow line the Light Reserved Instance pricing.

The idea is that you want to pick the line that is closest to the bottom. The intersections of the lines are the break-even points for the various plans, i.e. they tell you when one pricing plan becomes cheaper than another after X hours of usage.

For example, the red line (On-Demand) and yellow line (Light Reserved) intersect first, at about 2337 hrs and $561. If you plan to use fewer hours than this, pick On-Demand. If you plan to use more, pre-purchase a Light Reserved instance.

If you follow the yellow line, eventually it intersects the green line (Heavy Reserved) at about 5412 hrs. If you plan to use more hours than that, you should pre-purchase a Heavy Reserved instance.

*One thing to note: the blue line (Medium Reserved) is never the lowest line on the graph for m1.large instances. So no matter how long you plan to use this machine, it NEVER makes sense to pre-purchase a Medium Reserved instance.
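You don’t even need the graph for the break-even points: two plans cost the same at h = (upfront2 − upfront1) / (rate1 − rate2) hours, where plan 1 is the one with the higher hourly rate. A quick sketch with the rates above:

```shell
# Break-even hours between two plans: h = (up2 - up1) / (r1 - r2),
# where plan 1 has the higher hourly rate.
awk 'BEGIN {
  # On-Demand ($0 upfront, $0.240/hr) vs Light ($243 upfront, $0.136/hr)
  h = (243 - 0) / (0.240 - 0.136)
  printf "On-Demand -> Light: %.1f hrs (about $%.0f)\n", h, 0.240 * h
  # Light ($243, $0.136/hr) vs Heavy ($676, $0.056/hr)
  printf "Light -> Heavy:     %.1f hrs\n", (676 - 243) / (0.136 - 0.056)
}'
```

So On-Demand wins below roughly 2337 hours, Light Reserved wins between there and roughly 5412 hours, and Heavy Reserved wins beyond that. Medium Reserved never appears, because by the time it beats Light (around 5981 hours), Heavy already beats them both.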
Don’t let Amazon fool you with too many options!!!


MySQL Replication

If you’re worried about your database server going down unexpectedly (and who isn’t?), then you should read this.

MySQL is built with support for high availability and redundancy: you can replicate your servers. In the simplest form, you have one (master) MySQL server and one (slave) MySQL server. Any update to the master also happens on the slave. In the event that one server goes down, you still have the second with the most up-to-date state.

In this post, we go through the following:
– Installing MySQL 5.6 (on Ubuntu)
– Setting up replication with MySQL Utilities
– Running through a typical replication scenario.

First, a disclaimer. MySQL provides asynchronous replication, which means the slave servers poll the master for updates. This does not guarantee absolute reliability: the master can fail before all updates reach the slaves, leaving them out of sync. The replication lag is usually small, but it depends on your load and network. MySQL Cluster is supposed to provide synchronous replication, so try that if you need stronger guarantees.

Also, why MySQL 5.6? Previous versions of MySQL provide replication, but that’s all. You still need to handle failover, which means knowing when the master has failed and, if it has, switching the master’s duties over to a slave. Without the new MySQL Utilities (which rely on 5.6 features like GTIDs), you’d have to write your own monitoring of the master(s), plus a set of scripts to promote a slave to master and point the other slaves at the new master.

The following instructions were adapted from a set of instructions here and here.

Installing MySQL 5.6
This assumes that you have removed previous versions of MySQL and that you are starting from a clean slate.

You’ll need to perform many of the steps below as root, so just go ahead and make yourself root

sudo -i

Install dependency

apt-get install libaio1

Download the package

wget -O mysql-5.6.deb http://cdn.mysql.com/Downloads/MySQL-5.6/mysql-5.6.14-debian6.0-x86_64.deb

Install package

dpkg -i  mysql-5.6.deb

Setup Startup Script

cp /opt/mysql/server-5.6/support-files/mysql.server /etc/init.d/mysql.server
update-rc.d mysql.server defaults

Update Path

vim /etc/environment
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/opt/mysql/server-5.6/bin"
source /etc/environment

Add mysql User

adduser mysql

The password doesn’t matter. Just enter the defaults for the other info.

Add Data Folder and Change Ownership to mysql User

mkdir /opt/mysql/server-5.6/data
chown mysql /opt/mysql/server-5.6/data
chgrp mysql /opt/mysql/server-5.6/data

Add Log Folder

mkdir /opt/mysql/server-5.6/log

Setting Up Replication

Update Mysql Config File for Replication
The file should be located at /etc/mysql/my.cnf. (Don’t ask me why.)
This should be the file contents:

[mysqld]
basedir = /opt/mysql/server-5.6
lc-messages-dir = /opt/mysql/server-5.6/share
log_error = /opt/mysql/server-5.6/log/error.log

#Replication
binlog-format=ROW
log-slave-updates=true
gtid-mode=on
enforce-gtid-consistency=true
master-info-repository=TABLE
relay-log-info-repository=TABLE
relay-log=slave-relay-log
sync-master-info=1
server-id=1
log-bin=master-repl-bin.log
report-host=[STATIC_IP_OR_DOMAIN_OF_THIS_MACHINE]
report-port=3306
port=3306

Note that you should create a domain name or static IP for this server and insert it into the config file. If your server’s domain or IP changes, the setup will break.

Start Mysql

service mysql.server start

Note: your server will start automatically when you reboot the machine. We’re starting it manually because we just installed it.
Similarly, you can manually stop the server by typing

service mysql.server stop

Create remote MySQL users that the other MySQL instances can log into for replication. In my case, I just added a password to root and allowed remote access (fine for a test setup, but lock this down in production).

mysqladmin -u root password NEWPASSWORD
mysql -u root -p -e "grant all privileges on *.* to 'root'@'%' identified by 'NEWPASSWORD' with grant option; flush privileges;"

Ensure that your firewalls are set up so that every MySQL machine can access every other MySQL machine. From each machine, you can test access to another machine with a command such as

mysql -h OTHER_MYSQL_SERVER -u root -p

Download the MySQL Utilities here on one machine. I downloaded them onto my laptop.

Start the replication
From the terminal where the MySQL Utilities are installed, issue the following command

mysqlreplicate --master=root:NEWPASSWORD@MASTER_MYSQL --slave=root:NEWPASSWORD@SLAVE_MYSQL_1

Do this for each MySQL slave you want to replicate against the master MySQL server.
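If you have several slaves, a small loop saves some typing. This is just a sketch with made-up hostnames (SLAVE_MYSQL_1, SLAVE_MYSQL_2); the echo prints each command so you can review it first — drop the echo to actually run them:

```shell
# Made-up slave hostnames; replace with your own. Remove the echo
# to execute the commands instead of just printing them.
for SLAVE in SLAVE_MYSQL_1 SLAVE_MYSQL_2; do
  echo mysqlreplicate --master=root:NEWPASSWORD@MASTER_MYSQL --slave=root:NEWPASSWORD@"$SLAVE"
done
```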

Set up Automatic Failover
From the terminal where Mysql Utilities is installed, run

mysqlfailover --master=root:NEWPASSWORD@MASTER_MYSQL --discover-slaves-login=root:NEWPASSWORD --rediscover --force

A console will appear and run continuously to monitor the health of the master and slave servers. In the event the master server fails, it will promote one of the slaves to be the new master and point the remaining slaves at the new master.
*Note: I believe there is a bug in the utilities. When this is run, an entry is added to the master server’s mysql.failover_console table. I believe it’s supposed to record the IP of the machine doing the monitoring, but it seems to record the master’s own IP. Furthermore, when the console stops, the entry is never removed. This leaves the machine in a weird state, and future attempts to connect a monitoring console to it cause an error. The --force flag makes the utility skip the check of the failover_console table and allows it to monitor for failure.

If you want to run this monitoring process forever (as you probably do), then I found that the best (and only) way to do that is with the screen command

screen -A -m -d -S RUN_AS_USER mysqlfailover --master=root:NEWPASSWORD@MASTER_MYSQL --discover-slaves-login=root:NEWPASSWORD --rediscover --force

This way, even when you exit the terminal and go to sleep at night, your mysql servers are still being monitored for failure and failover will take place.

You can reconnect to that screen to check the monitor’s status by using the screen commands.
Use this to list the screens you have open

screen -ls

Once you have the screen id, use this to re-attach the screen

screen -r [SCREEN_ID]

Running Through a Typical Replication Scenario
Now that you have replication working, let’s test it out along with failover.

Here’s the test script:

  1. Create a table and insert a row on the master MySQL server
  2. Ensure that the slaves get the update (by checking if the table exists and has the inserted row)
  3. Simulate a master mysql server failure by shutting it down
  4. Ensure that automatic failover kicks in and that one of the other slaves becomes the new master.
  5. Make sql updates to the new master and check that other slaves are getting its updates
  6. Bring the failed server back online as a slave.
    Use the same command we used before to set up a slave.

    mysqlreplicate --master=root:NEWPASSWORD@SLAVE_MYSQL --slave=root:NEWPASSWORD@MASTER_MYSQL
    

    Notice that this time, we’re setting the MASTER_MYSQL machine to be the slave of the SLAVE_MYSQL machine. This is so that the machine catches up on any writes it missed while it was down. And it’s temporary, until we switch it back to master.

  7. Ensure that the returning server gets all the updates it missed from step 5
  8. Switch the returning server over as the master again.
    mysqlrpladmin --master=root:NEWPASSWORD@SLAVE_MYSQL --new-master=root:NEWPASSWORD@MASTER_MYSQL --demote-master --discover-slaves-login=root --force switchover
    

    *I found that the --force flag was necessary to overcome a login credentials issue. The article above didn’t use it, and perhaps you won’t need to either, but if you run into any login errors, use it.

    At this point, your mysqlfailover monitoring tool will be in an invalid state. You can quit it and start a new one.

  9. Ensure that the switchover happened correctly. To do this, just rerun mysqlfailover after closing the last instance.
    Here’s the command again

    mysqlfailover --master=root:NEWPASSWORD@MASTER_MYSQL --discover-slaves-login=root:NEWPASSWORD --rediscover --force
    
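Steps 1 and 2 of the test script can be spot-checked with a couple of one-liners. The repltest database and table names here are made up, and the echo just prints the commands for review — drop it to run them (the first against the master, the second against a slave):

```shell
# Hypothetical check for steps 1-2: create a table and a row on the
# master, then confirm the row replicated to a slave. Remove the echo
# to actually run the commands.
echo 'mysql -h MASTER_MYSQL -u root -p -e "create database repltest; create table repltest.t (id int); insert into repltest.t values (1);"'
echo 'mysql -h SLAVE_MYSQL_1 -u root -p -e "select * from repltest.t;"'
```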