Monthly Archives: December 2015

Writing Synchronous Code in Node

First of all, you should be writing asynchronous code in Node, since that’s the philosophy behind it.

But I’ve been experimenting with using Node to write non-webapp, throwaway code, and sometimes it helps to write synchronous code. Here’s how you’d do it.

There’s a library for this called synchronize.js.

Install it in your development environment

npm install synchronize

Then use it. Here’s a sample

var db = require('./db.js');
var sync = require('synchronize');

// make db.connection's query() method callable synchronously
sync(db.connection, 'query');

function getDBValues() {
  var sql =
      'SELECT value'
    + ' FROM table'
      ;
  console.log(sql);
  var values = [];
  // returns the rows directly instead of taking a callback
  var rows = db.connection.query(sql);
  for (var i = 0; i < rows.length; i++) {
    values[i] = rows[i].value;
  }
  return values;
}

// synchronized calls only work inside a fiber
sync.fiber(function() {
  var values = getDBValues();
  console.log(values);

  process.exit();
});

I’ll point out the necessary pieces.

First is the import of synchronize

var sync = require('synchronize');

Then declare the object and the functions within that object which you want to be synchronous. Here, I declare my db.connection object’s query() method to be synchronous

sync(db.connection, 'query');

Wrap your code in a sync.fiber() method.

sync.fiber(function() { ... })

Now you can make synchronous calls like this

  var rows = db.connection.query(sql);

instead of asynchronous code like

  db.connection.query(sql, function(err, rows) {
    ...
  });
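
By the way, I never showed the db.js module required at the top. Here’s a minimal sketch of what it might look like, assuming the mysql package (the credentials are placeholders):

// db.js - a hypothetical connection module built on the mysql package
var mysql = require('mysql');

var connection = mysql.createConnection({
  host: 'localhost',
  user: 'myuser',         // placeholder credentials
  password: 'mypassword',
  database: 'mydb'
});

connection.connect();

// synchronize works with methods that take standard (err, result)
// callbacks, which mysql's query() does
exports.connection = connection;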

A Docker Development Environment

In a development environment, you want two things: access to your shiny development tools while coding, and the full suite of production services at your disposal to test your code against.

In the Getting Started with Docker guide, we went over how to set up a Docker container. You could put your code inside that container, but there are two problems. First, remember that I warned containers are ephemeral, meaning you could lose your changes inside the container. Second, you’d have to find a way to use your fancy dev tools on the code inside the container, and that’s not simple.

It would be nice if you could develop your code on your dev machine and then automatically have changes reflected inside the container so it’s available for testing.

Well there’s a way.

Docker allows you to “mount” a volume from your host machine’s drive to your container’s drive.

> docker run -d -P -v /Users/myaccount/mywebserver:/var/www/public_html myname/myimage:1.0
  • docker run: creates a container from an image
  • -d: runs it in the background
  • -P: exposes all ports
  • -v: indicates a volume mount
  • /Users/myaccount/mywebserver: source folder on the host to mount
  • /var/www/public_html: destination folder in the container to mount to
  • myname/myimage:1.0: name of the image to instantiate
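
A quick way to confirm the mount is working: create a file on the host side, then list the folder from inside the container (container_name here is whatever docker ps reports for yours):

> touch /Users/myaccount/mywebserver/hello.html
> docker exec container_name ls /var/www/public_html

The new file should show up immediately inside the container, and any further edits on the host are reflected there with no rebuild or restart.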

If you’re not on a Linux machine, there’s something you should be cautious of. According to Docker:

If you are using Docker Machine on Mac or Windows, your Docker daemon has only limited access to your OS X or Windows filesystem. Docker Machine tries to auto-share your /Users (OS X) or C:\Users (Windows) directory.

That’s why, in my example above, I’m mounting a folder within my home folder /Users/myaccount.


Getting Started with Docker (using the docker-machine)

Docker is one of the newest kids on the block. A while ago, I posted about how you can get started with Vagrant to create isolated development environments. We’re going to do the same with Docker.

What’s the difference? To sum it up, Vagrant manages virtual machines and Docker manages application environments. Check out this Stack Overflow discussion with the authors of both systems chiming in.

TL;DR

  1. docker-machine create (optional) to create host machine
  2. docker build to build docker image
  3. docker run to instantiate a container from an image
  4. docker exec -it … bash to get into container
  5. docker commit (optional) to convert container to image
  6. docker push to upload image to Docker Hub
  7. docker pull to download an image
  8. docker save/load to save a committed image to a tar file
  9. docker stop to stop a container
  10. docker start to start a container

Ok so if you’re convinced you need to run Docker or you just want to add Docker to your skillset, let’s get started…

First, get Docker. Go to Docker’s website and download it. (I can’t link you directly to the download since I don’t know whether you’re running Mac, Windows or Linux.)

It’s worth noting I have Mac OS X 10.10 and I ended up installing Docker 1.9.1 at the time of this writing. Your experience may be different but hopefully not by much.

Quick architecture lesson. Docker works off LinuX Containers (LXC). Mac and Windows are obviously not Linux, so they work slightly differently. On my Mac, Docker requires a virtual machine, specifically VirtualBox, to run a Linux OS so Docker can work.

Here’s my understanding of the Docker architecture, in my own diagram:

[diagram: Docker architecture]

At the foundation is the machine, or more precisely a Linux machine with a kernel that supports LXCs. Docker lets you “build” an Image, which contains all the layers of changes you made. A Container is an instance of an image; it’s the thing that’s runnable. There’s a good description at Stack Overflow.

You get two tools after the installation

  • Docker Quickstart Terminal (docker’s command-line or CLI): gives you a terminal initialized with the “default” host virtual machine settings. (I’ll comment more about this below.)
  • Kitematic: a GUI to manage the running containers

Let’s talk about the Machine


Besides the two tools, you also get a VirtualBox VM (if you’re on Mac or Windows). This is your docker-machine. You can see all the Docker machines on your system by running this command in the Docker Quickstart Terminal CLI

> docker-machine ls

You may get a listing that looks similar to this

NAME      ACTIVE   DRIVER       STATE     URL                         SWARM   ERRORS

default   *        virtualbox   Running   tcp://192.168.99.100:2376           

You likely have one machine called "default". It’s the active one. I believe what makes it active depends on the environment variables in your terminal. To see what environment variables are set for the "default" machine, type

> docker-machine env default
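
The output looks something like this (your IP and paths will differ):

export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="/Users/myaccount/.docker/machine/machines/default"
export DOCKER_MACHINE_NAME="default"
# Run this command to configure your shell:
# eval "$(docker-machine env default)"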

To switch the active machine, you simply set the environment variables for that machine. You can use the shell built-in eval to run the output of the last command, like so

> eval "$(docker-machine env default)"

You can remove and create machines using the docker-machine command. See all available commands by typing

> docker-machine help

Why would you need a different or new machine? I’m not really sure. But I had to remove my "default" one and recreate it to add some DNS settings, like so

> docker-machine create -d virtualbox --engine-opt dns=8.8.8.8 default

For some reason, my Docker containers didn’t have access to the internet due to bad DNS settings, and this has to be set at the machine level.

There may be a less destructive way of altering the DNS according to this thread.

Now that you’re familiar with the machines, let’s move on.

Let’s talk about the Image

Open the Docker Quickstart Terminal CLI and run

> docker images

You probably don’t have any images yet. Let’s get an image from Docker Hub.

> docker pull linode/lamp

This command fetches a docker image called linode/lamp which contains a LAMP stack.

If you run “docker images” again, you should now see the fetched image.
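
The listing looks roughly like this (your image ID, age and size will differ):

REPOSITORY    TAG      IMAGE ID       CREATED       VIRTUAL SIZE
linode/lamp   latest   bcd2cbe57324   3 weeks ago   1.1 GB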

At this time, you should get acquainted with Docker Hub and create an account on it (remember your username). Docker Hub is a place where you and others can push Docker images to share with the rest of the world. That could be a pre-built LAMP stack image or just a simple database image. As a user of Docker, you may be able to get away with using just these pre-built images… but we’re going to get our hands dirty and create our own Docker image.

Create an empty directory, and in it, create a file called Dockerfile. Here’s a sample

# inherit from the base image 'linode/lamp'
FROM linode/lamp

# just a note about who wrote this image
MAINTAINER myname@mydomain.com

# since it's a LAMP image, expose the Apache (80) and MySQL (3306) ports
EXPOSE 80 3306
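
A Dockerfile can do more than inherit and expose, by the way. RUN executes commands at build time and COPY brings files from the build context into the image. A small sketch (the package and config file are just examples):

# install a package at build time; each RUN adds a new layer to the image
RUN apt-get update && apt-get install -y vim

# copy a file from the build context (the Dockerfile's directory) into the image
COPY my.cnf /etc/mysql/my.cnf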

Now in the same directory where the Dockerfile is, run

> docker build -t myname/myimage .
  • docker build: command to build an image
  • -t: flag to assign a repo and tag to the image
  • myname: should be the account name you just created on Docker Hub
  • myimage: a name you want to assign to this image
  • . : indicates the image should be built from the current directory, where the Dockerfile is located

Check your list of images again with the “docker images” command and you should see your newly built image.

Now what?

Now let’s talk about containers


Remember that a container is an instance of an image. So now that we’ve got an image, we can instantiate a container with the “docker run” command like so

> docker run -d -P myname/myimage
  • docker run: command to create a container from an image
  • -d runs the container in the background
  • -P publishes all exposed ports (i.e., ports 80 and 3306) to arbitrary ports available on the host
  • myname/myimage is the image name that we’re instantiating

You should be able to see your running container by typing

> docker ps
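
You’ll see something roughly like this (the ID, ports, auto-generated name and the image’s command will differ):

CONTAINER ID   IMAGE            COMMAND       CREATED         STATUS         PORTS                                            NAMES
a1b2c3d4e5f6   myname/myimage   "/bin/bash"   2 minutes ago   Up 2 minutes   0.0.0.0:32768->80/tcp, 0.0.0.0:32769->3306/tcp   sleepy_turing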

You can also see your containers if you open Kitematic.

Inside Kitematic, look under Settings, then Port.

There you’ll see the port mappings of the two exposed ports.

You should be able to point your mysql client and browser to them. (Since the image was based on linode/lamp, consult its documentation for mysql credentials)
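
If you prefer the command line, docker port shows the same mappings (your host ports will differ):

> docker port container_name
80/tcp -> 0.0.0.0:32768
3306/tcp -> 0.0.0.0:32769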

You might ask, how do I get into the container to view the directory, edit the config files or execute scripts?

From Kitematic, you can enter the container via the Exec button


This will open a shell session inside the container. This shell lacks tab completion, history and other basic shell niceties, but it gets you inside.

There’s another way in, though. Open the Docker CLI and type

> docker exec -it container_name bash
  • docker exec executes a command inside the container
  • -it runs the command in interactive mode with a text console
  • container_name is the name assigned to your container when it was created
  • bash is the command you’re running which in this case is the bash shell

In summary, this (exec)utes an (-i)nteractive (bash) shell into your (container_name) container

This approach is superior since you get history and tab-complete in a familiar interactive bash shell.

Inside the container, go ahead and create a file or edit the filesystem in some way. Now, you can take my word for it or try it yourself, but the data inside a container is essentially ephemeral. If you were to change the name of the container or make some other minor change, everything would be wiped out. More importantly, you couldn’t transfer this container and its contents to anyone else unless you performed one additional step.
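
If you want to see that ephemerality for yourself, here’s a quick experiment (container_name is yours from docker ps; the replacement container will come up under a new name):

> docker exec container_name touch /i-am-ephemeral
> docker rm -f container_name
> docker run -d -P myname/myimage
> docker exec new_container_name ls /i-am-ephemeral

The last command fails, because the new container was created fresh from the image, which never knew about the file.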

Commit your Docker container as an image

We’re going to convert our container into an image which can be persisted and transferred.

First we need the container ID (not the name but the nonsensical alphanumeric ID) from this command

> docker ps

Next commit your container

> docker commit CONTAINER_ID myname/myimage:version
  • docker commit converts the container to an image
  • CONTAINER_ID id of the container you’re converting
  • myname name that should match your Docker Hub account name
  • myimage name of image
  • version version of image (e.g. 1.0 or “latest”)

Check your set of images again with “docker images”. You should now have an image that encapsulates the container and any changes to it.

Now you’re ready to share this image.

Push your docker image to Docker Hub

> docker push myname/myimage:version
  • docker push uploads the image to Docker Hub
  • myname/myimage:version is the image you’re pushing

Check your Docker Hub account and you should see your image. Now anyone can pull your image like we pulled linode/lamp earlier.

Save your docker image to a tar file (and Load it back)

Alternatively, you can save your image to a tar file instead of pushing it to Docker Hub. If you have private, proprietary data, you may not want it in the public Hub, or you may just want to transfer this image internally or to a client.

> docker save myname/myimage:version > myimage.tar

To reload it

> docker load < myimage.tar

You should be able to see your loaded image with this command

> docker images

Start and Stop a container

You can list the available containers with this command

> docker ps -a

To start a container

> docker start [CONTAINER_ID]

To stop a container

> docker stop [CONTAINER_ID]

Internal Git Project vs Open Source Git Project

We use git (Bitbucket) internally at my company to manage code development. We trust each other, so we branch and merge back to master as we see fit, following the procedure laid out in my earlier post. But managing an open source project with developers you do not know is a little different.

I’m using GitHub to manage an open source project. In that environment, I add Collaborators (under the Settings tab) to a project to give them full privileges to branch and merge back to master. I can ask people to use the Pull Request feature, but there’s nothing stopping anyone from merging into master as they see fit, or just developing directly on the master branch and pushing changes in.

In an open source environment, though, we may want more control. So instead, we should have people fork our project. When they want to merge their fork back, they must submit a Pull Request, at which time I can verify the changes.
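
On the contributor’s side, that workflow looks something like this (the URLs and branch name are placeholders):

# fork the project on GitHub first, then clone your fork
git clone https://github.com/contributor/project.git
cd project

# keep a pointer to the original repo for staying up to date
git remote add upstream https://github.com/owner/project.git

# do the work on a topic branch
git checkout -b my-fix
git commit -am "Fix the thing"

# push the branch to your fork, then open a Pull Request on GitHub
git push origin my-fix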
