
Setting up Docker Container with Tensorflow/Keras using Ubuntu Nvidia GPU acceleration

Deep learning is all the rage now. Here’s a quick and dirty guide to setting up a docker container with TensorFlow/Keras and leveraging GPU acceleration. The info here is available on the official sites of Docker, Nvidia, Ubuntu, and TensorFlow, but I put it all together here for you so you don’t have to hunt around.

I’m assuming you’re on Ubuntu with an Nvidia GPU. (I tested on Ubuntu 18)
In AWS, you can set your instance type to anything that starts with p* (e.g. p3.16xlarge).

Download the Nvidia driver

Visit https://www.nvidia.com/object/unix.html
(Probably pick the Latest Long Lived Branch Version of Linux x86_64/AMD64/EM64T)

wget the download link
e.g.

wget http://us.download.nvidia.com/XFree86/Linux-x86_64/410.93/NVIDIA-Linux-x86_64-410.93.run

Run the nvidia driver install script

chmod +x NVIDIA-Linux-x86_64-410.93.run
sudo ./NVIDIA-Linux-x86_64-410.93.run
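After the installer finishes, you can sanity-check the driver; nvidia-smi should list your GPU:

nvidia-smi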

Install Docker
Reference: Docker’s official Ubuntu installation docs

sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

sudo add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
   stable"

sudo apt-get update

sudo apt-get install docker-ce
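To verify the install went through, you can run Docker’s standard hello-world image:

sudo docker run hello-world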

Install Nvidia-Docker 2
Reference: the nvidia-docker project’s installation docs

# Add the package repositories
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | \
  sudo apt-key add -

distribution=$(. /etc/os-release;echo $ID$VERSION_ID)

curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | \
  sudo tee /etc/apt/sources.list.d/nvidia-docker.list

sudo apt-get update

# Install nvidia-docker2 and reload the Docker daemon configuration
sudo apt-get install -y nvidia-docker2
sudo pkill -SIGHUP dockerd

# Test nvidia-smi with the latest official CUDA image
sudo docker run --runtime=nvidia --rm nvidia/cuda:9.0-base nvidia-smi

This is some personal misc setup

Create a “notebooks” directory under your home dir (/home/ubuntu)

mkdir ~/notebooks

Create a jupyter startup script in your home folder (/home/ubuntu)
filename: jup
Content:

#!/bin/bash
# Start Jupyter from $NOTEBOOK_HOME by default, or from a directory given as the first argument
if [ $# -eq 0 ]
  then
    cd "$NOTEBOOK_HOME" && jupyter notebook --ip=0.0.0.0 --allow-root --NotebookApp.token=''
  else
    cd "$1" && jupyter notebook --ip=0.0.0.0 --allow-root --NotebookApp.token=''
fi
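Since you’ll be invoking the script directly, make it executable:

chmod +x ~/jup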

Start Docker container with tensorflow-gpu

sudo docker run --runtime=nvidia --env NOTEBOOK_HOME=/home/ubuntu/notebooks -p 8888:8888 -p 8080:8080 -v /home:/home -it --rm tensorflow/tensorflow:latest-gpu-py3-jupyter bash

This docker container will give you tensorflow with GPU support, python3, and a jupyter notebook.
For a list of other tensorflow containers (e.g. non-GPU or python2 versions), see here.

If you created the jup script earlier, you can call it to start the Jupyter Notebook. This will also point the notebook home dir to the ~/notebooks folder you created:

/home/ubuntu/jup

If you did not install the jup script, then you can run the following command.

jupyter notebook --allow-root

Note that the first time you invoke this, you’ll need to hit the URL with the token that’s given to you.
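It will look something like this, with a real token in place of the placeholder:

http://SERVER:8888/?token=YOUR_TOKEN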

To exit the terminal without shutting down Jupyter notebook and the docker container:

Hit Ctrl+p followed by Ctrl+q (Docker’s detach key sequence)

Inside Jupyter Notebook
Open a browser to:

http://SERVER:8888/tree

Some packages require git, so you may install it like so

!apt-get update
!apt-get install --assume-yes git

Inside the notebook, you can install python libraries like so:

!pip install keras
!pip install git+https://www.github.com/keras-team/keras-contrib.git

You can check to make sure your keras backend can see the GPU:

from keras import backend
assert len(backend.tensorflow_backend._get_available_gpus()) > 0
backend.tensorflow_backend._get_available_gpus()
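You can also check at the driver level from a notebook cell. Assuming the container was started with --runtime=nvidia as above, nvidia-smi should be available:

!nvidia-smi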

And that’s how you create a docker container with GPU support in Ubuntu.
After you install your packages, feel free to save your docker image so you don’t have to redo the apt-get and pip installs every time.
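For example, from the host, something like the following snapshots your running container as a new image (the image name here is just illustrative):

# grab the CONTAINER_ID of your running container
sudo docker ps
# snapshot it as a new image
sudo docker commit CONTAINER_ID myname/tf-keras-gpu:latest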


docker-compose wait for dependencies

Sometimes, when you have an over-optimized, asynchronous system, you have to jump through hoops to synchronize things again to make them work in practice. This is currently the case with docker-compose.

First, let me mention that this is with the docker-compose file format 3.x. Things are different with other versions.

Here’s the issue. Say you’ve got 2 microservices you want to dockerize, a tomcat container and a mysql container. docker-compose will start the two containers in arbitrary order, but you and I know that the tomcat webapp probably needs the database to be available. How do you do it?

The docker compose file allows for an argument called depends_on.
The depends_on argument orders the services, but it doesn’t actually wait for a service to be “ready”. It only waits for the dependency’s container to start; the database process may not have finished starting up even though its container is running and its ports are exposed.
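To illustrate, a bare-bones compose file using depends_on might look like this (a minimal sketch reusing the service names from the example further down):

version: '3'
services:

  mysql:
    image: "mysql:5.7"

  my_app:
    image: "mySuperApp:latest"
    depends_on:
      - mysql   # starts mysql's container first, but doesn't wait for mysql to be ready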

Stackoverflow has this solution.
Unfortunately, the condition argument of depends_on that was introduced in compose file version 2.x is no longer supported in 3.x.

I believe Docker’s official philosophy is that your application should be written to be robust. That is, if the database is not available (either it’s not started yet, or it has gone down temporarily or permanently since starting), the application should handle it gracefully. It’s also very difficult for them to know what it means for a service to be started, e.g. what does it mean for mysql to be started, vs postgres, vs rabbitmq, vs tomcat, vs nodejs, vs some random service you wrote?

Nevertheless, I believe there ought to be a standard way that allows users to order container startup based on readiness.

So let’s talk about the solution that my coworker and I put together, built on other open source utilities.

The first thing I’d like you to take a look at is ufoscout’s docker-compose-wait project. His wait.sh script will wait for a service to be ready before returning. It uses the netcat util to ping the service’s port until it’s available.

In your docker-compose.yml file, you just need to specify the container name and port of each service that you depend on. See the WAIT_HOSTS environment variable. For example

version: '3'
services:

  mysql:
    image: "mysql:5.7"
    container_name: mysql
    ports:
      - "3306:3306"

  my_app:
    image: "mySuperApp:latest"
    environment:
      WAIT_HOSTS: mysql:3306

The wait.sh file is the key, but there are some caveats to using it. For instance, your container needs the nc (netcat) util, and unless your image is based on ubuntu, you’re not likely to have it; the standard tomcat image from Docker Hub doesn’t include it. You’ll also have to make sure you call the wait.sh script before calling the service startup script, e.g. “CMD /wait.sh && /MySuperApp.sh”

Here’s what I did for the tomcat container. First, I located the tomcat image’s Dockerfile. It’s referenced in the README for me so that’s nice.

Then I made several modifications to it:

# Download the wait.sh script into the image and make it executable
ADD https://raw.githubusercontent.com/ufoscout/docker-compose-wait/1.0.0/wait.sh /$CATALINA_HOME/wait.sh
RUN chmod +x /$CATALINA_HOME/wait.sh
# wait.sh needs netcat to probe the dependency ports
RUN apt-get update && apt-get install -y netcat
# The original CMD was: CMD ["catalina.sh", "run"]
# Wait for every host in WAIT_HOSTS to accept connections, then start tomcat
CMD ./wait.sh && catalina.sh run

Now I build the image.

docker build -t kanesee/tomcat-wait:9.0 .

The entire set of files and modifications can be found in this github repo.

So now we have a version of tomcat that will wait for other services to be ready before it starts. We just need to add the WAIT_HOSTS variable to docker-compose.yml with the database container’s name and port, and then it’s ready to go.
If you need longer than the default 30 seconds, you can set WAIT_HOSTS_TIMEOUT to a longer period.
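In the compose file, that might look like this (a sketch; the 120-second value is just illustrative):

  my_app:
    image: "mySuperApp:latest"
    environment:
      WAIT_HOSTS: mysql:3306
      WAIT_HOSTS_TIMEOUT: 120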

For your benefit, I pushed my tomcat-wait image to Docker Hub so that you don’t need to create your own image if you’re looking for a tomcat v9.0 that does this.

Otherwise, please follow this recipe and create your own “waiting” services. If you create one, I’d love to know about it. Please leave a link to your image in the comments section.


A Docker Development Environment

In a development environment, you want two things. You want access to your shiny development tools when coding. And you want the full suite of production services at your disposal to test your code in.

In the Getting Started with Docker guide, we went over how to set up a Docker container. You could put your code inside that container, but remember that I warned containers are ephemeral, meaning you could lose your changes inside the container. Second, you’d have to find a way to use your fancy dev tools on the code inside the container and that’s not simple.

It would be nice if you could develop your code on your dev machine and then automatically have changes reflected inside the container so it’s available for testing.

Well there’s a way.

Docker allows you to “mount” a volume from your host machine’s drive to your container’s drive.

> docker run -d -P -v /Users/myaccount/mywebserver:/var/www/public_html myname/myimage:1.0
  • docker run this creates a container from an image
  • -d runs it in the background
  • -P publishes all exposed ports to arbitrary ports available on the host
  • -v to indicate a volume mount
  • /Users/myaccount/mywebserver source folder on host to mount
  • /var/www/public_html destination folder on container to mount to
  • myname/myimage:1.0 name of image to instantiate

If you’re not on a Linux machine, there’s something you should be cautious of. According to Docker:

If you are using Docker Machine on Mac or Windows, your Docker daemon has only limited access to your OS X or Windows filesystem. Docker Machine tries to auto-share your /Users (OS X) or C:\Users (Windows) directory.

Notice in my example above that I’m mounting a folder within my home folder /Users/myaccount.
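If you want to convince yourself the mount works, touch a file on the host and read it from the container (using the paths from the example above; substitute your actual container name, which docker ps will show):

> echo "hello from host" > /Users/myaccount/mywebserver/test.txt
> docker exec -it container_name cat /var/www/public_html/test.txt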


Getting Started with Docker (using the docker-machine)

Docker is one of the newest kids on the block. A while ago, I posted about how you can get started with Vagrant to create isolated development environments. We’re going to do the same with Docker.

What’s the difference? To sum it up, Vagrant manages virtual machines and Docker manages application environments. Check out this Stackoverflow discussion with the authors of both systems chiming in.

TL;DR

  1. docker-machine create (optional) to create host machine
  2. docker build to build docker image
  3. docker run to instantiate a container from an image
  4. docker exec -i … bash to get into container
  5. docker commit (optional) to convert container to image
  6. docker push to upload image to Docker Hub
  7. docker pull to download an image
  8. docker save/load to save a committed image to a tar file
  9. docker stop to stop a container
  10. docker start to start a container

Ok so if you’re convinced you need to run Docker or you just want to add Docker to your skillset, let’s get started…

First get Docker. Go to docker’s website and download it. (I can’t link you directly to the download since I don’t know if you’re running Mac, Windows or Linux.)

It’s worth noting I have Mac OS X 10.10 and I ended up installing Docker 1.9.1 at the time of this writing. Your experience may be different but hopefully not by much.

Quick architecture lesson. Docker works off LinuX Containers (LXC). Mac and Windows are obviously not Linux, so they work slightly differently. On my Mac, Docker requires a virtual machine, specifically VirtualBox, to run a Linux OS so Docker can work.

Here’s my understanding of the Docker architecture in my own image

[Image: my diagram of the Docker architecture]

At the foundation is the machine, or more precisely a Linux machine with a kernel that supports LXCs. Docker lets you “build” an Image which contains all the layers of changes you made. A Container is an instance of an image. It’s the thing that’s runnable. A good description is at Stackoverflow.

You get two tools after the installation

  • Docker Quickstart Terminal (docker’s command-line or CLI): gives you a terminal initialized with the “default” host virtual machine settings. (I’ll comment more about this below.)
  • Kitematic: a GUI to manage the running containers

Let’s talk about the Machine


Besides the two tools, you also get a VirtualBox VM (if you’re on a Mac or Windows). This is your docker-machine. You can see all the docker machines in your system by running this command in the Docker Quickstart Terminal CLI

> docker-machine ls

You may get a listing that looks similar to this

NAME      ACTIVE   DRIVER       STATE     URL                         SWARM   ERRORS

default   *        virtualbox   Running   tcp://192.168.99.100:2376           

You likely have one machine called “default”. It’s the active one. I believe what makes it active depends on the environment variables in your terminal. To see what environment variables are set for the “default” machine, type

> docker-machine env default

To switch the active machine, you simply set the environment variables for that machine. You can use the shell command eval to run the output of the previous command, like so

> eval "$(docker-machine env default)"

You can remove and create machines using the docker-machine command. See all available commands by typing

> docker-machine help

Why would you need a different machine or new machine? I’m not really sure. But I had to remove my “default” one and recreate it again to add some dns settings like so

> docker-machine create -d virtualbox --engine-opt dns=8.8.8.8 default

For some reason, my docker containers didn’t have access to the internet due to bad DNS settings and this has to be set at the machine level.

There may be a less destructive way of altering the DNS according to this thread.

Now that you’re familiar with the machines, let’s move on.

Let’s talk about the Image

Open the Docker Quickstart Terminal CLI and run

> docker images

You probably don’t have any images yet. Let’s get an image from Docker Hub.

> docker pull linode/lamp

This command fetches a docker image called linode/lamp which contains a LAMP stack.

If you run “docker images” again, you should now see the fetched image.

At this time, you should get acquainted with Docker Hub and create an account on it (remember your username). Docker Hub is a place where you and others can push docker images to share with the rest of the world. These range from pre-built LAMP stack images to simple single-database images. As a user of docker, you may be able to get away with just using these pre-built images… but we’re going to get our hands dirty and create our own docker image.

Create an empty directory, and in it, create a file called Dockerfile. Here’s a sample

# inherit from the base image 'linode/lamp'
FROM linode/lamp

# Just a note about who wrote this image
MAINTAINER myname@mydomain.com

# Since it's a LAMP image, this exposes the apache2 and mysql ports
EXPOSE 80 3306

Now in the same directory where the Dockerfile is, run

> docker build -t myname/myimage .
  • docker build: command to build an image
  • -t: flag to assign a repo and tag to the image
  • myname: should be the account name you just created on Docker Hub
  • myimage: is a name you want to assign to this image
  • .: indicates the image should be built from the current directory, where the Dockerfile is located

Check your list of images again with the “docker images” command and you should see your newly built image.

Now what?

Now let’s talk about containers


Remember that a container is an instance of an image. So now that we’ve got an image, we can instantiate a container with the “docker run” command like so

> docker run -d -P myname/myimage
  • docker run: command to create a container from an image
  • -d runs the container in the background
  • -P publishes all exposed ports (i.e. ports 80 and 3306) to arbitrary ports available on the host
  • myname/myimage is the image name that we’re instantiating

You should be able to see your running container by typing

> docker ps

You can also see your containers if you open Kitematic.

Inside Kitematic, look under Settings followed by Port.

There you’ll see the port mappings of the two exposed ports.

You should be able to point your mysql client and browser to them. (Since the image was based on linode/lamp, consult its documentation for mysql credentials)
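If you prefer the CLI over Kitematic, the docker port command prints the same mappings. The host ports below are just examples; yours will differ:

> docker port CONTAINER_NAME
80/tcp -> 0.0.0.0:32768
3306/tcp -> 0.0.0.0:32769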

You might ask, how do I get into the container to view the directory, edit the config files or execute scripts?

From Kitematic, you can enter the container via the Exec button


This will open a shell session inside the container. This shell lacks tab-completion, history and other basic shell niceties, but it gets you inside.

There’s another way in though. Open Docker CLI and type

> docker exec -it container_name bash
  • docker exec executes a command inside the container
  • -it runs the command interactively: -i keeps stdin open and -t allocates a terminal
  • container_name is the name given to your container when it was created (docker ps will show it)
  • bash is the command you’re running which in this case is the bash shell

In summary, this (exec)utes an (-i)nteractive (bash) shell into your (container_name) container

This approach is superior since you get history and tab-complete in a familiar interactive bash shell.

Inside the container, go ahead and create a file or edit the filesystem in some way. Now you can take my word for it or try it yourself, but the data inside a container is essentially considered ephemeral. If the container were removed and recreated, everything would be wiped out. More importantly, you couldn’t transfer this container and its contents to anyone else unless you performed one additional step.

Commit your Docker container as an image

We’re going to convert our container into an image which can be persisted and transferred.

First we need the container ID (not the name but the nonsensical alphanumeric ID) from this command

> docker ps

Next commit your container

> docker commit CONTAINER_ID myname/myimage:version
  • docker commit converts the container to an image
  • CONTAINER_ID id of the container you’re converting
  • myname name that should match your Docker Hub account name
  • myimage name of image
  • version version of image (e.g. 1.0 or “latest”)

Check your set of images again with “docker images”. You should now have an image that encapsulates the container and any changes to it.

Now you’re ready to share this image.

Push your docker image to Docker Hub

> docker push myname/myimage:version
  • docker push uploads the image to Docker Hub
  • myname/myimage:version is the image you’re pushing

Check your Docker Hub account and you should see your image. Now anyone can pull your image like we pulled linode/lamp earlier.
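For example, anyone can now fetch it with the same pull command we used earlier:

> docker pull myname/myimage:version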

Save your docker image to a tar file (and Load it back)

Alternatively, you can save your image to a tar file instead of pushing it to Docker Hub. If you have private, proprietary data, you may not want it in the public Hub, or you may just want to transfer the image internally or to a client.

> docker save myname/myimage:version > myimage.tar

To reload it

> docker load < myimage.tar

You should be able to see your loaded image with this command

> docker images

Start and Stop a container

You can list the available containers with this command

> docker ps -a

To start a container

> docker start [CONTAINER_ID]

To stop a container

> docker stop [CONTAINER_ID]