Dear Mom, Yours Truly, Program

Have you ever wanted to write an email to your mom… sent by your program? Of course! What respectable programmer hasn’t? Well, this tutorial will show you how to do it from Java.

import java.util.Properties;

import javax.mail.Message;
import javax.mail.PasswordAuthentication;
import javax.mail.Session;
import javax.mail.Transport;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeMessage;

  private static final String SMTP_HOST = "";
  private static final String SMTP_PORT = "587";
  private static final boolean DEBUG = true;

  private static void sendEmail(String contentType, final String login, final String password,
                                String fromEmail, String replyToEmail, String[] a_to, String a_subject,
                                String a_contents) {
    try {
      Properties props = new Properties();
      props.put("mail.smtp.auth", "true");
      props.put("mail.smtp.starttls.enable", "true");
      props.put("", SMTP_HOST);
      props.put("mail.smtp.port", SMTP_PORT);
      Session session = Session.getInstance(props,
          new javax.mail.Authenticator() {
            protected PasswordAuthentication getPasswordAuthentication() {
              return new PasswordAuthentication(login, password);
            }
          });
      if (DEBUG) session.setDebug(true);

      MimeMessage message = new MimeMessage(session);
      message.setFrom(new InternetAddress(fromEmail));
      for (String toTarget : a_to) {
        message.addRecipient(Message.RecipientType.TO, new InternetAddress(toTarget));
      }
      message.setReplyTo(new InternetAddress[] { new InternetAddress(replyToEmail) });
      message.setSubject(a_subject);
      message.setContent(a_contents, contentType);
      message.setHeader("Content-Transfer-Encoding", "7bit");

      Transport.send(message);
    } catch (Exception e) {
      throw new RuntimeException("Unable to send HTML mail", e);
    }
  }

I won’t explain the code. Just use it. You’re smart. You’ll figure it out.
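That said, here’s roughly how you’d call it. Every address and the app password below are made-up placeholders; swap in your own.

  sendEmail("text/html; charset=utf-8",
            "", "my-app-password",          // placeholder login + app password
            "", "",       // placeholder from / reply-to
            new String[] { "" },                 // placeholder recipient
            "Dear Mom", "<p>Hi Mom, my program says hello!</p>");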

So let’s say we want to send from a Gmail account. Just change SMTP_HOST to “smtp.gmail.com” and we’re good, right? No, not quite.

First of all, you need to create an App password as opposed to using your account login password.

Then you’ll test it on your local machine and exclaim, “Yay! It works!” Then you’ll push it to your Amazon server and head to Friday happy hour. Then on Saturday at 4am, you’ll be woken up by an error saying the email failed, like this

Caused by: javax.mail.MessagingException: [EOF]
        at com.sun.mail.smtp.SMTPTransport.issueCommand(
        at com.sun.mail.smtp.SMTPTransport.helo(
        at com.sun.mail.smtp.SMTPTransport.protocolConnect(
        at javax.mail.Service.connect(
        at javax.mail.Service.connect(
        at javax.mail.Service.connect(
        at javax.mail.Transport.send0(
        at javax.mail.Transport.send(

Why didn’t it work? It seems Google’s SMTP servers block email sent from AWS machines. Why? Because they hate you, and also probably to prevent spam machines from polluting our internet.

So what can you do to send from a Gmail account? You have to use Google’s own brand of code. It looks the same, but take a close look at the imports.


  private static void sendEmail(String contentType, final String username, final String password,
                                String fromAddr, String replyToEmail, String[] toAddr, String subj,
                                String txt) {
    Properties props = new Properties();
    props.put("mail.smtp.auth", "true");
    props.put("mail.smtp.starttls.enable", "true");
    props.put("mail.smtp.host", "smtp.gmail.com");
    props.put("mail.smtp.port", "587");
    Session session = Session.getInstance(props,
        new Authenticator() {  // the Authenticator subclass comes from Google's package (see the imports)
          protected PasswordAuthentication getPasswordAuthentication() {
            return new PasswordAuthentication(username, password);
          }
        });
    try {
      Message message = new MimeMessage(session);
      message.setFrom(new InternetAddress(username, fromAddr));
      for (String ta : toAddr) {
        message.addRecipient(Message.RecipientType.TO, new InternetAddress(ta));
      }
      message.setSubject(subj);
      message.setReplyTo(new InternetAddress[] { new InternetAddress(replyToEmail) });
      message.setContent(txt, "text/html; charset=utf-8");
      Transport.send(message);
    } catch (Exception e) {
      throw new RuntimeException(e);
    }
  }
Ok, now go write your mom an email.


Maven is a weirdo when it comes to resolving dependencies

The other day, I was experimenting with Maven dependency conflicts. That is, if your project pulls in the same dependency with conflicting versions, Maven will resolve/pick one for you depending on your rules and its heuristics.

For the record, I’m using Maven 3.3.

According to maven docs

“by default Maven resolves version conflicts with a nearest-wins strategy”

You’d think these heuristics are simple, but not really. Let’s look at some examples.

Let’s say you have a pom with two conflicting dependencies
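Something like this, with both versions declared at the top level (the 1.4/1.5 versions are just examples):

<dependencies>
  <dependency>
    <groupId>commons-codec</groupId>
    <artifactId>commons-codec</artifactId>
    <version>1.4</version>
  </dependency>
  <dependency>
    <groupId>commons-codec</groupId>
    <artifactId>commons-codec</artifactId>
    <version>1.5</version>
  </dependency>
</dependencies>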



You can run “mvn dependency:tree -Dverbose” to see which of the two commons-codec versions it picks.

In this case, Maven seems to prefer the last commons-codec in the list of dependencies.

That makes some sense. Maybe developers have the habit of adding new dependencies to the end of the list, so Maven prefers that one.

Let’s suppose we have a dependency, such as hadoop-common, that depends on commons-codec 1.4 and we have a commons-codec 1.5 dependency at the top-level. Which version would it prefer then?
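Roughly like this (the hadoop-common version is illustrative; the point is that it drags in commons-codec 1.4 transitively):

<dependencies>
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-common</artifactId>
    <version>2.7.1</version>  <!-- pulls in commons-codec 1.4 transitively -->
  </dependency>
  <dependency>
    <groupId>commons-codec</groupId>
    <artifactId>commons-codec</artifactId>
    <version>1.5</version>
  </dependency>
</dependencies>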



Maven prefers the top-level commons-codec 1.5 version here.

Even though the commons-codec 1.4 within hadoop-common comes later in the dependency list, Maven prefers the top-level one that the developer explicitly chose. This makes sense, since the top-level dependency is explicitly chosen by the developer while the one within hadoop-common is somewhat more implicit. So Maven seems to obey explicit top-level dependencies.

Here’s where it gets a little weird. What happens if we have two dependencies that depend on different versions of commons-codec?

poi depends on commons-codec 1.5 and hadoop-common depends on commons-codec 1.4
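Something along these lines (the poi and hadoop-common versions are placeholders; what matters is which commons-codec each drags in):

<dependencies>
  <dependency>
    <groupId>org.apache.poi</groupId>
    <artifactId>poi</artifactId>
    <version>3.9</version>    <!-- pulls in commons-codec 1.5 -->
  </dependency>
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-common</artifactId>
    <version>2.7.1</version>  <!-- pulls in commons-codec 1.4 -->
  </dependency>
</dependencies>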



Maven will choose the FIRST version it sees; in this case, it prefers commons-codec 1.5, found in the earlier poi dependency.

This is a bit counter-intuitive. Remember that previously, Maven preferred the LAST version of commons-codec when both were listed at the top level.

Let’s dive deeper. Does the depth at which commons-codec is found matter?

hadoop-client depends on hadoop-common which depends on commons-codec 1.4. And poi depends on commons-codec 1.5
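So the pom would look something like this (versions again illustrative):

<dependencies>
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>2.7.1</version>  <!-- hadoop-common -> commons-codec 1.4 at the 3rd level -->
  </dependency>
  <dependency>
    <groupId>org.apache.poi</groupId>
    <artifactId>poi</artifactId>
    <version>3.9</version>    <!-- commons-codec 1.5 at the 2nd level -->
  </dependency>
</dependencies>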



Maven prefers poi’s commons-codec 1.5 since it is found at the 2nd level, whereas commons-codec 1.4 is found at the 3rd level under hadoop-client.

It seems that the closer to the top-level the dependency is, the more maven prefers it. This is probably consistent with the fact that maven picks explicit top-level dependencies over sub-dependencies at lower levels. You can try switching the order of hadoop-client and poi and you’ll see that the depth is more important than the dependency order here.

So do you think you have a good handle on how maven resolves dependencies?


Mysql tip: How to SELECT all the columns by name

Thanks to this stackoverflow answer
Here’s how you can build a SELECT statement that lists all the columns (except the ones you want to omit):

SET @sql = CONCAT('SELECT '
  , (SELECT REPLACE(GROUP_CONCAT(COLUMN_NAME), '<columns_to_omit>,', '')
     FROM INFORMATION_SCHEMA.COLUMNS
     WHERE TABLE_NAME = '<table>'
     AND TABLE_SCHEMA = '<database>'), ' FROM <table>');

And thanks to this stackoverfow answer

Here’s how to print that statement

select @sql;
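And if you want to actually run the generated statement rather than just print it, you can feed @sql into a prepared statement (standard MySQL; shown here as a sketch):

PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;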

S3’s Role in a Company

As a data company, we sometimes have some very odd ways of cataloguing data. In one instance, a 50GB file was referenced by a word doc. It said to look in someone’s old laptop under a cryptic folder structure. I dusted off and turned on the 5-year-old laptop. It made some uncomfortable churning noises but managed to let me in. I was lucky that this was my old laptop and I knew the login password and the directories well enough to know what it was referring to. If I weren’t here, I don’t know how it would have been found.

This is a problem.

Here’s the solution.

I decided to look into utilizing Amazon’s S3 service. We had already used it, mostly as a dropbox with console access. But I wanted to do more with it. I wanted to make it an integral part of our data management process.

My goal was to be able to programmatically access the data in the S3 cloud. I was building a vagrant VM so I needed it to be able to automatically pull the data and use it during its installation.

First, I had to create an IAM User with credentials. Simply log into AWS’s IAM Management Console. There, create a New User. I called mine vagrantVM. Immediately after the user creation, it will give you this user’s AWS Access Key and AWS Secret Access Key. WRITE THESE DOWN BECAUSE IT WILL NEVER SHOW YOU THE SECRET ACCESS KEY AGAIN AND YOU’LL NEED IT LATER.

Click on the user you just created and go to the Permissions tab. You need to Attach a Policy. You can either give this user the AmazonS3ReadOnlyAccess policy or AmazonS3FullAccess policy depending on your needs. I want my vagrant machines to only have read-only access so I opted to only give it the former policy.


Next we need to set up the Linux box with the Amazon CLI (command-line) tool and the right credentials.

The easiest way to install Amazon CLI is through pip.

  1. First, make sure you have pip, otherwise:
    “sudo apt-get install -y python-pip”
  2. Then install the CLI:
    “sudo pip install awscli”

Note: If the 2nd step above doesn’t work for you and you’re on Mac OSX El Capitan, then try this command instead:
“sudo pip install awscli --upgrade --ignore-installed six”
There’s an issue with El Capitan at the time of this writing.

Next, you’ll need to configure it.

A lot of articles encourage you to use the “aws configure” command, which works nicely. But this requires user interaction, which may not always be possible, especially if I want my vagrant machine to automatically set this up.

So the alternative is to create a “~/.aws” folder and add two files under it (sample contents are shown below):

  • “credentials” file
  • “config” file
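For reference, here’s roughly what goes into each file. The key values are placeholders (use the keys you wrote down earlier) and the region is just an example:

“credentials”:

[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY

“config”:

[default]
region = us-east-1
output = json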

I placed these two files in my resources/awscli folder under my root vagrant folder. And this is my puppet script inside manifests/site.pp

$awsDir = '/home/vagrant/.aws'

file { $awsDir:
  ensure => 'directory',
}

file { "${awsDir}/credentials":
  require => File[$awsDir],
  source  => '/vagrant/resources/awscli/credentials',
}

file { "${awsDir}/config":
  require => File[$awsDir],
  source  => '/vagrant/resources/awscli/config',
}

You can now download (and upload) files to S3 assuming you have the right policies attached to the user.

If you prefer to upload via a console, you can do that through the AWS S3 Console.

You can also upload through the AWS CLI commands like so:

aws s3 cp [my_file] s3://[bucket]/[folder]/[filename]

You can now also download that same file using AWS CLI

aws s3 cp s3://[bucket]/[folder]/[filename] [my_file]



I need to make one thing clear when doing this with Vagrant. Everything said above is right, but when I use puppet to set up my vagrant instance, for some reason, it doesn’t recognize my .aws/credentials file. I don’t know enough about Vagrant’s provisioning internals, but either the home directory isn’t set or a different user is doing the vagrant setup, because it can’t find ~/.aws/credentials or ~/.aws/config.

You get an error such as “Unable to locate credentials”

So the script that calls the “aws s3 …” commands needs to be told where the AWS config lives. There’s an AWS_CONFIG_FILE environment variable you can set, so in my .sh script file, I do this:

export AWS_CONFIG_FILE=/home/vagrant/.aws/config
aws s3 cp s3://[bucket]/[folder]/[file] .

Note that I’m only pointing to the config file, which does not contain the access key id or secret access key as described above. Unfortunately, there is no environment variable that points to the credentials file. However, you can add the credentials into your config file:
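Something like this (the region and the key values are placeholders):

[default]
region = us-east-1
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY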


This will allow your vagrant script to point to your AWS credentials and invoke the AWS CLI command


Writing Synchronous Code in Node

First of all, you should be writing asynchronous code in node since that is the philosophy behind it.

But I’ve been experimenting with using node to write non-webapp, throwaway code and sometimes it helps to write synchronous code. This is how you’d do it.

There’s a synchronous library called synchronize.js.

Install it in your development environment

npm install synchronize

Then use it. Here’s a sample

var db = require('./db.js');
var sync = require('synchronize');

sync(db.connection, 'query');

function getDBValues() {
  var sql =
      'SELECT value'
    + ' FROM table';
  var values = [];
  var rows = db.connection.query(sql);
  for (var i = 0; i < rows.length; i++) {
    values[i] = rows[i].value;
  }
  return values;
}

sync.fiber(function() {
  var values = getDBValues();
  // use the values here
});


I’ll point out the necessary pieces.

First is the import of synchronize

var sync = require('synchronize');

Then declare the object and the functions within that object which you want to be synchronous. Here, I declare my db.connection object’s query() method to be synchronous

sync(db.connection, 'query');

Wrap your code in a sync.fiber() method.

sync.fiber(function() { ...} )

Now you can make synchronous calls like this

  var rows = db.connection.query(sql);

instead of asynchronous code like

  db.connection.query(sql, function(err, rows) {
    // handle err and rows inside the callback
  });

A Docker Development Environment

In a development environment, you want two things. You want access to your shiny development tools when coding. And you want the full suite of production services at your disposal to test your code in.

In the Getting Started with Docker guide, we went over how to set up a Docker container. You could put your code inside that container, but remember that I warned containers are ephemeral, meaning you could lose your changes inside the container. Second, you’d have to find a way to use your fancy dev tools on the code inside the container and that’s not simple.

It would be nice if you could develop your code on your dev machine and then automatically have changes reflected inside the container so it’s available for testing.

Well there’s a way.

Docker allows you to “mount” a volume from your host machine’s drive to your container’s drive.

> docker run -d -P -v /Users/myaccount/mywebserver:/var/www/public_html myname/myimage:1.0
  • docker run this creates a container from an image
  • -d runs it in the background
  • -P exposes all ports
  • -v to indicate a volume mount
  • /Users/myaccount/mywebserver source folder on host to mount
  • /var/www/public_html destination folder on container to mount to
  • myname/myimage:1.0 name of image to instantiate

If you’re not on a Linux machine, there’s something you should be cautious of.

According to Docker:

“If you are using Docker Machine on Mac or Windows, your Docker daemon has only limited access to your OS X or Windows filesystem. Docker Machine tries to auto-share your /Users (OS X) or C:\Users (Windows) directory.”

Notice in my example above that I’m mounting a folder within my home folder /Users/myaccount, so it falls inside the auto-shared directory.


Getting Started with Docker (using the docker-machine)

Docker is one of the newest kids on the block. A while ago, I posted about how you can get started with Vagrant to create isolated development environments. We’re going to do the same with Docker.

What’s the difference? To sum it up, Vagrant manages virtual machines and Docker manages application environments. Check out this Stackoverflow discussion with the authors of both systems chiming in.


  1. docker-machine create (optional) to create host machine
  2. docker build to build docker image
  3. docker run to build container
  4. docker exec -i … bash to get into container
  5. docker commit (optional) to convert container to image
  6. docker push to upload image to Docker Hub
  7. docker pull to download an image

Ok so if you’re convinced you need to run Docker or you just want to add Docker to your skillset, let’s get started…

First get Docker. Go to docker’s website and download it. (I can’t link you directly to the download since I don’t know if you’re running Mac, Windows or Linux.)

It’s worth noting I have Mac OS X 10.10 and I ended up installing Docker 1.9.1 at the time of this writing. Your experience may be different but hopefully not by much.

Quick architecture lesson. Docker works off LinuX Containers (LXC). Mac and Windows are obviously not Linux so they work slightly differently. On my Mac, Docker requires a virtual machine, specifically VirtualBox to run a linux OS so Docker can work.

Here’s my understanding of the Docker architecture in my own image


At the foundation is the machine, or more precisely a Linux machine with a kernel that supports LXC’s. Docker lets you “build” an Image which contains all the layers of changes you made. A Container is an instance of an image. It’s the thing that’s runnable. A good description at Stackoverflow.

You get two tools after the installation

  • Docker Quickstart Terminal (docker’s command-line or CLI): gives you a terminal initialized with the “default” host virtual machine settings. (I’ll comment more about this below.)
  • Kitematic: a GUI to manage the running containers


Let’s talk about the Machine


Besides the two tools, you also get a VirtualBox VM (if you’re on a Mac or Windows). This is your docker-machine. You can see all the docker machines in your system by running this command in the Docker Quickstart Terminal CLI

> docker-machine ls

You may get a listing that looks similar to this

NAME      ACTIVE   DRIVER       STATE     URL                         SWARM   ERRORS

default   *        virtualbox   Running   tcp://           

You likely have one machine called “default”. It’s the active one. I believe what makes it active depends on the environment variables in your terminal. To see what environment variables are set for the “default” machine, type

> docker-machine env default

To switch the active machine, you simply set the environment variables for that machine. You can use the shell’s eval command to run the output of the previous command, like so

> eval "$(docker-machine env default)"

You can remove and create machines using the docker-machine command. See all available commands by typing

> docker-machine help

Why would you need a different or new machine? I’m not really sure. But I had to remove my “default” one and recreate it to add some DNS settings, like so

> docker-machine create -d virtualbox --engine-opt dns= default

For some reason, my docker containers didn’t have access to the internet due to bad DNS settings and this has to be set at the machine level.

There may be a less destructive way of altering the DNS according to this thread.

Now that you’re familiar with the machines, let’s move on.

Let’s talk about the Image

Open the Docker Quickstart Terminal CLI and run

> docker images

You probably don’t have any images yet. Let’s get an image from Docker Hub.

> docker pull linode/lamp

This command fetches a docker image called linode/lamp which contains a LAMP stack.

If you run “docker images” again, you should now see the fetched image.

At this time, you should get acquainted with Docker Hub and create an account on it (remember your username). Docker Hub is a place where you and others can push docker images to share with the rest of the world. This includes pre-built LAMP stack images or it could just be a simple database image. As a user of docker, you may be able to just get away with using these pre-built images… but we’re going to get our hands dirty and create our own docker image.

Create an empty directory, and in it, create a file called Dockerfile. Here’s a sample

# inherit from the base image 'linode/lamp'
FROM linode/lamp

# just a note about who wrote this image
MAINTAINER your_name <>

# since it's a LAMP image, this exposes the apache2 and mysql ports
EXPOSE 80 3306

Now in the same directory where the Dockerfile is, run

> docker build -t myname/myimage .
  • docker build: command to build an image
  • -t: flag to assign a repo and tag to the image
  • myname: should be the account name you just created on Docker Hub
  • myimage: is a name you want to assign to this image
  • .: to indicate the image should be built from the current directory where the Dockerfile is located

Check your list of images again with the “docker images” command and you should see your newly built image.

Now what?

Now let’s talk about containers


Remember that a container is an instance of an image. So now that we’ve got an image, we can instantiate a container with the “docker run” command like so

> docker run -d -P myname/myimage
  • docker run: command to create a container from an image
  • -d runs the container in the background
  • -P publishes all exposed ports (ie port 80 and 3306) to arbitrary ports available on the host
  • myname/myimage is the image name that we’re instantiating

You should be able to see your running container by typing

> docker ps

You can also see your containers if you open Kitematic.

Inside Kitematic, look under Settings followed by Port.

There you’ll see the port mappings of the two exposed ports.

You should be able to point your mysql client and browser to them. (Since the image was based on linode/lamp, consult its documentation for mysql credentials)

You might ask, how do I get into the container to view the directory, edit the config files or execute scripts?

From Kitematic, you can enter the container via the Exec button


This will open a shell inside the container. This shell lacks tab-completion, history and other basic shell niceties, but it gets you inside.

There’s another way in though. Open Docker CLI and type

> docker exec -it container_name bash
  • docker exec executes a command inside the container
  • -it runs the command interactively with a terminal attached
  • container_name is the name given to your container when it was created
  • bash is the command you’re running which in this case is the bash shell

In summary, this (exec)utes an (-i)nteractive (bash) shell into your (container_name) container

This approach is superior since you get history and tab-complete in a familiar interactive bash shell.

Inside the container, go ahead and create a file or edit the filesystem in some way. Now you can take my word for it or try it yourself, but the data inside a container is essentially considered ephemeral. If you were to change the name of the container or some other minor change, everything would be wiped out. More importantly, you couldn’t transfer this container and its contents to anyone else unless you performed one additional step.

Commit your Docker container as an image

We’re going to convert our container into an image which can be persisted and transferred.

First we need the container ID (not the name but the nonsensical alphanumeric ID) from this command

> docker ps

Next commit your container

> docker commit CONTAINER_ID myname/myimage:version
  • docker commit converts the container to an image
  • CONTAINER_ID id of the container you’re converting
  • myname name that should match your Docker Hub account name
  • myimage name of image
  • version version of image (e.g. 1.0 or “latest”)

Check your set of images again with “docker images”. You should now have an image that encapsulates the container and any changes to it.

Now you’re ready to share this image.

Push your docker image to Docker Hub

> docker push myname/myimage:version
  • docker push uploads the image to Docker Hub
  • myname/myimage:version is the image you’re pushing

Check your Docker Hub account and you should see your image. Now anyone can pull your image like we pulled linode/lamp earlier.


Internal Git Project vs Open Source Git Project

We use git (bitbucket) internally at my company to manage code development. We trust each other, so we branch and merge back to master as we see fit following the procedure laid out in my earlier post. But managing an open source project with developers you do not know is a little different.

I’m using github to manage an open source project. In that environment, I add Collaborators (under Settings tab) to a project to give them full privileges to branch and merge back to master as they see fit. I can ask people to use the Pull Request feature but there’s nothing stopping anyone from merging back into master as they see fit or just developing directly on the master branch and pushing changes in.

In an open source environment though, we may want more control. As such, we should have people fork our project. When they want to merge their fork back, they must submit a Pull Request, at which time I can verify the changes.


Streaming/Scrollable ResultSet: for when you want results quickly and without taking up too much memory

I’m sure you’re already familiar with how to write a SELECT statement and execute it in java.

Connection conn = ...
Statement stmt = conn.createStatement();
ResultSet rs = stmt.executeQuery("...");

But once in a while, you might run into a very, very, VERY large table. By default, your mysql driver will read and buffer all the rows and then return them to you.

There are two problems you will encounter

  1. The reading takes a long time, and your stmt.executeQuery() method will hang for a while before returning results. Other things could time out during that window.
  2. Buffering all the rows in memory will consume a lot of heap, and you may not have enough of it.

So what if you want to get results quickly, i.e. as soon as the first row is fetched, it’s returned so you can process it?
Also, suppose you don’t want to hold all billion rows in memory. You just want to read one row, process it, and trash it.

Then you should create what’s called a streaming or scrollable result set like so

    String selectSQL =
        "SELECT * FROM verylargetable";
    ResultSet selectRs = null;
    try (
        Connection conn = BoneCPSQLUtil.getNewConnection();
        Statement  stmt = conn.createStatement(java.sql.ResultSet.TYPE_FORWARD_ONLY,
                                               java.sql.ResultSet.CONCUR_READ_ONLY)
       ) {
      // tell the MySQL driver to stream rows one at a time instead of buffering them all
      stmt.setFetchSize(Integer.MIN_VALUE);

      selectRs = stmt.executeQuery(selectSQL);

      while ( {
        // process one row at a time here
      }
    } catch(Exception e) {
      // handle/log the failure
    } finally {
      try {
        if (selectRs != null) selectRs.close();
      } catch(Exception e2) { }
    }

You’ll find that results return right away. (Note: the overall throughput is lower because it’s faster to read everything first and give it to you, but the response time is faster here because you get each row right away).

You’ll also find that you don’t need much memory because as soon as you’re done with the row, it throws it away, and then you can fetch the next row.


My First Nuget Package. Written. Packaged. Published.

I had to get my hands dirty building a PowerShell extension for .NET/Visual Studio. As a Java developer, .NET, PowerShell, and NuGet are all terms unfamiliar to me. So the best way to learn is to just jump right into it.

Write a PowerShell script

Open Visual Studio and create a new project. Then add a new file (Add > New Item > Text File) and call it MyFirstScript.ps1

Let’s create a “Hello World” inside MyFirstScript.ps1

Write-Host "hello world"

Then in the “Package Manager Console” window, type

PM> .\MyFirstScript.ps1

You should get a ‘hello world’ output

Check out tutorials on the web to go beyond “hello world” but we’re moving on now…

Create a Nuget Package

Now, I’m going to have you modify MyFirstScript in a few ways.

First, change its extension to .psm1.

Next wrap the code in a function and add an Export-ModuleMember line to tell it what function to expose.

You should now have something like the following.


function Write-HelloWorld {
  Write-Host "hello world"
}

Export-ModuleMember Write-HelloWorld

Your exported function should follow PowerShell’s verb-noun naming scheme, otherwise it will yell at you later. For a list of approved verbs to use in your function name, go here:
Approved Verbs for Windows PowerShell Commands

Now create another file that will import your PowerShell script as a module. Call it init.ps1


param($installPath, $toolsPath, $package, $project)

Import-Module (Join-Path $toolsPath MyFirstScript.psm1)

Just make sure the last parameter of Import-Module matches the name of your script.

Now I need you to put both MyFirstScript.psm1 and init.ps1 into a folder called tools.

Open a terminal to the folder where the tools folder resides.
Type the following

c:\> nuget spec MyFirstScript

This will create a MyFirstScript.nuspec.
Open this file and modify the values. Now it might look something like this


<?xml version="1.0"?>
<package >
    <description>prints hello world</description>
    <releaseNotes>alpha release</releaseNotes>
    <copyright>Copyright 2015</copyright>

On the terminal again, type

c:\> nuget pack MyFirstScript.nuspec

This should create a binary file called MyFirstScript.1.0.nupkg

You’ve just created a nupkg (nuget package).

Import your Nuget Package

You can import this local nupkg to your Visual Studio project.
From the “Package Manager Console” just type

PM> Install-Package MyFirstScript -Source C:\pathToMyFirstScriptFolder
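Once it’s installed, init.ps1 imports the module, so you should be able to call your exported function straight from the console (output shown assumes the hello-world function from earlier):

PM> Write-HelloWorld
hello world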

Publish your Nuget Package for Others to Import

If you wanted to share this more widely with others, you’ll want to upload it to the gallery.

To do that, register an account and go here to upload your MyFirstScript.1.0.nupkg

Nuget Upload Page

After the upload, your nuget package will be hosted and be available for all to install


To install it, go to the “Package Manager Console” in Visual Studio. (Make sure you have a project opened.) And type

PM> Install-Package MyFirstScript

It’s that easy!
