
Using AWS Glacier

This is a surprisingly difficult task and infinitely harder than using AWS S3. So why store stuff in Glacier? Because it’s cheap.

Here’s how much you will pay to store a 1TB (1000GB) file today.

Storage        Cost per GB   Monthly Cost (1TB)
EBS SSD        $0.10/GB      $100/month
EBS Snapshot   $0.05/GB      $50/month
S3             $0.023/GB     $23/month
Glacier        $0.004/GB     $4/month

What else do you need to know about it? Glacier retrieval is not instant. You make a request, it takes a while to be fulfilled, and then you pick up the data when it’s ready. That’s what makes it different from other storage models.

Anyways, here are the steps

1) In the AWS Console, go to the Glacier service and create a vault (e.g. my-vault)

2) Make sure you have a user that has permissions to the vault. Here’s a sample policy

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "glacier:*"
            ],
            "Sid": "Stmt1376667184000",
            "Resource": [
                "arn:aws:glacier:us-east-1:112233445566:vaults/my-vault"
            ],
            "Effect": "Allow"
        }
    ]
}

3) On your machine, ensure you have the awscli installed and configured (e.g. pip install awscli, then aws configure)

4) Upload your files to the vault using the awscli

aws glacier upload-archive --vault-name my-vault --account-id - --body my-file.zip

Output will look like this:

{
    "checksum": "e5d002bf40...",
    "location": "/112233445566/vaults/my-vault/archives/KYKdL...",
    "archiveId": "KYKdL..."
}

5) To download, you have to make a request. Depending on the retrieval Tier, it could take minutes to hours before your request is fulfilled.

First, create a request.json file like the following:

{
  "Type": "archive-retrieval",
  "ArchiveId": "KYKdL...",
  "Description": "Retrieve archive on 2015-07-17",
  "Tier":"Expedited",
  "SNSTopic":"arn:aws:sns:us-east-1:112233445566:glacier-alert"
}

Type defines the type of job. In this case, you want “archive-retrieval” to retrieve the archived file.

ArchiveId is the archiveId returned in the output when you uploaded the file.

Tier determines how quickly your request is fulfilled. It has several choices that vary in speed and price.
For the most up-to-date tier pricing and speeds, check the Data Retrievals section of the Glacier FAQ.
Here are the pricing and speeds as of this writing:

Tier                 Price                        Fulfillment
Standard (default)   $0.01/GB + $0.05/retrieval   3 – 5 hours
Bulk                 $0.01/GB + $0.05/retrieval   5 – 12 hours
Expedited            $0.03/GB + $0.01/retrieval   1 – 5 minutes
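
To make the tier math concrete, here’s a small sketch (using the table’s prices, which may be stale by the time you read this) of what a single 1TB retrieval would cost:

```javascript
// Retrieval cost = (price per GB × size in GB) + per-retrieval fee,
// rounded to cents. Prices are the ones from the table above; check
// AWS for current rates before relying on these numbers.
function retrievalCost(sizeGB, perGB, perRetrieval) {
  return Math.round((sizeGB * perGB + perRetrieval) * 100) / 100;
}

console.log(retrievalCost(1000, 0.01, 0.05)); // Standard: 10.05
console.log(retrievalCost(1000, 0.03, 0.01)); // Expedited: 30.01
```

So an Expedited 1TB retrieval costs roughly three times a Standard one, on top of the storage you’re already paying for.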

SNSTopic allows you to register for alerts when the request is done.
You can create SNS Topics in the Simple Notification Service tab of the AWS Console and then copy the Topic ARN here.

Next, run this command to initiate the job request

aws glacier initiate-job --vault-name my-vault --account-id - --job-parameters file://request.json

You’ll get back a response containing a jobId, which you’ll need later.

6) You can also check the status of the job like this:

aws glacier describe-job --vault-name my-vault --account-id - --job-id WI6sdXS...

7) Once the job is complete, you can get the output of the job

aws glacier get-job-output --vault-name my-vault --account-id - --job-id WI6sdXS... [MY-OUTPUT-FILE]

4.5) Yes, 4.5 because this should have gone between steps 4 and 5. You can list the files in your vault like so:

aws glacier initiate-job --account-id - --vault-name my-vault --job-parameters '{"Type": "inventory-retrieval"}'

So why didn’t I just tell you about this earlier? Well, because you don’t get back a list of archives from this call. Notice you run initiate-job again, which means you have to wait for the job to complete (step 5) and then get the job’s output (step 6). So you had to learn steps 5 and 6 before you could do step 4.5.

Clear as mud? Good.


Auth0: enriching the id token and access token

With a little auth0 experience under my belt, let’s dive further into a new topic: what are the ID Token and Access Token?

Once you log in, you get back an object that looks like this in angular

{
  accessToken: "eyJ0eX…"
  , idToken: "eyJ0eXA…"
  , idTokenPayload: {
      sub: "auth0|5adfa…"
    , nickname: "kane"
    , name: "ksee@inferlink.com"
    , picture: "https://s.gravatar.com/…"
    , email: "ksee@inferlink.com"
  }
  , …
}

At this point, you’ve got the idTokenPayload, which stores all the info you would need about the user. But if you need more, you can create additional “claims” (auth0 calls these namespaced custom claims; see their documentation for reference).

To do this, create a new Rule in auth0.com. Start with an empty rule and paste in something like this:

function (user, context, callback) {
  const namespace = 'https://myapp.example.com/';
  context.idToken[namespace + 'favorite_color'] = user.favorite_color;
  context.idToken[namespace + 'preferred_contact'] = user.user_metadata.preferred_contact;
  callback(null, user, context);
}

Now your payload should contain the favorite_color and preferred_contact fields.

But what about the backend? We only send the access token in the Authorization header, and you’ll notice these extra user fields aren’t in there.

You can do the same thing with a Rule. Simply use context.accessToken instead like so:

function (user, context, callback) {
  const namespace = 'https://myapp.example.com/';
  context.accessToken[namespace + 'favorite_color'] = user.favorite_color;
  context.accessToken[namespace + 'preferred_contact'] = user.user_metadata.preferred_contact;
  callback(null, user, context);
}
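
Once the backend verifies and decodes the access token, those enriched fields come back under their full namespaced keys, not as plain fields. A quick sketch with a made-up payload:

```javascript
// A decoded access token payload might look like this after the rule runs
// (values are made up for illustration):
const namespace = 'https://myapp.example.com/';
const payload = {
  sub: 'auth0|5adfa...',
  [namespace + 'favorite_color']: 'blue',
  [namespace + 'preferred_contact']: 'email'
};

// Plain keys like payload.favorite_color won't work; you need the namespace:
const favoriteColor = payload[namespace + 'favorite_color'];
console.log(favoriteColor); // blue
```

This is why the namespace prefix matters: without it, auth0 silently drops non-standard claims from the token.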

Auth0 Authentication and NodeJs

If you follow this blog, you’ll know I posted a blog and starter kit for doing authentication in nodejs leveraging passportjs. In it, I rolled my own authentication package which you can use to start your own projects.

What if you want a more robust solution with plenty more features? I’ve had the opportunity to switch my company to a 3rd party vendor’s solution: auth0. And it’s pretty good.

So what do you get with auth0 that you don’t get with my home-grown solution?

  • You can easily offer other types of Social logins without doing any extra work (e.g. Facebook, Google, Twitter logins)
  • You get security features such as email verification and multifactor authentication
  • You get a user management console to delete users, add users, etc.
  • You get Single Sign On (SSO). It’s what makes it possible for you to sign into gmail service and then use google calendars and google docs without having to sign in again. This is very useful if you have a suite of services to offer users across different domains.

What’s the cost?

  • Pricing (at the time of this post, they’re offering 7k free active users and unlimited logins)
  • There is a learning curve to understand how to use this very flexible but complex system. They have lots of examples and offer technical support. Fortunately for you, I’m going to simplify things for you by giving you a starter kit although it’s specific to our application model. You can probably set up a lot of other different authentication models with their service.

So what’s similar between auth0 and what we did here? For nodejs, they both leverage passportjs. They both use jwt to sign the payload to prevent tampering. But there are definitely big enough workflow differences that we need to build a different wrapper around it.

One big difference is that the user is redirected off your site to auth0.com for login and registration. Once complete, they are redirected back to your site. So if you hold any temporary variables/state, you’ll lose it. In the previous implementation, for example, we held the last url route in the $rootScope so we could redirect the user back after login. However, the $rootScope is wiped once you leave the site to auth0.com for login, so we have to leverage the browser’s web store instead.

You still have to build some code around auth0 even though it tries to abstract away most of the hard authentication stuff. For example, once you log in, it hands you an access_token and id_token. It’s up to you to store those for the duration of the user’s session. Fortunately, we built some of this infrastructure last time, when we made our home-grown authentication starter kit. We just have to adapt it a bit to auth0.

For my starter kit, I started with auth0’s angular example which can be found here.

To integrate into our application model, I made the following changes and additions.

AuthService

Their AuthService was in auth.service.js and I merged it in with other services and options in an auth0.services.js file.

By default, the authResult.idTokenPayload (in handleAuthentication()) included only a sub field

{sub: "auth0|ea4d8b…"}

I modified the login call to angularAuth0.authorize() to include a scope

    angularAuth0.authorize({
      scope:'openid profile email'
    });

This returns much more info back to your app

{
  sub: "auth0|ea4d8b…",
  nickname: "…",
  name: "…",
  email: "…",
  email_verified: true,
  …
}

I also implemented a redirect so that after the user returns from auth0’s login page, they land back on the page they last visited.

function login() {
  $window.localStorage.setItem('nextPath', $location.path());
}

function handleAuthentication() {
  angularAuth0.parseHash(function(err, authResult) {
    var nextPath = $window.localStorage.nextPath;
    if( nextPath ) {
      $location.path(nextPath);
    }
  });
}

UserSession

I added a UserSession service to store auth0’s tokens and user info. This service is the same as before.

authHandler

Just like before, there’s an authHandler service that handles redirecting the user to auth0’s login page by invoking AuthService.login() when it detects a 401 (Unauthorized).

It also attaches the Authorization header to every request once a login is established.

User Registration Hook

If you want to track the users on your app, then you can set up an auth0 Hook. Specifically, you can set up a Post User Registration hook so that when a new user registers on auth0, auth0 will invoke a hook with a small piece of code you write to store that info on your app.

There are 2 parts to this. First set up your web hook. In the auth0 website, go to Hooks and then under Post User Registration, click Create New Hook

auth0_hook.png

Then you want to edit the code snippet and add something like this. Make sure to change the URL value

module.exports = function (user, context, cb) {
  var request = require('request');

  var url = '##INSERT_YOUR_URL##';
  var data = {
    userid: user.id,
    name: user.username,
    email: user.email
  };
  var options = {
    uri: url,
    method: 'POST',
    json: true,
    headers: {
      'content-type': 'application/json'
    },
    body: data
  };
  console.log('logging registration');
  request(options, function(error, response, body) {
    if( error ) {
      console.error(error);
    } else {
      console.log('successfully logged registration');
    }
    cb();
  });
};

The second part of this is on your app server’s side. You’ll need to implement the api endpoint called by the hook to receive the user info and store it in your database. I’ve provided a sample in my starter kit but make sure you modify it to store it in your database table as you see fit.
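
The endpoint itself is up to you; here’s a hypothetical sketch of just the handler logic (handleRegistration and saveUser are illustrative names, not part of the starter kit):

```javascript
// Hypothetical validation/handling logic for the hook's POST body.
// saveUser is a stand-in for your actual database insert.
function handleRegistration(body, saveUser) {
  if (!body || !body.userid || !body.email) {
    return { status: 400, error: 'missing userid or email' };
  }
  saveUser({ userid: body.userid, name: body.name, email: body.email });
  return { status: 200 };
}

// The hook above posts {userid, name, email}, so a call looks like:
const saved = [];
const result = handleRegistration(
  { userid: 'auth0|123', name: 'kane', email: 'k@example.com' },
  function(u) { saved.push(u); }
);
```

Whatever shape you pick, reject malformed bodies early; the hook fires on every registration and you don’t want half-filled rows in your user table.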

So without further ado, here’s a link to the starter kit with all the pieces of the code you need to start an angular/nodejs project with auth0.
https://github.com/kanesee/ng-node-auth0-kit/tree/master/shared

This blog explains the components of the code but the README hopefully provides enough info for you to start using it.


Using supervisord to start Docker services

Automatically start your Docker services

Alternative solutions: http://centos-vn.blogspot.com/2014/06/daemon-showdown-upstart-vs-runit-vs.html

supervisord: https://docs.docker.com/engine/articles/using_supervisord/

 

Javascript Debugging

This is gonna be a quickie on how to debug your standalone javascript. I assume you already know how to debug javascript using the browser’s inspector and are familiar with break points, scope vars, control flow, and everything else you can do in that inspector debugger.

The task here is to debug a standalone javascript program (not running in a browser).

Say you have a helloworld.js

You can run it like so

node helloworld.js

You can also debug it with like so

node --inspect-brk helloworld.js

The --inspect-brk option will set a breakpoint on the 1st line and wait for you to attach the debugger. Otherwise your program may run through thousands of lines of code before you’re quick enough to attach the debugger and stop it. This option is available in node version 7.x+, though I didn’t have it in 7.2, so I upgraded to 8.9.4, which did.

Then, open your chrome browser to about:inspect. You should see something like this

Screen Shot 2018-01-11 at 1.54.11 PM

Click on “Open dedicated DevTools for Node” and this will open the inspector debugger that you’re used to


TypeScript

What is it?
It’s typed JavaScript. JavaScript doesn’t have static types, remember?
So you can do ridiculous things like:

var a = 2;
if( randomFunctionIsTrue() ) a = "2";
console.log( a+2 );

Sometimes this might produce 4 and sometimes it produces "22". Who knows.
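
You can check the coercion yourself in plain JavaScript:

```javascript
// Number + number adds; string + number concatenates.
var a = 2;
console.log(a + 2);   // 4 (a number)

a = "2";
console.log(a + 2);   // "22" (string concatenation, logs 22)
```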

So this is TypeScript

let a: number = 2;
a = "2" // throws compile time error

TypeScript prevents you from doing stupid stuff.

Notice I said TypeScript throws a compile time error. TypeScript files (which end in .ts) compile to .js files.

In the end, TypeScript is just another syntax for javascript and it creates javascript and runs anywhere that javascript can (frontend, backend, standalone).

But wait, there’s more
You may know that you can write classes in javascript, but they don’t look like classes. That’s because they’re functions. With TypeScript, you can write classes and interfaces that actually look like classes and interfaces.

interface Animal {
...
}

class Dog implements Animal {
...
}

This is not Java. This is TypeScript.

Why use it?
If you’re familiar with Java, you’ll adapt to TypeScript easily.

Because it’s typed, you catch errors at compile time, rather than at some arbitrary time during a production run.

Because it’s typed, IDEs can leverage the compiler hints to give you a heads-up when there are potential errors. Think the power of Eclipse on java, applied to javascript.

It does everything javascript can because it compiles to javascript. This is just like scala compiling to java bytecode.

You can download a Starter kit example that shows a few interfaces and classes that perform an http request and a mysql insert.


Setting up your python environment

If you’ve worked with python but never used virtualenv, then you will probably have encountered this problem. You are working on 2 projects. One requires version X of a library; the other requires version Y of the same library.

So you use virtualenv to create isolated python environments so you don’t have messy and conflicting dev environments in your system.

But here’s another tool that’s a bit easier to use. It’s a wrapper for virtualenv called virtualenvwrapper.

Here’s the basics.

Download it

pip install virtualenvwrapper

Now add these 2 lines to your ~/.bash_profile (if you’re on OSX)

export WORKON_HOME=~/virtualenvs
source /usr/local/bin/virtualenvwrapper.sh

(Note, the 2nd line may be different depending on where virtualenvwrapper was installed)

Create a virtual env

mkvirtualenv env1

Exit from the virtual env

deactivate

Next time you want to work on that project again

workon env1

If you forgot what virtual envs you have created, you can list them

lsvirtualenv

Migrating Maven artifacts from Nexus to S3

A while ago, I set up a Nexus server to host my company’s private artifacts, both our in-house projects as well as jars that weren’t available on Maven Central. The system standardizes the dependencies we use. This has served us well.

But recently, our physical server has been going down, which means our developers can’t compile. We thought about hosting Nexus on Amazon EC2, which will incur a monthly charge for a 24/7 server that will be used a fraction of that time.

I discovered that we can host our artifacts (privately or publicly) on an Amazon S3 service. S3 is a storage service, basically a data repo for any kind of file. Maven artifacts are basically just files so it’s a perfect match.

There are several maven plugins out there that’ll let you deploy and download artifacts to/from S3. I used spring-projects/aws-maven.

The basic instructions are in the README of that github project, but it is missing a crucial configuration element and doesn’t show how you generate the access keys. I’ll provide all the instructions below for completeness.

Benefits of S3 repo vs Nexus

Before the How, let’s go into the Why.

First is cost. S3 storage is cheaper than a running EC2 instance. Current S3 pricing is at $0.023/GB. Say you’ve got 10GB of artifacts, that’s $0.23/month. Current on-demand EC2 pricing for a medium instance is at $0.0464/hr, so that’s $33.41/month. With reserved instances, you get an average savings of 40%, but that’s still $20.05, or nearly 100x the price of S3 storage.

Second, and perhaps even more important, is reliability and availability. Your own EC2 server could get hacked unless you’re diligent about keeping up with security patches. Your server can go down for maintenance or other things you forgot about. Also, you’d have to set up your own backup solution. With S3, I think it’s way less likely it will go down. You don’t have to worry about redundancy or backups. It will just work when you need it.

I should point out that Nexus has some benefits as well. It has a nice web console which has its own access controls and search feature. With S3, you’ll manage access through IAM services but I don’t believe there’s a way to search S3 Buckets. Nexus will also create the appropriate metafiles around each artifact, which I’ll talk about below. S3 is just a plain storage service and knows nothing about maven artifacts.

You’ll have to weigh the pros and cons yourself. If you decide on S3, continue reading…

Set up your S3 Bucket

In your AWS console, select the S3 service.
Select Create bucket and follow the instructions.
Once you have your bucket, create two top-level folders in it called “release” and “snapshot” like this
Screen Shot 2017-11-07 at 4.35.02 PM

Update your maven settings.xml

<settings>
  ...
  <servers>
    ...
    <server>
      <id>aws-release</id>
      <username>0123456789ABCDEFGHIJ</username>
      <password>0123456789abcdefghijklmnopqrstuvwxyzABCD</password>
    </server>
    <server>
      <id>aws-snapshot</id>
      <username>0123456789ABCDEFGHIJ</username>
      <password>0123456789abcdefghijklmnopqrstuvwxyzABCD</password>
    </server>
    ...
  </servers>
  ...
</settings>

The access key should be used to populate the username element, and the secret access key should be used to populate the password element.

Here’s how you get your access key and secret key (currently)

In your AWS console, select the IAM Service.
On the left panel, select Users.
Then select your user from the list.
In the tabs section, select Security credentials
Then click Create access key.
Screen Shot 2017-11-07 at 4.17.56 PM

Hitting that button will pop up a dialog with your Access key ID (for username) and Secret access key (for password).
Screen Shot 2017-11-07 at 4.23.01 PM

Next, set up the repository where artifacts/dependencies are to be downloaded

<settings>
  ...
  <profiles>
    ...
    <profile>
        <id>aws-release-profile</id>
        <activation>
            <activeByDefault>true</activeByDefault>
        </activation>
        <repositories>
            <repository>
                <id>aws-release</id>
                <url>s3://<BUCKET>/release</url>
            </repository>
            <repository>
                <id>aws-snapshot</id>
                <url>s3://<BUCKET>/snapshot</url>
            </repository>
        </repositories>
    </profile>
    ...
  </profiles>
  ...
</settings>

If you have any remnants of your old repository, you can remove or comment them out now. You may find them within the <servers>, <profiles>, or <mirrors> sections

Update your project’s pom.xml

Include the maven plugin that will do the work of communicating with S3, deploying and downloading artifacts.

<project>
  ...
  <build>
    ...
    <extensions>
      ...
      <extension>
        <groupId>org.springframework.build</groupId>
        <artifactId>aws-maven</artifactId>
        <version>5.0.0.RELEASE</version>
      </extension>
      ...
    </extensions>
    ...
  </build>
  ...
</project>

Add this so you can publish your project to S3

<project>
  ...
  <distributionManagement>
    <repository>
      <id>aws-release</id>
      <name>AWS Release Repository</name>
      <url>s3://<BUCKET>/release</url>
    </repository>
    <snapshotRepository>
      <id>aws-snapshot</id>
      <name>AWS Snapshot Repository</name>
      <url>s3://<BUCKET>/snapshot</url>
    </snapshotRepository>
  </distributionManagement>
  ...
</project>

Migrating your repository to S3

To migrate your artifacts to S3, simply copy the files along with its folder structure to S3. You grab the folders and files directly from your existing Nexus repository or from your local repository.

To copy the entire contents of your ~/.m2/repository folder into s3://<BUCKET>/release, go into the release folder and click Upload

When you need to add additional artifacts, simply create the folder structure under s3://<BUCKET>/release and upload the jar into it. This is one drawback compared with Nexus. Nexus will automatically create a bunch of metafiles, like a pom file to describe it and sha1 files to verify its integrity. You won’t have these files so others will get a warning, but you can ignore that. Or you can generate these files yourself.

Clear your local repository

You’ll need to clear out your old repository or maven will complain that the cache is stale. Locate your local repository (on my machine it was under ~/.m2/repository). Then just delete it. Or if you’re not comfortable, you can rename or move it until you’re confident that your new setup is working as expected.

That’s it. Now try your mvn compile, package, deploy commands.


Deserializing json to java objects (but not all of it)

This post is about deserializing using Java and JAX-RS (Jersey) when you have a piece of json that’s amorphous (not well defined)

You have some json that generally looks like this

{
  id: 1,
  config: {
    foo: "lalala"
  }
}

But the “config” is actually amorphous, so it might also look like this

{
  id: 1,
  config: {
    bar: 15
  }
}

So you build a REST API like this

@POST
@Path("/api")
public void doStuff(Thing thing) {
  ...
}

And what does “Thing” look like?

class Thing {
  public int id;
  public ??? config;
}

How do you define config? Is it a class with foo or bar?
I needed to keep it as a string, since I’m just storing it in the database.

So let’s write it as

  public String config;

But this won’t work. You’ll likely get an error like this

Can not deserialize instance of java.lang.String out of START_OBJECT token at [Source: org.glassfish.jersey.message.internal.ReaderInterceptorExecutor$UnCloseableInputStream@6ed35c92; line: 1, column: 39] (through reference chain: Thing["config"])

You will need to write a Deserializer for it like this

import java.io.IOException;
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.core.TreeNode;
import com.fasterxml.jackson.databind.DeserializationContext;
import com.fasterxml.jackson.databind.JsonDeserializer;

public class KeepAsJsonDeserializer extends JsonDeserializer<String> {

    @Override
    public String deserialize(JsonParser jp, DeserializationContext ctxt)
            throws IOException, JsonProcessingException {

        TreeNode tree = jp.getCodec().readTree(jp);
        return tree.toString();
    }
}

Credit goes to StackOverflow

And then annotate the field or setter method with this (the @JsonDeserialize annotation comes from the com.fasterxml.jackson.databind.annotation package)

  @JsonDeserialize(using=KeepAsJsonDeserializer.class)
  public String config;

Angular + Nodejs Authentication with Passport & JWT

The codebase for this lesson can be found at ng-node-passport.kit

This builds off of the earlier work of nodejs-starter-kit. You should have a firm grasp of angular and nodejs from this example before reading on.

The authentication is built from passportjs and jwt. Let’s first talk about these two.

PassportJs

This is an authentication middleware for Node.js. It has many ways to authenticate users (they call these “Strategies”). You can use it to authenticate users via their Facebook, Google, or Twitter account for example.

In this template, we use a basic authentication scheme where the user database is stored in house (specifically in a mysql database). We leverage the passport-http Strategy for this.

There are other strategies we could have employed with various trade-offs. Here is a comparison of a shortlist of them on StackOverflow.

JWT

JWT stands for JSON Web Tokens. It’s an open industry standard to represent claims between parties. For us, this simply means we can log in once and then pass a token to maintain our authentication session.

Authentication Model

Before we dive into the code, let’s understand what we will be doing.

Registration Workflow

The user submits their username and password, which travel in plaintext across the wire (so please use SSL/HTTPS).

In the backend, we salt and hash the password before storing it in the database, so the original plaintext password can (virtually) never be recovered.

Login Workflow

On the login page, the user submits their username and password, which travel in plaintext across the wire (so please use SSL/HTTPS). In our implementation, this is done by creating an Authorization header and base64-encoding username+":"+password so it looks something like this:

Authorization: Basic dXNlcjp0ZXN0
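
That Basic value is nothing secret; it’s just base64, and you can reproduce it in Node:

```javascript
// base64-encode "user:test" exactly as a client does for Basic auth.
// Note this is encoding, not encryption: anyone who sees it can reverse it,
// which is why HTTPS is non-negotiable here.
const header = 'Basic ' + Buffer.from('user' + ':' + 'test').toString('base64');
console.log(header); // Basic dXNlcjp0ZXN0
```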

The backend compares the credentials with the ones stored in the database. If there is a match, then we sign a JWT whose payload contains the username, and send that back.

You should note that while you can put anything in the payload, you should never put anything sensitive here, like a password. The JWT token is signed but not encrypted. This means anyone can read it. The signing only prevents someone from modifying it. If you wish to put sensitive info in the payload, you should look into JWE, an encrypted implementation of JWT. See this article
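
It’s easy to see why: the payload is just a base64url-encoded JSON segment, readable without any secret. A small demonstration (the token is built inline for illustration):

```javascript
// A JWT is header.payload.signature, each segment base64url-encoded.
// Decoding the payload requires no key at all.
const payload = { user: 'kane', exp: 1700000000 };
const middle = Buffer.from(JSON.stringify(payload)).toString('base64url');
const token = 'hhh.' + middle + '.sss'; // fake header/signature, irrelevant here

const readBack = JSON.parse(Buffer.from(token.split('.')[1], 'base64url').toString());
console.log(readBack.user); // kane -- no secret needed
```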

When the frontend receives this token, it should store it somewhere. On future requests, it should include it as an Authorization header like so

Authorization: Bearer eyJdbGciOiJIU44I1NIsInR5cCI6IkXXVCJ9

When new requests are made, the backend first verifies and decodes the JWT token and pulls out the payload with the user info. Because the backend can check that the token was signed with its own secret, it can be sure that no one has modified the token and the user is real.

Backend Code

server.js

This is the main server file.

app.use(auth.passport.initialize());

This line is needed to initialize passport.

app.post('/auth/register'
        , auth.registerUser);
app.post('/auth/login'
//        , auth.passport.authenticate('basic',{session:false}) // causes challenge pop-up on 401
        , auth.authenticateViaPassport
        , auth.generateJWT
        , auth.returnAuthResponse
        );

These two routes define the two workflows we discussed above. We’ll dive into each of them below in shared/auth.js.

One thing I want to bring your attention to now is the /auth/login route. It has 3 express middlewares addressing the request: authenticateViaPassport, generateJWT, and returnAuthResponse. The way express middlewares work is that each layer handles the request and then the next layer continues the processing. Any layer can reject the request, stopping the processing at any point.

app.get('/stuff/:stuffId'
      , auth.ensureAuthenticatedElseError
      , myroute.getStuff);

To protect an API by requiring authentication, simply add the auth.ensureAuthenticatedElseError middleware. We’ll go into the code in the next section.

shared/auth.js

var jwt = require('jsonwebtoken');
const JWT_EXPIRATION = (60 * 60); // one hour
var uuidv4 = require('uuid/v4');
const SERVER_SECRET = uuidv4();

jsonwebtoken is the node module you’ll need to sign and verify jwt tokens. The expiration is encoded into the token and taken into account when verifying whether it’s still valid.

You need a SERVER_SECRET to sign and verify the tokens. It can be any string you’d like. It’s like a password so don’t give it out. So what I did here was to generate a random one every time the server starts up. (It means tokens are not valid after a restart, but you get a little more security)

exports.passport = require('passport');
var BasicStrategy = require('passport-http').BasicStrategy;

These lines bring in the passport dependencies.

exports.registerUser = function(req, res) {
  var userid = req.body.userid;
  var plaintextPassword = req.body.password;

  bcrypt.hash(plaintextPassword, saltRounds)
    .then(function(hash) {
      var sql = 'INSERT INTO user(userid,passhash) VALUES(?,?)';
      return dbp.pool.query(sql, [userid, hash]);
    })
...

The registration method takes the credentials, hashes the password and stores it in the DB. Our user table consists of a userid and a passhash column.

The next 4 snippets of code implement the login functions.

exports.authenticateViaPassport = function(req, res, next) {
  exports.passport.authenticate('basic',{session:false},
    function(err, user, info) {
      if(!user){
        res.set('WWW-Authenticate', 'x'+info); // change to xBasic
        res.status(401).send('Invalid Authentication');
      } else {
        req.user = user;
        next();
      }
    }
  )(req, res, next);
};

This first middleware is not always necessary. The /auth/login route could have directly called “auth.passport.authenticate(‘basic’,{session:false})” instead of this. But on authentication failure, it sends back a 401 HTTP status along with the header “WWW-Authenticate: Basic …” These two things cause browsers to pop up a username/password dialog. I wanted to handle 401’s in a custom way, so this layer simply changes the WWW-Authenticate header, preventing the dialog. We could also have changed the 401 status code.

The auth.passport.authenticate() method eventually calls this:

exports.passport.use(new BasicStrategy(
  function(userid, plainTextPassword, done) {
    var sql = 'SELECT *'
            +' FROM user'
            +' WHERE userid=?';

    dbp.pool.query(sql, [userid])
      .then(function(rows) {
        if( rows.length ) {
          var hashedPwd = rows[0].passhash;
          return bcrypt.compare(plainTextPassword, hashedPwd);
        } else {
          return false;
        }
      })
...

This takes the login credentials, compares its hashed form with the database entry and returns the user to the next layer.

exports.generateJWT = function(req, res, next) {
  var payload = {
      exp: Math.floor(Date.now() / 1000) + JWT_EXPIRATION
    , user: req.user,
//    , role: role
  };
  req.token = jwt.sign(payload, SERVER_SECRET);
  next();
}

If we get to the generateJWT layer, that means the credentials were valid. So we now sign and generate a JWT token. Notice we put the user into the payload. You can put anything you’d like in it, including permissions and roles.

exports.returnAuthResponse = function(req, res) {
  res.status(200).json({
    user: req.user,
    token: req.token
  });
}

This simply returns the user and JWT token to the login request. This completes the login implementation.

exports.ensureAuthenticatedElseError = function(req, res, next) {
  var token = getToken(req.headers);
  if( token ) {
    var payload;
    try {
      // use jwt.verify, not jwt.decode: decode does not check the signature
      payload = jwt.verify(token, SERVER_SECRET);
    } catch(err) {
      payload = null; // invalid or expired token
    }
    if( payload ) {
//      console.log('payload: ' + JSON.stringify(payload));
      // check if user still exists in database if you'd like
      res.locals.user = payload.user;
      next();
...

This middleware is used whenever you want to protect an API. It extracts the JWT from the Authorization header and validates it. If successful, you have the payload that contains the user. I stick this in res.locals so the next middleware/handler has access to it. If you look inside routes/sampleroute.js, specifically at the getStuff() method, you’ll see the line that accesses it. You may want to return resources specific to the given user.

With this backend, you can actually test it using Postman or curl. You can send the appropriate JSON messages and Authorization headers to simulate registration, login, and accessing protected APIs.

For example, you can try the following procedure:

  1. POST /auth/register with {userid:"user", password:"test"}
    This should create an entry in your database with a hashed password
  2. POST /auth/login with an Authorization header “Basic dXNlcjp0ZXN0” (base64 of “user:test”)
    This should give you back a json response like
    {user:{userid: “user”}, token: “[JWT_TOKEN]”}
  3. GET /stuff/1 with Authorization header “Bearer [JWT_TOKEN]” (copy JWT_TOKEN from step 2)
    This should give you back a list of stuff
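If you want to produce that Basic header yourself rather than copying it, it's just the base64 encoding of userid:password (here the user/test credentials from step 1):

```javascript
// In the browser the frontend uses btoa(); in Node you can use Buffer.
const basic = 'Basic ' + Buffer.from('user:test').toString('base64');
console.log(basic);  // → Basic dXNlcjp0ZXN0
```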

Frontend Code

We use angular to handle the authentication handshake. There are some angular mechanisms that make this easy to do.

I’ll skip the ng/register/controller.js and ng/login/controller.js. They’re trivial and pretty self-explanatory.

The core of the authentication is in assets/js/auth.js.

assets/js/auth.js

The AuthService factory implements the register, logIn, and logOut functions.

  authService.logIn = function(userid, password) {
    return $http({
      method: 'POST',
      url: '/auth/login',
      headers: {
        'Authorization': 'Basic ' + btoa(userid + ':' + password)
      }
    })
      .then(function(resp) {
        var user = null;
        if( resp.data ) {
          user = resp.data.user;
          UserSession.create(user, resp.data.token);
        }
        return user;
      })
  };

In logIn(), notice we set the Authorization header, passing a base64-encoded userid and password. Remember that on successful authentication, we get back a user and token. We store that in a UserSession service. For now, this is all you need to know. We’ll dive deeper into it later.

Next, I want to direct your attention to the http interceptor:

app.factory('authHandler', [
    '$q'
  , '$window'
  , '$location'
  , 'UserSession'
  , function($q
           , $window
           , $location
           , UserSession
           ) {
    return {
      request: function(config) {
        config.headers = config.headers || {};
        var token = UserSession.getToken();
        if( token
        &&  !config.headers.Authorization
        ) {
            config.headers.Authorization = 'Bearer ' + token;
        }
        return config;
      },
      responseError: function(rejection) {
        if (rejection != null && rejection.status === 401) {
          UserSession.destroy();
          $location.url("/login");
        }

        return $q.reject(rejection);
      }
    };
}]);

This code intercepts every outgoing HTTP request, as well as any response that results in an error.

The request interceptor checks if we have a token from UserSession and attaches it as an Authorization header.

The response error interceptor looks for 401 authentication errors. Remember that our backend returns a 401 when an unauthenticated user hits a protected API. Here, we intercept the 401 and redirect the user to the /login page.

Now let’s jump back to the UserSession. First at the top of the auth.js file, you’ll notice this

const USER_ACTIVE_UNTIL = {
  pageRefresh: 'localvar',
  sessionExpires: 'sessionStorage',
  forever: 'localStorage'
}

const SESSION_PERSISTANCE = USER_ACTIVE_UNTIL.forever;

You can manually set how the login session should be persisted. What does this mean and why does it matter? The user and token can be stored in a local variable, in sessionStorage, or in localStorage, and each choice gives the session a different lifetime.

If you store it in local variables, as soon as you refresh the page, it all goes away. Likewise, if you open another tab or close/reopen your browser, you will no longer be logged in because the user and token are gone. If you want this behavior, set SESSION_PERSISTANCE to USER_ACTIVE_UNTIL.pageRefresh.

If you set SESSION_PERSISTANCE to USER_ACTIVE_UNTIL.sessionExpires, then as long as you’re in the browser tab, you’re logged in. You can refresh the page and still be logged in. However if you close the tab, or ctrl-click to open a new tab, it won’t carry the authentication data and you’ll have to relogin.

If you set SESSION_PERSISTANCE to USER_ACTIVE_UNTIL.forever, then you’ll always be logged in (until your token expires, currently one hour by default). You can close your browser and you’ll be logged in when you go back. You can ctrl-click to open other tabs and those will share your login session as well. This is the default behavior configured.
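Conceptually, UserSession just switches which store backs it. Here is a hedged sketch (the function and key names are hypothetical, not the template's actual code; the real service lives in assets/js/auth.js):

```javascript
// Sketch: pick a backing store based on the persistence setting.
// A plain object survives nothing ('localvar'); sessionStorage survives
// refreshes within a tab; localStorage survives browser restarts.
function makeSession(persistence) {
  var mem = {};
  var store =
      persistence === 'localStorage'   ? window.localStorage
    : persistence === 'sessionStorage' ? window.sessionStorage
    : { setItem:    function(k, v) { mem[k] = v; },
        getItem:    function(k)    { return (k in mem) ? mem[k] : null; },
        removeItem: function(k)    { delete mem[k]; } };

  return {
    create: function(user, token) {
      store.setItem('user', JSON.stringify(user));
      store.setItem('token', token);
    },
    getToken: function() { return store.getItem('token'); },
    destroy: function() {
      store.removeItem('user');
      store.removeItem('token');
    }
  };
}
```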

One last thing to note about this model, which is less about authentication and more about display. You’ll probably want to display the current user, and notify widgets on your page when the user logs in and out.

I followed the technique here for this model. It involves broadcasting logins, logouts, and failures which can be seen in ng/userCtrl/controller.js. This controller also holds the user in its scope. And most importantly, this controller is bound to the top-level node, which is the <body> of the index.html page.

From this model, you can have various components listen for the broadcast messages and do something appropriate. e.g. once user logs in, expose user controls or perform some other action.

Having the controller at the top level with a $scope.user means you can access the user info from any of your pages. (We could also have used $rootScope, but I try not to use global variables in my apps whenever possible.)

I hope this description has helped you understand and apply authentication to your application. You can use the template and build your app from it, or copy the relevant components over (primarily the backend auth.js and frontend auth.js). Keep in mind also that since the protocol is standard and over HTTP, there’s nothing that stops you from replacing the frontend or backend here. You can swap out the angular frontend with something else that adds in the Authorization headers appropriately. Similarly, you can swap out the nodejs backend with something that signs and verifies jwt tokens.
