Migrating Maven artifacts from Nexus to S3

A while ago, I set up a Nexus server to host my company's private artifacts: both our in-house projects and jars that weren't available on Maven Central. The system standardizes the dependencies we use, and it has served us well.

But recently, our physical server has been going down, which means our developers can't compile. We thought about hosting Nexus on Amazon EC2, but that would incur a monthly charge for a 24/7 server that would be used only a fraction of the time.

I discovered that we can host our artifacts (privately or publicly) on Amazon S3. S3 is a storage service, basically a data repo for any kind of file. Maven artifacts are just files, so it's a perfect match.

There are several maven plugins out there that’ll let you deploy and download artifacts to/from S3. I used spring-projects/aws-maven.

The basic instructions are in the README of that github project, but it is missing a crucial configuration element and doesn’t show how you generate the access keys. I’ll provide all the instructions below for completeness.

Benefits of S3 repo vs Nexus

Before the How, let’s go into the Why.

First is cost. S3 storage is cheaper than a running EC2 instance. Current S3 pricing is $0.023/GB-month. Say you've got 10GB of artifacts; that's $0.23/month. Current on-demand EC2 pricing for a medium instance is $0.0464/hr, which comes to $33.41/month. With reserved instances, you get an average savings of 40%, but that's still $20.05/month, or nearly 90x the price of S3 storage.

Second, and perhaps even more important, is reliability and availability. Your own EC2 server could get hacked unless you're diligent about keeping up with security patches, and it can go down for maintenance or other things you forgot about. You'd also have to set up your own backup solution. With S3, I think it's far less likely to go down, and you don't have to worry about redundancy or backups. It will just work when you need it.

I should point out that Nexus has some benefits as well. It has a nice web console which has its own access controls and search feature. With S3, you’ll manage access through IAM services but I don’t believe there’s a way to search S3 Buckets. Nexus will also create the appropriate metafiles around each artifact, which I’ll talk about below. S3 is just a plain storage service and knows nothing about maven artifacts.

You’ll have to weigh the pros and cons yourself. If you decide on S3, continue reading…

Set up your S3 Bucket

  1. In your AWS console, select the S3 service.
  2. Select Create bucket and follow the instructions.
  3. Once you have your bucket, create two top-level folders in it called "release" and "snapshot".

Update your maven settings.xml

<settings>
  ...
  <servers>
    ...
    <server>
      <id>aws-release</id>
      <username>0123456789ABCDEFGHIJ</username>
      <password>0123456789abcdefghijklmnopqrstuvwxyzABCD</password>
    </server>
    <server>
      <id>aws-snapshot</id>
      <username>0123456789ABCDEFGHIJ</username>
      <password>0123456789abcdefghijklmnopqrstuvwxyzABCD</password>
    </server>
    ...
  </servers>
  ...
</settings>

The access key should be used to populate the username element, and the secret access key should be used to populate the password element.

Here's how you get your access key and secret key (currently):

  1. In your AWS console, select the IAM service.
  2. On the left panel, select Users.
  3. Select your user from the list.
  4. In the tabs section, select Security credentials.
  5. Click Create access key.

Hitting that button will pop up a dialog with your Access key ID (for username) and Secret access key (for password).

Next, set up the repository where artifacts/dependencies are to be downloaded

<settings>
  ...
  <profiles>
    ...
    <profile>
        <id>aws-release-profile</id>
        <activation>
            <activeByDefault>true</activeByDefault>
        </activation>
        <repositories>
            <repository>
                <id>aws-release</id>
                <url>s3://<BUCKET>/release</url>
            </repository>
            <repository>
                <id>aws-snapshot</id>
                <url>s3://<BUCKET>/snapshot</url>
            </repository>
        </repositories>
    </profile>
    ...
  </profiles>
  ...
</settings>

If you have any remnants of your old repository, you can remove or comment them out now. You may find them within the <servers>, <profiles>, or <mirrors> sections.

Update your project’s pom.xml

Include the Maven extension that will do the work of communicating with S3, deploying and downloading artifacts.

<project>
  ...
  <build>
    ...
    <extensions>
      ...
      <extension>
        <groupId>org.springframework.build</groupId>
        <artifactId>aws-maven</artifactId>
        <version>5.0.0.RELEASE</version>
      </extension>
      ...
    </extensions>
    ...
  </build>
  ...
</project>

Add this so you can publish your project to S3

<project>
  ...
  <distributionManagement>
    <repository>
      <id>aws-release</id>
      <name>AWS Release Repository</name>
      <url>s3://<BUCKET>/release</url>
    </repository>
    <snapshotRepository>
      <id>aws-snapshot</id>
      <name>AWS Snapshot Repository</name>
      <url>s3://<BUCKET>/snapshot</url>
    </snapshotRepository>
  </distributionManagement>
  ...
</project>

Migrating your repository to S3

To migrate your artifacts to S3, simply copy the files along with their folder structure to S3. You can grab the folders and files directly from your existing Nexus repository or from your local repository.

To copy the entire contents of your ~/.m2/repository folder into s3://<BUCKET>/release, go into the release folder and click Upload. (If you have the AWS CLI installed, aws s3 sync ~/.m2/repository s3://<BUCKET>/release will do the same bulk copy from the command line.)

When you need to add additional artifacts, simply create the folder structure under s3://<BUCKET>/release and upload the jar into it. This is one drawback compared with Nexus, which automatically creates a bunch of metafiles: a pom file to describe each artifact and sha1 files to verify its integrity. Without these files, others will get a warning on download, which they can ignore. Or you can generate these files yourself.
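If you do want to generate the checksum files yourself, here's a quick Node one-off that computes the sha1 for an artifact and writes it alongside (a sketch; the script name and usage are my own, not part of any tooling):

var crypto = require('crypto');
var fs = require('fs');

// usage: node sha1gen.js path/to/artifact.jar
var file = process.argv[2];
var sha1 = crypto.createHash('sha1')
  .update(fs.readFileSync(file))
  .digest('hex');

// Maven looks for the hex digest in a sibling file named <artifact>.sha1
fs.writeFileSync(file + '.sha1', sha1);
console.log(file + '.sha1 -> ' + sha1);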

Clear your local repository

You'll need to clear out your old local repository or maven will complain that the cache is stale. Locate your local repository (on my machine it was under ~/.m2/repository). Then just delete it. Or, if you're not comfortable with that, rename or move it until you're confident that your new setup is working as expected.

That’s it. Now try your mvn compile, package, deploy commands.


Deserializing json to java objects (but not all of it)

This post is about deserializing with Java and JAX-RS (Jersey) when you have a piece of json that's amorphous (not well defined).

You have some json that generally looks like this

{
  "id": 1,
  "config": {
    "foo": "lalala"
  }
}

But the “config” is actually amorphous, so it might also look like this

{
  "id": 1,
  "config": {
    "bar": 15
  }
}

So you build a REST API like this

@POST
@Path("/api")
public void doStuff(Thing thing) {
  ...
}

And what does “Thing” look like?

class Thing {
  public int id;
  public ??? config;
}

How do you define config? Is it a class with foo or bar?
I needed to keep it as a string, since I’m just storing it in the database.

So let’s write it as

  public String config;

But this won’t work. You’ll likely get an error like this

Can not deserialize instance of java.lang.String out of START_OBJECT token at [Source: org.glassfish.jersey.message.internal.ReaderInterceptorExecutor$UnCloseableInputStream@6ed35c92; line: 1, column: 39] (through reference chain: Thing["config"])

You will need to write a Deserializer for it like this

import java.io.IOException;
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.core.TreeNode;
import com.fasterxml.jackson.databind.DeserializationContext;
import com.fasterxml.jackson.databind.JsonDeserializer;

public class KeepAsJsonDeserializer extends JsonDeserializer<String> {

    @Override
    public String deserialize(JsonParser jp, DeserializationContext ctxt)
            throws IOException, JsonProcessingException {

        // read the entire subtree for this field, then write it back out as a string
        TreeNode tree = jp.getCodec().readTree(jp);
        return tree.toString();
    }
}

Credit goes to StackOverflow

And then annotate the field or setter method with this

  @JsonDeserialize(using=KeepAsJsonDeserializer.class)
  public String config;

Angular + Nodejs Authentication with Passport & JWT

The codebase for this lesson can be found at ng-node-passport.kit

This builds off of the earlier work of nodejs-starter-kit. You should have a firm grasp of angular and nodejs from that example before reading on.

The authentication is built from passportjs and jwt. Let’s first talk about these two.

PassportJs

This is an authentication middleware for Node.js. It has many ways to authenticate users (they call these “Strategies”). You can use it to authenticate users via their Facebook, Google, or Twitter account for example.

In this template, we use a basic authentication scheme where the user database is stored in-house (specifically, in a mysql database). We leverage the passport-http Strategy for this.

There are other strategies we could have employed with various trade-offs. Here is a comparison of a shortlist of them on StackOverflow.

JWT

JWT stands for JSON Web Token. It's an open industry standard for representing claims between parties. For us, this simply means we can log in once and then pass a token around to maintain our authentication session.

Authentication Model

Before we dive into the code, let’s understand what we will be doing.

Registration Workflow

The user submits their username and password and it gets submitted in plaintext across the wire (so please use SSL/HTTPS).

In the backend, we salt and hash the password before storing it in the database, so the original plaintext password can (virtually) never be recovered.

Login Workflow

On the login page, the user submits their username and password and it gets submitted in plaintext across the wire (so please use SSL/HTTPS). In our implementation, this is done by creating an Authorization header and base64-encoding the username+":"+password so it looks something like this:

Authorization: Basic dXNlcjp0ZXN0

The backend compares the credentials with the ones stored in the database. If there is a match, we sign a JWT whose payload contains the username and send that back.

You should note that while you can put anything in the payload, you should never put anything sensitive there, like a password. The JWT is signed but not encrypted, which means anyone can read it; the signature only prevents someone from modifying it. If you wish to put sensitive info in the payload, you should look into JWE, an encrypted counterpart of JWT. See this article
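To see this for yourself, here's a quick sketch (plain Node, no libraries) that pulls the payload out of any JWT without knowing the secret:

// A JWT is three base64url segments: header.payload.signature.
// Reading the middle segment requires no secret at all.
function readJwtPayload(token) {
  var b64 = token.split('.')[1].replace(/-/g, '+').replace(/_/g, '/');
  return JSON.parse(Buffer.from(b64, 'base64').toString('utf8'));
}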

When the frontend receives this token, it should store it somewhere. On future requests, it should include it as an Authorization header like so

Authorization: Bearer eyJdbGciOiJIU44I1NIsInR5cCI6IkXXVCJ9

When new requests are made, the backend first verifies and decodes the JWT and pulls out the payload with the user info. At this point it can be sure that no one has modified the token and the user is real, since it verifies the signature it created itself.

Backend Code

server.js

This is the main server file.

app.use(auth.passport.initialize());

This line is needed to initialize passport.

app.post('/auth/register'
        , auth.registerUser);
app.post('/auth/login'
//        , auth.passport.authenticate('basic',{session:false}) // causes challenge pop-up on 401
        , auth.authenticateViaPassport
        , auth.generateJWT
        , auth.returnAuthResponse
        );

These two routes define the two workflows we discussed above. We’ll dive into each of them below in shared/auth.js.

One thing I want to bring your attention to now is the /auth/login route. It has 3 express middlewares addressing the request: authenticateViaPassport, generateJWT, and returnAuthResponse. The way express middlewares work is that each layer handles the request and then the next layer continues the processing. Any layer can reject the request, stopping the processing at any point.
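In generic form, the pattern looks like this (hypothetical names; each layer either calls next() to hand off the request or ends the response to stop the chain):

function checkSomething(req, res, next) {
  if (somethingWrong(req)) {
    // reject here and the layers behind us never run
    return res.status(401).send('Invalid Authentication');
  }
  next(); // all good, pass the request to the next layer
}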

app.get('/stuff/:stuffId'
      , auth.ensureAuthenticatedElseError
      , myroute.getStuff);

To protect an API by requiring authentication, simply add the auth.ensureAuthenticatedElseError middleware. We’ll go into the code in the next section.

shared/auth.js

var jwt = require('jsonwebtoken');
const JWT_EXPIRATION = (60 * 60); // one hour
var uuidv4 = require('uuid/v4');
const SERVER_SECRET = uuidv4();

jsonwebtoken is the node module you'll need to sign and verify JWTs. The expiration is encoded into the token and taken into account when verifying whether it's still valid.

You need a SERVER_SECRET to sign and verify the tokens. It can be any string you'd like; it's like a password, so don't give it out. What I did here was generate a random one every time the server starts up. (It means tokens are not valid after a restart, but you get a little more security.)

exports.passport = require('passport');
var BasicStrategy = require('passport-http').BasicStrategy;

These lines bring in the passport dependencies.

var bcrypt = require('bcrypt'); // assuming the bcrypt npm module (bcryptjs has the same API)
const saltRounds = 10; // bcrypt cost factor; the exact value here is assumed
// dbp (used below) is the app's mysql pool module, required elsewhere in the repo

exports.registerUser = function(req, res) {
  var userid = req.body.userid;
  var plaintextPassword = req.body.password;

  bcrypt.hash(plaintextPassword, saltRounds)
    .then(function(hash) {
      var sql = 'INSERT INTO user(userid,passhash) VALUES(?,?)';
      return dbp.pool.query(sql, [userid, hash]);
    })
...

The registration method takes the credentials, hashes the password, and stores it in the DB. Our user table consists of a userid and a passhash column.

The next 4 snippets of code implement the login functions.

exports.authenticateViaPassport = function(req, res, next) {
  exports.passport.authenticate('basic',{session:false},
    function(err, user, info) {
      if(!user){
        res.set('WWW-Authenticate', 'x'+info); // change to xBasic
        res.status(401).send('Invalid Authentication');
      } else {
        req.user = user;
        next();
      }
    }
  )(req, res, next);
};

This first middleware is not always necessary. The /auth/login route could have directly called "auth.passport.authenticate('basic',{session:false})" instead. But on authentication failure, that sends back a 401 HTTP status along with the header "WWW-Authenticate: Basic …", and those two things together cause browsers to pop up a username/password dialog. I wanted to handle 401s in a custom way, so this layer simply changes the WWW-Authenticate header, preventing the dialog. We could also have changed the 401 status code.

The auth.passport.authenticate() method eventually calls this:

exports.passport.use(new BasicStrategy(
  function(userid, plainTextPassword, done) {
    var sql = 'SELECT *'
            +' FROM user'
            +' WHERE userid=?';

    dbp.pool.query(sql, [userid])
      .then(function(rows) {
        if( rows.length ) {
          var hashedPwd = rows[0].passhash;
          return bcrypt.compare(plainTextPassword, hashedPwd);
        } else {
          return false;
        }
      })
...

This takes the login credentials, compares the password against the hash stored in the database, and passes the user on to the next layer.

exports.generateJWT = function(req, res, next) {
  var payload = {
      exp: Math.floor(Date.now() / 1000) + JWT_EXPIRATION
    , user: req.user
//    , role: role
  };
  req.token = jwt.sign(payload, SERVER_SECRET);
  next();
}

If we get to the generateJWT layer, that means the credentials were valid. So we now sign and generate a JWT token. Notice we put the user into the payload. You can put anything you’d like in it, including permissions and roles.

exports.returnAuthResponse = function(req, res) {
  res.status(200).json({
    user: req.user,
    token: req.token
  });
}

This simply returns the user and JWT token to the login request. This completes the login implementation.

exports.ensureAuthenticatedElseError = function(req, res, next) {
  var token = getToken(req.headers);
  if( token ) {
    var payload = null;
    try {
      // use verify, not decode: decode skips the signature and expiration checks
      payload = jwt.verify(token, SERVER_SECRET);
    } catch(err) {
      // invalid or expired token; payload stays null
    }
    if( payload ) {
//      console.log('payload: ' + JSON.stringify(payload));
      // check if user still exists in database if you'd like
      res.locals.user = payload.user;
      next();
...

This middleware is used whenever you want to protect an API. It pulls the jwt token out of the Authorization header and verifies it. If that succeeds, you have the payload that contains the user. I stick it in res.locals so the next request middleware/handler has access to it. If you look inside routes/sampleroute.js, specifically at the getStuff() method, you'll see the line that accesses it. You may want to return resources specifically relevant to the given user.
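As a sketch of what that route handler might look like (the shape is assumed from the post, not copied from the repo):

exports.getStuff = function(req, res) {
  var user = res.locals.user; // placed there by ensureAuthenticatedElseError
  // return only stuff that belongs to the authenticated user
  res.json({ owner: user.userid, stuffId: req.params.stuffId, stuff: [] });
};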

With this backend in place, you can actually test everything using Postman or curl. You can send the appropriate json messages and Authorization headers to simulate registration, login, and accessing protected APIs.

For example, you can try the following procedure:

  1. POST /auth/register with {userid:user, password:test}
    This should create an entry in your database with a hashed password
  2. POST /auth/login with an Authorization header "Basic dXNlcjp0ZXN0"
    This should give you back a json response like
    {user:{userid: "user"}, token: "[JWT_TOKEN]"}
  3. GET /stuff/1 with Authorization header "Bearer [JWT_TOKEN]" (copy JWT_TOKEN from step 2)
    This should give you back a list of stuff

Frontend Code

We use angular to handle the authentication handshake. There are some angular mechanisms that make this easy to do.

I’ll skip the ng/register/controller.js and ng/login/controller.js. They’re trivial and pretty self-explanatory.

The core of the authentication is in assets/js/auth.js.

assets/js/auth.js

The AuthService factory implements the register, logIn, and logOut functions.

  authService.logIn = function(userid, password) {
    return $http({
      method: 'POST',
      url: '/auth/login',
      headers: {
        'Authorization': 'Basic ' + btoa(userid + ':' + password)
      }
    })
      .then(function(resp) {
        var user = null;
        if( resp.data ) {
          user = resp.data.user;
          UserSession.create(user, resp.data.token);
        }
        return user;
      })
  };

In logIn(), notice we set the Authorization header, passing a base64-encoded userid and password. Remember that on successful authentication, we get back a user and token. We store that in a UserSession service. For now, this is all you need to know. We’ll dive deeper into it later.

Next, I want to direct your attention to the http interceptor:

app.factory('authHandler', [
    '$q'
  , '$window'
  , '$location'
  , 'UserSession'
  , function($q
           , $window
           , $location
           , UserSession
           ) {
    return {
      request: function(config) {
        config.headers = config.headers || {};
        var token = UserSession.getToken();
        if( token
        &&  !config.headers.Authorization
        ) {
            config.headers.Authorization = 'Bearer ' + token;
        }
        return config;
      },
      responseError: function(rejection) {
        if (rejection != null && rejection.status === 401) {
          UserSession.destroy();
          $location.url("/login");
        }

        return $q.reject(rejection);
      }
    };
}]);

This code intercepts every outgoing http request, and every response that results in an error.

The request interceptor checks if we have a token from UserSession and attaches it as an Authorization header.

The response error interceptor looks for 401 authentication errors. Remember that our backend throws a 401 when the user is not authenticated when using a protected API. Here, we intercept the 401 and redirect the user to the /login page.

Now let’s jump back to the UserSession. First at the top of the auth.js file, you’ll notice this

const USER_ACTIVE_UNTIL = {
  pageRefresh: 'localvar',
  sessionExpires: 'sessionStorage',
  forever: 'localStorage'
}

const SESSION_PERSISTANCE = USER_ACTIVE_UNTIL.forever;

You can manually set how the login session should be persisted. What does this mean and why does it matter? We can store the user and token in a local variable, in sessionStorage, or in localStorage.

If you store it in local variables, then as soon as you refresh the page, it all goes away. Certainly if you open another tab or close/reopen your browser, you will no longer be logged in because the user and token are gone. If you want this behavior, set SESSION_PERSISTANCE to USER_ACTIVE_UNTIL.pageRefresh.

If you set SESSION_PERSISTANCE to USER_ACTIVE_UNTIL.sessionExpires, then as long as you're in the browser tab, you're logged in. You can refresh the page and still be logged in. However, if you close the tab, or ctrl-click to open a new tab, it won't carry the authentication data and you'll have to log in again.

If you set SESSION_PERSISTANCE to USER_ACTIVE_UNTIL.forever, then you’ll always be logged in (until your token expires, currently one hour by default). You can close your browser and you’ll be logged in when you go back. You can ctrl-click to open other tabs and those will share your login session as well. This is the default behavior configured.
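A minimal sketch of how a UserSession factory could back those three modes (the shape is assumed; the repo's implementation may differ):

app.factory('UserSession', function() {
  var local = {}; // plain object backing the 'localvar' mode
  var store = SESSION_PERSISTANCE === USER_ACTIVE_UNTIL.pageRefresh ? local
            : SESSION_PERSISTANCE === USER_ACTIVE_UNTIL.sessionExpires ? window.sessionStorage
            : window.localStorage;

  return {
    create: function(user, token) {
      store.user = JSON.stringify(user);
      store.token = token;
    },
    getToken: function() { return store.token; },
    destroy: function() { delete store.user; delete store.token; }
  };
});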

One last thing to note about this model, which is less about authentication and more about display. You’ll probably want to display the user and notify widgets on your page when the user is logged in and out.

I followed the technique here for this model. It involves broadcasting logins, logouts, and failures which can be seen in ng/userCtrl/controller.js. This controller also holds the user in its scope. And most importantly, this controller is bound to the top-level node, which is the <body> of the index.html page.

From this model, you can have various components listen for the broadcast messages and do something appropriate. e.g. once user logs in, expose user controls or perform some other action.

Having the controller at the top level with a $scope.user means you can access the user info from any of your pages. (We could also have used $rootScope, but I try not to use global variables in my apps whenever possible.)

I hope this description has helped you understand and apply authentication to your application. You can use the template and build your app from it, or copy the relevant components over (primarily the backend auth.js and frontend auth.js). Keep in mind that since the protocol is standard and runs over HTTP, nothing stops you from replacing either side: you can swap out the angular frontend with anything else that adds the Authorization headers appropriately, and you can swap out the nodejs backend with anything else that signs and verifies jwt tokens.


AngularJS When processing the DOM takes a long time

You already know that you can add a progress spinner when an asynchronous process takes a long time. For example, say you’re loading data from a REST call.

$http.get(url)
  .then(function(res) {
    $scope.data = res.data;
  });

<div ng-hide="data">
  <i class="fa fa-spinner fa-pulse fa-3x fa-fw"></i>
</div>

<div ng-show="data">
  {{data}}
</div>

But let's say you need to reload that data, like paging. And suppose the data is HUGE, so it takes a really long time for the DOM to be updated even though you already have the data in memory. What happens is that when you initiate the action, the page seems to stall with the old data still there. This is bad UI because it makes the user feel like nothing is happening, and may cause them to click around.

Ideally, you want the old data to be cleared so that the spinner can appear again. But clearing the data before setting it won’t do what you’d expect.

function() {
  $scope.data = null;
  $scope.data = ...
}

What happens here is that $scope.data is set to null, then to the new data, but the page will not change until the new data is painted and ready to show to the user. Why didn’t setting $scope.data to null immediately make the spinner appear?

Angularjs only sees the changes at the end of the function. From Angular's view, $scope.data went from the old data to the new. It never saw the update to null.

Here’s how I get around that.

function() {
  $scope.data = null;

  setTimeout(function() {
    $scope.$apply(function() {
      $scope.data = ...
    })
  }, 100)
}

In the main thread of the code, I set $scope.data to null. Then I schedule the update with setTimeout(), which is asynchronous, so the function ends and angularjs updates. It now sees that $scope.data is null, so it erases the DOM and shows the spinner.

Eventually, the timer fires (set for 100ms in this case) and applies the function that sets $scope.data to the new data. While this is happening and the DOM is being painted, the user sees the spinning icon. Once it's complete, the spinner goes away and the data is shown.
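As an aside, AngularJS's $timeout service wraps setTimeout and triggers the digest for you, so the same trick can be written without the explicit $scope.$apply (computeNewData here is a hypothetical stand-in for your own loading code):

function reload() {
  $scope.data = null; // the digest at the end of this turn shows the spinner

  $timeout(function() {
    $scope.data = computeNewData(); // $timeout runs this inside a digest
  }, 100);
}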


docker-compose wait for dependencies

Sometimes, when you have an over-optimized system built on asynchrony, you have to jump through hoops to synchronize things again to make them work in practice. This is currently the case with docker-compose.

First, let me mention that this is with the docker-compose file format 3.x. Things are different with other versions.

Here’s the issue. Say you’ve got 2 microservices you want to dockerize, a tomcat container and a mysql container. docker-compose will start the two containers in arbitrary order, but you and I know that the tomcat webapp probably needs the database to be available. How do you do it?

The docker compose file allows for an argument called depends_on, which orders the services. But it doesn't actually wait for a service to be "ready"; I believe it just waits for the container to start and expose its service port. The database inside may not have finished starting up even when the port is exposed.

Stackoverflow has this solution. Unfortunately, the depends_on condition argument that was introduced in compose file version 2.x is deprecated in 3.x.

I believe Docker’s official philosophy is that your application is to be created in a robust manner. That is, if the database is not available (either it’s not started yet or it has gone down temporarily or permanently since starting), the application should handle it gracefully. It’s also very difficult for them to know what it means for a service to be started. e.g. What does it mean for mysql to be started, vs postgres, vs rabbitmq, vs tomcat, vs nodejs, vs some random service you wrote?

Nevertheless, I believe there ought to be a standard way that allows users to order container startup based on readiness.

So let’s talk about the solution that my coworker and I put together, built on other open source utilities.

The first thing I'd like you to take a look at is ufoscout's docker-compose-wait project. His wait.sh script will wait for a service to be ready before returning. It uses the netcat util to probe the service's port until it's available.

In your docker-compose.yml file, you just need to specify the container names and port of the other services that you depend on. See the WAIT_HOSTS environment variable. For example

version: '3'
services:

  mysql:
    image: "mysql:5.7"
    container_name: mysql
    ports:
      - "3306:3306"

  my_app:
    image: "mySuperApp:latest"
    environment:
      WAIT_HOSTS: mysql:3306

The wait.sh file is the key, but there are some caveats to using it. For instance, your container needs the nc (netcat) util; unless your base image is something like ubuntu, you're not likely to have it, and the standard tomcat image from Docker Hub doesn't. You'll also have to make sure you call the wait.sh script before calling the service startup script, e.g. "CMD /wait.sh && /MySuperApp.sh".

Here's what I did for the tomcat container. First, I located the tomcat image's Dockerfile; conveniently, it's referenced in the README.

Then I made several modifications to it

ADD https://raw.githubusercontent.com/ufoscout/docker-compose-wait/1.0.0/wait.sh /$CATALINA_HOME/wait.sh
RUN chmod +x /$CATALINA_HOME/wait.sh
RUN apt-get update && apt-get install -y netcat 
# CMD ["catalina.sh", "run"]
CMD ./wait.sh && catalina.sh run

Now I build the image.

docker build -t kanesee/tomcat-wait:9.0 .

The entire set of files and modifications can be found in this github repo.

So now we have a version of tomcat that will wait for other services to be ready before it starts. We just need to add the WAIT_HOSTS argument to docker-compose.yml to specify the name of the database container and port and then it’s ready to go.
If you need longer than the default 30 seconds, you can set WAIT_HOSTS_TIMEOUT to a longer period.

For your benefit, I pushed my tomcat-wait image to Docker Hub so that you don’t need to create your own image if you’re looking for a tomcat v9.0 that does this.

Otherwise, please follow this recipe and create your own “waiting” services. If you create one, I’d love to know about it. Please leave a link to your image in the comments section.


Dockerizing a Nodejs app

A quick tutorial on how to dockerize your nodejs app.

I have a node app contained in a folder called myNodeApp.


Under myNodeApp, create a file called Dockerfile. Here are the contents of that file

FROM node:latest
WORKDIR /app
# copy the app source into the image
COPY . /app
# the app listens on port 3333
EXPOSE 3333
CMD node server.js

If this directory is a git repo, like mine is, then you will want to create a file called .dockerignore to prevent your git info from being dockerized. Here are the contents of .dockerignore

.git
.gitignore
.DS_Store

My app takes configuration settings, and one way to provide them is through environment variables, which you can pass in when you create the container. You can use the -e flag to pass individual environment parameters, but I prefer to use a properties file.

So I created a docker.env file with my environment properties.
e.g.

MY_PROPERTY=myValue
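Inside the node app, each property passed this way shows up on process.env, e.g.

// reads "myValue" injected from docker.env when the container starts
var myProperty = process.env.MY_PROPERTY;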

Next we create the image (note that docker requires image names to be lowercase). In the myNodeApp folder run:

docker build -t mynodeapp .

Now that you have the docker image, you can instantiate a container like so

docker run -t -p 3333:3333 --env-file ./docker.env mynodeapp

Remember you can start and stop your container with this command:

docker start/stop [container_id]

Angular: Controls between parent and directives

If you need the parent controller to call a directive’s scope function, then this is how you need to set everything up. Stackoverflow

If you need a directive to call the parent’s scope function, then this is how you set them up. Stackoverflow


Tomcat ContextPath and Angularjs

An interesting obstacle arose when I tried to use Angularjs (NG) in Tomcat, specifically in regards to the path.

Typically, I’ve been using NG assuming everything starts at the root path. So my index.html is at http://domain.com/index.html.

However, in Tomcat, each webapp has its own starting path. So you may have a weather webapp that is at http://domain.com/weather/ and a clock webapp at http://domain.com/clock/.

How do you configure NG within tomcat?

You could hardcode your starting path. But imagine if you wanted to have two clock webapps in tomcat, renaming one to http://domain.com/clock1/ and the other to http://domain.com/clock2/.

Now you would have to change all the references of “clock” to “clock1” and “clock2” respectively. And there are many references…

In the index.html, you have to change all your <link …css> as well as all <script …js> references. You might think that you can outsmart this by changing the index.html to an index.jsp and using <%=request.getContextPath()%>.

But then you still have to change references of the hardcoded path in your .js files, such as your app.js that defines your module and routes.

And don’t even get me started on all your templates that reference .js and .css.

Well, let me jump straight to the short answer.

In your index.html, simply add this within your <head> tag

<head>
    <base href="/clock/" />
</head>

That’s it. Now all your relative paths will have that base href prefixed to them automatically.

So <link href="mystyle.css"> will automatically fetch from http://domain.com/clock/mystyle.css

<script src="myscripts.js"> will automatically fetch from http://domain.com/clock/myscripts.js

Even app.js will leverage this base when using relative url's.
e.g. $routeProvider.when('/', {templateUrl: 'home.html'})
will automatically reference http://domain.com/clock/home.html

This html standard can even be used on jsp pages; anything that renders html will take advantage of it. You could even generate the base tag with <%=request.getContextPath()%> so the path isn't hardcoded anywhere.


The power of Shadow DOM

I recently had to inject another site's webpage into a div in our own site. Back in early 2000, you'd use something like an <iframe>, but this is 2017. The requirements are the following:

  1. Pull the html source of the other site’s webpage
  2. Display it in our own webpage
  3. Make sure all styles are applied
    1. Ensure that styles don’t spill over to our own page’s DOM

Requirement 1: Pull the html source of the other site’s webpage

If you’ve done any web programming in the last decade, you know that simply pulling the html from another site from the frontend javascript is a big no-no, and blocked by cross-site-scripting restrictions. The trick is to have the backend webserver fetch the data and send it to the frontend. In a way, you’re proxying the data. This is a lesson for another time and not the focus of this post.
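For what it's worth, a minimal version of such a proxy might look like this (assuming Express and node-fetch; a real one should whitelist which URLs it will fetch):

var express = require('express');
var fetch = require('node-fetch');
var app = express();

app.get('/proxy', function(req, res) {
  // fetch the remote page server-side, then relay the raw html to our frontend
  fetch(req.query.url)
    .then(function(r) { return r.text(); })
    .then(function(html) { res.send(html); })
    .catch(function(err) { res.status(502).send(err.message); });
});

app.listen(3000);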

Requirement 2: Display it in our own webpage

This is actually pretty trivial once your backend webserver sends you the data. If you're using angular, make sure you're "trusting" the source using $sce.trustAsHtml(htmlString).
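For instance (the controller wiring here is assumed):

// mark the proxied html as trusted, then render it in the template
// with something like <div ng-bind-html="injectedHtml"></div>
$scope.injectedHtml = $sce.trustAsHtml(htmlString);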

Requirement 3: Make sure all styles are applied

But once you do this, you’ll soon realize that while the content is shown, the styling is missing. That’s because the .css files aren’t being fetched, and there’s really no good way of doing that.

You could parse the html and manually fetch the .css. But you’ll run into another problem. That site’s css now conflicts with your own site’s css and you get some weird mixup of styles.

If only there was a way to fetch the css and restrict it to the specific subtree of the DOM…

Shadow DOM

Here’s where this new (yet to be official) W3C standard comes in. As of this writing, Shadow DOM is only supported on Chrome 53.0+ and Safari 10.0+, and Android Webview 53.0+.

The idea is that you can create a subtree within the DOM, but it’s a special shadow subtree. What makes it special is that you can restrict css to just within this DOM.

Why is this such a big deal? Well, it solves the problem at hand. But it also solves a much more general problem.

Have you ever added an id attribute in a page only to find that you or someone else created that same id value on another page? That wasn't such a big deal back in 2010, but many popular frontend web frameworks (e.g. angularjs, backbone) consolidate all your pages into one and route between them. What that means is that id's can conflict. Even class-based css styles can conflict between two pages (how often have you created a class="value"... no? just me?)

Anyways, Shadow DOM creates a true separation of DOM subtrees, meaning developers can build webpages as individual modules and not worry about conflicting id's, classes, and styles. And with websites getting increasingly more complex, a finer, more modular approach is needed.

Shadow DOM isn’t official yet, but I would bet my right arm that it will be.

Here’s some angular code to see how simple it is in action.

<div id="shadowSubtree"></div>
...
// note: angular.element() with a CSS selector requires jQuery on the page;
// with plain jqLite, use document.querySelector('#shadowSubtree') instead
var shadow = angular.element('#shadowSubtree')[0].attachShadow({mode:'open'});
shadow.innerHTML = '<link rel="stylesheet" type="text/css" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css"><a class="btn">Bootstrap styled button</a>';


When you need things to line up

Suppose you have the following html

<div class="container">
  <div class="thing"></div>
  <div class="thing"></div>
  <div class="thing"></div>
</div>

If you want things to line up horizontally, you could use a tried and true method

.thing {
    float:left;
}

But if the width of the container is not big enough, these things will wrap to the next line.

e.g.

.container {
    width: 50px;
}

Sometimes, you want them to line up and then just get cut off where the container ends. For these cases, substitute these settings.

.container {
    white-space: nowrap;
}
.thing {
    display: inline-block;
}

If you actually wanted to see the rest of the things, then you could add a horizontal scroll bar

.container {
    overflow-x: scroll;
}