AWS Beanstalk Worker with Node.js and SQS

First I created a project for the Node.js worker. The first steps are identical to those for creating the Hapi.js site that publishes messages to the queue. Go through these three steps for the worker and then I'll continue from there.

  1. First create the web application which will act as our worker service. I gave mine the name testing-aws-sqs-worker; the site publishing to the queue I called testing-aws-sqs-site.
  2. Next add the dependencies needed, like mocha.
  3. Finally make sure the AWS environment variables are set appropriately (see the sketch just after this list).
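
Since step 3 is easy to get wrong, here's a minimal sketch (mine, not part of the original steps) of how the AWS SDK for Node.js picks those environment variables up. The region value is a placeholder, and depending on the SDK version in use it may also read AWS_REGION on its own.

// Sketch: wiring the AWS environment variables into the aws-sdk client.
// AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are read from the environment
// automatically; the region is set explicitly here as a fallback.
var AWS = require('aws-sdk');

AWS.config.update({ region: process.env.AWS_REGION || 'us-west-2' });

// The SQS client both the site and the worker will use.
var sqs = new AWS.SQS();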

…and now on to the security, configuration and worker-specific parts of this series…


Setting up a Hapi.js App that sends work to a Node.js AWS Worker via SQS


First I created a project for the node.js web application.

$ npm init
This utility will walk you through creating a package.json file.
It only covers the most common items, and tries to guess sane defaults.

See `npm help json` for definitive documentation on these fields
and exactly what they do.

Use `npm install <pkg> --save` afterwards to install a package and
save it as a dependency in the package.json file.

Press ^C at any time to quit.
name: (testing-aws-sqs-site)
version: (0.0.0) 0.0.1
description: This project that will feed data to the queue for the AWS SQS sample.
entry point: (index.js) server.js
test command: mocha
git repository: (https://github.com/Adron/testing-aws-sqs-site.git)
keywords: aws, sqs, elastic, elastic beanstalk, queue, worker
author: Adron Hall
license: (ISC) Apache 2.0
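
The piece of server.js that matters for this series is the bit that pushes work onto the queue. Here's a rough sketch (an assumption of mine, not code from the actual project) of what that could look like with the aws-sdk module; the queue URL, region and helper name are placeholders.

// Sketch: a small helper the Hapi.js route handler could call to hand work
// off to the SQS queue. SQS message bodies are strings, so the payload gets
// serialized to JSON first.
var AWS = require('aws-sdk');

var sqs = new AWS.SQS({ region: 'us-west-2' });
var queueUrl = 'https://sqs.us-west-2.amazonaws.com/123456789012/example-queue'; // placeholder

function sendWork(payload, callback) {
  sqs.sendMessage({
    QueueUrl: queueUrl,
    MessageBody: JSON.stringify(payload)
  }, callback);
}

module.exports = { sendWork: sendWork };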


Setting up an AWS SQS Queue for Use With Node.js Beanstalk Worker Instances

Before diving straight in, I'm going to outline the specific goals and what I am using to accomplish them. The goal is to have a simple web application that will get some element of data posted to a queue. The queue will then hold data that a worker service needs to process. As I step through each of these requirements I'll determine the actual push and pull mechanisms that will get the job done.
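
For the queue itself, either the SQS console or the SDK will do. As a hedged sketch, creating it programmatically with the aws-sdk module looks roughly like this; the queue name and region are placeholders of mine.

// Sketch: creating the SQS queue that the site will publish to and the
// worker will consume from. The returned QueueUrl is what both sides need.
var AWS = require('aws-sdk');

var sqs = new AWS.SQS({ region: 'us-west-2' });

sqs.createQueue({ QueueName: 'testing-aws-sqs-queue' }, function (err, data) {
  if (err) {
    return console.error('queue creation failed:', err);
  }
  console.log('queue url:', data.QueueUrl);
});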


Wintersmith Creating Documentation

I set out a few days ago to put together a documentation site. I had a few criteria for this site:

  1. A static site that I could push to Github to use with their github pages feature.
  2. The static site is generated from markdown.
  3. It just works. It’s easy to get it into a workflow without breaking the tool or breaking a solid workflow.

That was it, what I'd consider some pretty straightforward criteria. However it wasn't that easy, until it was. Here are a few of the issues I ran through on the way to getting a solid tool and a solid workflow working together. Beware, however, if you have fickle reading eyes: the following is a rant about what does and does not work.

[rant on]

Middleman Broken Ruby and Broken Gems

I have a MacBook Pro Retina 15″. The machine runs OS X Mavericks. I've had zero issues with this OS. It comes with Ruby 2 and some version of gems. My first attempt was to take a stab with middleman, the same static site builder used by many companies including Basho. Even though I ran into problems, which I detailed in "Basho – First Week Coding & Research Adventures…" and "Un-breaking OS-X Mountain Lion", eventually middleman mostly worked.

Well, I didn’t get to a working app very fast. Immediately Ruby 2 had issues and gemsets puked middleman everywhere. I then ran into some confusing permissions errors. About 15 minutes into this process of troubleshooting middleman I had flashbacks of the first few days at Basho and thought, “this is bullshit, something has to work better than this catastrofuck of software version conflicts“. So I dropped middleman dead.

Assemble, Assemble, Assemble…    ??!?#@$%! WTF!

I attempted assemble next for the Node.js stack. It looked to have a lot of promise. It uses grunt.js and a bunch of other tools to manage a static-site-generating, Bootstrap-using stack. The more I looked at it, however, the busier it seemed. Busy as in "I'm going to do more than three things so I'll maybe do none of them right".

Reading about assemble, I turned to another hacker slinging some code at the bar I sat at. She looked at the project and asked, "what's it supposed to do exactly? I get that it's a framework of tools but it doesn't exactly lay out what it is supposed to be doing besides arbitrarily managing some parts of the stack." That seemed reasonable to me.

Before I just tossed assemble.io to the trash heap of options I wanted to ask at least one more person. So the next day I asked my good friend and super genius Troy Howard. It was a short verdict, “drop that shit”.

That was enough for me, assemble was officially dead for this project.

Slate, This Seems Slick But…

I then took a stab at Slate. Orchestrate.io had just created some excellent documentation using the Slate solution. So I dove into this, getting a test site up and running rapidly. It seemed like a mostly viable solution until I started running into issues with how and where I wanted things displayed for the code samples and other material. It appeared that if I were going to use Slate, I'd be using it almost exactly as-is. I might borrow pieces of it in the future, even the layout to some degree, but for now I wanted something I could incorporate my own themes into as needed. As happy as I was with Slate, it just wasn't a great fit for now.

Where The Hell Are My Options, Jekyll?

At this point I was getting a little frustrated. I then went to a tried and true solution in jekyll. Jekyll is a pretty solid solution, with some bugs and oddball issues but nothing major. I started working with it and even began transitioning a jekyll project into my theme. Hacking a jekyll blog into a reasonable documentation solution seemed like the way to go.

But then I got a wild urge to see if there was anything else in Node.js land that I was missing. I really didn't want to sling a Ruby project if I didn't have to. I'd rather keep all the stacks around JavaScript for this particular set of projects. There's no reason to diverge when I'm just dealing with such simple, straightforward web projects. I'll diverge when something truly validates diverging, like doing some real math with a real functional language or something. Trading Node.js on one single project for a pseudo Ruby project for static site generation just didn't seem appealing. So I started looking around one more time.

Made in -34°C

Yup, -34 Celsius. That's about as cold as it gets.

The next solution I tried was Wintersmith. This solution appeared to have everything I'd been looking for feature-wise. It was a Node.js project, it generated static content, it could generate blogs as well as other things, it was simple, it had plugins, it was straightforward and more. I was a little paranoid after the solutions I'd fought my way through earlier, so I went to the only place that would ensure I'd have a solution I could be confident in. I went straight to the source!

I'll admit I took a peek at the package.json file before going headlong into the source. A quick perusal of the dependencies list looked ok.

  "dependencies": {
    "marked": "~0.3.0",
    "coffee-script": "~1.6.3",
    "async": "~0.2.9",
    "highlight.js": "~8.0.0",
    "jade": "~1.1.5",
    "ncp": "~0.5.0",
    "rimraf": "~2.2.6",
    "winston": "~0.7.2",
    "colors": "~0.6.2",
    "optimist": "~0.6.0",
    "minimatch": "~0.2.14",
    "mime": "~1.2.11",
    "js-yaml": "~3.0.1",
    "mkdirp": "~0.3.5",
    "chokidar": "~0.8.1",
    "server-destroy": "~1.0.0",
    "npm": "~1.3.24",
    "slugg": "~0.1.2"
  },
  "devDependencies": {
    "shelljs": "0.1.x"
  }

I immediately took note of a few things. The first was that there was actually a breakout of dev dependencies versus actual project dependencies. That's a good first sign. For the second, I just went through the list and checked the various library dependencies; there were a few I've played around with before that I trusted: highlight.js, coffee-script, async, js-yaml and npm were all cool by me. It didn't seem too crazy or out of whack. With that I went forth into the code with zero expectations…

The first file I dug into was the config.coffee file, which pointed out a few things I'd possibly want to tweak a little later, such as the port number and other settings the wintersmith preview server uses.

class Config
  ### The configuration object ###

  @defaults =
    # path to the directory containing content's to be scanned
    contents: './contents'
    # list of glob patterns to ignore
    ignore: []
    # context variables, passed to views/templates
    locals: {}
    # list of modules/files to load as plugins
    plugins: []
    # modules/files loaded and added to locals, name: module
    require: {}
    # path to the directory containing the templates
    templates: './templates'
    # directory to load custom views from
    views: null
    # built product goes here
    output: './build'
    # base url that site lives on, e.g. '/blog/'
    baseUrl: '/'
    # preview server settings
    hostname: null # INADDR_ANY
    port: 8080
    # options prefixed with _ are undocumented and should generally not be modified
    _fileLimit: 40 # max files to keep open at once
    _restartOnConfChange: true # restart preview server on config change

Second code file that looked interesting, the renderer.coffee code file.

fs = require 'fs'
util = require 'util'
async = require 'async'
path = require 'path'
mkdirp = require 'mkdirp'
{Stream} = require 'stream'

{ContentTree} = require './content'
{pump, extend} = require './utils'

if not setImmediate?
  setImmediate = process.nextTick

renderView = (env, content, locals, contents, templates, callback) ->
  setImmediate ->
    # add env and contents to view locals
    _locals = {env, contents}
    extend _locals, locals

    # lookup view function if needed
    view = content.view
    if typeof view is 'string'
      name = view
      view = env.views[view]
      if not view?
        callback new Error "content '#{ content.filename }' specifies unknown view '#{ name }'"
        return

    # run view
    view.call content, env, _locals, contents, templates, (error, result) ->
      error.message = "#{ content.filename }: #{ error.message }" if error?
      callback error, result

render = (env, outputDir, contents, templates, locals, callback) ->
  ### Render *contents* and *templates* using environment *env* to *outputDir*.
      The output directory will be created if it does not exist. ###

  env.logger.info "rendering tree:\n#{ ContentTree.inspect(contents, 1) }\n"
  env.logger.verbose "render output directory: #{ outputDir }"

  renderPlugin = (content, callback) ->
    ### render *content* plugin, calls *callback* with true if a file is written; otherwise false. ###
    renderView env, content, locals, contents, templates, (error, result) ->
      if error
        callback error
      else if result instanceof Stream or result instanceof Buffer
        destination = path.join outputDir, content.filename
        env.logger.verbose "writing content #{ content.url } to #{ destination }"
        mkdirp.sync path.dirname destination
        writeStream = fs.createWriteStream destination
        if result instanceof Stream
          pump result, writeStream, callback
        else
          writeStream.end result, callback
      else
        env.logger.verbose "skipping #{ content.url }"
        callback()

  items = ContentTree.flatten contents
  async.forEachLimit items, env.config._fileLimit, renderPlugin, callback

module.exports = {render, renderView}

Fairly straightforward code. It puts together the rendered content, and I noted a few key things. There was a solid parameter order that was repeated: env, content, locals, contents, templates, callback. Because of this it looked like the locals were set statically from configuration rather than resolved dynamically. This could bite me, but with this quick glance, at least I knew where and what was happening in the order of generation.

I then did a scan of the templates.coffee and a few other code files. Having gotten a fair idea of where and what was being done, I went looking for a quick start. Things looked pretty good, so I crossed my fingers and my rant ends here…

[/rant off]

So now that the rant mode was over, here’s what I did to make wintersmith my documentation solution. Most of this is in a state of flux as I automate and put more into the project to simplify the workflow.

Here’s how I got started super fast.

Step #1 Get Wintersmith running.

npm install wintersmith -g

Note that you’ll need to install it globally (thus the -g) and may need to install Wintersmith with sudo prepended to that command.

The next thing that I did was create a directory that I’d use to build the static generated contents. This material I’d put into a git repository on github (namely the deconstructed gh-pages repo). I’ll call this generically the root directory.

mkdir rootDirectory

After that I navigated into the rootDirectory and created a new Wintersmith Application.

wintersmith new myAppName

That now gives me a directory structure like this

  • rootDirectory
    • myAppName

Now that I have this, the app content, markdown, views and related templates are in myAppName. To view the app, I changed directories into myAppName and ran wintersmith preview like this

wintersmith preview

Opening up a browser I can navigate to http://localhost:8080 and see the fully rendered site. To publish the site, however, one needs to run wintersmith build, and there's one problem: I want the site to publish to the rootDirectory where the application content currently sits. To do this I have to edit the config.json file, just above the locals settings shown below…

{
  "locals": {
    "url": "http://localhost:8080",
    "name": "The Wintersmith's blog",
    "owner": "Someone",
    "description": "Ramblings of an immor(t)al demigod"
  }
}

I added an output key/value property to the file as shown below. It merely takes the build results and shifts them back a directory so they end up in the rootDirectory.

{
  "output": "../",
  "locals": {
    "url": "http://docs.deconstructed.io",
    "name": "Deconstructed Docs",
    "owner": "Adron Hall",
    "description": "This site provides the documentation around the Deconstructed API Services."
  },
  "plugins": [
    "./plugins/paginator.coffee"
  ],
  "require": {
    "moment": "moment",
    "_": "underscore",
    "typogr": "typogr"
  },
  "jade": {
    "pretty": true
  },
  "markdown": {
    "smartLists": true,
    "smartypants": true
  },
  "paginator": {
    "perPage": 6
  }
}

I also changed the perPage setting to 6, just so I could get a little more content on the main page eventually. There are also changes for the domain name and a few other parameters that I'll catch up on in the next blog entry.
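
With the output path pointed back a directory, running a build from inside myAppName drops the generated static site into the rootDirectory.

wintersmith build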

Summary

In my next blog entry I’ll cover a quick how-to on how to setup the CNAME in github pages to get the static wintersmith site up at a subdomain/domain name. I’ll also dive into setup with AWS Route 53, which generically applies to setting a gh-pages site up with any DNS provider. So subscribe and I’ll have that post in the next 1-2 days.

Mapping Domain Names with name.com, Elastic Beanstalk, Elastic Load Balancer and AWS Route 53

I finally wrapped up my name server and DNS mapping needs with Name.com, Route 53 and Elastic Beanstalk. Since this was a little confusing I thought a short write up was in order. Thanks to Evan @evandbrown for helping out!

The first thing needed is a delegation set of name servers for your DNS and name server provider. These can be found by creating a hosted zone. The way to do this is open up the AWS Management Console and navigate into the Route 53 management area. The Route 53 icon is under the Compute & Networking section on the management console.

Beanstalk, Route 53

Upon navigating to the Route 53 console area click on the Create Hosted Zones button.

Create Hosted Zone

When the zone is created, the delegation set can be found under the Hosted Zone Details. This delegation set now needs to be set up as the name servers with whoever the domain provider is, in this case name.com.

Delegation Set

Open up the management console for the name server administration and add each name server from the delegation set as the domain's name servers. Upon adding them the list should look something like this.

Name servers list built from the delegation set of the hosted zone.

Once the name servers are set up, they will need time to propagate. This could take a good solid chunk of time, likely somewhere in the hours range, and don't be surprised if it takes a little more than a day.

While the propagation starts navigate back to the AWS Management Console and open up the EC2 section of the console. On the right hand side of the Resources list there is a Load Balancers section. Click it.

Load Balancers

In this section there is a listing of all load balancers that have been created manually or by Elastic Beanstalk.

Load Balancers

Make note of the Load Balancer Name for selection in Route 53. This is what Route 53 needs to point an alias at so that incoming traffic reaches that particular Elastic Beanstalk application. In the image above there are 4 load balancers listed. The easiest way to prevent confusion is to take note of the load balancer name at the time of creation, but otherwise this list is the place to find them.

Record Set

Now go back to the hosted zone to set it up with the appropriate information. Create a new record with the appropriate name, in this case admin.deconstructed.io (no, it isn't live yet, I just set it up to test it out), pointing at an alias target. Just leave the Type set to A – IPv4 address and click the radio control so that Alias is set to Yes. In the alias target select the appropriate load balancer for the Elastic Beanstalk application (or whatever it points to).
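
For reference, the same record can be created programmatically. Here's a hypothetical sketch using the aws-sdk Route 53 client; the hosted zone IDs and load balancer DNS name below are placeholders, not the real values from this setup.

// Sketch: creating an A record aliased to the Elastic Beanstalk load balancer.
var AWS = require('aws-sdk');

var route53 = new AWS.Route53();

route53.changeResourceRecordSets({
  HostedZoneId: 'ZEXAMPLE123', // the hosted zone created earlier (placeholder)
  ChangeBatch: {
    Changes: [{
      Action: 'CREATE',
      ResourceRecordSet: {
        Name: 'admin.deconstructed.io',
        Type: 'A',
        AliasTarget: {
          HostedZoneId: 'ZELBEXAMPLE456', // hosted zone id of the load balancer (placeholder)
          DNSName: 'awseb-e-example.us-west-2.elb.amazonaws.com', // placeholder
          EvaluateTargetHealth: false
        }
      }
    }]
  }
}, function (err, data) {
  if (err) {
    return console.error('record set change failed:', err);
  }
  console.log('change submitted:', data.ChangeInfo.Status);
});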

That’s it, give it a few hours (or a day) and eventually the domain or subdomain will be pointed appropriately at the Elastic Beanstalk load balanced application.

Learning About Docker

Over the next dozen or so days I'll be ramping up on Docker: where my gaps are and where the project itself is going. I've been using it on and off and will have more technical content, but today I wanted to write a short piece about the what, where, who and how of Docker coming to be.

As an open source engine Docker automates deployment of lightweight, portable, resilient and self-sufficient containers that run primarily on Linux. Docker containers are used to contain a payload, encapsulate that and consistently run it on a server.

This server can be virtual, on AWS or OpenStack, in clusters, public instances or private, bare-metal servers or wherever one can get an operating system to run. I’d bet it would show up on an Arduino cluster one of these days.  ;)

Use cases for Docker include taking the packaging and deployment of applications and automating it into a simple container bundle. Another is building lightweight PaaS-style environments that scale up and down extremely fast. Another is automating testing, continuous integration and deployment, because we all want that. One more big use case is simply building resilient, scalable applications that can then be deployed to Docker containers and scaled up and down rapidly.

A Little History

The creators of Docker formed a company called dotCloud that provided PaaS Services. On October 29th, 2013 however they changed the name from dotCloud to Docker Inc to emphasize the focus change from the dotCloud PaaS Technology to the core of dotCloud, Docker itself. As Docker became the core of a vibrant ecosystem the founders of dotCloud chose to focus on this exciting new technology to help guide and deliver on an ever more robust core.

Docker Ecosystem from the Docker Blog. Hope they don't mind I linked it, it shows the solid lifecycle of the ecosystem.

The Docker community has been super active, with a dramatic number of contributors, well over 220 now, most of whom don't work for Docker and who have made a significant percentage of the commits to the code base. As far as the repo goes, it has been downloaded over 100,000 times, yup, over a hundred. thousand. times!!! It's container tech, I'm still impressed just by this fact! On Github the repo has thousands of starred observers and over 15,000 people are using Docker. One other interesting fact is the slice of languages, with a very prominent usage of Go.

Docker Language Breakout on Github

Overall the Docker project has exploded in popularity, which I haven’t seen since Node.js set the coder world on fire! It’s continuing to gain steam in how and in which ways people deploy and manage their applications – arguably more effectively in many ways.

Portland Docker Meetup.

The community is growing accordingly too, not just from a simple push by Docker/dotCloud itself, but actively through grassroots efforts. One has even sprung up in Portland: the Portland Docker Meetup.

So Docker, Getting Operational

The Loading Bay

One of the best ways to describe Docker (which the Docker team often uses, hat tip to the analogy!) and containers in general is to use a physical parallel, and one of the best examples is the shipping and freight industry. Before containers, ships, trains, trucks and buggies (ya know, the kind horses pulled) were all loaded by hand. There wasn't any standardization around the movement of goods except for a few, often frustrating tools like wooden barrels for liquids and bags for grains and other assorted things. They didn't mix well and often were stored in a way that caused regular damage to goods. This era is a good parallel to hosting applications on full hypervisor virtual machines or physical machines with one operating system. The operating system is kind of the holding bay or ship, with all the freight crammed inside haphazardly.

Manually Guiding Freight, To Hand Unload Later.

Shipping Yards, All of a Sudden Organized!

When containers were introduced, like the shiny blue one shown here, everything began a revolutionary change. The manpower needed dropped dramatically, injuries dropped, and shipping became more modular, with containers that were easy to fit together. To put it simply, shipping was revolutionized through this invention, and in the meantime we've all benefited in some way from this change. It parallels the change in container technology shifting the way we deploy and host applications.

A Flawlessly Rendered Container

Next post, coming up in just a few hours “Docker, Containers Simplified!”

Getting Distributed – BOOM! The Top 3 Course Selections

A few months ago I posted a poll to ask what courses I should put together next. I just wrapped up and am putting the final edits and finishing touches on a Pluralsight Course on distributed databases, focusing on Riak. On the poll the top three courses, by a decent percentage of votes included the following:

  1. Node.js Distributed Systems – Bringing the Node.js Nodes together for Distributed Nodes of Availability and Compute @ 12.14% of the vote.
    1. A Quick Intro to Node.js
    2. Introduction to Relevant Distributed Patterns
    3. How Does Node.js Fit Into the Distribution
    4. Working With Distributed Systems (AKA Avoiding a Big Ball of Mud)
    5. Build a Demo
  2. Distributed Systems Programming with Javascript @ 10.4% of the vote.
    1. Patterns for Distributed Programming
    2. …and I’m figuring the other sections out still for this one…  got ideas? It needs to encompass the client side as well as the non-client code side of things. So it’s sort of like the above course, but I’m focusing more on the periphery of what one deals with when dealing with developing on and around distributed systems as well as distributed systems themselves.
  3. Vagrant OS-X, Windows and Linux – how to build, manage and ship machines to use for development and recreation of production environments.
    1. Vagrant, What is it?
    2. OS-X, Linux and Windows
    3. Using Vagrant Machines
    4. Building Vagrant Dev Machines
    5. Vagrant the Universe!

Now I might flip this list, but either way they're all going to be super cool. So stay tuned and I'll be working these up into courses. The sub-bullets above are, so far, the basics of the curriculum I intend to put forward. Am I missing anything? Would you like to see anything specifically? Leave a comment and I'll be sure to get everything as packed in there as possible!!