Wintersmith Creating Documentation

I set out a few days ago to put together a documentation site. I had a few criteria for this site:

  1. A static site that I could push to GitHub to use with their GitHub Pages feature.
  2. The static site is generated from markdown.
  3. It just works. It’s easy to get it into a workflow without breaking the tool or breaking a solid workflow.

That was it, what I’d consider some pretty straightforward criteria. However, it wasn’t that easy, until it was. Here are a few of the issues I ran through on the way to getting a solid tool and a solid workflow working together. Beware, however, if you have fickle reading eyes: the following is a rant about what does and does not work.

[rant on]

Middleman: Broken Ruby and Broken Gems

I have a MacBook Pro Retina 15″. The machine runs OS X Mavericks. I’ve had zero issues with this OS. It comes with Ruby 2 and some version of gems. My first attempt was to take a stab with middleman, the same static site builder used by many companies including Basho. Even though I ran into problems, which I detailed in “Basho – First Week Coding & Research Adventures…” and “Un-breaking OS-X Mountain Lion”, eventually middleman mostly worked.

Well, I didn’t get to a working app very fast. Immediately Ruby 2 had issues and gemsets puked middleman everywhere. I then ran into some confusing permissions errors. About 15 minutes into this process of troubleshooting middleman I had flashbacks of the first few days at Basho and thought, “this is bullshit, something has to work better than this catastrofuck of software version conflicts”. So I dropped middleman dead.

Assemble, Assemble, Assemble… ??!?#@$%! WTF!

I attempted assemble next on the Node.js stack. It looked to have a lot of promise. It uses grunt.js and a bunch of other tools to manage a static-site-generating, Bootstrap-based stack. The more I looked at it, however, the busier it seemed. Busy as in “I’m going to do more than three things so I’ll maybe do none of them right”.

Reading about assemble, I turned to another hacker slinging some code at the bar I sat at. She looked at the project and asked, “What’s it supposed to do exactly? I get that it’s a framework of tools but it doesn’t exactly lay out what it is supposed to be doing besides arbitrarily managing some parts of the stack.” That seemed reasonable to me.

Before I just tossed assemble.io to the trash heap of options I wanted to ask at least one more person. So the next day I asked my good friend and super genius Troy Howard. It was a short verdict, “drop that shit”.

That was enough for me, assemble was officially dead for this project.

Slate, This Seems Slick But…

I then took a stab at Slate. Orchestrate.io had just created some excellent documentation using Slate. So I dove into this, getting a test site up and running rapidly. It seemed like a mostly viable solution until I started running into issues with how and where I wanted things displayed for the code samples and other material. It appeared that if I were going to use Slate, I’d be using it almost exactly as-is. I might borrow pieces of it in the future, even the layout to some degree, but for now I wanted something I could incorporate my own themes into as needed. All in all I was happy with Slate, it just wasn’t a great fit for now.

Where The Hell Are My Options, Jekyll?

At this point I was getting a little frustrated. I then went to a tried and true solution in Jekyll. Jekyll is a pretty solid solution, with some bugs and oddball issues but nothing major. I started working with it and even began transitioning a Jekyll project into my theme. Hacking a Jekyll blog into a reasonable documentation solution seemed like the way to go.

But then I got a wild urge to see if there was anything else in Node.js land that I was missing. I really didn’t want to sling a Ruby project if I didn’t have to. I’d rather keep all the stacks around JavaScript for this particular set of projects. There’s no reason to diverge when I’m just dealing with such simple, straightforward web projects. I’ll diverge when something truly validates diverging, like doing some real math with a real functional language or something. Trading Node.js on one single project for a pseudo-Ruby project just for static site generation didn’t seem appealing. So I started looking around one more time.

Made in -34°C

Yup, -34 Celsius. That’s about as cold as it gets. Click for the full size chart!

The next solution I tried was Wintersmith. This solution appeared to have everything I’d been looking for feature-wise. It was a Node.js project, it generated static content, it could generate blogs but other things too, it was simple, it had plugins, it was straightforward and more. I was a little paranoid after the solutions I’d fought my way through earlier, so I went to the only place that would ensure I’d have a solution I could be confident in. I went straight to the source!

I’ll admit I took a peek at the package.json file before going headlong into the source. A quick perusal of the dependencies list looked OK.

  "dependencies": {
    "marked": "~0.3.0",
    "coffee-script": "~1.6.3",
    "async": "~0.2.9",
    "highlight.js": "~8.0.0",
    "jade": "~1.1.5",
    "ncp": "~0.5.0",
    "rimraf": "~2.2.6",
    "winston": "~0.7.2",
    "colors": "~0.6.2",
    "optimist": "~0.6.0",
    "minimatch": "~0.2.14",
    "mime": "~1.2.11",
    "js-yaml": "~3.0.1",
    "mkdirp": "~0.3.5",
    "chokidar": "~0.8.1",
    "server-destroy": "~1.0.0",
    "npm": "~1.3.24",
    "slugg": "~0.1.2"
  },
  "devDependencies": {
    "shelljs": "0.1.x"
  }

I immediately took note of a few things. The first was that there was actually a breakout of dev dependencies versus actual project dependencies. That’s a good first sign. Second, I went through the list and checked the various library dependencies, and there were a few that I’d played around with before and trusted; highlight.js, coffee-script, async, js-yaml and npm were all cool by me. It didn’t seem too crazy out of whack. With that I went forth into the code with zero expectations…

The first file I dug into was config.coffee, which pointed out a few things I’d possibly want to tweak a little later, such as the port number and other settings the wintersmith server uses when running the preview server.

class Config
  ### The configuration object ###

  @defaults =
    # path to the directory containing content's to be scanned
    contents: './contents'
    # list of glob patterns to ignore
    ignore: []
    # context variables, passed to views/templates
    locals: {}
    # list of modules/files to load as plugins
    plugins: []
    # modules/files loaded and added to locals, name: module
    require: {}
    # path to the directory containing the templates
    templates: './templates'
    # directory to load custom views from
    views: null
    # built product goes here
    output: './build'
    # base url that site lives on, e.g. '/blog/'
    baseUrl: '/'
    # preview server settings
    hostname: null # INADDR_ANY
    port: 8080
    # options prefixed with _ are undocumented and should generally not be modified
    _fileLimit: 40 # max files to keep open at once
    _restartOnConfChange: true # restart preview server on config change
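
Since the values in a project's config.json map onto these defaults, tweaking the preview server is just a matter of adding the matching keys. A minimal sketch (my own example; the key names simply mirror the defaults above):

{
  "hostname": "localhost",
  "port": 3000
}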

The second code file that looked interesting was renderer.coffee.

fs = require 'fs'
util = require 'util'
async = require 'async'
path = require 'path'
mkdirp = require 'mkdirp'
{Stream} = require 'stream'

{ContentTree} = require './content'
{pump, extend} = require './utils'

if not setImmediate?
  setImmediate = process.nextTick

renderView = (env, content, locals, contents, templates, callback) ->
  setImmediate ->
    # add env and contents to view locals
    _locals = {env, contents}
    extend _locals, locals

    # lookup view function if needed
    view = content.view
    if typeof view is 'string'
      name = view
      view = env.views[view]
      if not view?
        callback new Error "content '#{ content.filename }' specifies unknown view '#{ name }'"
        return

    # run view
    view.call content, env, _locals, contents, templates, (error, result) ->
      error.message = "#{ content.filename }: #{ error.message }" if error?
      callback error, result

render = (env, outputDir, contents, templates, locals, callback) ->
  ### Render *contents* and *templates* using environment *env* to *outputDir*.
      The output directory will be created if it does not exist. ###

  env.logger.info "rendering tree:\n#{ ContentTree.inspect(contents, 1) }\n"
  env.logger.verbose "render output directory: #{ outputDir }"

  renderPlugin = (content, callback) ->
    ### render *content* plugin, calls *callback* with true if a file is written; otherwise false. ###
    renderView env, content, locals, contents, templates, (error, result) ->
      if error
        callback error
      else if result instanceof Stream or result instanceof Buffer
        destination = path.join outputDir, content.filename
        env.logger.verbose "writing content #{ content.url } to #{ destination }"
        mkdirp.sync path.dirname destination
        writeStream = fs.createWriteStream destination
        if result instanceof Stream
          pump result, writeStream, callback
        else
          writeStream.end result, callback
      else
        env.logger.verbose "skipping #{ content.url }"
        callback()

  items = ContentTree.flatten contents
  async.forEachLimit items, env.config._fileLimit, renderPlugin, callback

module.exports = {render, renderView}

Fairly straightforward code. It puts together the rendered content, and I noted a few key things. There was a consistent parameter order repeated throughout: env, content, locals, contents, templates, callback. It also looked like the view locals were set statically from configuration rather than resolved dynamically. This could bite me, but with this quick glance, at least I knew where and in what order generation happens.
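
To make that order concrete, here's a minimal sketch of a view function matching the way renderView calls it (my own illustration, not code from the Wintersmith source): the content item is bound to this, and calling back with a Buffer or Stream gets written to the output directory, while calling back with nothing skips the file.

module.exports = function helloView(env, locals, contents, templates, callback) {
  // `this` is the content item being rendered.
  // Returning a Buffer makes the renderer write a file for this item.
  callback(null, new Buffer('hello from ' + this.filename));
};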

I then did a scan of the templates.coffee and a few other code files. Having gotten a fair idea of where and what was being done, I went looking for a quick start. Things looked pretty good, so I crossed my fingers and my rant ends here…

[rant off]

So now that rant mode is over, here’s what I did to make Wintersmith my documentation solution. Most of this is in a state of flux as I automate and put more into the project to simplify the workflow.

Here’s how I got started super fast.

Step #1 Get Wintersmith running.

npm install wintersmith -g

Note that you’ll need to install it globally (thus the -g) and may need to prepend sudo to that command.

The next thing I did was create a directory to hold the statically generated content. This material I’d put into a git repository on GitHub (namely the deconstructed gh-pages repo). I’ll call this generically the root directory.

mkdir rootDirectory

After that I navigated into the rootDirectory and created a new Wintersmith Application.

wintersmith new myAppName

That now gives me a directory structure like this

  • rootDirectory
    • myAppName

Now that I have this, the app content, markdown, views and related templates are in myAppName. To view the app, I changed directories into myAppName and ran wintersmith preview like this

wintersmith preview

Opening up a browser I can navigate to http://localhost:8080 and see the fully rendered site. To publish the site, however, one needs to run wintersmith build, and there’s one problem: I want the site to publish to the rootDirectory, where the application content currently sits. To do this I have to edit the config.json file, just above the locals settings shown below…

{
  "locals": {
    "url": "http://localhost:8080",
    "name": "The Wintersmith's blog",
    "owner": "Someone",
    "description": "Ramblings of an immor(t)al demigod"
  }
}

I added an output key/value property to the file as shown. It merely takes the build results and shifts them back a directory so they end up in the rootDirectory.

{
  "output": "../",
  "locals": {
    "url": "http://docs.deconstructed.io",
    "name": "Deconstructed Docs",
    "owner": "Adron Hall",
    "description": "This site provides the documentation around the Deconstructed API Services."
  },
  "plugins": [
    "./plugins/paginator.coffee"
  ],
  "require": {
    "moment": "moment",
    "_": "underscore",
    "typogr": "typogr"
  },
  "jade": {
    "pretty": true
  },
  "markdown": {
    "smartLists": true,
    "smartypants": true
  },
  "paginator": {
    "perPage": 6
  }
}

I also changed the perPage setting to 6, just so I could eventually get a little more content on the main page. There are also the changes for the domain name and a few other parameters that I’ll catch up on in the next blog entry.
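
With that output setting in place, publishing becomes a short loop: build, then commit and push the rootDirectory to GitHub. Roughly, the workflow looks like this (a sketch, assuming rootDirectory is already a clone of the gh-pages repository):

cd rootDirectory/myAppName
wintersmith build
cd ..
git add -A
git commit -m "Rebuild documentation site"
git push origin gh-pages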

Summary

In my next blog entry I’ll cover a quick how-to on how to setup the CNAME in github pages to get the static wintersmith site up at a subdomain/domain name. I’ll also dive into setup with AWS Route 53, which generically applies to setting a gh-pages site up with any DNS provider. So subscribe and I’ll have that post in the next 1-2 days.

I’ve Got a JavaScript & Node.js Webinar, Webstorm Tutorial Videos, Work & Flow With JavaScript Development and More…

Webinar: Node.js Development Workflow in WebStorm

This coming week I’m doing an intro to work and flow with Node.js JavaScript programming that I’m putting together with JetBrains. I’ll be covering the following key topics in the webinar:

  • Opening an existing project & getting WebStorm configured for running, testing and related working tasks.
  • A quick tour of other IDE features that help with daily work, some in pretty huge ways.
  • Running WebStorm & debugging Node.js JavaScript applications.
  • Checking out Mocha, how it works and what it gives WebStorm the power to do. Then we’ll write a few tests & implement that code too.

All this will include Q & A throughout and at the end of the webinar. Be sure to register soon!

WebStorm Tutorials: Learning Shortcuts, Customizing Layout and Others

These WebStorm tutorials have been put together by John Lindquist @johnlindquist for JetBrains. They’re solid, quick snippets of useful WebStorm usage. Two that I’ve found really useful I’ve included here:

John also has a lot of other great, totally kick-ass material out there. So check out his blog at http://johnlindquist.com/ and follow his YouTube channel too.

Coming Up in the Near Future, The Work & Flow of JavaScript Development

I have a new course I’m working on right now for Pluralsight, that will take these basic precepts and dive even deeper into the daily workflow of the JavaScript Developer. Whether it’s client side hacking or server side coding, I’ll be diving into a whole lot of JavaScript goodness. If you’d like me to ping you when the course is done, hit me up on Twitter @adron and just let me know. In the meantime get a Pluralsight subscription (free to sign up and at least give it a try) and check out these courses by myself and others.

History of Symphonize.js – JavaScript Client Pivot to Data Generation Library

…the history of symphonize.js so far!

NOTE: If you just want to check out the code bits, scroll down to the sub-title #symphonize #hacking. Also important to note: I’m putting the library through a fairly big refactor at the moment so that everything aligns with the documentation I’ve recently created. Many things may not be implemented yet, but we’re moving toward v0.1.0, which will be a functional implementation of the library available via npm, based entirely on the documentation and specs that I outline after the history.

A Short History

I started the symphonize.js project back on the 1st of November. Originally I started the project as a client driver library for Orchestrate.io, but within a day Chris Molozian commented and pointed out that there was already a client driver library for Orchestrate.io available, which Steve Kaliski (GitHub @sjkaliski, Twitter @stevekaliski and http://stevekaliski.com/) had coded, logically called orchestrate.js. Since this was available I pivoted symphonize.js to being a data generation project instead.

The comment that made me realize symphonize.js should pivot from client driver to data generation library.

The Official Start of Symphonize.js

After that start and quick pivot I posted a blog with Orchestrate.io titled “Test Data Builder Symphonize.js With Chance.js (1/3)” to officially start the project. In that post I covered key value and graph basics, with a dive into using chance.js and orchestrate.js with examples. Near the same time I also posted a related blog on publishing an NPM module, which is the deployment focus of Symphonize.js.

Reasons Reasoning

There are two main reasons why I chose Orchestrate.io and a data generation library as the two things I wanted to combine. The first is that I knew the Orchestrate.io team and really dug what they were building. I wanted to work with it and check out how well it would work for my use cases in the future. The ability to go sit down and discuss with them what they were building was great (I interviewed Matt Heitzenroder @roder, which you can watch: “Orchestrate.io, Stop Dealing With the Database Infrastructure!”). The second reason is that my own startup, which I’m co-founding with Aaron Gray (@agray), needed key value and graph data storage of some type, somewhere. Orchestrate.io looked like a perfect fit, and after some research and giving it a go, it fit very well into what we are building.

CRUD, cURL Hacking & Next Steps

Early December I knocked out two support articles about testing APIs with cURL, “Some JavaScript API Coding With Restify & Express & Hacking it With cURL …Segment #1” (with some WebStorm to boot) and “Some JavaScript API Coding With Restify & Express & Hacking it With cURL …Segment #2”, plus an article on the Orchestrate.io blog for part 2 of that series titled “Symphonize Some Create, Read, Update & Delete [CRUD] via Orchestrate.js (2/3)”.

December then rolled into the standard holiday doldrums and slowdowns. So fast forward to January, past a few rounds of beer and good tidings, and I got the 3rd in the series published, titled “Getting Serious With Symphony.js – JavaScript TDD/BDD Coding Practices (3/3)”. The post doesn’t speak too much to the library’s usage, but instead covers my efforts to use TDD and BDD practices in writing it.

Slowly I made progress in building the library, and finally it’s in a mostly releasable state now. I use this library daily when working with the code base for Deconstructed and imagine I’ll use it for many other projects going forward. I hope others might find uses for it too, and maybe even add capabilities or ideas. Just ping me via Twitter @adron or GitHub @adron, or add an issue on GitHub, and I’ll be happy to accept pull requests for new features or code refactoring, add you to the project, or whatever else you’re interested in.

#symphonize #hacking

Now for the nitty gritty. If you’re up for using or contributing to the project, check out the symphonize.js GitHub Pages site first. It’s got all the information to help get you kick-started. However, you can keep reading, as I’ve included much of the information there along with the examples from the README.md below.

NOTE: As I mentioned at the top of this blog entry, the functional implementation of the code isn’t available via npm just yet; myself and some others are ripping through a good refactor to align the implementation of the library with the rewritten and newly available documentation, included below and at the GitHub Pages site.

How to use this project in one of your projects.

npm install symphonize

How to setup this project for development.

First fork the repository located at https://github.com/Adron/symphonize.

git clone git@github.com:YourUserName/symphonize.git
cd symphonize
npm install

Using The Library

The intended usage is to instantiate the JavaScript object and then call generate. That’s it, a super simple process. The code would look like this:

var Symphonize = require('../bin/symphonize');
var symphonize = new Symphonize();

The basic constructor invocation like this utilizes the generate.json file as the source of data to generate. To set the configuration programmatically instead, pass the JSON configuration information in via the constructor.

var configJson = {"schema":"keyvalue"};

var Symphonize = require('../bin/symphonize');
// Pass the configuration in rather than relying on generate.json.
var symphonize = new Symphonize(configJson);

Once the Symphonize data generator has been created, call the generate() method as shown.

symphonize.generate();

That’s basically it. But, you say, it’s supposed to do X, Y or Z. Well, that’s where the JSON configuration data comes into play. In the configuration data you can set the data fields and what they’ll generate, what type of data will be generated, the specific schema, how many records to create and more.

generate.json

The library comes with the generate.json file already set up with a working example. Currently the generation file looks like this:

{
    "schema": "keyvalue", /* keyvalue, graph, event, geo */
    "count": 20, /* X values to generate. */
    "write_source": "console", /* console, orchestrateio and whatever other data sources that might come up. */
    "fields": {
        /* Generates a random name. */
        "fieldName": "name",
        /* Generates a random dice roll of a d20. */
        "fieldTwo": "d20",
        /* A single lorem ipsum random sentence is generated. */
        "fieldSentence": "sentence",
        /* A random guid is generated. */
        "fieldGuid": "guid"
    }
}

Configuration File Definitions

Each of the configuration options available has a default. The default is listed with each definition of the configuration option below.

  • "schema": This is used to select what type of data structure is going to be generated. The default is keyvalue for this option.
  • "count": This provides the total number of records to be generated by the library. The default is 1 for this option.
  • "write_source": This provides the location to output the generated data to. The default is console for this option.
  • "fields": This is a JSON field within the JSON configuration file that provides configuration options around the fields, the number of fields, and their respective data to generate. The default is one field, with a default data type of guid. Each of the respective entries in this JSON option is a self-contained JSON name and value pair. This then looks simply like this (which is also shown above in part):
    {
        "someBoolean": "boolean",
        "someChar": "character",
        "aFloat": "float",
        "GetAnInt": "integer",
        "fieldTwo": "d20",
        "diceRollD10": "d10",
        "_string": {
            "fieldName": "NameOfFieldForString",
            "length": 5,
            "pool": "abcdefgh"
        },
        "_sentence": {
            "fieldName": "NameOfFiledOfSentences",
            "sentence": "5"
        },
        "fieldGuid": "guid"
    }
    
  • Fields Configuration: For each of the fields you can either set the field to a particular data type or leave it empty. If the field name and value pair is left empty then the field defaults to guid. The types of data to generate for fields are listed below. These are all simple field and data generation types. More complex nested generation types are listed under Complex Field Configuration, below this simple section.
    • "boolean": This generates a boolean value of true or false.
    • "character": This generates a single character, such as ‘1’, ‘g’ or ‘N’.
    • "float": This generates a float value, similar to something like -211920142886.5024.
    • "integer": This generates an integer value, similar to something like 1, 14 or 24032.
    • "d4": This generates a random integer based on a roll of one four-sided die, giving an integer range of 1-4.
    • "d6": This generates a random integer based on a roll of one six-sided die, giving an integer range of 1-6.
    • "d8": This generates a random integer based on a roll of one eight-sided die, giving an integer range of 1-8.
    • "d10": This generates a random integer based on a roll of one ten-sided die, giving an integer range of 1-10.
    • "d12": This generates a random integer based on a roll of one twelve-sided die, giving an integer range of 1-12.
    • "d20": This generates a random integer based on a roll of one twenty-sided die, giving an integer range of 1-20.
    • "d30": This generates a random integer based on a roll of one thirty-sided die, giving an integer range of 1-30.
    • "d100": This generates a random integer based on a roll of one hundred-sided die, giving an integer range of 1-100.
    • "guid": This generates a random globally unique identifier. This value would be similar to ‘F0D8368D-85E2-54FB-73C4-2D60374295E3’, ‘e0aa6c0d-0af3-485d-b31a-21db00922517’ or ‘1627f683-efeb-4db8-8174-a5f2e3378c87’.
  • Complex Field Configuration: Some fields require more complex configuration for data generation, simply because the data needs some baseline for what the range or length of the values should be. The following list details each of these. It is also important to note that these complex field configurations do not have defaults; each value must be set in the JSON configuration or an error will be thrown detailing that a complex field type wasn’t designated. Each of these complex field types is a JSON name and value parameter. The name is the data type prefixed with an underscore ‘_’, and the value holds the configuration parameters for that particular data type. A combined example follows after this list.
    • "_string": This generates string data based on length and pool parameters. Required fields for this include fieldName, length and pool. The JSON would look like this:
      "_string": {
          "fieldName": "NameOfFieldForString",
          "length": 5,
          "pool": "abcdefgh"
      }
      

      Samples of the result would look like this for the field: ‘abdef’, ‘hgcde’ or ‘ahdfg’.

    • "_hash": This generates a hash based on the length and casing parameters. Required fields for this include fieldName, length and casing. The JSON would look like this:
      "_hash": {
          "fieldName": "HashFieldName",
          "length": 25,
          "casing": 'upper'
      }
      

      Samples of the result would look like this for the field: ‘e5162f27da96ed8e1ae51def1ba643b91d2581d8′ or ‘3F2EB3FB85D88984C1EC4F46A3DBE740B5E0E56E’.

    • "_name": This generates a name based on the middle, middle_initial and prefix parameters. Required fields for this include fieldName, middle, middle_initial and prefix. The JSON would look like this:
      "_name": {
          "fieldName": "nameFieldName",
          "middle": true,
          "middle_initial": true,
          "prefix": true
      }
      

      Samples of the result would look like this for the field: ‘Dafi Vatemi’, ‘Nelgatwu Powuku Heup’, ‘Ezme I Iza’, ‘Doctor Suosat Am’, ‘Mrs. Suosat Am’ or ‘Mr. Suosat Am’.
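
Putting the simple and complex types together, a complete generate.json might look something like this. This is a sketch I composed from the options documented above, not a file that ships with the library:

{
    "schema": "keyvalue",
    "count": 5,
    "write_source": "console",
    "fields": {
        "id": "guid",
        "attack": "d20",
        "_name": {
            "fieldName": "fullName",
            "middle": false,
            "middle_initial": true,
            "prefix": false
        },
        "_string": {
            "fieldName": "token",
            "length": 8,
            "pool": "abcdef0123456789"
        }
    }
}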

So that covers the kick start of how, eventually, you’ll be able to set up, use and generate data. Until then, jump into the project and give us a hand.

After this, more examples on the way, cheers!

Plotting Good Things in Portland :: pdxbridge.js / WTF Databases /

Several people got together yesterday to start planning things for 2014 in PDX. It ranged from coding workshops to PDX Node to Node PDX to what kind of food to eat for lunch. Ya know, the daily tactical things that come along with the big picture items. ;)

bridge.js badge.

Two things that I want to bring up to the community out there. One is a workshop that I’ll likely lead efforts to organize, and the other is something I’ll just call pdxbridge.js for now. The workshop will cover which databases to use for which data and how to implement them. The pdxbridge.js project is about determining the raised or lowered state of the bridges here in Portland.

Some of the other projects, workshops and topics we discussed included putting together a workshop around unit, integration and other testing of code from a behavioral, test driven development or other approach. For this workshop we don’t have anyone to teach it yet, but we’d (ok, I really really would love to attend a workshop on this) really like to find somebody willing to teach a workshop of this sort, with a focus on JavaScript as the language. On that same topic, if you’re into Java, Erlang, Scala, Haskell or others and would like to teach a TDD, BDD or related testing workshop, please get in touch with me and we will work on making that happen! Ping me at adron at composite code dot com. ;)

Workshop: Intro to Databases & Data

(Relational, Key/Value, Distributed, Graph, Event Series, etc.)

This is a course I’ll lead, and others will work with me to put something extra useful together. We will then teach the workshop as a group, kind of a team paired-programming teaching workshop. If there is anything in particular that you’d like to learn about, or any questions that you have about data and its usage in applications, add your two cents in this blog entry’s comments. Over the next month we’ll be putting together the material, and we’ll have the course available sometime early this year. So if you’d like to attend, jump into the conversation at any time, or just keep reading here and I’ll have more information about the course as we get it put together.

Let’s Make pdxbridge.js Happen!

The pdxbridge.js project is all about determining if a bridge in Portland is up or down. Right now there are several bridges that matter, which are on this list:

  • Hawthorne Bridge
  • Morrison Bridge
  • Burnside Bridge
  • Broadway Bridge
  • Steel Bridge

If we add other information to track about the bridges we might add the other three that exist and the new bridge that is being built. However, the five listed are the only bridges that have a raised and lowered state, and in one case, the Steel Bridge, there are lowered, partially raised and fully raised states, as shown on the pdxbridge.js badge I threw together (shown above).

To get involved with pdxbridge.js go add your input on this issue I started to discuss our first meet, plan and hack.

Some JavaScript API Coding With Restify & Express & Hacking it With cURL …Segment #2

Ah, part 2! If you’re looking for part 1, click this link.

Review: In the last blog entry I went through more than a few examples of using cURL to issue GET requests against various endpoints using Node.js & Restify. I also covered the basics on where to go to find cURL in case it isn’t installed. The last part I covered was a little bit of WebStorm info to boot. In this part of the series I’m now going to dive into the HTTP verbs beyond GET.

POST

The standard way to save data via an HTTP verb is with a POST. When you issue a POST via cURL, use -X followed by POST to designate the POST verb, then -H to assign the content type header. In this particular example I’ve set it to application/json since my payload of data will be in JSON format. Then add the final data with the -d option, followed by the actual data.

curl -X POST -H "Content-Type: application/json" -d '{"uuid":"79E5591A-1E54-4562-A276-AFC266F54390","webid":"56E62C3A-D6BC-4F4F-B72A-E6CE081190B6"}' http://localhost:3000/ident
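
For context, here's a minimal sketch of what the receiving end of that call could look like (my own illustration using Express and its JSON body parser; the /ident route and field names just mirror the cURL example above):

var express = require('express');
var app = express();

app.use(express.json());

app.post('/ident', function (req, res) {
  // Echo the posted identifiers back so the cURL call shows a result.
  res.json({ uuid: req.body.uuid, webid: req.body.webid });
});

app.listen(3000);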

Other data types can be sent, with the content type set appropriately for each, including html, json, script, text or xml. One example of this same command, issued with jQuery on the client side, would actually look like this.

var data = {"uuid":"79E5591A-1E54-4562-A276-AFC266F54390","webid":"56E62C3A-D6BC-4F4F-B72A-E6CE081190B6"};

// Pass the data object as the second argument so it actually gets posted.
$.post( "http://localhost:3000/ident", data, function( result ) {
  $( ".result" ).html( result );
});

When building POST endpoints via Express, one of the things you may run into is the following message being displayed in the console.

/usr/local/bin/node app.js
connect.multipart() will be removed in connect 3.0
visit https://github.com/senchalabs/connect/wiki/Connect-3.0 for alternatives
connect.limit() will be removed in connect 3.0

The immediate fix for this, until the changes arrive in Connect 3.0, is to replace this line

app.use(express.bodyParser());

with these lines

app.use(express.json());
app.use(express.urlencoded());

So here are some common examples to use, from a great write-up on building basic RESTful APIs with Node.js and Express on the Modulus blog.

var express = require('express');
var app = express();

app.use(express.json());
app.use(express.urlencoded());

var quotes = [
    { author : 'Audrey Hepburn', text : "Nothing is impossible, the word itself says 'I'm possible'!"},
    { author : 'Walt Disney', text : "You may not realize it when it happens, but a kick in the teeth may be the best thing in the world for you"},
    { author : 'Unknown', text : "Even the greatest was once a beginner. Don't be afraid to take that first step."},
    { author : 'Neale Donald Walsch', text : "You are afraid to die, and you're afraid to live. What a way to exist."}
];

app.get('/', function(req, res) {
    res.json(quotes);
});

app.get('/quote/random', function(req, res) {
    var id = Math.floor(Math.random() * quotes.length);
    var q = quotes[id];
    res.json(q);
});

app.get('/quote/:id', function(req, res) {
    if(quotes.length <= req.params.id || req.params.id < 0) {
        res.statusCode = 404;
        return res.send('Error 404: No quote found');
    }

    var q = quotes[req.params.id];
    res.json(q);
});

app.post('/quote', function(req, res) {
    if(!req.body.hasOwnProperty('author') ||
        !req.body.hasOwnProperty('text')) {
        res.statusCode = 400;
        return res.send('Error 400: Post syntax incorrect.');
    }

    var newQuote = {
        author : req.body.author,
        text : req.body.text
    };

    quotes.push(newQuote);
    res.json(true);
});

app.listen(process.env.PORT || 3412);

This is a great little snippet of code to test your cURL commands against.
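
For instance, with the app above listening on port 3412, each endpoint can be exercised like this:

curl http://localhost:3412/
curl http://localhost:3412/quote/random
curl http://localhost:3412/quote/2
curl -X POST -H "Content-Type: application/json" -d '{"author":"Me","text":"Curl all the things."}' http://localhost:3412/quote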
