Andy Piper & Troy Howard, Now Twitter is up to Something!

Twitter is up to something. I’m betting it’s something good.

In the last two weeks I've found out that two fellow coders are rolling into the Twitter family. These two people are top tier talent, so I'm assuming Twitter had its act together when it went after these two new recruits. So who are they? Andy Piper and Troy Howard, two people everybody keeps track of. Wait, you do keep track of these guys, right? Hmmm, if you don't, it might be high time you get in gear and follow them! Here are their deets, so you're in the loop.

Andy Piper

Andy Piper @andypiper is heading over to become a Developer Advocate in London. Andy has been a great advocate over at Cloud Foundry, and I can only assume, like many who have used the Cloud Foundry platform, that he'll continue to be an advocate for it. I'm super excited to see the efforts Andy leads in this new role with Twitter. I'll be keeping an eye out, and hopefully this year I'll land in London to visit for a few lines of code and a brew or two.

Troy Howard

Troy Howard @thoward37 is heading over to become the Technical Documentation Super Genius (my label), a role he humbly refers to as Documentarian. He's helped lead projects like Node PDX Conf (which he and I stumbled ourselves into 2+ years ago) and he's since knocked out work organizing Write the Docs, Hujs (check out Glenn Block's write up) and others! Besides being a mad awesome conference organizer he's all over the Portland tech community, code space & devops world.

For other trendsetters and coders that get shit done and make waves, check out my Awesome Coders category. I've introduced more than a few top tier, amazing people over the years that I'm totally stoked to have worked alongside, hacked with, coded with or otherwise been involved with in the software & hardware industry!


So it begs the question, "what's Twitter up to, eh?"

Portland Docker Meetup & Olympia Stuff That Isn’t RDBMS

The next Portland Docker Meetup is going to be at the New Relic offices in Big Pink. It looks to be a solid lineup of topics, which I'll be kicking off with a Docker intro. There will also be a Q & A session, lightning talks and some coverage of drone.io (I'm definitely looking forward to hearing about that one).

Database Stuff that ain't RDBMS in Olympia

Fast forward to April 10th and I'll be boarding the flanged wheels to visit Olympia, Washington and speak at the Olympia South Sound Developers User Group. The topic will be "Database Stuff that ain't RDBMS"! I'll have more on that presentation, the deck and anything else that comes up related to it. I *might* even have a video of the presentation afterwards.

WebStorm JavaScripting & Noding Workflow Webinar Recording

Today the JetBrains team is wrapping up final processing for my webinar from last week. You can check out the webinar via the JetBrains YouTube Channel:

For even more information be sure to check out the questions and answers on the JetBrains WebStorm IDE blog entry. Some of the questions include:

  • Q: How to enable Node.js support in PhpStorm (PyCharm, IntelliJ IDEA, RubyMine)?
  • Q: How to enable autocompletion for Express, Mocha and other libraries?
  • Q: Is it possible to debug a Node.js application that runs remotely? Is it possible to debug when your node and the rest of the dependencies (database, etc.) are running in a VM environment like Vagrant?
  • Q: Does the debugger support cluster mode?

…and others all here.

Working in -34°C, Wintersmith Customization & GitHub Hosting

Getting Wintersmith customized, built, deployed to GitHub, and a domain name pointed at it takes a few extra steps. So let's roll…

Step #1

Set up Wintersmith. See my previous blog entry "Wintersmith Creating Documentation" for this information.

Step #2

Now it's time to get things deployed to GitHub. This takes a few interesting, non-intuitive steps, but once done things work extremely well. To get the appropriate git branch setup I worked with an existing git repo, the same repo that I've used for the public facing Deconstructed site. The code repo is located @ the Deconstructed GitHub Repo. I added a GitHub Pages branch to this repository; for more information on how to do this check out my Jekyll how-to "Bringing to Life an Open Source Project via Github & Jekyll – Part 1", where I detail at the beginning how to get a GitHub Pages site running.
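If you've not created a GitHub Pages branch before, the git side of it is short. Here's a minimal sketch, assuming a checkout of the existing repo and that the branch should start empty (gh-pages is the branch name GitHub Pages expects; the remote name origin is an assumption):

# create a history-less branch for the site
git checkout --orphan gh-pages
# clear the working tree so the branch starts clean
git rm -rf .
git commit --allow-empty -m "initialize gh-pages branch"
# publish the branch to GitHub
git push origin gh-pages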

Once the branch was up and running I switched over to it and cleared out that path, keeping a few things I'd need like the .gitignore, README.md and a few other files. I then put the repo directory that I detailed in "Wintersmith Creating Documentation" right here in the gh-pages branch. With that in place I committed and pushed this code to the gh-pages branch. That gave me the initial baseline for the site.

Step #3

Get the customizations done and the site domain/subdomain redirected. The steps to get a custom domain pointed at your gh-pages GitHub site are as follows:

  1. Create a file named CNAME in the root of your gh-pages branch and in that CNAME file add one line with the domain that is being directed to this gh-pages site. My CNAME file looks simply like this:
    docs.deconstructed.io
  2. Next set up either a DNS A record or a CNAME record. The CNAME gives you the advantage of having GitHub manage which IPs are in use in their system, so if there is any failover, DDoS mitigation or IP change you're protected from it. To set up an A record, point it to 204.232.175.78; to set up a CNAME, point it to your github.io account, which in my case is http://adron.github.io/. The following is what the record looked like in my Route 53 settings.
  3. Last but not least, the configuration settings that need to be made in Wintersmith:
    1. First set the locals url setting to the appropriate domain or subdomain. In my case that meant changing the value from http://localhost:8080/ to http://docs.deconstructed.io/.
      "locals": {
       "url": "http://docs.deconstructed.io",
       "name": "Deconstructed Docs",
       "owner": "Adron Hall",
       "description": "This site provides the documentation around the Deconstructed API Services."
      }
      
    2. In the root of the project (where the Wintersmith build ends up) add a .nojekyll file so that GitHub Pages won't unnecessarily try to run the Wintersmith output through Jekyll.
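For the DNS piece, the CNAME record from step 2 looks something like this in zone-file terms (the TTL of 300 is an assumption; Route 53 presents these same values through its console):

docs.deconstructed.io.  300  IN  CNAME  adron.github.io.

And once those settings are in place, the publish loop is short. A minimal sketch, assuming the "output": "../" setting from "Wintersmith Creating Documentation" and that rootDirectory is the gh-pages checkout:

cd rootDirectory/myAppName
# regenerate the site; output lands one directory up in rootDirectory
wintersmith build
cd ..
# keep GitHub Pages from running the output through Jekyll
touch .nojekyll
git add -A
git commit -m "rebuild site"
git push origin gh-pages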

…and with that, I've covered the bases for getting a Wintersmith site (blog or whatever you'd like to use it for) up and running. Feel free to ask any questions in the comments and I'll help work through any issues you've encountered. Cheers!

Wintersmith Creating Documentation

I set out a few days ago to put together a documentation site. I had a few criteria for this site:

  1. A static site that I could push to Github to use with their github pages feature.
  2. The static site is generated from markdown.
  3. It just works. It’s easy to get it into a workflow without breaking the tool or breaking a solid workflow.

That was it, what I'd consider some pretty straightforward criteria. However it wasn't that easy, until it was. Here are a few of the issues I ran into on the way to getting a solid tool and a solid workflow working together. Beware however, if you have fickle reading eyes, the following is a rant about what does and does not work.

[rant on]

Middleman, Broken Ruby and Broken Gems

I have a MacBook Pro Retina 15″. The machine runs OS X Mavericks, and I've had zero issues with the OS. It comes with Ruby 2 and some version of RubyGems. My first attempt was to take a stab at Middleman, the same static site builder used by many companies, including Basho. Even though I ran into problems, which I detailed in "Basho – First Week Coding & Research Adventures…" and "Un-breaking OS-X Mountain Lion", eventually Middleman mostly worked.

Well, I didn't get to a working app very fast. Immediately Ruby 2 had issues and gemsets puked Middleman everywhere. I then ran into some confusing permissions errors. About 15 minutes into this process of troubleshooting Middleman I had flashbacks of the first few days at Basho and thought, "this is bullshit, something has to work better than this catastrofuck of software version conflicts". So I dropped Middleman dead.

Assemble, Assemble, Assemble…    ??!?#@$%! WTF!

I attempted Assemble next, on the Node.js stack. It looked to have a lot of promise. It uses grunt.js and a bunch of other tools to manage a static-site-generating, Bootstrap-using stack. The more I looked at it, however, the busier it seemed. Busy as in "I'm going to do more than three things so I'll maybe do none of them right".

Reading about Assemble, I turned to another hacker slinging some code at the bar I sat at. She looked at the project and asked, "what's it supposed to do exactly? I get that it's a framework of tools but it doesn't exactly lay out what it is supposed to be doing besides arbitrarily managing some parts of the stack." That seemed reasonable to me.

Before I just tossed assemble.io onto the trash heap of options I wanted to ask at least one more person. So the next day I asked my good friend and super genius Troy Howard. It was a short verdict: "drop that shit".

That was enough for me; Assemble was officially dead for this project.

Slate, This Seems Slick But…

I then took a stab at Slate. Orchestrate.io just created some excellent documentation using it, so I dove in, getting a test site up and running rapidly. It seemed like a mostly viable solution until I started running into issues with how and where I wanted things displayed for the code samples and other material. It appeared that if I were going to use Slate, I'd be using it almost exactly as is. I might borrow pieces of it in the future, even the layout to some degree, but for now I wanted something I could incorporate my own themes into as needed. All in all I was super happy with Slate; it just wasn't a great fit for now.

Where The Hell Are My Options, Jekyll?

At this point I was getting a little frustrated. I then went to a tried and true solution in Jekyll. Jekyll is a pretty solid solution, with some bugs and oddball issues but nothing major. I started working with it and even transitioning a Jekyll project into my theme. Hacking a Jekyll blog into a reasonable documentation solution seemed like the way to go.

But then I got a wild urge to see if there was anything else in Node.js land that I was missing. I really didn't want to sling a Ruby project if I didn't have to. I'd rather keep all the stacks around JavaScript for this particular set of projects. There's no reason to diverge when I'm just dealing with such simple, straightforward web projects. I'll diverge when something truly validates diverging, like doing some real math with a real functional language or something. Trading Node.js on one single project for a pseudo Ruby project, just for static site generation, didn't seem appealing. So I started looking around one more time.

Made in -34°C

Yup, -34 Celsius. That's about as cold as it gets. Click for the full size chart!

The next solution I tried was Wintersmith. This solution appeared to have everything feature-wise that I'd been looking for. It was a Node.js project, it generated static content, could generate blogs as well as other things, was simple, had plugins, was straightforward and more. I was a little paranoid after the solutions I'd fought my way through earlier, so I went to the only place that would ensure I'd have a solution I could be confident in. I went straight to the source!

I'll admit I took a peek at the package.json file before going headlong into the source. A quick perusal of the dependencies list looked ok.

  "dependencies": {
    "marked": "~0.3.0",
    "coffee-script": "~1.6.3",
    "async": "~0.2.9",
    "highlight.js": "~8.0.0",
    "jade": "~1.1.5",
    "ncp": "~0.5.0",
    "rimraf": "~2.2.6",
    "winston": "~0.7.2",
    "colors": "~0.6.2",
    "optimist": "~0.6.0",
    "minimatch": "~0.2.14",
    "mime": "~1.2.11",
    "js-yaml": "~3.0.1",
    "mkdirp": "~0.3.5",
    "chokidar": "~0.8.1",
    "server-destroy": "~1.0.0",
    "npm": "~1.3.24",
    "slugg": "~0.1.2"
  },
  "devDependencies": {
    "shelljs": "0.1.x"
  }

I immediately took note of a few things. The first was that there was actually a breakout of dev dependencies versus actual project dependencies; that's a good first sign. The second was, going through the list and checking the various library dependencies, there were a few I'd played around with before and trusted: highlight.js, coffee-script, async, js-yaml and npm were all cool by me. It didn't seem too crazy out of whack. With that I went forth into the code with zero expectations…

The first file I dug into was the config.coffee file, which pointed out a few things I'd possibly want to tweak a little later, such as the port number and other settings the Wintersmith server uses when running the preview server.

class Config
  ### The configuration object ###

  @defaults =
    # path to the directory containing content's to be scanned
    contents: './contents'
    # list of glob patterns to ignore
    ignore: []
    # context variables, passed to views/templates
    locals: {}
    # list of modules/files to load as plugins
    plugins: []
    # modules/files loaded and added to locals, name: module
    require: {}
    # path to the directory containing the templates
    templates: './templates'
    # directory to load custom views from
    views: null
    # built product goes here
    output: './build'
    # base url that site lives on, e.g. '/blog/'
    baseUrl: '/'
    # preview server settings
    hostname: null # INADDR_ANY
    port: 8080
    # options prefixed with _ are undocumented and should generally not be modified
    _fileLimit: 40 # max files to keep open at once
    _restartOnConfChange: true # restart preview server on config change

The second code file that looked interesting was renderer.coffee.

fs = require 'fs'
util = require 'util'
async = require 'async'
path = require 'path'
mkdirp = require 'mkdirp'
{Stream} = require 'stream'

{ContentTree} = require './content'
{pump, extend} = require './utils'

if not setImmediate?
  setImmediate = process.nextTick

renderView = (env, content, locals, contents, templates, callback) ->
  setImmediate ->
    # add env and contents to view locals
    _locals = {env, contents}
    extend _locals, locals

    # lookup view function if needed
    view = content.view
    if typeof view is 'string'
      name = view
      view = env.views[view]
      if not view?
        callback new Error "content '#{ content.filename }' specifies unknown view '#{ name }'"
        return

    # run view
    view.call content, env, _locals, contents, templates, (error, result) ->
      error.message = "#{ content.filename }: #{ error.message }" if error?
      callback error, result

render = (env, outputDir, contents, templates, locals, callback) ->
  ### Render *contents* and *templates* using environment *env* to *outputDir*.
      The output directory will be created if it does not exist. ###

  env.logger.info "rendering tree:\n#{ ContentTree.inspect(contents, 1) }\n"
  env.logger.verbose "render output directory: #{ outputDir }"

  renderPlugin = (content, callback) ->
    ### render *content* plugin, calls *callback* with true if a file is written; otherwise false. ###
    renderView env, content, locals, contents, templates, (error, result) ->
      if error
        callback error
      else if result instanceof Stream or result instanceof Buffer
        destination = path.join outputDir, content.filename
        env.logger.verbose "writing content #{ content.url } to #{ destination }"
        mkdirp.sync path.dirname destination
        writeStream = fs.createWriteStream destination
        if result instanceof Stream
          pump result, writeStream, callback
        else
          writeStream.end result, callback
      else
        env.logger.verbose "skipping #{ content.url }"
        callback()

  items = ContentTree.flatten contents
  async.forEachLimit items, env.config._fileLimit, renderPlugin, callback

module.exports = {render, renderView}

Fairly straightforward code. It puts together the rendered content, and I noted a few key things. There was a consistent argument order repeated throughout: env, content, locals, contents, templates, callback. It also looked like the local variables were set statically from configuration rather than resolved dynamically. That could bite me, but with this quick glance at least I knew where and what was happening in the order of generation.

I then did a scan of templates.coffee and a few other code files. Having gotten a fair idea of where and what was being done, I went looking for a quick start. Things looked pretty good, so I crossed my fingers and my rant ends here…

[/rant off]

So now that rant mode is over, here's what I did to make Wintersmith my documentation solution. Most of this is in a state of flux as I automate and put more into the project to simplify the workflow.

Here’s how I got started super fast.

Step #1 Get Wintersmith running.

npm install wintersmith -g

Note that you’ll need to install it globally (thus the -g) and may need to install Wintersmith with sudo prepended to that command.

The next thing that I did was create a directory that I'd use to build the static generated contents into. This material I'd put into a git repository on GitHub (namely the deconstructed gh-pages repo). I'll call this generically the root directory.

mkdir rootDirectory

After that I navigated into the rootDirectory and created a new Wintersmith Application.

wintersmith new myAppName

That now gives me a directory structure like this:

  • rootDirectory
    • myAppName

Now that I have this, the app content, markdown, views and related templates are in myAppName. To view the app, I changed directories into myAppName and ran wintersmith preview like this:

wintersmith preview

Opening up a browser I can navigate to http://localhost:8080 and see the fully rendered site. To publish the site, however, one needs to run wintersmith build, and there's one problem: I want the site to publish to the rootDirectory where the application content currently sits. To do this I have to edit the config.json file, which starts out with the locals settings shown below…

{
  "locals": {
    "url": "http://localhost:8080",
    "name": "The Wintersmith's blog",
    "owner": "Someone",
    "description": "Ramblings of an immor(t)al demigod"
  }
}

Just above those locals I added an output key/value property, as shown. It merely takes the build results and shifts them back a directory so they end up in the rootDirectory.

{
  "output": "../",
  "locals": {
    "url": "http://docs.deconstructed.io",
    "name": "Deconstructed Docs",
    "owner": "Adron Hall",
    "description": "This site provides the documentation around the Deconstructed API Services."
  },
  "plugins": [
    "./plugins/paginator.coffee"
  ],
  "require": {
    "moment": "moment",
    "_": "underscore",
    "typogr": "typogr"
  },
  "jade": {
    "pretty": true
  },
  "markdown": {
    "smartLists": true,
    "smartypants": true
  },
  "paginator": {
    "perPage": 6
  }
}

I also changed the perPage setting to 6, just so I could get a little more content on the main page eventually. There are also the changes for the domain name and a few other parameters that I'll catch up on in the next blog entry.
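To see that output setting do its thing, the build flow looks like this (rootDirectory and myAppName are the placeholder names from above):

cd rootDirectory/myAppName
wintersmith build
# the generated site now sits one level up in rootDirectory, per the "output": "../" setting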

Summary

In my next blog entry I'll cover a quick how-to on setting up the CNAME in GitHub Pages to get the static Wintersmith site up at a subdomain/domain name. I'll also dive into the setup with AWS Route 53, which applies generically to setting up a gh-pages site with any DNS provider. So subscribe and I'll have that post in the next 1-2 days.

I've Got a JavaScript & Node.js Webinar, WebStorm Tutorial Videos, Work & Flow With JavaScript Development and More…

Webinar: Node.js Development Workflow in WebStorm

This coming week I'm doing an intro to work and flow with Node.js JavaScript programming, which I'm working with JetBrains on. I'll be covering the following key topics in the webinar:

  • Opening an existing project & getting WebStorm configured for running, testing and related working tasks.
  • A quick tour of other IDE features that help with daily work, some in pretty huge ways.
  • Running WebStorm & debugging Node.js JavaScript applications.
  • Checking out Mocha, how it works and what it gives WebStorm the power to do. Then we'll write a few tests & implement that code too (a quick taste of which is sketched below).
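
As a quick taste of that Mocha piece, a minimal test file might look something like this (the calc module and its add function are hypothetical stand-ins, not actual webinar code):

// test/calc-test.js
var assert = require('assert');
var calc = require('../lib/calc'); // hypothetical module under test

describe('calc', function () {
  describe('#add()', function () {
    it('adds two numbers', function () {
      assert.equal(calc.add(1, 2), 3);
    });
  });
});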

All this will include Q & A throughout and at the end of the webinar. Be sure to register soon!

WebStorm Tutorials: Learning Shortcuts, Customizing Layout and Others

These WebStorm tutorials have been put together by John Lindquist @johnlindquist for JetBrains. They're solid, quick snippets of useful WebStorm usage. Two that I've found really useful are included here:

John also has a lot of other great, totally kick ass material out there. So check out his blog @ http://johnlindquist.com/ and follow his YouTube channel too.

Coming Up in the Near Future, The Work & Flow of JavaScript Development

I have a new course I'm working on right now for Pluralsight that will take these basic precepts and dive even deeper into the daily workflow of the JavaScript developer. Whether it's client side hacking or server side coding, I'll be diving into a whole lot of JavaScript goodness. If you'd like me to ping you when the course is done, hit me up on Twitter @adron and just let me know. In the meantime get a Pluralsight subscription (it's free to sign up and at least give it a try) and check out these courses by myself and others.

Going Live, Data & Pricing @ Orchestrate

Over the last few months, while working on the prototype around Deconstructed, I've been using the Orchestrate service offering exclusively. With their key value and graph storage easily accessible via API it was a no-brainer to get started building ASAP. Today, that service goes full beta! You can get the full lowdown at the Orchestrate site.

You might recall that I mentioned Orchestrate a while back when they leapt into the PIE class a few months ago. So here are a few quick thoughts on the release and what Orchestrate is.

The basic premise is that Orchestrate provides full-text search, time ordered events, graph, key value storage and a lot more. All of these capabilities are offered via an API, which creates a product that's extremely easy to get started with. Think about what you'd need to do to get full-text search against a key value setup. Really think about it. Yeah? That's a lot of steps. With Orchestrate you just sign up and start using it. Think about setting up a graph store and managing it on production systems. Yeah? Lots of work once it gets used. Again, just sign up; it's all there, from the graph to the key value to the event series and more. All the NoSQL juice you need located in a single service, so you're not fighting and maintaining multiple databases, nodes or whatever you're working with.
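To give a feel for that "just sign up and start using it" claim, here's a minimal sketch with the orchestrate Node.js client (the collection name, key and data are placeholders I made up; double check the client docs for exact semantics):

var oio = require('orchestrate');
var db = oio('YOUR_API_KEY'); // placeholder, use your application's key

// store a JSON document by key in a 'users' collection...
db.put('users', 'adron', { name: 'Adron Hall', city: 'Portland' })
  .then(function () {
    // ...then read it right back out
    return db.get('users', 'adron');
  })
  .then(function (res) {
    console.log(res.body);
  })
  .fail(function (err) {
    console.error(err);
  });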

Sign up. Use.

I will copy one thing from the press release…

  • Ad hoc search queries with Lucene
  • Event and time-ordered storage for activity feeds, sensor data
  • Create and query graph relationships
  • Easy to understand pricing
  • Data export at will – no lock-in
  • Standards compliant data security protocols
  • Daily data backups
  • Bulk data loading
  • Daily and hourly usage monitoring
  • A single, simple interface – JSON data in/out
  • Designed to complement existing databases and MBaaS services
  • Client libraries for Java, Node.js, and Go. More on the way!

Using Orchestrate

There are quotes in the press release, but I've got a few thoughts myself. I'm working to build out a prototype service that Aaron Gray and I will be releasing soon. Our startup is called Deconstructed, but more on that later. Without Orchestrate my dev cycle would be longer each day, as I'd battle with maintaining the data sources that I need. Without it I would have spent another 2-3 weeks setting up and staging NoSQL database technology, all things I didn't really need to do. I needed to focus on the service, the value that we'll soon bring to our customers.

It really boils down to this, and don't get me wrong, I'm a total data nerd. But when it comes to building a product or service, the last thing I want to do is fight with managing the data any more than I have to. That notion inspired me to write "Sorry Database Nerds, Nobody Actually Gives a Shit", which still holds true. I can't think of a single business that wants to sit around and grok how an index works in a key value store or what the spline of text-search queries is going to be.

Pricing

Pricing is sweet; for many that want to try it out, things are free. Prices go up a bit more from there, but if you fall into the paid tiers you're doing some business and ought to be rolling in a few bucks, eh!

The interesting thing to me about the pricing is that they've structured it around the MOp, which stands for MegaOps. More specifically, that's one million API calls or one million operations.
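To put a rough, hypothetical number on that: an app averaging 10 API calls per second would run 10 × 86,400 = 864,000 operations a day, so just under one MOp per day, or roughly 26 MOps a month. Your mileage will vary of course; check their pricing page for the actual tier numbers.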

Summary

If you write code, even a little, or if you manage data, you should do yourself the service of checking out what Orchestrate has built. It's a solid investment of time. I'll have a lot more on Orchestrate, how we're using the service for Deconstructed, and more on using the service with JavaScript in the coming months. Keep your eyes peeled and I might even have some Dart and C# magic thrown in there to boot! Check 'em out; until later, happy hacking.