History of Symphonize.js – JavaScript Client Pivot to Data Generation Library

…the history of Symphonize.js so far!

NOTE: If you just want to check out the code bits, scroll down to the sub-title #symphonize #hacking. It's also important to note that I'm putting the library through a fairly big refactor at the moment so that everything aligns with the documentation I've recently created. Many things may not be implemented yet, but we're moving toward v0.1.0, which will be a functional implementation of the library available via npm, based entirely on the documentation and specs that I outline after the history.

A Short History

I started the symphonize.js project back on the 1st of November. Originally I started the project as a client driver library for Orchestrate.io, but within a day Chris Molozian commented and pointed out that there was already a client driver library for Orchestrate.io available, which Steve Kaliski (Github @sjkaliski, Twitter @stevekaliski and http://stevekaliski.com/) had coded and logically called orchestrate.js. Since this was available I pivoted symphonize.js into a data generation project instead.

The comment that made me realize symphonize.js should pivot from client driver to data generation library.

The Official Start of Symphonize.js

After that start and quick pivot I posted a blog with Orchestrate.io titled “Test Data Builder Symphonize.js With Chance.js (1/3)” to officially start the project. In that post I covered key value and graph basics, with a dive into using chance.js and orchestrate.js with examples. Near the same time I also posted a related blog on publishing an NPM module, which is the deployment focus of Symphonize.js.

Reasons & Reasoning

There are two main reasons why I chose Orchestrate.io and a data generation library as the two things I wanted to combine. The first is that I knew the Orchestrate.io team and really dug what they were building. I wanted to work with it and check out how well it would work for my use cases in the future. The ability to sit down and discuss with them what they were building was great (I interviewed Matt Heitzenroder @roder, which you can watch in "Orchestrate.io, Stop Dealing With the Database Infrastructure!"). The second reason is that my own startup, which I'm co-founding with Aaron Gray (@agray), needed to use key value and graph data storage of some type, somewhere. Orchestrate.io looked like a perfect fit. After some research and giving it a go, it fit very well into what we are building.

CRUD, cURL Hacking & Next Steps

Early December I knocked out two support articles about testing APIs with cURL in Some JavaScript API Coding With Restify & Express & Hacking it With cURL …Segment #1 (with some Webstorm to boot) and Some JavaScript API Coding With Restify & Express & Hacking it With cURL …Segment #2 and an article on the Orchestrate.io Blog for part 2 of that series titled Symphonize Some Create, Read, Update & Delete [CRUD] via Orchestrate.js (2/3).

December then rolled into the standard holiday doldrums and slowdowns. So fast forward to January, post a few rounds of beer and good tidings, and I got the 3rd in the series published, titled Getting Serious With Symphony.js – JavaScript TDD/BDD Coding Practices (3/3). The post doesn't speak too much to symphonize.js usage, but instead covers my efforts to use TDD and BDD practices in writing the library.

Slowly I made progress in building the library, and it's finally in a mostly releasable state now. I use this library daily in working with the code base for Deconstructed and imagine I'll use it ongoing for many other projects. I hope others might be able to find uses for it too, and maybe even add capabilities or ideas. Just ping me via Twitter @adron or Github @adron, or add an issue on Github, and I'll be happy to accept pull requests for new features or code refactoring, add you to the project, or whatever else you're interested in.

#symphonize #hacking

Now for the nitty gritty. If you’re up for using or contributing to the project check out the symphonize.js github pages site first. It’s got all the information to help get you kick started. However, you can keep reading as I’ve included much of the information there along with the examples from the README.md below.

NOTE: As I mentioned at the top of this blog entry, the functional implementation of code isn't available via npm just yet; myself and some others are ripping through a good refactor to align the implementation of the library with the rewritten and newly available documentation – included below and at the github pages.

How to use this project in one of your projects.

npm install symphonize

How to set up this project for development.

First fork the repository located at https://github.com/Adron/symphonize.

git clone git@github.com:YourUserName/symphonize.git
cd symphonize
npm install

Using The Library

The intended usage is to instantiate the JavaScript object and then call generate. That's it, a super simple process. The code would look like this:

var Symphonize = require('../bin/symphonize');
var symphonize = new Symphonize();

The basic constructor invocation like this uses the generate.json file as the source to generate data from. To supply the JSON configuration programmatically, pass the configuration object in via the constructor instead.

var configJson = {"schema":"keyvalue"};

var Symphonize = require('../bin/symphonize');
var symphonize = new Symphonize(configJson);

Once the Symphonize data generator has been created, call the generate() method as shown.

symphonize.generate();
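
What generate() hands back depends on the configuration. As a quick hedged sketch (assuming generate() returns the batch of records sized by the count setting), capturing and inspecting the output looks like this:

var generated = symphonize.generate();
console.log(generated);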

That’s basically it. But you say, it’s supposed to do X, Y or Z. Well that’s where the json configuration data comes into play. In the configuration data you can set the data fields and what they’ll generate, what type of data will be generated, the specific schema, how many records to create and more.

generate.json

The library comes with the generate.json file already setup with a working example. Currently the generation file looks like this:

{
    "schema": "keyvalue", /* keyvalue, graph, event, geo */
    "count": 20, /* X values to generate. */
    "write_source": "console", /* console, orchestrateio and whatever other data sources that might come up. */
    "fields": {
        /* generates a random name. */
        "fieldName": "name",
        /* generates a random dice roll of a d20. */
        "fieldTwo": "d20",
        /* A single lorem ipsum random statement is generated. */
        "fieldSentence": "sentence",
        /* A random guid is generated. */
        "fieldGuid": "guid"
    }
}

Configuration File Definitions

Each of the available configuration options has a default in the configuration file. The default is listed in italics with each definition of the configuration option below.

  • "schema": This is used to select what type of data structure is going to be generated. The default is keyvalue for this option.
  • "count": This provides the total number of records to be generated by the library. The default is 1 for this option.
  • "write_source": This provides the location to output the generated data to. The default is console for this option.
  • "fields": This is a JSON field within the JSON configuration file that provides configuration options around the fields, the number of fields and their respective data to generate. The default is one field, with a default data type of guid. Each of the respective entries in this JSON option is a self contained JSON name and value pair. This then looks simply like this (which is also shown above in part):
    {
        "someBoolean": "boolean",
        "someChar": "character",
        "aFloat": "float",
        "GetAnInt": "integer",
        "fieldTwo": "d20",
        "diceRollD10": "d10",
        "_string": {
            "fieldName": "NameOfFieldForString",
            "length": 5,
            "pool": "abcdefgh"
        },
        "_sentence": {
            "fieldName": "NameOfFiledOfSentences",
            "sentence": "5"
        },
        "fieldGuid": "guid"
    }
    
  • Fields Configuration: For each of the fields you can either set the field to a particular data type or leave it empty. If the field name and value pair is left empty then the field defaults to guid. The types of data to generate for fields are listed below. These are all simple field and data generation types; more complex nested generation types are covered under Complex Field Configuration after the simple section (a sketch mapping both kinds onto chance.js follows this whole list).
    • "boolean": This generates a boolean value of true or false.
    • "character": This generates a single character, such as '1', 'g' or 'N'.
    • "float": This generates a float value, similar to something like -211920142886.5024.
    • "integer": This generates an integer value, similar to something like 1, 14 or 24032.
    • "d4": This generates a random integer value based on a dice roll of one four sided die. The integer range is 1-4.
    • "d6": This generates a random integer value based on a dice roll of one six sided die. The integer range is 1-6.
    • "d8": This generates a random integer value based on a dice roll of one eight sided die. The integer range is 1-8.
    • "d10": This generates a random integer value based on a dice roll of one ten sided die. The integer range is 1-10.
    • "d12": This generates a random integer value based on a dice roll of one twelve sided die. The integer range is 1-12.
    • "d20": This generates a random integer value based on a dice roll of one twenty sided die. The integer range is 1-20.
    • "d30": This generates a random integer value based on a dice roll of one thirty sided die. The integer range is 1-30.
    • "d100": This generates a random integer value based on a dice roll of one hundred sided die. The integer range is 1-100.
    • "guid": This generates a random globally unique identifier. This value would be similar to 'F0D8368D-85E2-54FB-73C4-2D60374295E3', 'e0aa6c0d-0af3-485d-b31a-21db00922517' or '1627f683-efeb-4db8-8174-a5f2e3378c87'.
  • Complex Field Configuration: Some fields require more complex configuration for data generation, simply because the data needs some baseline of what the range or length of the values needs to be. The following list details each of these. It is also important to note that these complex field configurations do not have defaults; each value must be set in the JSON configuration or an error will be thrown detailing that a complex field type wasn't designated. Each of these complex field types is a JSON name and value parameter. The name is the data type with a preceding underscore '_', and the value holds the configuration parameters for that particular data type.
    • "_string": This generates string data based on length and pool parameters. Required fields for this include fieldName, length and pool. The JSON would look like this:
      "_string": {
          "fieldName": "NameOfFieldForString",
          "length": 5,
          "pool": "abcdefgh"
      }
      

      Samples of the result would look like this for the field: 'abdef', 'hgcde' or 'ahdfg'.

    • "_hash": This generates a hash based on the length and casing parameters. Required fields for this include fieldName, length and casing. The JSON would look like this:
      "_hash": {
          "fieldName": "HashFieldName",
          "length": 25,
          "casing": 'upper'
      }
      

      Samples of the result would look like this for the field: ‘e5162f27da96ed8e1ae51def1ba643b91d2581d8′ or ’3F2EB3FB85D88984C1EC4F46A3DBE740B5E0E56E’.

    • "_name": This generates a name based on the middle, middle_initial and prefix parameters. Required fields for this include fieldName, middle, middle_initial and prefix. The JSON would look like this:
      "_name": {
          "fieldName": "nameFieldName",
          "middle": true,
          "middle_initial": true,
          "prefix": true
      }
      

      Samples of the result would look like this for the field: ‘Dafi Vatemi’, ‘Nelgatwu Powuku Heup’, ‘Ezme I Iza’, ‘Doctor Suosat Am’, ‘Mrs. Suosat Am’ or ‘Mr. Suosat Am’.
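
Since the library is built around chance.js (per the first post in the series), a rough way to picture the field types above is as a mapping onto chance.js generators. This is just an illustrative sketch, not the library's actual internals; the simpleTypes table and resolveField helper are hypothetical names for this example, though every chance.js call shown is a real one.

var Chance = require('chance');
var chance = new Chance();

/* Simple types map straight onto chance.js generators. */
var simpleTypes = {
    boolean: function () { return chance.bool(); },
    character: function () { return chance.character(); },
    float: function () { return chance.floating(); },
    integer: function () { return chance.integer(); },
    d4: function () { return chance.d4(); },
    d6: function () { return chance.d6(); },
    d20: function () { return chance.d20(); },
    guid: function () { return chance.guid(); }
};

/* Complex types carry their own options from the JSON configuration. */
function resolveField(type, options) {
    if (type === '_string') {
        return chance.string({ length: options.length, pool: options.pool });
    }
    if (type === '_hash') {
        return chance.hash({ length: options.length, casing: options.casing });
    }
    if (type === '_name') {
        return chance.name({
            middle: options.middle,
            middle_initial: options.middle_initial,
            prefix: options.prefix
        });
    }
    /* Empty or unknown types fall back to the guid default. */
    return simpleTypes[type] ? simpleTypes[type]() : chance.guid();
}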

So that covers the kick start of how, eventually, you'll be able to set up, use and generate data. Until then, jump into the project and give us a hand.

After this, more examples on the way, cheers!

Plotting Good Things in Portland :: pdxbridge.js / WTF Databases

Several people got together yesterday to start planning things for 2014 in PDX. It ranged from coding workshops to PDX Node to Node PDX to what kind of food to eat at for lunch. Ya know, daily tactical things that come along with the big picture items. ;)

bridge.js badge.

Two things that I want to bring up to the community out there. One is a workshop that I’ll likely lead efforts to organize and the other is something I’ll just call pdxbridge.js for now. The workshop will cover the topics of which and what databases to use for what data and how to implement. The pdxbridge.js project is about determining the raised or lowered state of the bridges here in Portland.

Some of the other projects, workshops and topics we discussed included getting a workshop put together around unit, integration and other testing of code from behavioral, test driven development or other approaches. We don't have anyone to teach this workshop yet, but we'd (ok, so I really really would love to attend a workshop on this) really like to find somebody who would be willing to teach one, with a focus on JavaScript as the language. On that same topic, if you're into Java, Erlang, Scala, Haskell or others and would like to teach a TDD, BDD or related testing workshop, please get in touch with me. We will work on making that happen! Ping me at adron at composite code dot com. ;)

Workshop: Intro to Databases & Data

(Relational, Key/Value, Distributed, Graph, Event Series, etc.)

This is a course I’ll lead and others will work with me on to put something extra useful together. We will then teach the workshop as a group, kind of a team paired programming teaching workshop. If there is anything in particular that you’d like to learn about, any questions that you have about data and usage in applications or otherwise add your two cents on this blog entries comments. Over the next month we’ll be putting together the material and have the course available sometime early this year. So if you’d like to attend, jump in at any time with the conversation or just keep a read here and I’ll have more information about the course as we get it put together.

Let’s Make pdxbridge.js Happen!

The pdxbridge.js project is all about determining if a bridge in Portland is up or down. Right now there are several bridges that matter, and they're on this list:

If we add other information to track about the bridges we might add the other 3 that exist, plus the new bridge that is being built. However, the five listed are the only bridges that have a raised and lowered state, and in one case the Steel Bridge has a lowered, partially raised and fully raised state, as shown on the pdxbridge.js badge I threw together (shown above).

To get involved with pdxbridge.js go add your input on this issue I started to discuss our first meet, plan and hack.

Some JavaScript API Coding With Restify & Express & Hacking it With cURL …Segment #2

Ah, part 2! If you’re looking for part 1, click this link.

Review: In the last blog entry I went through more than a few examples of using cURL to issue GET requests against various end points using Node.js & Restify. I also covered the basics on where to go to find cURL in case it isn’t installed. The last part I covered was a little bit of WebStorm info to boot. In this part of the series I’m now going to dive into the HTTP verbs beyond GET.

POST

The standard practice for saving data via an HTTP verb is a POST. When you issue a POST via cURL, use -X followed by POST to designate the verb, then -H to assign the content type parameter. In this particular example I've set it to application/json since my payload of data will be in JSON format. Then add the final data with a -d option, followed by the actual data.

curl -X POST -H "Content-Type: application/json" -d '{"uuid":"79E5591A-1E54-4562-A276-AFC266F54390","webid":"56E62C3A-D6BC-4F4F-B72A-E6CE081190B6"}' http://localhost:3000/ident

Other data types can be sent, with the content type set appropriately, including xml, json, script, text or html. One example of this same command, issued with jQuery on the client side, would look like this.

var data = {"uuid":"79E5591A-1E54-4562-A276-AFC266F54390","webid":"56E62C3A-D6BC-4F4F-B72A-E6CE081190B6"};

$.post( "http://localhost:3000/ident", data, function( result ) {
  $( ".result" ).html( result );
});
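
For completeness, here's a minimal sketch of an express handler that could receive that POST. The /ident route matches the curl and jQuery examples above, but the handler body is just illustrative, not the service I'm actually running:

var express = require('express');
var app = express();

app.use(express.json());
app.use(express.urlencoded());

/* Accept the identity payload and echo a confirmation back. */
app.post('/ident', function(req, res) {
    console.log('uuid: %s, webid: %s', req.body.uuid, req.body.webid);
    res.json({ saved: true });
});

app.listen(3000);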

When building POST endpoints via express, one of the things you may run into is the following message being displayed in the console.

/usr/local/bin/node app.js
connect.multipart() will be removed in connect 3.0
visit https://github.com/senchalabs/connect/wiki/Connect-3.0 for alternatives
connect.limit() will be removed in connect 3.0

The immediate fix for this, until the changes are made in connect 3.0, is to replace this line

app.use(express.bodyParser());

with these lines

app.use(express.json());
app.use(express.urlencoded());

So here are some common examples for use, from a great write up on writing basic RESTful APIs with Node.js and Express from the Modulus blog.

var express = require('express');
var app = express();

app.use(express.json());
app.use(express.urlencoded());

var quotes = [
    { author : 'Audrey Hepburn', text : "Nothing is impossible, the word itself says 'I'm possible'!"},
    { author : 'Walt Disney', text : "You may not realize it when it happens, but a kick in the teeth may be the best thing in the world for you"},
    { author : 'Unknown', text : "Even the greatest was once a beginner. Don't be afraid to take that first step."},
    { author : 'Neale Donald Walsch', text : "You are afraid to die, and you're afraid to live. What a way to exist."}
];

app.get('/', function(req, res) {
    res.json(quotes);
});

app.get('/quote/random', function(req, res) {
    var id = Math.floor(Math.random() * quotes.length);
    var q = quotes[id];
    res.json(q);
});

app.get('/quote/:id', function(req, res) {
    if(quotes.length <= req.params.id || req.params.id < 0) {
        res.statusCode = 404;
        return res.send('Error 404: No quote found');
    }

    var q = quotes[req.params.id];
    res.json(q);
});

app.post('/quote', function(req, res) {
    if(!req.body.hasOwnProperty('author') ||
        !req.body.hasOwnProperty('text')) {
        res.statusCode = 400;
        return res.send('Error 400: Post syntax incorrect.');
    }

    var newQuote = {
        author : req.body.author,
        text : req.body.text
    };

    quotes.push(newQuote);
    res.json(true);
});

app.listen(process.env.PORT || 3412);

This is a great little snippet of code to test your cURL commands against.
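
For example, assuming it's listening on the default port 3412 (when PORT isn't set), these curl commands exercise each of the routes defined above:

curl http://localhost:3412/
curl http://localhost:3412/quote/random
curl http://localhost:3412/quote/2
curl -X POST -H "Content-Type: application/json" -d '{"author":"Adron Hall","text":"Keep hacking."}' http://localhost:3412/quote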


Some JavaScript API Coding With Restify & Express & Hacking it With cURL …Segment #1 (with some Webstorm to boot)

So often I end up putting together some RESTful services (or the intent is at least to build them with that premise, but we all know how that ends up). The API URI routing gets put together and one wants to take a crack at the service as soon as possible. Here's a quick guide for using cURL to take some basic actions against the services and understand what you're getting back.

The first thing to do is make sure you can run JavaScript, which means you have a computer. The second thing is to get cURL, which means you're running some variant of Linux or UNIX; in most scenarios one would be running OS-X. The easiest way to determine if it is installed on your computer is to just open up a terminal and type 'curl --help'. You should get a result with all the switches, which is almost always a bit of an overload.

$ curl --help
Usage: curl [options...]
Options: (H) means HTTP/HTTPS only, (F) means FTP only
     --anyauth       Pick "any" authentication method (H)
 -a, --append        Append to target file when uploading (F/SFTP)
     --basic         Use HTTP Basic Authentication (H)
     --cacert FILE   CA certificate to verify peer against (SSL)
     --capath DIR    CA directory to verify peer against (SSL)
 -E, --cert CERT[:PASSWD] Client certificate file and password (SSL)
     --cert-type TYPE Certificate file type (DER/PEM/ENG) (SSL)
     --ciphers LIST  SSL ciphers to use (SSL)
     --compressed    Request compressed response (using deflate or gzip)
 -K, --config FILE   Specify which config file to read
     --connect-timeout SECONDS  Maximum time allowed for connection
 -C, --continue-at OFFSET  Resumed transfer offset
 -b, --cookie STRING/FILE  String or file to read cookies from (H)
 -c, --cookie-jar FILE  Write cookies to this file after operation (H)
     --create-dirs   Create necessary local directory hierarchy
     --crlf          Convert LF to CRLF in upload
     --crlfile FILE  Get a CRL list in PEM format from the given file
 -d, --data DATA     HTTP POST data (H)
     --data-ascii DATA  HTTP POST ASCII data (H)
     --data-binary DATA  HTTP POST binary data (H)
     --data-urlencode DATA  HTTP POST data url encoded (H)
     --delegation STRING GSS-API delegation permission
     --digest        Use HTTP Digest Authentication (H)
     --disable-eprt  Inhibit using EPRT or LPRT (F)
     --disable-epsv  Inhibit using EPSV (F)
 -D, --dump-header FILE  Write the headers to this file
     --egd-file FILE  EGD socket path for random data (SSL)
     --engine ENGINE  Crypto engine (SSL). "--engine list" for list
 -f, --fail          Fail silently (no output at all) on HTTP errors (H)
 -F, --form CONTENT  Specify HTTP multipart POST data (H)
     --form-string STRING  Specify HTTP multipart POST data (H)
     --ftp-account DATA  Account data string (F)
     --ftp-alternative-to-user COMMAND  String to replace "USER [name]" (F)
     --ftp-create-dirs  Create the remote dirs if not present (F)
     --ftp-method [MULTICWD/NOCWD/SINGLECWD] Control CWD usage (F)
     --ftp-pasv      Use PASV/EPSV instead of PORT (F)
 -P, --ftp-port ADR  Use PORT with given address instead of PASV (F)
     --ftp-skip-pasv-ip Skip the IP address for PASV (F)
     --ftp-pret      Send PRET before PASV (for drftpd) (F)
     --ftp-ssl-ccc   Send CCC after authenticating (F)
     --ftp-ssl-ccc-mode ACTIVE/PASSIVE  Set CCC mode (F)
     --ftp-ssl-control Require SSL/TLS for ftp login, clear for transfer (F)
 -G, --get           Send the -d data with a HTTP GET (H)...

Don't get intimidated! It goes on and on and on, but just know it's installed if you see all these goodies. If you don't get the results above, then installing cURL is the next step.
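
I'll leave the full install to you, but on the common platforms it's typically a package manager one-liner; a hedged pair of examples (assuming Homebrew on OS-X):

# OS-X, assuming Homebrew is installed:
brew install curl

# Debian/Ubuntu flavors of Linux:
sudo apt-get install curl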

Next you’ll of course need Node.js and Restify installed. I’ll assume you have Node.js installed. Create a directory and in that directory just run the following command.

npm install restify

Next create a file called server.js in the directory you've just installed restify in. Here's the initial JavaScript code for that file, which I've put together for the first few examples of using cURL.

var restify = require('restify');

function respond(req, res, next) {
    res.send('hello ' + req.params.name);
}

var server = restify.createServer();
server.get('/hello/:name', respond);
server.head('/hello/:name', respond);

server.listen(8080, function() {
    console.log('%s listening at %s', server.name, server.url);
});

Ok, now to run this with node.js just issue the command to launch node.js with this file that was just created.

node server.js
restify listening at http://0.0.0.0:8080

Getting Get

Now the service is running on port 8080 against 0.0.0.0. To check out what a standard GET verb will do in a browser, open up a browser and navigate to http://0.0.0.0:8080.

Browsing the GET response via Chrome.

You’ll see this in the browser window. Just straight plain text too. If you look at source, this is all you get back. Now open up a terminal and run the following cURL command to execute a GET against the URI & port. This is the most basic cURL command one can make. It is simply issuing a GET request against the URI and will display the body of the response.

curl 0.0.0.0:8080

The response will be similar to this for the particular request.

{"code":"ResourceNotFound","message":"/ does not exist"}

Your terminal will probably stick the subsequent prompt at the end of the result too, because the result doesn’t end in a newline. Beware of that, your prompt hasn’t disappeared. ;)

To get a little more information you can get the header of the response dumped into the terminal with a -i. The -i option stands for --include, to include the header. Issue the command as either line shown below.

curl -i http://0.0.0.0:8080
curl --include http://0.0.0.0:8080

The response will provide a little bit more about what is going on.

HTTP/1.1 404 Not Found
Content-Type: application/json
Content-Length: 56
Date: Wed, 27 Nov 2013 00:27:36 GMT
Connection: keep-alive

{"code":"ResourceNotFound","message":"/ does not exist"}

With this response the actual response error code number is shown. In this case we have a 404, which points us to the problem with this curl request: the server isn't returning anything for it. If we look at the code, we can see that the 'get' route is set up as '/hello/:name', which means the service only responds to requests made against http://url_root/hello/someName.

var server = restify.createServer();
server.get('/hello/:name', respond);
server.head('/hello/:name', respond);

Issue a command against the server now with the following curl request.

curl -i http://0.0.0.0:8080/hello/Adron

The response should come back as an actual response with content.

HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: 13
Date: Wed, 27 Nov 2013 00:34:04 GMT
Connection: keep-alive

"hello Adron"

Here the content is returned as “hello Adron” and the header returns a 200. The content type is application/json format with the length returned as 13. Note also the connection is set to keep-alive. Let’s dive into that.

If we change the connection type, which is important for many scenarios, we have to send extra header information to ask for the response to be returned accordingly. In order to do that we can pass the -H or --header option in with the curl request. If the command is issued with an -i and -H as shown below, the result will be as follows.

curl -iH "connection: close" http://0.0.0.0:8080/hello/Adron
HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: 13
Date: Wed, 27 Nov 2013 00:41:07 GMT
Connection: close

"hello Adron"

If we take away the -i we'll just get the response body, "hello Adron", without the header, which now returns Connection: close in the response. By default the connection is kept alive, but in order to make the request return right away the connection needs to be told to close. By setting the -H or --header value of connection to close, we get the response immediately. With restify, it is also important to note that it checks whether the user agent is curl.

If it is curl, restify sets the connection header to close and removes the content-length header. However, I've experienced that restify does not do this in all circumstances, or that curl's behavior differs in some of my usage. So don't always assume that this will be the case. The safest bet is to explicitly close the connection when done, by adding -H or --header and setting connection to close with a "Connection: close".

Beyond Basic Get

Ok, so that’s a pretty solid use of GET with cURL. Let’s dive into some puts and deletes with a get or two thrown in for comparison. Change the executing code to the code shown in the server.js file below.

var restify = require('restify');

function send(req, res, next) {
    res.send('hello ' + req.params.name);
    return next();
}

var server = restify.createServer();
server.post('/hello', function create(req, res, next) {
    res.send(201, Math.random().toString(36).substr(3, 8));
    return next();
});
server.put('/hello', send);
server.get('/hello/:name', send);
server.head('/hello/:name', send);
server.del('/hello/:name', function rm(req, res, next) {
    res.send(204);
    return next();
});

server.listen(8080, function() {
    console.log('%s listening at %s', server.name, server.url);
});

The first section of code to check out is around the function send.

function send(req, res, next) {
    res.send('hello ' + req.params.name);
    return next();
}

This function is set up to take req, res, and next. The req is the request, the res is the response, and next is a callback that hands control back so processing can continue. The next bit of code creates the server with restify.createServer(). Just below that there are several handlers that are set up.

server.post('/hello', function create(req, res, next) {
    res.send(201, Math.random().toString(36).substr(3, 8));
    return next();
});
server.put('/hello', send);
server.get('/hello/:name', send);
server.head('/hello/:name', send);
server.del('/hello/:name', function rm(req, res, next) {
    res.send(204);
    return next();
});
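
Before getting to those in detail, here are the curl commands one would use to hit the new post and delete routes; a quick sketch, noting the POST returns a random string with a 201 status and the DELETE a bare 204, per the handlers above:

curl -i -X POST http://0.0.0.0:8080/hello
curl -i -X DELETE http://0.0.0.0:8080/hello/Adron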

Now at this point I got a little sidetracked writing this blog entry. But I thought to myself, “hell, I’m just figuring out some parts of Webstorm, I ought to blog a little about it!” So, here’s…

A Little Webstorm Love

Webstorm and cURL. Click the image for a full size image.

Before continuing on I wanted to cover a few tidbits of the Jetbrains Webstorm IDE. I often switch back and forth between the Sublime/Terminal combo and the Webstorm IDE. The really cool thing about this IDE is that it has a built in terminal, color coding and autocomplete of the code, refactoring, a file and folder viewer, and a whole slew of other features. In the image above there are four neon pointers displaying some of the key functionality that I'm using to work through this blog entry with cURL and Restify.

The arrows, from left to right, are pointing to the following IDE elements. The first is pointing to the javascript files storgie.js and starter.js, which I added specifically to show the git status colors. Each color reflects whether the file is new (green), has changes (light blue) or is committed with no changes (white). The second arrow is just pointing to the general folder structure. Here you can see the hidden .* files like the .gitignore and .npmignore, and it's also easy to dig through the node_modules directory. Webstorm also uses the node_modules directory to provide extra information and autocomplete in the code as you work through your coding session. The next arrow is pointing out the terminal in the editor, which is where I'm working up the curl examples in this blog entry. Then of course there's the color coded starter.js file, which is one of the working examples. Webstorm, simply, is pretty sweet. I'm looking to do some more walkthroughs and work sessions with the editor in the near future. So if interested, be sure to keep reading and subscribe; I'll post links to wherever the material ends up right here.

Now, back to the cURLing. ;)

After I toyed around with Webstorm a bit to get it working in a way that was efficient for me to develop these APIs, I stumbled into an idea. I'd provide a page for the APIs located at the root of the API service, such as http://api.blagh.com. The APIs would still follow a RESTful type schema like http://api.blagh.com/thing/create or http://api.blagh.com/thing/destroy, but at the very root would be a kind of docs page. Maybe this could just be a status page even. Whatever the case, there needs to be something at http://api.blagh.com, so I decided right then and there I'd switch to express.js to build the rest of the API services. Restify is fine and all, but for this it seemed like express would have all of the pieces I need.

Just to boot, I then read a few articles about express being faster, such as this one. But then I read this issue on github and almost thought, "maybe I should keep using restify," but then I thought, "dammit, just get it done the way you want it built," so it was back to express. It's easy enough to change this later, so I just got back to coding, albeit with express now. So keep reading, and in the next day or two I'll have part two of this series on using cURL to hack at your APIs.

Enjoy the composite coding & cheers!


How to Build an NPM Package, Beginning the Symphonize Project

NPM has helped build on the massive Node.js popularity and drive JavaScript from a simple scripting language in the web browser to a powerful and capable back-end server language. A quick refresher: NPM stands for Node.js Package Manager, and each package is any of the following:

  1. a folder containing a program described by a package.json file,
  2. a gzipped tarball containing [1],
  3. a url that resolves to [2],
  4. a <name>@<version> that is published on the registry with [3],
  5. a <name>@<tag> that points to [4],
  6. a <name> that has a “latest” tag satisfying [5], or
  7. a git url that, when cloned, results in [1].

Path structure view in Jetbrains Webstorm IDE.

With that basic understanding of what an NPM module is, let's jump through the steps to build a module that provides some basic functionality. I won't cover too many parts in detail yet, just the happy path to getting an NPM library running.

First let's create an appropriate folder and file structure to get started with. Here are the commands I ran to get started.

mkdir bin
mkdir lib

With these two directories created I then created the following files in the designated paths. In bin I created the symphonize.js file and in lib I created a main.js file.

Now, I added the following code to the symphonize.js file.

exports.Coupling = function (searchThis, forThis) {
    var returnValue = 'no';
    if (searchThis.indexOf(forThis) > -1) {
        returnValue = 'yes';
    }
    return returnValue;
}

In the main.js file I added the following.

(function () {
    var couple = require('../bin/symphonize');
    couple.Coupling("Sample text", "Sample");
}).call(this)
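
Since main.js doesn't capture or print the return value, nothing shows up when it runs. A quick hedged check from the project root, calling the exported function directly, looks like this:

node -e "console.log(require('./bin/symphonize').Coupling('Sample text', 'Sample'))"

That prints 'yes', since the search text contains the term.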

There are a number of issues with this code, I know, but it's just a sample of the minimal amount of code, folder structure and package.json that I need to get this package installed and ready for iteration as I move forward with the actual code base and whatever functionality gets added. Speaking of the package.json file, I created one and added the following configuration settings to it.

{
    "author": "Adron Hall",
    "name": "symphonize",
    "description": "Prints out data to the console! Will be iterating soon for real functionality!",
    "version": "0.1.0",
    "repository": {
        "url": "git@github.com:Adron/symphonize.git"
    },
    "main": "./lib/main",
    "bin": {
        "replaceme": "./bin/symphonize"
    },
    "dependencies": {},
    "devDependencies": {},
    "optionalDependencies": {},
    "engines": {
        "node": "*"
    }
}

That is now enough for me to at least get the module added to the global NPM repository, get things pointed back to Github appropriately, and move forward with actual coding. I might even set up some continuous builds and delivery at some point, since I've now got the end point of where the libraries will be going. The command to get a module uploaded to the NPM repository is as follows. It of course assumes I've already added a user using npm adduser or via the web site interface at https://npmjs.org/.

npm publish

I've now got everything prepared and uploaded to NPM; there is now a symphonize module library ready for use.

My NPM Page for Symphonize. Click to go to the actual NPM page.
