More on Using the Nools DSL and Engine…

Nools Objects

In the helloworld.nools file there is a single object defined, called Message. This object has two elements: the text property and the constructor. The specific object definition is shown below.

define Message {
    text : '',
    constructor : function(message){
        this.text = message;
    }
}

This object can now be referred to by name throughout the nools file. The other way to reference this object is to call the getDefined function on the flow object used in the code that processes the business rules. In the nools language, any JavaScript can be put inside the define block. Defining the constructor as shown overrides the default constructor behavior.
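As a quick illustration of that flexibility, here is a sketch of a define block with a helper method added alongside the constructor. The shout function is purely made up for this example and is not part of the helloworld.nools file.

define Message {
    text : '',
    constructor : function(message){
        this.text = message;
    },
    // any additional JavaScript can live here too; this helper is
    // only illustrative and not part of the helloworld example
    shout : function(){
        return this.text.toUpperCase();
    }
}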

Nools Rules

In the nools DSL, writing a rule follows this format.

rule <name> {
   when {<constraints>}
   then {<action>}
}

Let’s take a look at some of the example rules from the previous blog entry Learning “nools” Rules Engine and break those down. The first example is the Hello rule.

rule Hello {
    when {
        m : Message m.text =~ /^hello(\s*world)?$/;
    }
    then {
        modify(m, function(){this.text += " goodbye";});
    }
}

In this rule, when the message text is ‘hello’ or ‘hello world’ the rule modifies the text of the object to append ‘ goodbye’. So if I sent a message of ‘hello’ the message would become ‘hello goodbye’. As an example, modify the code in server.js as I’ve done below.

var nools = require('nools');
var ruleFilePath = __dirname + "/rules/helloworld.nools";
var flow = nools.compile(ruleFilePath);
var session = flow.getSession();

var Message = flow.getDefined("message");

session.assert(new Message("hello"));
session.assert(new Message("hello or goodbye"));
session.assert(new Message("hello world"));
session.assert(new Message("goodbye"));

session.match();

In the code above I’ve asserted four new message objects for the rules to process. The Hello and Goodbye rules are evaluated against these four messages when the match() function is called. To compile and process the nools rules, run the node server.js command. The output then returns the following.

$ node server.js
goodbye
hello world goodbye
hello or goodbye
hello goodbye

Each of the messages, even though asserted in a particular order, has its processed result printed in reverse order. In this case the message ‘hello’ becomes ‘hello goodbye’, ‘hello or goodbye’ remains ‘hello or goodbye’, ‘hello world’ becomes ‘hello world goodbye’, and last ‘goodbye’ is left untouched by the Hello rule since it doesn’t match the pattern. The other rule, Goodbye, shown below, is also processed for each match.

rule Goodbye {
    when {
        m : Message m.text =~ /.*goodbye$/;
    }
    then {
        console.log(m.text);
    }
}

But since the Hello rule fires first, the second rule actually runs for every message (after the Hello rule, every message in this example ends with goodbye) and displays the results based on the Goodbye rule.

That covers nools objects and rules. Stay tuned, there will be more on nools from me in the near future. Cheers!

Learning “nools” Rules Engine

Recently I sat down to work up a solution around a rules engine. There were a few things I noticed right off.

  1. When there is a request to implement or build a rules engine, the perceived need is very often (I’m guessing a solid 40-60% of the time) based solely on a lack of understanding of the problem space that actually needs a solution. Put simply, 40-60% of the time “let’s implement a rules engine to solve these unknown problems” really translates to “we really don’t know much about this domain, so let’s implement something arbitrary as a stop gap”.
  2. Implementing a business rules engine can quickly become a “support the user” scenario for the developers who implement it. This is a situation in which the developers actually have to help the people writing the rules that get processed. This is not an ideal situation at all; developers supporting users who write rules is a quick way to ensure burnout, misappropriation of skills, and turnover.
  3. Many developers will, without hesitation, spout out “are you sure you want to implement a rules engine?” and then follow that up with “let’s discuss your actual problem”, which leads right back to “are you sure you want to implement a rules engine?”. Other developers, upon hearing that one will implement a rules engine, immediately respond with, “shit, I’m out.”

At this point I realized I had X, Y and Z reasons to use one and would just have to persevere through all of the threats that come with implementing a rules engine. Sometimes one just has to step into the realm of scary and get it done.

So here’s what I dug up. I’m really not sure about the name of this project, as it appears to be some sort of odd usage, so whatever, but it is indeed called nools (github repo). Nools is a business rules engine based on the Rete algorithm, which is helpful to read up on when implementing. The main deployment target for nools is a Node.js server, but I’ve read that it is prospectively deployable in most browsers too.

The browsers that have known support, ya know, if you want to use nools in a browser.

There are a few other things that need definitions. So before moving on here are a few key words.

  • rule – A collection of constraints that must be satisfied in order for an action to occur.
  • action – The code that will execute when all of a rule’s constraints have been satisfied.
  • fact – An object/item inserted into a session that the rule’s constraints match against.
  • flow – A container for the rules that will be executed; you can think of it as a rule book.
  • session – An “instance” of a flow used to match facts. Sessions are obtained from a flow. (A short sketch tying these terms together follows this list.)
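To tie those terms together, here is a minimal sketch, assuming a flow compiled from a .nools file like the helloworld example in the next section:

var nools = require('nools');

// the flow is the compiled rule book
var flow = nools.compile(__dirname + "/rules/helloworld.nools");

// Message is an object defined inside the flow
var Message = flow.getDefined("message");

// the session is an "instance" of the flow
var session = flow.getSession();

// a fact for the rules' constraints to match against
session.assert(new Message("hello"));

// evaluate the rules against the asserted facts
session.match();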

Working With the Examples

The first thing I tried to run was a simple little sample of a nools rule file using the DSL. The nools file I named helloworld.nools and put in a directory called rules. The file looked like the example available in the README.md of the nools repo, but I immediately hit a bump.

The nools Rules

define Message {
    text : '',
    constructor : function(message){
        this.text = message;
    }
}

//find any message that starts with hello
rule Hello {
    when {
        m : Message m.text =~ /^hello(\s*world)?$/;
    }
    then {
        modify(m, function(){this.text += " goodbye";});
    }
}

//find all messages that end in goodbye
rule Goodbye {
    when {
        m : Message m.text =~ /.*goodbye$/;
    }
    then {
        console.log(m.text);
    }
}

I created a server.js file with a simple bit of code just to see the results.

var nools = require('nools');
var ruleFilePath = __dirname + "rules/helloworld.nools";
var flow = nools.compile();
var Message = flow.getDefined("message");

console.log("Rules executed @ " + ruleFilePath);
console.log(Message);

I tried running this with a simple node server.js command. The result, unfortunately, was just an error dumped into bash.

16:29 $ node server.js

/Users/adron/Codez/testing-nools/node_modules/nools/lib/index.js:61
        throw new Error("Name required when compiling nools source");
              ^
Error: Name required when compiling nools source
    at Object.exports.compile (/Users/adron/Codez/testing-nools/node_modules/nools/lib/index.js:61:15)
    at Object.<anonymous> (/Users/adron/Codez/testing-nools/server.js:3:18)
    at Module._compile (module.js:456:26)
    at Object.Module._extensions..js (module.js:474:10)
    at Module.load (module.js:356:32)
    at Function.Module._load (module.js:312:12)
    at Function.Module.runMain (module.js:497:10)
    at startup (node.js:119:16)
    at node.js:906:3

Ugh, I suppose some things just don’t work on the first try. Based on that error message I dove into the actual nools code, to line 61, to get some insight into what the error message actually means. The function the error is located in looks like this.

exports.compile = function (file, options, cb) {
    if (extd.isFunction(options)) {
        cb = options;
        options = {};
    } else {
        options = options || {};
        cb = null;
    }
    if (extd.isString(file)) {
        options.name = options.name || (isNoolsFile(file) ? path.basename(file, path.extname(file)) : null);
        file = parse(file);
    }
    if (!options.name) {
        throw new Error("Name required when compiling nools source");
    }
    return  compile.compile(file, options, cb, FlowContainer);
};

Line 61 is the line that reads throw new Error("Name required when compiling nools source");, which I suppose is a bit obvious. I did a little search on the README.md for the word options, which brought up the way to compile a nools rule and how to name it in the function call. I then looked back at my code and realized I’d not passed the actual file to the compile function! Doh! I made that minor change of

var flow = nools.compile();

to this

var flow = nools.compile(ruleFilePath);

Before running I noticed I’d have one more little path problem and changed this

var ruleFilePath = __dirname + "rules/helloworld.nools";

to this

var ruleFilePath = __dirname + "/rules/helloworld.nools";

Now when I ran node server.js the result displayed as expected.

22:10 $ node server.js
Rules executed @ /Users/adron/Codez/testing-nools/rules/helloworld.nools
[Function]
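One aside worth noting from that dig through the source: when a .nools file path is passed in, the flow name is derived from the file name. If you compile raw DSL source as a string instead, there is no file name to derive it from, so the name has to be supplied through the options. A minimal sketch, with a made-up inline source string just for illustration:

var nools = require('nools');

// hypothetical inline DSL source; normally this lives in a .nools file
var source = "define Message { text: '', constructor: function(m){ this.text = m; } }";

// compiling source (rather than a file path) requires an explicit name
var flow = nools.compile(source, { name: "inline-rules" });
var Message = flow.getDefined("message");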

At this point the individual parts of this code are in need of explanation. But those explanations will be coming in a subsequent blog post. So keep reading, subscribe, and that post will be coming Sunday (that’s tomorrow) at 8:00am!

Framework: Strongloop’s Loopback

Recently I did a series for New Relic on three frameworks, both for APIs and web apps. I titled it “Evaluating Node.js Frameworks: hapi.js, Restify, and Geddy” and it is available via the New Relic Blog. To check out those frameworks give that blog entry a read, then below I’ve added one more framework to the list, Strongloop’s Loopback.

Strength: Very feature-rich generation of models, data structures and related enterprise-type needs. Solid enterprise-style API framework library.
Weakness: Complexity could be cumbersome unless it is needed. Not an immediate first choice for a startup going after lean and clean.
Great for: Enterprise API Services.

When I dove into StrongLoop, I immediately got the feel that I was using a fairly polished package of software. When installing with ‘sudo npm install -g strongloop’ I could easily see the other packages being installed. But instead of the normal Node.js listing of additional dependencies, the StrongLoop install displayed a number of additional options along with a shiny ASCII logo.

StrongLoop’s ASCII art begins.

Once the library was installed, I took it for a test drive. The loopback library builds a basic API service with a simple command of ‘yo loopback’.

More ASCII Art!!!!

To build an initial model in the API service, simply run ‘yo loopback:model WidgetStuff’. From here I dove into what was being generated.

Taking a look at the JSON for WidgetStuff, the default generation looks like the code below.

{
	"name": "WidgetStuff",
	"base": "PersistedModel",
	"properties": {},
	"validations": [],
	"relations": {},
	"acls": [],
	"methods": []
}

In the JSON, the library has done a good job of leaving hooks for validations, relations, and ACLs (Access Control Lists), all of which point to an enterprise-grade product.
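For example, fleshing the model out with a couple of properties would look roughly like the JSON below. The property names and types here are purely illustrative, not anything the generator created.

{
	"name": "WidgetStuff",
	"base": "PersistedModel",
	"properties": {
		"title": { "type": "string", "required": true },
		"widgetCount": { "type": "number" }
	},
	"validations": [],
	"relations": {},
	"acls": [],
	"methods": []
}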

Moving along to other parts of the standard generated application, I took a look at the server.js file. The code in that file is simple, with comments explaining where to integrate other elements of the application. By default, other parts of the application are already in place with basic, minimal functionality. The file starts off with the code shown below.

var loopback = require('loopback');
var boot = require('loopback-boot');

var app = module.exports = loopback();

// Set up the /favicon.ico
app.use(loopback.favicon());

// request pre-processing middleware
app.use(loopback.compress());

// -- Add your pre-processing middleware here --

// boot scripts mount components like REST API
boot(app, __dirname);

// -- Mount static files here--
// All static middleware should be registered at the end, as all requests
// passing the static middleware are hitting the file system
// Example:
// app.use(loopback.static(path.resolve(__dirname, '../client')));

// Requests that get this far won't be handled
// by any middleware. Convert them into a 404 error
// that will be handled later down the chain.
app.use(loopback.urlNotFound());

// The ultimate error handler.
app.use(loopback.errorHandler());

It’s good to see a starting code base that already has compression, a basic error handler, a not-found handler, and related functionality. Toward the end of the file is the startup code.

app.start = function() {
	// start the web server
	return app.listen(function() {
		app.emit('started');
		console.log('Web server listening at: %s', app.get('url'));
	});
};

// start the server if `$ node server.js`
if (require.main === module) {
	app.start();
}

This code is pretty clean from start to finish. And with basic functionality for API services already represented, I can begin building a working application right away.
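As a quick aside, the "-- Add your pre-processing middleware here --" hook takes plain Express-style middleware, so dropping something in is as simple as the sketch below. The request logger here is purely illustrative and not part of the generated code.

// -- Add your pre-processing middleware here --
// a hypothetical request logger; nothing LoopBack-specific about it
app.use(function (req, res, next) {
	console.log('%s %s', req.method, req.url);
	next();
});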

I found StrongLoop’s Loopback product to be a great starting point to build APIs off of with minimal ties to dependencies. Plus, the user interface provides an excellent way to interact with the generated API endpoints. In short, StrongLoop offers one of the best API-building experiences available on the Node.js platform. This will give you a better idea of what I’m talking about:

The first user interface screen that displays upon navigating to the explorer endpoint is shown here.

The LoopBack API web user interface. Very Enterprisey.

A number of endpoints for the WidgetStuffs model that I generated are now available. To interact with the actual endpoint itself, simply click on one to list or expand the operations.

UI Expanded.

The real time saver? Having the ability to check out the explorer interface and immediately send data to the APIs without typing in cryptic curl commands.

My Sample Loopback Project: https://github.com/Adron/strongloop_sample

Troubleshooting Node.js Deploys on Beanstalk – The Express v4 node ./bin/www Switch Up

I’ve gotten a ton of 502 errors and related issues that crop up when deploying to Beanstalk. One issue that cropped up a few times recently, until I stumbled onto a working solution, was the 502 NGINX error. I went digging around and ended up trying to deploy a default app, fresh from ‘express newAppNameHere’, and still got the error.

I went digging through the Beanstalk configuration for the app and found this little tidbit.

Node Command

I’ve pointed out the section where I’ve added the command.

node ./bin/www

Based on the commands that are executed normally, it seems `npm start` would work to get the application started. But I have surmised that the issue is the commands are executed sequentially:

node server.js
node app.js
npm start

When these are executed in order, errors crop up and the command that should work, `npm start`, begins from a corrupted, error-laden state, leaving the application not running. However, by adding `node ./bin/www` to the text box, the others are skipped and only this command is issued, resulting in a running application.

The other option is to follow the now-standard approach of just issuing `npm start`, being sure to replace what I put in the text box above (`node ./bin/www`) with `npm start` so that Beanstalk only runs npm start instead of the ordered execution.
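Either route ends up running the same thing, since the Express 4 generator wires npm start to that same command in package.json, shown here for reference (a generated project’s file will have more in it than this):

{
  "scripts": {
    "start": "node ./bin/www"
  }
}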

Xamarin and I Are Hella Busy Hacking This Week

This week, along with the normal duties of getting everything from SSL working to code slung for account management to intellectual property (what is that exactly :o )… things are going to get hella busy. Here are a few of the public events and training sessions I’ll be attending this week, along with the normal bike n’ hacking n’ gettin’ shit done.

Shared Code Projects, PCL and Xamarin on 7/8/14 @

Intel JFCC Auditorium
2111 N.E. 25th Ave
Hillsboro, OR

James Montemagno from Xamarin is coming to learn us the deets on how to create common core code that can run on any or all common platforms. Find out the differences between shared code projects, portable class libraries, and simple file linking to share more code on iOS, Android, and Windows. This should be pretty kick ass to help kick OrchestrateExecutive off the ground. There’s a little more info here for the event: http://www.padnug.org/.

Database Stuff that aint RDBMS on 7/10/14 @

I (@adron) will be presenting on database types and what’s available out there outside of the relational and RDBMS world: how to resolve various problems with alternate data solutions for better results and better performance, and ways to leap around the hurdles that are sometimes faced with RDBMS use. More info here: http://www.meetup.com/ssdevelopers/events/176032122/

Xamarin Hands-on-Lab/Hackathon on 7/12/14 @

Montgomery Park

Kelly White (@mckhendry) has put together a hands-on lab and hackathon just a few days later, working with Xamarin to build apps. I’m going to hit up this event too (then go ride a 100+ kilometer bike ride; anybody up for the ride, ping me?) and sling some code on OrchestrateExecutive. There’s a little more info here for the event: http://www.padnug.org/.

There’s more, but these are the top few meets I’ll be attending over the next two weeks. Happy hacking!

Installing OpenSSL, Other Notes. Because: Security

I am in the process of setting up an SSL (Secure Sockets Layer) cert to enable HTTPS on some sites and APIs I’m building. In that effort I needed to set up OpenSSL to create a CSR (Certificate Signing Request) and get the process started. Here are the steps I went through to accomplish this. First, download OpenSSL.

Next make sure you have the prerequisites installed:

  • make
  • Perl 5
  • ANSI C Compiler
  • Dev environment in the form of dev libraries and C header files
  • Supported Unix operating system (i.e. not Windows; you’ll have to google those directions separately, but for these steps get a *nix OS)

Get the zipped contents of the OpenSSL download into a working directory to build it. Then follow the standard config, make, and make install steps below.

./config
make
make test
make install

Once you have that installed there are a number of actions you can perform. Here’s a few.

  • Create a CSR (Certificate Signing Request):
    openssl req -out CSR.csr -new -newkey rsa:2048 -nodes -keyout privateKey.key

    This creates a CSR.csr and privateKey.key file for use for SSL.

  • Create a self-signed certificate:
    openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout privateKey.key -out certificate.crt

    Self-signed certificates are great for local servers used internally, dev servers that need SSL, and similar situations. For most dev needs there is no reason to purchase a certificate; just generate one yourself and use it for development purposes. (A sketch of wiring one into a Node.js HTTPS server follows this list.)

  • Check a CSR:
    openssl req -text -noout -verify -in CSR.csr
  • Check a private key:
    openssl rsa -in privateKey.key -check
  • Check a cert:
    openssl x509 -in certificate.crt -text -noout
  • Generate a cert signing request from existing cert:
    openssl x509 -x509toreq -in certificate.crt -out CSR.csr -signkey privateKey.key
  • Remove a passphrase from a private key:
    openssl rsa -in privateKey.pem -out newPrivateKey.pem
  • Generate a CSR for an existing private key:
    openssl req -out CSR.csr -key privateKey.key -new
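Since the point of all this is enabling HTTPS on Node.js sites and APIs, here is a minimal sketch of wiring a key and certificate into Node’s built-in https module. The file names assume the output of the CSR and self-signed examples above.

var https = require('https');
var fs = require('fs');

// key and certificate generated with the openssl commands above
// (a self-signed pair is fine for development)
var options = {
    key: fs.readFileSync('privateKey.key'),
    cert: fs.readFileSync('certificate.crt')
};

https.createServer(options, function (req, res) {
    res.writeHead(200);
    res.end('hello over TLS\n');
}).listen(8443);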

More to come in the near future. I’ve almost got this SSL mess straightened out and am putting together a more complete how-to.


Getting Started with Swift, For NON-Apple Devs

This past weekend I attempted to get started with Swift coding. Since I have not been an Apple developer for a while, it wasn’t immediately obvious how to get started. But once I fumbled around for a few minutes I realized I needed a developer account to get the latest Xcode. Jeez, it really shows how much Apple loves to lock you in hard core to their development ecosystem. An unfortunate trait of a company that is actually extremely closed in much of its behavior, while taking advantage of so much of the open source community. But I digress; this isn’t a rant about the unethical behavior of Apple. I’ll reserve that for the novel’s worth of material it deserves.

Once I signed up for the developer program, which costs $99, I immediately made my first huge mistake. This damnable mistake blew the entire weekend of hacking. Under “Company” I added my simple DBA (Doing Business As) name. I already had an account, and this change, turning an existing personal base-level account into a developer account, sprung a red flag. I checked back frequently over the weekend, but it wasn’t until Monday that somebody reviewed the application, realized the Company name I added was merely a DBA, and ok’d my account. So far, 38 hours down the drain for getting started hacking on Swift! Dammit.

However, this morning I was happy to find everything was ok’d, and thus, the remaining bit of this blog entry is a bit more example and a little less story of my day.

Developer @ Apple

Getting XCode 6 beta

Since I wanted to do Swift hacking, the first step was to download Xcode 6 beta. That’s available via download on the iOS Developer page (and I suppose the Mac Developer page). Scroll down on that page until you find the Xcode download button.

The Warnings and the Download XCode 6 beta page.

Also note, if you’re looking to do Swift hacking like I’m doing here, I’d actually advise against getting the iOS 8 beta or OS-X Yosemite Developer Previews right now. It’s best to keep as stable a machine as possible while toying around with a new language. At least, that’s what the conversations have been so far…

OS-X Yosemite & iOS 8

Once I got Xcode 6 beta installed I dove right into creating a Swift project. I created a simple, empty new project just to check out what Xcode 6 provides out of the box for a Swift project.

Selecting an empty Xcode 6 beta project to use with Swift.

The next dialog is where the Swift magic is selected.

Selecting Swift, entering a project name and other information dialog.

After that I just clicked through on defaults until I got into the Xcode IDE with the project open.

Selecting the appropriate simulator.

Next I executed the project. Since I’d had my phone attached it wanted to run there, but I have iOS 7.1 on it, which won’t execute Swift code. I then had to select the appropriate simulator to run the application. Once that ran, since I’d not done so on this particular computer, I needed to enable developer mode.

Enabling developer mode.

I did so and the empty application launched.

An empty iOS 8 iPad Retina Application.

So that’s the basic getting started; no code actually slung yet. But rest assured I’ll have another post soon detailing some first code snippets. I also hope to get some comparisons written up between Xcode with Swift and Xamarin Studio with C#. It’s cool that Apple finally has a modern, feature-rich language, so it’ll be interesting to see how each stacks up from a language and IDE perspective.
