More on Using the Nools DSL and Engine…

Nools Objects

In the helloworld.nools file there is a single object defined, called Message. This object has two elements defined: the text property and the constructor. The specific object definition is shown below.

[code language="javascript"]
define Message {
    text: '',
    constructor: function (message) {
        this.text = message;
    }
}
[/code]

This object can now be referred to by name throughout the nools file. The other way to reference the object is to call the getDefined function on the flow object being used in the code that processes the business rules. In the nools language, any JavaScript can be put inside the define block. Defining the constructor as shown overrides the default constructor behavior.
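To make that concrete, here's a rough sketch of the Node.js side of things; the file path and variable names are just illustrative, and the lowercase "message" lookup follows the way I've seen getDefined used in the nools examples.

[code language="javascript"]
var nools = require("nools");

// Compile the rules file into a flow.
var flow = nools.compile(__dirname + "/helloworld.nools");

// Grab the Message type that was defined in the .nools file.
var Message = flow.getDefined("message");

// A session is the working memory that facts get asserted into.
var session = flow.getSession();
session.assert(new Message("hello world"));

// Run the rules against the asserted facts.
session.match().then(function () {
    console.log("all rules have fired");
});
[/code]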

Learning “nools” Rules Engine

Recently I sat down to work up a solution around a rules engine. There were a few things I noticed right off.

  1. When there is a request to implement or build a rules engine, it is very often (I'm guessing a solid 40-60% of the time) reasoned that there is a need based solely on a lack of understanding of the problem space that actually needs a solution. Put simply, 40-60% of the time "let's implement a rules engine to solve these unknown problems" really translates to "we really don't know much about this domain, so let's implement something arbitrary as a stop gap".
  2. Implementing a business rules engine can quickly become a "support the user" scenario for the developers who built it, a situation in which the developers actually have to help the people writing the rules being processed. This is not an ideal situation at all; developers supporting users who write rules is a quick way to ensure burnout, misappropriation of skills, and turnover.
  3. Many developers will, without hesitation, spout out "are you sure you want to implement a rules engine?" and then follow that up with "let's discuss your actual problem", which leads right back to "are you sure you want to implement a rules engine?" Other developers, upon hearing that one will implement a rules engine, immediately respond with, "shit, I'm out."

At this point I realized I had X, Y, and Z reasons to use one and would just have to persevere through all of the threats that come with implementing a rules engine. Sometimes one just has to step into the realm of scary and get it done.

So here's what I dug up. I'm really not sure about the name of this project, as it appears to be some sort of odd usage, so whatever, but it is indeed called nools (github repo). Nools is a business rules engine based on the Rete algorithm, something that is helpful to read up on before implementing. The primary deployment target for nools is a Node.js server, but I've read that it is prospectively deployable in most browsers too.
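To give a feel for the DSL itself before diving deeper, the hello world example that gets passed around for nools pairs a Message fact (like the one in the excerpt above) with a rule that looks roughly like this; the when block declares the pattern of facts to match and the then block is plain JavaScript that acts on the matched fact.

[code language="javascript"]
// A sketch of a nools rule; when declares the match, then reacts to it.
rule Hello {
    when {
        m : Message m.text =~ /^hello(\s*world)?$/;
    }
    then {
        modify(m, function () {
            this.text += " goodbye";
        });
    }
}
[/code]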

Framework: Strongloop’s Loopback

Recently I did a series for New Relic on three frameworks for both APIs and web apps. I titled it "Evaluating Node.js Frameworks: hapi.js, Restify, and Geddy" and it is available via the New Relic blog. To check out those frameworks, give that entry a read; below I've added one more framework to the list, Strongloop's Loopback.

Strength: Very feature-rich generation of models, data structures and related enterprise-type needs. Solid enterprise-style API framework library.
Weakness: Complexity could be cumbersome unless it is needed. Not an immediate first choice for a startup going after lean and clean.
Great for: Enterprise API Services.

When I dove into StrongLoop, I immediately got the feeling that I was using a fairly polished package of software. When installing with 'sudo npm install -g strongloop' I could easily see the other packages being installed, but instead of the normal Node.js listing of additional dependencies, the StrongLoop install displayed a number of additional options along with a shiny ASCII logo.
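Beyond the tooling, the framework itself is very model-centric. Here's a minimal sketch of wiring a model up by hand rather than through the generators; the 'note' model, the in-memory connector, and the port are all illustrative choices of mine, based on my reading of the LoopBack 2.x API rather than anything generated by the scaffolding.

[sourcecode language="javascript"]
var loopback = require('loopback');
var app = loopback();

// An in-memory data source and a simple model, purely for illustration.
var ds = loopback.createDataSource({ connector: 'memory' });
var Note = ds.createModel('note', { title: String, content: String });

// Attach the model to the app and expose its REST endpoints.
app.model(Note);
app.use('/api', loopback.rest());

app.listen(3000, function () {
  console.log('LoopBack app listening on port 3000');
});
[/sourcecode]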

Troubleshooting Node.js Deploys on Beanstalk – The Express v4 node ./bin/www Switch Up

I've gotten a ton of 502 errors and related issues that crop up when deploying to Beanstalk. One that came up a few times recently, until I stumbled into a working solution, was the 502 NGINX error. I went digging around and ended up just deploying a default app, fresh from an 'express newAppNameHere' run, and still got the error.

I went digging through the Beanstalk configuration for the app and found this little tidbit.

Node Command

I’ve pointed out the section where I’ve added the command.
[sourcecode language="bash"]
node ./bin/www
[/sourcecode]

Based on the commands that are executed normally, it seems `npm start` would work to get the application started. But I have surmised that the issue is the commands are attempted sequentially:

[sourcecode language="bash"]
node server.js
node app.js
npm start
[/sourcecode]

When these are executed in order, errors crop up, and the command that should work, `npm start`, starts from a corrupted, error-laden state, leaving the application not running. However, by adding `node ./bin/www` to the text box, all the others are skipped, only this command is issued, and the application ends up running.

The other option is to follow the now-standard approach of just issuing `npm start`, being sure to replace what I put in the text box above (`node ./bin/www`) with `npm start` so that Beanstalk only runs `npm start` instead of the ordered execution.
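For a bit of context on why `node ./bin/www` and `npm start` end up being the same thing, the Express 4 generator writes a start script into package.json that points at that file. Trimmed down to the relevant piece, it looks like this (from memory, so double check your own generated app):

[sourcecode language="javascript"]
{
  "scripts": {
    "start": "node ./bin/www"
  }
}
[/sourcecode]

So whether Beanstalk ends up running `npm start` or `node ./bin/www` directly, the same bin/www bootstrap is what actually starts the app.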

Coding on Orchestrate.io & Orchestrate.js & Orchestrate.NET

First context, then I’ll dive in.

Orchestrate

http://orchestrate.io/

Orchestrate is a service that provides a simple API to access a multitude of database types, all in one location. Key/value, graph, and events, the database types I've been using, are but a few of those they've already made available, and there are many more on the way. Having these databases available via an API, instead of needing to go through the arduous process of setting up and maintaining each database for each type of data structure, is a massive time saver! On top of a clean API and a solid database platform and infrastructure, Orchestrate has a number of client drivers that provide easy-to-use wrappers, available for a number of languages. Below I've written about two of these that I've been involved with in some way over the last couple of months.

Orchestrate.NET

https://github.com/RobertSmith/Orchestrate.NET

I'm currently using this library for a demonstration application built against the Deconstructed.io services (follow us on Twitter ya! @BeDeconstructed), a startup I'm co-founding. I'm not sure exactly what the app will be, but being .NET it'll be something enterprisey. Because: .NET is Enterprise! For more on this project check out the Deconstructed.io blog.

Some of the latest updates with this library.

But there's still a bit of work to do on the library, so consider this a call out: anybody who has a cycle they'd like to throw in on the project, let us know. We'd happily take a few more pull requests! The two main things we'd like to have done real soon are…

Orchestrate.js

https://github.com/orchestrate-io/orchestrate.js

With the latest fixes, additions, and updates, the orchestrate.js client driver is getting more feature-rich by the day. In addition, @housejester has created an orchestrate-brain project for Hubot that uses Orchestrate.js. If you're not familiar with Hubot, be sure to check out the company robot that can dramatically improve and reduce employee efficiency! Keep an eye on that project for more great things, or create a Hubot to keep a robotic eye on the project.
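If you haven't poked at the driver yet, basic usage looks roughly like the sketch below; the collection and key names are made up for illustration, and the API key would come from your Orchestrate dashboard.

[sourcecode language="javascript"]
// npm install orchestrate
var db = require('orchestrate')(process.env.ORCHESTRATE_API_KEY);

// Put a key/value item into a collection, then read it back out.
db.put('users', 'adron', { name: 'Adron', role: 'coder' })
  .then(function () {
    return db.get('users', 'adron');
  })
  .then(function (result) {
    // The driver hands back the HTTP response; the JSON is on body.
    console.log(result.body);
  })
  .fail(function (err) {
    console.error(err);
  });
[/sourcecode]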

Here are a few key things to note that have been added to help in day-to-day coding on the project.

  • The .travis.yml file has been added for the Travis CI build, which runs against Node.js v0.10 and v0.8.
  • Testing is done with mocha, expect.js, and nock. To get the tests up and running, clone the repo and then build with the makefile. The tests are written in Mocha's TDD format (see the short example after this list).
  • Promises are provided via the kew library.
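For anyone more used to Mocha's default BDD style, the TDD interface just swaps describe/it for suite/test. A bare-bones example of the shape (not an actual test from the repo) looks like this:

[sourcecode language="javascript"]
var expect = require('expect.js');

// Mocha's TDD interface: suite/test/setup instead of describe/it/beforeEach.
suite('an example suite', function () {
  var value;

  setup(function () {
    value = 21 * 2;
  });

  test('computes the expected value', function () {
    expect(value).to.be(42);
  });
});
[/sourcecode]

Run it with `mocha --ui tdd` from the command line, or through the WebStorm configuration described below.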

If you're opening up the project in WebStorm, it's great to set up the mocha tests with the integrated Mocha test runner as shown below. After you've cloned the project and run 'npm install', follow these steps to add the Mocha testing to the project. We've already set up exclusions in the .gitignore for the .idea directory and files that WebStorm uses.

First add a configuration by clicking on Edit Configurations.

Edit Configurations

Next click on the + to add a new configuration to run. Select the Mocha option from the list of configurations.

Mocha & Other Configurations in WebStorm

On the next screen, set a name for the configuration, point the test directory at the project's test directory, and finally set the User interface option for Mocha to TDD instead of the default BDD.

Edit Configuration Dialog

Last but not least, run the tests and you'll see the list of green lights light up the display with positive results.

Test Build