Over the last few months, while working on the prototype for Deconstructed, I’ve been using the Orchestrate service offering exclusively. With key value and graph storage easily accessible via API, it was a no-brainer to get started building ASAP. Today, that service goes full beta! You can get the full lowdown at the Orchestrate site.
The basic premise is that Orchestrate provides full-text search, time-ordered events, graph, key value storage and a lot more. All of these capabilities are offered via an API, which makes for a product that’s extremely easy to get started with. Think about what you’d need to do to get full-text search working against a key value store. Really think about it. Yeah? That’s a lot of steps. With Orchestrate you just sign up and start using it. Think about setting up a graph store and managing it on production systems. Yeah? Lots of work once it gets used. Again, just sign up; it’s all there, from the graph to the key value store to the event series and more. All the NoSQL juice you need in a single service, so you’re not fighting and maintaining multiple databases, nodes or whatever else you’re working with.
Sign up. Use.
I will copy one thing from the press release….
Ad hoc search queries with Lucene
Event and time-ordered storage for activity feeds, sensor data
Create and query graph relationships
Easy to understand pricing
Data export at will – no lock-in
Standards compliant data security protocols
Daily data backups
Bulk data loading
Daily and hourly usage monitoring
A single, simple interface – JSON data in/out
Designed to complement existing databases and MBaaS services
Client libraries for Java, Node.js, and Go. More on the way!
Using Orchestrate
There are quotes in the press release, but I’ve got a few of my own. I’m working to build out a prototype service that Aaron Gray and I will be releasing soon. Our startup is called Deconstructed, but more on that later. Without Orchestrate my dev cycle would be longer each day, as I battled with maintaining the data sources that I need. Without it I would have spent another 2-3 weeks setting up and staging NoSQL database technology. All things I didn’t really need to do. I needed to focus on the service, the value that we’ll soon bring to our customers.
It really boils down to this, and don’t get me wrong, I’m a total data nerd. But when it comes to building a product or service, the last thing I want to do is fight with managing the data any more than I have to. That notion inspired me to write “Sorry Database Nerds, Nobody Actually Gives a Shit”, which still holds true. I can’t think of a single business that wants to sit around and grok how an index works in a key value store or what the shape of their text-search queries is going to be.
Pricing
Pricing is sweet: for many who just want to try it out, it’s free. Prices go up from there, but if you land in the paid tiers you’re doing some real business and ought to be rolling in a few bucks, eh!
The interesting thing to me about pricing is that they’ve structured it around MOp, which stands for MegaOps. More specifically that’s one million API calls or one million operations.
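To make the unit concrete, here’s a trivial sketch of the arithmetic. The helper name and the call counts are my own illustration, not from Orchestrate’s docs:

```javascript
// 1 MOp = 1,000,000 operations (API calls), per Orchestrate's pricing unit.
// toMOps is a hypothetical helper, purely to illustrate the conversion.
function toMOps(apiCalls) {
  return apiCalls / 1000000;
}

// An app making 2.5 million API calls in a month would be metered at 2.5 MOps.
console.log(toMOps(2500000)); // 2.5
```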
Summary
If you write code, even a little, or if you manage data, do yourself a service and check out what Orchestrate has built. It’s a solid investment of time. I’ll have a lot more on Orchestrate, how we’re using the service for Deconstructed, and using the service with JavaScript in the coming months. Keep your eyes peeled, and I might even have some Dart and C# magic thrown in there to boot! Check ’em out. Until later, happy hacking.
NOTE: If you just want to check out the code bits, scroll down to the sub-title #symphonize #hacking. It’s also important to note that I’m putting the library through a fairly big refactor at the moment so that everything aligns with the documentation I’ve recently created. Many things may not be implemented yet, but we’re moving toward v0.1.0, which will be a functional implementation of the library, available via npm, based entirely on the documentation and specs that I outline after the history.
There are two main reasons why I chose Orchestrate.io and a data generation library as the two things I wanted to combine. The first is that I knew the Orchestrate.io team and really dug what they were building. I wanted to work with it and check out how well it would work for my use cases in the future. The ability to sit down and discuss with them what they were building was great (I interviewed Matt Heitzenroder @roder, which you can watch: Orchestrate.io, Stop Dealing With the Database Infrastructure!). The second reason is that the startup I’m co-founding with Aaron Gray (@agray) needed key value and graph data storage of some type, somewhere. Orchestrate.io looked like a perfect fit, and after some research and giving it a go, it fit very well into what we are building.
December then rolled into the standard holiday doldrums and slowdowns. So fast forward to January: after a few rounds of beer and good tidings, I got the 3rd in the series published, titled Getting Serious With Symphony.js – JavaScript TDD/BDD Coding Practices (3/3). The post doesn’t speak much to symphonize.js usage, but instead to my efforts to use TDD and BDD practices while writing the library.
Slowly I made progress in building the library and finally it’s in a mostly releasable state now. I use this library daily in working with the code base for Deconstructed and imagine I’ll use it ongoing for many other projects. I hope others might be able to find uses for it too and maybe even add capabilities or ideas. Just ping me via Twitter @adron or Github @adron, add an issue on Github and I’ll be happy to accept pull requests for new features, code refactoring, add you to the project or whatever else you’re interested in.
#symphonize #hacking
Now for the nitty gritty. If you’re up for using or contributing to the project check out the symphonize.js github pages site first. It’s got all the information to help get you kick started. However, you can keep reading as I’ve included much of the information there along with the examples from the README.md below.
NOTE: As I mentioned at the top of this blog entry, the functional implementation of the code isn’t available via npm just yet. Myself and some others are ripping through a good refactor to align the implementation of the library with the rewritten and newly available documentation, included below and at the github pages.
[sourcecode language="bash"]
git clone git@github.com:YourUserName/symphonize.git
cd symphonize
npm install
[/sourcecode]
Using The Library
The intended usage is to instantiate the JavaScript object and then call generate(). That’s it, a super simple process. The code would look like this:
[sourcecode language="javascript"]var Symphonize = require('../bin/symphonize');
var symphonize = new Symphonize();
[/sourcecode]
A basic constructor invocation like this uses the generate.json file as the source of the data generation configuration. To set the JSON configuration programmatically instead, pass it in via the constructor.
[sourcecode language="javascript"]
var configJson = {"schema":"keyvalue"};
var Symphonize = require('../bin/symphonize');
var symphonize = new Symphonize(configJson);
[/sourcecode]
Once the Symphonize data generator has been created, call the generate() method to produce the data.
That’s basically it. But you say, it’s supposed to do X, Y or Z. Well that’s where the json configuration data comes into play. In the configuration data you can set the data fields and what they’ll generate, what type of data will be generated, the specific schema, how many records to create and more.
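To show the full call pattern end to end, here’s a stand-in sketch. The stub constructor and placeholder records below are mine, purely to illustrate the shape of the instantiate-then-generate flow; the real object comes from require('../bin/symphonize'):

```javascript
// Stand-in stub of the Symphonize constructor, only to illustrate the
// call pattern; the real library is loaded via require('../bin/symphonize').
function Symphonize(config) {
  this.config = config || { schema: 'keyvalue', count: 1 };
}

// The real generate() builds records from the "fields" configuration;
// this stub returns placeholder records just to show the return shape.
Symphonize.prototype.generate = function () {
  var records = [];
  for (var i = 0; i < this.config.count; i++) {
    records.push({ fieldGuid: 'placeholder-guid-' + i });
  }
  return records;
};

var symphonize = new Symphonize({ schema: 'keyvalue', count: 3 });
console.log(symphonize.generate().length); // 3
```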
generate.json
The library comes with the generate.json file already setup with a working example. Currently the generation file looks like this:
[sourcecode language="javascript"]
{
"schema": "keyvalue", /* keyvalue, graph, event, geo */
"count": 20, /* X values to generate. */
"write_source": "console", /* console, orchestrateio and whatever other data sources that might come up. */
"fields": {
/* generates a random name. */
"fieldName": "name",
/* generates a random dice roll of a d20. */
"fieldTwo": "d20",
/* A single lorem ipsum random statement is generated. */
"fieldSentence": "sentence",
/* A random guid is generated. */
"fieldGuid": "guid" }
}
[/sourcecode]
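Since each option has a documented default, resolving a partial configuration is straightforward. Here’s a hypothetical sketch of that defaulting logic (applyDefaults is my own name, not part of the library), using the documented defaults of keyvalue, a count of 1, console output and a single guid field:

```javascript
// Hypothetical sketch: fill in the documented defaults for any option
// missing from a partial generate.json-style configuration object.
function applyDefaults(config) {
  config = config || {};
  return {
    schema: config.schema || 'keyvalue',            // default schema
    count: config.count || 1,                       // default record count
    write_source: config.write_source || 'console', // default output target
    fields: config.fields || { someField: 'guid' }  // one guid field by default
  };
}

var resolved = applyDefaults({ count: 20 });
console.log(resolved.schema);       // 'keyvalue'
console.log(resolved.write_source); // 'console'
```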
Configuration File Definitions
Each of the configuration options that are available have a default in the configuration file. The default is listed in italics with each definition of the configuration option listed below.
“schema” : This is used to select what type of data structure is going to be generated. The default is keyvalue for this option.
“count” : This provides the total records that are to be generated by the library. The default is 1 for this option.
“write_source” : This provides the location to output the generated data to. The default is console for this option.
“fields” : This is a JSON object within the JSON configuration file that provides configuration options for the fields: the number of fields and the respective data to generate for each. The default is one field, with a default data type of guid. Each entry in this JSON option is a self-contained JSON name and value pair. It looks like this (also shown above in part):[sourcecode language="javascript"]{
"someBoolean": "boolean",
"someChar": "character",
"aFloat": "float",
"GetAnInt": "integer",
"fieldTwo": "d20",
"diceRollD10": "d10",
"_string": {
"fieldName": "NameOfFieldForString",
"length": 5,
"pool": "abcdefgh"
},
"_sentence": {
"fieldName": "NameOfFieldOfSentences",
"sentence": "5"
},
"fieldGuid": "guid"
}
[/sourcecode]
Fields Configuration: For each of the fields you can either set the field to a particular data type or leave it empty. If the field name and value pair is left empty, then the field defaults to guid. The simple field and data generation types are listed below; more complex nested generation types are covered afterward under Complex Field Configuration.
“boolean“: This generates a boolean value of true or false.
“character“: This generates a single character, such as ‘1’, ‘g’ or ‘N’.
“float“: This generates a float value, similar to something like -211920142886.5024.
“integer“: This generates an integer value, similar to something like 1, 14 or 24032.
“d4“: This generates a random integer value based on a dice roll of one four sided dice. The integer range being 1-4.
“d6“: This generates a random integer value based on a dice roll of one six sided dice. The integer range being 1-6.
“d8“: This generates a random integer value based on a dice roll of one eight sided dice. The integer range being 1-8.
“d10“: This generates a random integer value based on a dice roll of one ten sided dice. The integer range being 1-10.
“d12“: This generates a random integer value based on a dice roll of one twelve sided dice. The integer range being 1-12.
“d20“: This generates a random integer value based on a dice roll of one twenty sided dice. The integer range being 1-20.
“d30“: This generates a random integer value based on a dice roll of one thirty sided dice. The integer range being 1-30.
“d100“: This generates a random integer value based on a dice roll of one hundred sided dice. The integer range being 1-100.
“guid“: This generates a random globally unique identifier. This value would be similar to ‘F0D8368D-85E2-54FB-73C4-2D60374295E3’, ‘e0aa6c0d-0af3-485d-b31a-21db00922517’ or ‘1627f683-efeb-4db8-8174-a5f2e3378c87’.
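The dN types all reduce to the same operation: a uniform random integer from 1 up to the number of sides, inclusive. A minimal sketch of that behavior (rollDie is my own helper name, not part of the library):

```javascript
// A dN roll is a uniform random integer in the range 1..N inclusive.
function rollDie(sides) {
  return Math.floor(Math.random() * sides) + 1;
}

// e.g. a "d20" field value:
var roll = rollDie(20);
console.log(roll >= 1 && roll <= 20); // true
```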
Complex Field Configuration: Some fields require more complex configuration for data generation, simply because the data needs some baseline of what the range or length of the values needs to be. The following list details each of these. It is also important to note that these complex field configurations do not have defaults; each value must be set in the JSON configuration or an error will be thrown detailing that a complex field type wasn’t designated. Each of these complex field types is a JSON name and value pair. The name is the data type to generate with a preceding underscore ‘_’, and the value holds the configuration parameters for that particular data type.
“_string“: This generates string data based on the length and pool parameters. Required fields for this include fieldName, length and pool. The JSON would look like this:[sourcecode language="javascript"]"_string": {
"fieldName": "NameOfFieldForString",
"length": 5,
"pool": "abcdefgh"
}
[/sourcecode]
Samples of the result would look like this for the field; ‘abdef’, ‘hgcde’ or ‘ahdfg’.
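Under the hood this amounts to drawing length characters at random from pool. A quick sketch of that assumed behavior (generatePoolString is my own helper name):

```javascript
// Assumed _string behavior: pick `length` characters uniformly from `pool`.
function generatePoolString(length, pool) {
  var out = '';
  for (var i = 0; i < length; i++) {
    out += pool.charAt(Math.floor(Math.random() * pool.length));
  }
  return out;
}

console.log(generatePoolString(5, 'abcdefgh')); // e.g. 'abdef'
```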
“_hash“: This generates a hash based on the length and casing parameters. Required fields for this include fieldName, length and casing. The JSON would look like this:[sourcecode language="javascript"]"_hash": {
"fieldName": "HashFieldName",
"length": 25,
"casing": "upper"
}
[/sourcecode]
Samples of the result would look like this for the field: ‘e5162f27da96ed8e1ae51def1ba643b91d2581d8’ or ‘3F2EB3FB85D88984C1EC4F46A3DBE740B5E0E56E’.
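A sketch of the assumed behavior: a random hex string of the given length, cased per the casing parameter (generateHash is my own helper name, not the library’s implementation):

```javascript
// Assumed _hash behavior: a random hex string of `length` characters,
// upper-cased when casing is 'upper', lower-cased otherwise.
function generateHash(length, casing) {
  var hex = '0123456789abcdef';
  var out = '';
  for (var i = 0; i < length; i++) {
    out += hex.charAt(Math.floor(Math.random() * hex.length));
  }
  return casing === 'upper' ? out.toUpperCase() : out;
}

console.log(generateHash(25, 'upper')); // e.g. '3F2EB3FB85D88984C1EC4F46A'
```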
“_name”: This generates a name based on the middle, middle_initial and prefix parameters. Required fields for this include fieldName, middle, middle_initial and prefix. The JSON would look like this:[sourcecode language="javascript"]"_name": {
"fieldName": "nameFieldName",
"middle": true,
"middle_initial": true,
"prefix": true
}
[/sourcecode]
Samples of the result would look like this for the field: ‘Dafi Vatemi’, ‘Nelgatwu Powuku Heup’, ‘Ezme I Iza’, ‘Doctor Suosat Am’, ‘Mrs. Suosat Am’ or ‘Mr. Suosat Am’.
So that covers the kick start of how you’ll eventually be able to set up, use and generate data. Until then, jump into the project and give us a hand.