Category Archives: Databases

Let’s Talk Top 7 Options for Database Gumbo

When one starts to dig into databases, things get complex really fast. There’s not only a plethora of database companies and projects, but also database types, storage engines, and other options and functionality to choose from. One place to get a start is just to take a look at the crazy long list of databases on db-engines. In this post I’m going to take a look at a few of the top database engines to create a starting point – which I’ll reference – for future video streaming coding sessions (follow me @ twitch.tv/adronhall).

My Options for Database Gumbo

  1. Apache Cassandra / DataStax Enterprise
  2. PostgreSQL
  3. SQL Server
  4. Elasticsearch
  5. Redis
  6. SQLite
  7. DynamoDB

The Reasons

Ok, so the list is as such, and as stated it’s my list. There are a lot of databases, and of course some are still more widely used, such as Oracle. However, here’s some of the logic and reasoning behind my choices above.

Oracle

First off I feel like I need to broach the Oracle topic, mostly because of its general use in industry. I’m not doing anything with Oracle now, nor have I for years, for a long, long, LONG list of reasons. Using their software today tends to mean getting buried in bureaucracy and oddly broken, unnecessary complexity anyway. They use predatory market tactics, take a dishonorable approach to sales and services, threaten and sue people for publishing benchmarks, and engage in a host of other practices. Of face to face experiences with Oracle, Lawrence from Office Space would say, “naw man, I think you’d get your ass kicked for that!” and I agree. Oracle’s practices are too often disgusting. But even from a purely technical point of view, the Oracle Database and its ecosystem really aren’t better than other options out there. It is indeed a more intelligent, strategic, and tactical option to use any number of alternatives.

Apache Cassandra / DataStax Enterprise

This combo has multiple reasons for being on the list. First and foremost, much of my work today uses DataStax Enterprise (DSE) and Apache Cassandra, since I work for DataStax. But it’s important to know I didn’t just go to DataStax because I needed a job; I chose them (and obviously they chose me by hiring me) because of the team and technology. Yes, they pay me, but it’s very much a two way street: I advocate Cassandra and DSE because I personally know the tech is top tier and solid.

On the point that Apache Cassandra is top tier and solid: it is simply the last truly masterless distributed database on the market that provides a linear path of scalability, that you can use and buy support for, and that is actively maintained not just by DataStax but by members of the community. One could make an argument for MongoDB, but I’ll maybe elaborate on that in the future.

In addition to being a solid distributed database, there are capabilities inherent in Apache Cassandra, because of its data types and the respective CQL (Cassandra Query Language), that make it a great database to use too. DataStax Enterprise extends that to provide spatial queries (re: GIS/geo data), graph data, an analytics engine, and more, built on components like Solr and related technology. Overall, a great database with great prospective combinations built around it.
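
If you haven’t poked at CQL before, here’s a minimal sketch of those data types in action using the DataStax Node.js driver (the cassandra-driver package). The contact point, datacenter name, keyspace, and table are all assumptions for illustration, not anything from this post.

// A minimal sketch of CQL's data types via the Node.js driver.
// Assumes a single local node and an existing 'examples' keyspace.
const cassandra = require('cassandra-driver');

const client = new cassandra.Client({
  contactPoints: ['127.0.0.1'],
  localDataCenter: 'datacenter1',
  keyspace: 'examples'
});

async function run() {
  // Collections (set, map) and uuid are examples of the data types
  // that make modeling with CQL pleasant.
  await client.execute(`
    CREATE TABLE IF NOT EXISTS user_profiles (
      user_id uuid PRIMARY KEY,
      name text,
      emails set<text>,
      preferences map<text, text>
    )`);

  await client.execute(
    'INSERT INTO user_profiles (user_id, name, emails, preferences) VALUES (?, ?, ?, ?)',
    [cassandra.types.Uuid.random(), 'Adron', ['a@example.com'], { theme: 'dark' }],
    { prepare: true }
  );

  const result = await client.execute('SELECT * FROM user_profiles');
  console.log(result.rows);
  await client.shutdown();
}

run().catch(console.error);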

PostgreSQL

Postgres is a relational database that has been around for a long time. It’s got some really awesome features, like native JSON support, which I’m a big fan of. But I digress; there’s tons of other material that thoroughly lays out why to use Postgres, and I very much agree with it.

The extensive and rich data types alone are enough to put Postgres on this list, but add in the reasons around multi-tenancy, scalability, and related characteristics that are mostly unique to Postgres, and it holds a solid position.
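
As a quick taste of that native JSON support mentioned above, here’s a rough sketch using the pg (node-postgres) library and a jsonb column. The connection string, table, and field names are made up for illustration.

// A rough sketch of Postgres' native JSON support from Node.js, using 'pg'.
// Connection string, table, and column names are placeholders.
const { Pool } = require('pg');

const pool = new Pool({ connectionString: 'postgres://localhost:5432/example_db' });

async function run() {
  // jsonb gives you indexable, queryable JSON inside a relational table.
  await pool.query(`
    CREATE TABLE IF NOT EXISTS events (
      id serial PRIMARY KEY,
      payload jsonb NOT NULL
    )`);

  await pool.query('INSERT INTO events (payload) VALUES ($1)', [
    { type: 'signup', user: 'adron', tags: ['beta', 'newsletter'] }
  ]);

  // ->> pulls a JSON field out as text; @> tests containment.
  const { rows } = await pool.query(
    "SELECT id, payload->>'user' AS username FROM events WHERE payload @> '{\"type\": \"signup\"}'"
  );
  console.log(rows);
  await pool.end();
}

run().catch(console.error);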

SQL Server

This one is on my list for a few reasons that have nothing to do with features or capabilities. This is the first database I was responsible for in its entirety: administration, queries, query tuning, setup, and development against it from the application tier. Of all my experience, this is the database I’ve spent the most time with, with Apache Cassandra a close second, then Postgres, and finally Riak.

Kind of a pattern there eh? Relational, distributed, relational, distributed!

The other thing about SQL Server, however, is that the integrations, tooling, and related development ecosystem around it are above and beyond most options out there. Maybe, with a big maybe, Oracle’s ecosystem is comparable, but the pricing is insanely different, in that SQL Server can basically carry the whole workload of reporting, ETL, and the other feature capabilities that the Oracle ecosystem has traditionally handled. Combine SQL Server with SSIS (SQL Server Integration Services), SSRS (SQL Server Reporting Services), and online systems like Azure’s SQL Database, and the support, tooling, and ecosystem is just massive. Even though I’ve had my ins and outs with Microsoft over the years, I’ve always found myself enjoying working on SQL Server and its respective tooling options and such. It’s a feature rich, complete, solid, and generally well performing relational database, full stop.

Elasticsearch

Ok, this is kind of a distributed database of sorts, but one focused more exclusively (though not totally, since it has expanded its role) on being a search engine. Overall I’ve had good experiences with Elasticsearch and its respective ELK (or Elastic) ecosystem of tooling and such, with some frustrating flakiness here and there over the years. Most of my experience with Elasticsearch has come from an operational point of view, though I’ve done a fair bit of work over the years supporting teams doing actual software development against it. I probably won’t write a huge amount about Elasticsearch in the coming months, but I’ll definitely bring it up at certain times.

Redis / SQLite / DynamoDB

These I’ll be covering in the coming months. For Redis and DynamoDB I’ve wanted to dig in for some comparison analysis from the perspective of implementing data tiers against these databases, working out where they’re a good option and where they’re just an outright bad option.

For SQLite I’ve used it on and off for many years, but have wanted to sit down and just learn it and try out some of its features a bit more.

Cassandra Datacenter & Racks

The last post in this series was Distributed Database Things to Know: Consistent Hashing.

Let’s talk about the analogy of Apache Cassandra Datacenters & Racks to actual datacenters and racks. I kind of enjoy the use of the terms datacenter and rack to describe architectural elements of Cassandra. However, as time moves on, the relationship between these terms and why they’re called datacenters and racks can get obfuscated.

Take for instance: a datacenter could just be a cloud provider, an actual physical datacenter location, an availability zone in Azure, or a region in some other provider. What a Datacenter in Cassandra parlance actually is can vary, but the origins of why it’s called a Datacenter remain the same. The elements of racks can also vary, but likewise the origins remain the same.

Origins: Racks & Datacenters?

Let’s cover the actual things in this industry we call datacenter and racks first, unrelated to Apache Cassandra terms.

Racks: The easiest way to describe a physical rack is to show pictures of datacenter racks via the ole’ Google images.


A rack is something that is located in a datacenter, or even just someone’s garage in some odd scenarios. Ya know, if somebody wants serious hardware to work with. The rack then has a number of servers, often of various kinds, within the rack itself. As you can see from a quick image search, there’s a wide range of these racks.

Datacenter: Again, the easiest way to describe a datacenter is to just look at a bunch of pictures of datacenters, albeit you’ll see lots of racks again. But really, that’s what a datacenter is: a building that has lots and lots of racks.


However, in Apache Cassandra (and respectively DataStax Enterprise products) a datacenter and a rack do not directly correlate to a physical rack or datacenter. The idea is more of an abstraction than a hard mapping to the physical realm. In turn, it is better to think of datacenters and racks as a way to structure and organize your DataStax Enterprise or Apache Cassandra architecture. From a tree perspective of organizing your cluster, think of things in this hierarchy:

  • Cluster
    • Datacenter(s)
      • Rack(s)
        • Server(s)
          • Node (vnode)

Apache Cassandra Datacenter

An Apache Cassandra Datacenter is a group of nodes, related and configured within a cluster for replication purposes. Setting up a specific set of related nodes into a datacenter helps to reduce latency, prevents transactions from being impacted by other workloads, and provides related benefits. The replication factor can also be set up to write to multiple datacenters, providing additional flexibility in architectural design and organization (there’s a rough sketch of multi-datacenter replication after the node type list below). One specific element of datacenters to note is that each usually contains only one node type:

Depending on the replication factor, data can be written to multiple datacenters. Datacenters must never span physical locations. Each datacenter usually contains only one node type. The node types are:

  • Transactional: Previously referred to as a Cassandra node.
  • DSE Graph: A graph database for managing, analyzing, and searching highly-connected data.
  • DSE Analytics: Integration with Apache Spark.
  • DSE Search: Integration with Apache Solr. Previously referred to as a Solr node.
  • DSE SearchAnalytics: DSE Search queries within DSE Analytics jobs.
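
To make the multi-datacenter replication idea concrete, here’s a rough sketch of a keyspace replicated across two datacenters with NetworkTopologyStrategy, driven from the Node.js driver (cassandra-driver). The contact point, the datacenter names ('dc_west', 'dc_east'), and the replica counts are all made up for illustration, so swap in your own cluster’s topology.

// A rough sketch: replicate the 'orders' keyspace across two datacenters.
// Datacenter names and replica counts here are placeholders.
const cassandra = require('cassandra-driver');

const client = new cassandra.Client({
  contactPoints: ['10.0.0.10'],
  localDataCenter: 'dc_west'
});

async function run() {
  await client.execute(`
    CREATE KEYSPACE IF NOT EXISTS orders
    WITH replication = {
      'class': 'NetworkTopologyStrategy',
      'dc_west': 3,
      'dc_east': 2
    }`);
  await client.shutdown();
}

run().catch(console.error);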

Apache Cassandra Racks

An Apache Cassandra Rack is a grouped set of servers. Cassandra’s architecture uses racks so that no replica is stored redundantly inside a single rack, ensuring that replicas are spread across different racks in case one rack goes down. Within a datacenter there can be multiple racks with multiple servers, as the hierarchy shown above would dictate.

To determine where data goes within a rack or set of racks, Apache Cassandra uses what is referred to as a snitch. A snitch determines which rack and datacenter a particular node belongs to, and by way of that, determines where the replicas of data will end up. The replication strategy is informed by the snitch, which can take numerous forms; some examples include:

  • SimpleSnitch – this snitch treats order as proximity. It is primarily used only in single-datacenter deployments.
  • Dynamic Snitching – the dynamic snitch monitors read latencies to avoid reading from hosts that have slowed down.
  • RackInferringSnitch – proximity is determined by rack and datacenter, which are assumed to correspond to the 3rd and 2nd octets of each node’s IP address, respectively. This particular snitch is often used as an example for writing a custom snitch class, since it isn’t particularly useful unless it happens to match one’s deployment conventions.
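
Just to make that RackInferringSnitch convention concrete, here’s a tiny, purely illustrative JavaScript function that maps a node’s IP address to a datacenter and rack the way that snitch assumes (2nd octet picks the datacenter, 3rd octet picks the rack). Real snitches are Java classes selected via endpoint_snitch in cassandra.yaml; this is just the mapping idea, not a pluggable snitch.

// Purely illustrative: mimic the RackInferringSnitch convention in JavaScript.
function inferTopology(ipAddress) {
  const octets = ipAddress.split('.').map(Number);
  return {
    datacenter: 'dc-' + octets[1], // 2nd octet picks the datacenter
    rack: 'rack-' + octets[2]      // 3rd octet picks the rack
  };
}

console.log(inferTopology('10.20.30.42'));
// => { datacenter: 'dc-20', rack: 'rack-30' }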

In the future I’ll outline a few more snitches, how some of them work with more specific detail, and I’ll get into a whole selection of other topics. Be sure to subscribe to the blog, the ole’ RSS feed works great too, and follow @CompositeCode for blog updates. For discourse and hot takes follow me @Adron.

Distributed Database Things to Know Series

  1. Consistent Hashing
  2. Apache Cassandra Datacenter & Racks (this post)

 

A New Adventure of Multi-model Distributed Graph Time Series […etc…] Database(s) Explorations Begins!

I arrived at the airport, sending a few tweets of this or that nature about all of this GitHub and Microsoft news. I have a great view out the window from the Alaska Lounge just before heading to the D gates. For you aeronautics fans like myself, here’s a picture of that view and a few of those Alaska planes with one of the newly acquired Virgin America planes!


All this news with GitHub and Microsoft was easily eclipsing WWDC18, and in the meanwhile little ole’ me is on my way to a new adventure in my career. So, priorities being what they are and the news being exciting, I’m even more excited to announce today that I’m joining a most excellent team at DataStax! I’ll bring forth investigation, research, knowledge, ideas, and whatever else I can as a Developer Evangelist with the crew here. I’m unbelievably stoked, as I’ve been searching for a company that would check all of my “will this work” check boxes for some months now! DataStax won out among the other prospective candidate companies and I’m starting today!


To kick off this adventure, I’m heading to San Francisco to join in the fun attending DevxCon. I’ll be there a little later today, hopefully in time for the kick off (ya know, pending flights and BART being all timely and such)! Then a full day of the conf, and later I’ll join the team for a visit to DataStax HQ and maybe a few surprises. I’m super excited and ready to bring awesome content your way, while inventing, building, and experimenting my way through some awesome technologies!

In-memory Orchestrate Local Development Database

I was talking with Tory Adams @BEZEI2K about working with Orchestrate‘s services. We’re totally sold on what they offer and are looking forward to a lot of the technology that is in the works. The day to day building against Orchestrate is super easy, and setting up collections for dev, test, or whatever is so easy nothing has stood in our way. Except one thing…

Every once in a while we have to work disconnected. For whatever the reason might be: Comcast cable goes out, we decide to jump on a train, or one of us ends up on one of those Q400 puddle jumpers that doesn’t have wifi! But regardless of being disconnected from wifi, cable, or internet connectivity, we still want to be able to code and test!

In Memory Orchestrate Wrapper

Enter the idea of creating an in-memory Orchestrate database wrapper. Using something like convict.js one could easily redirect all the connections as necessary when developing locally. That way development continues right along, and when the application is pushed live, it’s redirected to the appropriate Orchestrate connections and keys!
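
Here’s roughly the kind of wiring I have in mind, sketched with convict.js picking between the real orchestrate client and a hypothetical in-memory mock based on environment. The './orchestrate-in-memory' module doesn’t exist yet (it’s the wrapper this post is proposing), and the config keys are just placeholders.

// A rough sketch of the idea: use convict.js to flip between the real
// Orchestrate client and a hypothetical in-memory mock when offline.
var convict = require('convict');

var config = convict({
  env: {
    doc: 'The application environment.',
    format: ['production', 'development', 'offline'],
    default: 'development',
    env: 'NODE_ENV'
  },
  orchestrateApiKey: {
    doc: 'Orchestrate API key (ignored when offline).',
    format: String,
    default: '',
    env: 'ORCHESTRATE_API_KEY'
  }
});
config.validate();

var db = config.get('env') === 'offline'
  ? require('./orchestrate-in-memory')()                      // the proposed in memory mock
  : require('orchestrate')(config.get('orchestrateApiKey'));  // the real service

module.exports = db;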

This in-memory “fake” or “mock” would need to have the key value, events, and graph stores set up just like Orchestrate. With this available in memory, one could also easily write tests against a real fake and be able to test connected or disconnected without mocking. Not to say that’s a good or bad idea, but one more tool in the tool chest doesn’t hurt!

If something like this doesn’t pop up in the next week or three, I might just have to kick off this project myself! If anybody is interested please reach out to me and let’s discuss! I’m open to writing it in JavaScript, C#, Java or whatever poison pill you’d prefer. (I’m polyglot, so no need to limit my options!!)

Other Ideas, Development Shop Swap

Another idea that I’ve been pondering is setting up a development shop swap. I’ll leave the reader to determine what that means!  😉  Feel free to throw down ideas that this might bring up and I’ll incorporate them into the soon to come implementation. I’ll have more information about that idea right here once the project gets rolling. In the meantime, happy coding!

Going Live, Data & Pricing @ Orchestrate

Over the last few months, while working on the prototype around Deconstructed, I’ve been using the Orchestrate service offering exclusively. With their service around key value and graph storage easily accessible via API, it was a no brainer to get started building ASAP. Today, that service goes full beta! You can get the full lowdown at the Orchestrate site.

You might recall that I mentioned Orchestrate a while back when they leapt into the PIE Class a few months ago. So here’s a few quick thoughts on the release and what Orchestrate is.

The basic premise is that Orchestrate provides full-text search, time ordered events, graph, key value storage, and a lot more. All of these capabilities are offered via an API, which creates a product that’s extremely easy to get started with. Think about what you’d need to do to get full-text search against a key value setup. Really think about it. Yeah? That’s a lot of steps. With Orchestrate you just sign up and start using it. Think about setting up a graph store and managing it on production systems. Yeah? Lots of work once it gets used. Again, just sign up; it’s all there, from the graph to the key value to the event series and more. All the NoSQL juice you need located in a single service, so you’re not fighting and maintaining multiple databases, nodes, or whatever you’re working with.

Sign up. Use.

I will copy one thing from the press release….

  • Ad hoc search queries with Lucene
  • Event and time-ordered storage for activity feeds, sensor data
  • Create and query graph relationships
  • Easy to understand pricing
  • Data export at will – no lock-in
  • Standards compliant data security protocols
  • Daily data backups
  • Bulk data loading
  • Daily and hourly usage monitoring
  • A single, simple interface – JSON data in/out
  • Designed to complement existing databases and MBaaS services
  • Client libraries for Java, Node.js, and Go. More on the way!
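
Since the Node.js client is on that list, here’s roughly what getting started with the orchestrate npm package looks like: a key value put and get, plus a Lucene-backed search. The API key and the 'users' collection are placeholders; check the client’s own docs for the full surface area.

// A minimal sketch of the Orchestrate Node.js client ('orchestrate' on npm):
// key value put/get plus a full-text search. API key and collection are placeholders.
var db = require('orchestrate')('YOUR-API-KEY');

db.put('users', 'adron', { name: 'Adron', city: 'Portland' })
  .then(function () {
    return db.get('users', 'adron');
  })
  .then(function (result) {
    console.log(result.body); // { name: 'Adron', city: 'Portland' }
    return db.search('users', 'city:Portland'); // Lucene-backed search
  })
  .then(function (results) {
    console.log(results.body.results.length + ' match(es)');
  })
  .fail(function (err) {
    console.error(err);
  });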

Using Orchestrate

There are quotes in the press release, but I’ve got a few of my own. I’m working to build out a prototype service that Aaron Gray and I will be releasing soon. Our startup is called Deconstructed, but more on that later. Without Orchestrate my dev cycle would be longer each day, as I’d battle with maintaining the data sources that I need. Without it I would have spent another 2-3 weeks setting up and staging NoSQL database technology. All things I didn’t really need to do. I needed to focus on the service, the value that we’ll soon bring to our customers.

It really boils down to this, and don’t get me wrong, I’m a total data nerd. But when it comes to building a product or service, the last thing I want to do is fight with managing the data any more than I have to. That notion inspired me to write “Sorry Database Nerds, Nobody Actually Gives a Shit,” which still holds true. I can’t think of a single business that wants to sit around and grok how an index works in a key value store or what the spline of text-search queries is going to be.

Pricing

Pricing is sweet; for many that want to try it out, things are free. Prices go up a bit from there, but if you fall into the paid tiers you’re doing some business and ought to be rolling in a few bucks, eh!

The interesting thing to me about pricing is that they’ve structured it around MOp, which stands for MegaOps. More specifically that’s one million API calls or one million operations.

Summary

If you write code, even a little, or if you manage data, you should do yourself the service of checking out what Orchestrate has built. It’s a solid investment of time. I’ll have a lot more on Orchestrate, how we’re using the service for Deconstructed, and using the service with JavaScript in the coming months. Keep your eyes peeled and I might even have some Dart and C# magic thrown in there to boot! Check ’em out; until later, happy hacking.


History of Symphonize.js – JavaScript Client Pivot to Data Generation Library

…the history of symphonize.js So Far!

NOTE: If you just want to check out the code bits, scroll down to the sub-title #symphonize #hacking. Also important to note: I’m putting the library through a fairly big refactor at the moment so that everything aligns with the documentation that I’ve recently created. So many things may not be implemented yet, but we’re moving toward v0.1.0, which will be a functional implementation of the library available via npm, based entirely on the documentation and specs that I outline after the history.

A Short History

I started the symphonize.js project back on the 1st of November. Originally I started the project as a client driver library for Orchestrate.io, but within a day Chris Molozian commented and pointed out that there was already a client driver library for Orchestrate.io available, which Steve Kaliski (Github @sjkaliski, Twitter @stevekaliski, and http://stevekaliski.com/) had coded, logically called orchestrate.js. Since that was available I pivoted symphonize.js into a data generation project instead.

The comment that made me realize symphonize.js should pivot from client driver to data generation library.

The Official Start of Symphonize.js

After that start and quick pivot, I posted a blog with Orchestrate.io titled “Test Data Builder Symphonize.js With Chance.js (1/3)” to officially start the project. In that post I covered key value and graph basics, with a dive into using chance.js and orchestrate.js with examples. Around the same time I also posted a related blog on publishing an NPM module, which is the deployment focus of Symphonize.js.

Reasons & Reasoning

There are two main reasons why I chose Orchestrate.io and a data generation library as the two things I wanted to combine. The first is that I knew the Orchestrate.io team and really dug what they were building. I wanted to work with it and check out how well it would work for my use cases in the future. The ability to go sit down and discuss with them what they were building was great (I interviewed Matt Heitzenroder @roder, which you can watch: Orchestrate.io, Stop Dealing With the Database Infrastructure!). The second reason is that my own startup, which I’m co-founding with Aaron Gray (@agray), needed to use key value and graph data storage of some type, somewhere. Orchestrate.io looked like a perfect fit. After some research and giving it a go, it fit very well into what we are building.

CRUD, cURL Hacking & Next Steps

In early December I knocked out two support articles about testing APIs with cURL, Some JavaScript API Coding With Restify & Express & Hacking it With cURL …Segment #1 (with some Webstorm to boot) and Some JavaScript API Coding With Restify & Express & Hacking it With cURL …Segment #2, and an article on the Orchestrate.io Blog for part 2 of that series titled Symphonize Some Create, Read, Update & Delete [CRUD] via Orchestrate.js (2/3).

December then rolled into the standard holiday doldrums and slowdowns. So fast forward to January, post a few rounds of beer and good tidings, and I got the 3rd in the series published, titled Getting Serious With Symphony.js – JavaScript TDD/BDD Coding Practices (3/3). The post doesn’t speak too much to symphonize.js usage, but instead covers my efforts to use TDD or BDD practices in trying to write the library.

Slowly I made progress in building the library, and it’s finally in a mostly releasable state now. I use this library daily in working with the code base for Deconstructed and imagine I’ll keep using it for many other projects. I hope others might be able to find uses for it too, and maybe even add capabilities or ideas. Just ping me via Twitter @adron or Github @adron, or add an issue on Github, and I’ll be happy to accept pull requests for new features or code refactoring, add you to the project, or whatever else you’re interested in.

#symphonize #hacking

Now for the nitty gritty. If you’re up for using or contributing to the project, check out the symphonize.js github pages site first. It’s got all the information to help get you kick started. However, you can keep reading, as I’ve included much of the information there along with the examples from the README.md below.

NOTE: As I mentioned at the top of this blog entry, the functional implementation of the code isn’t available via npm just yet; myself and some others are ripping through a good refactor to align the implementation of the library with the rewritten and newly available documentation – included below and at the github pages.

How to use this project in one of your projects.

npm install symphonize

How to setup this project for development.

First fork the repository located at https://github.com/Adron/symphonize.

git clone git@github.com:YourUserName/symphonize.git
cd symphonize
npm install

Using The Library

The intended usage is to instantiate the JavaScript object and then call generate. That’s it, a super simple process. The code would look like this:

var Symphonize = require('../bin/symphonize');
var symphonize = new Symphonize();

The basic constructor invocation like this utilizes the generate.json file to generate data from. To supply the JSON configuration programmatically instead, pass it in via the constructor.

var configJson = {"schema":"keyvalue"};

var Symphonize = require('../bin/symphonize');
var symphonize = new Symphonize(configJson);

Once the Symphonize data generator has been created, call the generate() method as shown.

symphonize.generate();

That’s basically it. But you say it’s supposed to do X, Y, or Z. Well, that’s where the JSON configuration data comes into play. In the configuration data you can set the data fields and what they’ll generate, what type of data will be generated, the specific schema, how many records to create, and more.
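
Putting those pieces together, a configuration-driven run might look roughly like the sketch below, using the options documented in the next section. The field names are arbitrary, and the exact behavior may shift with the refactor mentioned above.

var Symphonize = require('../bin/symphonize');

var configJson = {
  "schema": "keyvalue",
  "count": 5,
  "write_source": "console",
  "fields": {
    "userId": "guid",
    "userName": "name",
    "loginCount": "d20"
  }
};

// Pass the configuration in rather than relying on generate.json.
var symphonize = new Symphonize(configJson);
symphonize.generate(); // with write_source set to console, the records should print out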

generate.json

The library comes with the generate.json file already set up with a working example. Currently the generation file looks like this:

{
    "schema": "keyvalue", /* keyvalue, graph, event, geo */
    "count": 20, /* X values to generate. */
    "write_source": "console", /* console, orchestrateio and whatever other data sources that might come up. */
    "fields": {
        /* generates a random name. */
        "fieldName": "name",
        /* generates a random dice roll of a d20. */
        "fieldTwo": "d20",
        /* A single lorem ipsum random statement is generated. */
        "fieldSentence": "sentence",
        /* A random guid is generated. */
        "fieldGuid": "guid"
    }
}

Configuration File Definitions

Each of the configuration options that are available has a default in the configuration file. The default is listed in italics with each definition of the configuration option listed below.

  • "schema": This is used to select what type of data structure is going to be generated. The default is keyvalue for this option.
  • "count": This provides the total number of records that are to be generated by the library. The default is 1 for this option.
  • "write_source": This provides the location to output the generated data to. The default is console for this option.
  • "fields": This is a JSON field within the JSON configuration file that provides configuration options around the fields, the number of fields, and their respective data to generate. The default is one field, with a default data type of guid. Each of the respective entries in this JSON option is a self contained JSON name and value pair. This then looks simply like this (which is also shown above in part):
    {
        "someBoolean": "boolean",
        "someChar": "character",
        "aFloat": "float",
        "GetAnInt": "integer",
        "fieldTwo": "d20",
        "diceRollD10": "d10",
        "_string": {
            "fieldName": "NameOfFieldForString",
            "length": 5,
            "pool": "abcdefgh"
        },
        "_sentence": {
            "fieldName": "NameOfFiledOfSentences",
            "sentence": "5"
        },
        "fieldGuid": "guid"
    }
    
  • Fields Configuration: For each of the fields you can either set the field to a particular data type or leave it empty. If the field name and value pair is left empty then the field defaults to guid. The types of data to generate for fields are listed below. These are all simple field and data generation types. More complex nested generation types are listed under Complex Field Configuration below the simple section.
    • "boolean": This generates a boolean value of true or false.
    • "character": This generates a single character, such as ‘1’, ‘g’ or ‘N’.
    • "float": This generates a float value, similar to something like -211920142886.5024.
    • "integer": This generates an integer value, similar to something like 1, 14 or 24032.
    • "d4": This generates a random integer value based on a dice roll of one four sided die. The integer range being 1-4.
    • "d6": This generates a random integer value based on a dice roll of one six sided die. The integer range being 1-6.
    • "d8": This generates a random integer value based on a dice roll of one eight sided die. The integer range being 1-8.
    • "d10": This generates a random integer value based on a dice roll of one ten sided die. The integer range being 1-10.
    • "d12": This generates a random integer value based on a dice roll of one twelve sided die. The integer range being 1-12.
    • "d20": This generates a random integer value based on a dice roll of one twenty sided die. The integer range being 1-20.
    • "d30": This generates a random integer value based on a dice roll of one thirty sided die. The integer range being 1-30.
    • "d100": This generates a random integer value based on a dice roll of one hundred sided die. The integer range being 1-100.
    • "guid": This generates a random globally unique identifier. This value would be similar to ‘F0D8368D-85E2-54FB-73C4-2D60374295E3’, ‘e0aa6c0d-0af3-485d-b31a-21db00922517’ or ‘1627f683-efeb-4db8-8174-a5f2e3378c87’.
  • Complex Field Configuration: Some fields require more complex configuration for data generation, simply because the data needs some baseline of what the range or length of the values needs to be. The following list details each of these. It is also important to note that these complex field configurations do not have defaults; each value must be set in the JSON configuration or an error will be thrown detailing that a complex field type wasn’t designated. Each of these complex field types is a JSON name and value parameter. The name is the passed in data type with a preceding underscore ‘_’, and the value holds the configuration parameters for that particular data type.
    • "_string": This generates string data based on length and pool parameters. Required fields for this include fieldName, length and pool. The JSON would look like this:
      "_string": {
          "fieldName": "NameOfFieldForString",
          "length": 5,
          "pool": "abcdefgh"
      }
      

      Samples of the result would look like this for the field: ‘abdef’, ‘hgcde’ or ‘ahdfg’.

    • "_hash": This generates a hash based on the length and casing parameters. Required fields for this include fieldName, length and casing. The JSON would look like this:
      "_hash": {
          "fieldName": "HashFieldName",
          "length": 25,
          "casing": 'upper'
      }
      

      Samples of the result would look like this for the field: ‘e5162f27da96ed8e1ae51def1ba643b91d2581d8’ or ‘3F2EB3FB85D88984C1EC4F46A3DBE740B5E0E56E’.

    • "_name": This generates a name based on the middle, middle_initial and prefix parameters. Required fields for this include fieldName, middle, middle_initial and prefix. The JSON would look like this:
      "_name": {
          "fieldName": "nameFieldName",
          "middle": true,
          "middle_initial": true,
          "prefix": true
      }
      

      Samples of the result would look like this for the field: ‘Dafi Vatemi’, ‘Nelgatwu Powuku Heup’, ‘Ezme I Iza’, ‘Doctor Suosat Am’, ‘Mrs. Suosat Am’ or ‘Mr. Suosat Am’.

So that covers the kick start of how eventually you’ll be able to set up, use, and generate data. Until then, jump into the project and give us a hand.

After this, more examples on the way, cheers!


Plotting Good Things in Portland :: pdxbridge.js / WTF Databases /

Several people got together yesterday to start planning things for 2014 in PDX. It ranged from coding workshops to PDX Node to Node PDX to what kind of food to eat for lunch. Ya know, the daily tactical things that come along with the big picture items. 😉

bridge.js badge.

Two things that I want to bring up to the community out there. One is a workshop that I’ll likely lead efforts to organize, and the other is something I’ll just call pdxbridge.js for now. The workshop will cover which databases to use for which data and how to implement them. The pdxbridge.js project is about determining the raised or lowered state of the bridges here in Portland.

Some of the other projects, workshops, and topics we discussed included getting a workshop put together around unit, integration, and testing code from a behavioral, test driven development, or other approach. We don’t have anyone to teach this workshop yet, but we’d (ok, so I really, really would love to attend a workshop on this) really like to find somebody who would be willing to teach a workshop of this sort, with a focus on JavaScript as the language. On that same topic, if you’re into Java, Erlang, Scala, Haskell, or others and would like to teach a TDD, BDD, or related testing workshop, please get in touch with me. We will work on making that happen! Ping me at adron at composite code dot com. 😉

Workshop: Intro to Databases & Data

(Relational, Key/Value, Distributed, Graph, Event Series, etc.)

This is a course I’ll lead, and others will work with me to put something extra useful together. We will then teach the workshop as a group, kind of a team paired programming teaching workshop. If there is anything in particular that you’d like to learn about, or any questions you have about data and its usage in applications or otherwise, add your two cents in this blog entry’s comments. Over the next month we’ll be putting together the material, and we’ll have the course available sometime early this year. So if you’d like to attend, jump into the conversation at any time, or just keep reading here and I’ll have more information about the course as we get it put together.

Let’s Make pdxbridge.js Happen!

The pdxbridge.js project is all about determining if a bridge in Portland is up or down. Right now there are several bridges that matter, which are on this list:

If we add other information to track about the bridges, we might add the other 3 that exist and the new bridge that is being built. However, the five listed are the only bridges that have a raised and lowered state, and in one case, the Steel Bridge, there is a lowered, partially raised, and fully raised state, as shown on the pdxbridge.js badge I threw together (shown above).
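
Nothing is built yet, but just to seed the discussion, here’s the kind of completely hypothetical state model I’m picturing for pdxbridge.js. The names and shape will surely change once we actually start hacking on it together.

// Completely hypothetical sketch of a bridge state model for pdxbridge.js.
// Most bridges are simply raised or lowered; the Steel Bridge gets an extra
// partially-raised state. Names and structure will change once we start.
var BridgeState = {
  LOWERED: 'lowered',
  PARTIALLY_RAISED: 'partially-raised', // Steel Bridge only
  RAISED: 'raised'
};

var bridges = [
  {
    name: 'Steel Bridge',
    states: [BridgeState.LOWERED, BridgeState.PARTIALLY_RAISED, BridgeState.RAISED],
    current: BridgeState.LOWERED
  },
  {
    name: 'Some Other Bridge', // placeholder for the rest of the list
    states: [BridgeState.LOWERED, BridgeState.RAISED],
    current: BridgeState.RAISED
  }
];

bridges.forEach(function (bridge) {
  console.log(bridge.name + ' is currently ' + bridge.current);
});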

To get involved with pdxbridge.js go add your input on this issue I started to discuss our first meet, plan and hack.