WE DID IT! DataStax Astra is GA

Yesterday we finally went full GA (General Availability) with DataStax Astra. For the quick TLDR, think of it as Apache Cassandra that you can spin up as a service and start using in about a minute. As I wrote about some months ago, I joined the engineering team to help build out the system! I quickly got to learning the lay of the land and working toward building out features, which are now available to you!

With Astra, if you’ve used Apache Cassandra or DataStax Enterprise you can use the same drivers or CQL you’re familiar with. But with Astra there are two additional capabilities we’ve just released for connecting to and working with your databases:

  • Astra REST API
  • Astra GraphQL API

The REST API provides capabilities to add a table, return a list of all tables, return the contents of a table, and delete a table. In addition to tables, there is functionality to retrieve a column, retrieve all columns, and add, update, and delete columns. All of the standard CRUD (Create, Read, Update, and Delete) operations can also be performed on row data.

The GraphQL API gives you the ability to perform CRUD actions and to query with filters using GraphQL syntax.

Authorization Token

To use either of these services, the first thing you’ll need is one of Astra’s time-based authorization tokens. A token remains valid until 30 minutes after the last call made with it; once expired, a new token must be created. To create a token, make an HTTP POST to the auth API, passing a few header values along with the username and password in the body of the request.

As an example of retrieving an authorization token, I’ve put together the cURL request below. To get the URL for your database, navigate to the Astra dashboard; the API Access URLs are listed on the summary screen of each database.

curl --request POST \
  --url https://12c3bb24-e2df-4db3-b993-14707303e57c-us-east1.apps.astra.datastax.com/api/rest/v1/auth \
  --header 'accept: */*' \
  --header 'content-type: application/json' \
  --header 'x-cassandra-request-id: 24cc6f6f-c1d9-4d4e-a4d3-e34c7d8b148a' \
  --data '{"username":"betterbot","password":"betterbot"}'

A successful request will return a result with the auth token that looks like this.

{"authToken":"9a38437f-7e03-49a8-bc5d-b4e305d7c1e8"}

With that authorization token we can now call actions against the REST or GraphQL APIs.
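For example, a quick way to verify the token works is to list the tables in a keyspace via the REST API. This is just a sketch using the same database URL and keyspace as the examples below; substitute your own URL, keyspace, and token, and note the exact response shape may differ a bit from what you expect.

curl --request GET \
  --url https://12c3bb24-e2df-4db3-b993-14707303e57c-us-east1.apps.astra.datastax.com/api/rest/v1/keyspaces/betterbotz/tables \
  --header 'accept: */*' \
  --header 'x-cassandra-token: 9a38437f-7e03-49a8-bc5d-b4e305d7c1e8'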

Creating a Table via the Astra REST API

To create a table, we need a few key elements: the table name, whether it should only be created if it doesn’t already exist, and column definitions with at least one column serving as the primary key. This schema is passed to the REST API as JSON. Here’s an example of some JSON that can be used to create a table.

'{"name":"products","ifNotExists":true,"columnDefinitions":
  [ {"name":"id","typeDefinition":"uuid","static":false},
    {"name":"name","typeDefinition":"text","static":false},
    {"name":"description","typeDefinition":"text","static":false},
    {"name":"price","typeDefinition":"decimal","static":false},
    {"name":"created","typeDefinition":"timestamp","static":false}],"primaryKey":
    {"partitionKey":["id"]},"tableOptions":{"defaultTimeToLive":0}}'

To use this JSON to create a table, just add the pertinent headers, insert your keyspace into the URL, set the x-cassandra-token header to your auth token, and POST this data to the REST API endpoint. A cURL request to create the table would look like this.

curl --request POST \
  --url https://12c3bb24-e2df-4db3-b993-14707303e57c-us-east1.apps.astra.datastax.com/api/rest/v1/keyspaces/betterbotz/tables \
  --header 'accept: */*' \
  --header 'content-type: application/json' \
  --header 'x-cassandra-request-id: 07e37064-b265-4618-94ce-1c4606f584f9' \
  --header 'x-cassandra-token: ' \
  --data '{"name":"products","ifNotExists":true,"columnDefinitions":
  [ {"name":"id","typeDefinition":"uuid","static":false},
    {"name":"name","typeDefinition":"text","static":false},
    {"name":"description","typeDefinition":"text","static":false},
    {"name":"price","typeDefinition":"decimal","static":false},
    {"name":"created","typeDefinition":"timestamp","static":false}],"primaryKey":
    {"partitionKey":["id"]},"tableOptions":{"defaultTimeToLive":0}}'

Adding data via a GraphQL Mutation

At this point, with a table created, we can add, update, or delete data. The cURL statement I’ve put together here is a sample GraphQL mutation to add a record to the products table.

curl --request POST \
  --url https://ba965c97-86f1-4d38-8cne-58qa1d2209a1-us-east1.apps.astra.datastax.com/api/graphql \
  --header 'accept: application/json' \
  --header 'content-type: application/json' \
  --header 'x-cassandra-request-id: xyzaa27b-de8e-4afc-8431-8f06a326047d' \
  --header 'x-cassandra-token: 3ad1ca6a-62pq-4e1b-b273-4c08ea334909' \
  --data-raw '{"query":"mutation {superarms: insertProducts(value:{id:\"65cad0df-4fc8-42df-90e5-4effcd221ef7\"\n name:\"Arm Spec A1\" description:\"Powerful Robot Arm Spec A.\"price: \"9999.99\" created: \"2012-04-23T18:25:43.511Z\"}){value {name description price created}}}","variables":{}}'
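To read that record back, the GraphQL API also supports queries with filters. Here’s a hedged sketch of an equality query on id against the same products table; the query and result field names (products, values) follow the conventions of the generated schema, so verify them against your own database’s GraphQL schema.

curl --request POST \
  --url https://32c3bb24-e2df-4db3-b993-14707303e57c-us-east1.apps.astra.datastax.com/api/graphql \
  --header 'accept: application/json' \
  --header 'content-type: application/json' \
  --header 'x-cassandra-token: e85b3021-fb89-4f43-9ba6-a64a49ba5f68' \
  --data-raw '{"query":"query {products(value:{id:\"65cad0df-4fc8-42df-90e5-4effcd221ef7\"}){values {name description price created}}}","variables":{}}'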

Here are some more examples of issuing a GraphQL mutation, this time updating that record, in a few other languages, just for good measure.

Go

package main

import (
  "fmt"
  "strings"
  "net/http"
  "io/ioutil"
)

func main() {

  // GraphQL endpoint for the database and the mutation payload that
  // updates the product record created earlier.
  url := "https://32c3bb24-e2df-4db3-b993-14707303e57c-us-east1.apps.astra.datastax.com/api/graphql"
  method := "POST"

  payload := strings.NewReader("{\"query\":\"mutation {superarms: updateProducts(value: {id:\\\"65cad0df-4fc8-42df-90e5-4effcd221ef7\\\" name:\\\"Arm Spec A3 [Newly Updated]\\\" description:\\\"Powerful Robot Arm Spec A3.\\\" price: \\\"19999.99\\\" created: \\\"2012-04-23T18:25:43.511Z\\\" }){value {id name description price created}}}\",\"variables\":{}}")

  client := &http.Client{}
  req, err := http.NewRequest(method, url, payload)
  if err != nil {
    fmt.Println(err)
    return
  }

  // The time-based auth token goes in the X-Cassandra-Token header.
  req.Header.Add("accept", "*/*")
  req.Header.Add("Content-Type", "application/json")
  req.Header.Add("X-Cassandra-Token", "e85b3021-fb89-4f43-9ba6-a64a49ba5f68")

  res, err := client.Do(req)
  if err != nil {
    fmt.Println(err)
    return
  }
  defer res.Body.Close()

  body, err := ioutil.ReadAll(res.Body)
  if err != nil {
    fmt.Println(err)
    return
  }
  fmt.Println(string(body))
}

Python

import requests

url = "https://32c3bb24-e2df-4db3-b993-14707303e57c-us-east1.apps.astra.datastax.com/api/graphql"

# GraphQL mutation that updates the product record created earlier.
payload = "{\"query\":\"mutation {superarms: updateProducts(value: {id:\\\"65cad0df-4fc8-42df-90e5-4effcd221ef7\\\" name:\\\"Arm Spec A3 [Newly Updated]\\\" description:\\\"Powerful Robot Arm Spec A3.\\\" price: \\\"19999.99\\\" created: \\\"2012-04-23T18:25:43.511Z\\\" }){value {id name description price created}}}\",\"variables\":{}}"
headers = {
  'accept': '*/*',
  'Content-Type': 'application/json',
  'X-Cassandra-Token': 'e85b3021-fb89-4f43-9ba6-a64a49ba5f68'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)

Java

OkHttpClient client = new OkHttpClient().newBuilder()
  .build();
MediaType mediaType = MediaType.parse("application/json");
RequestBody body = RequestBody.create(mediaType, "{\"query\":\"mutation {superarms: updateProducts(value: {id:\\\"65cad0df-4fc8-42df-90e5-4effcd221ef7\\\" name:\\\"Arm Spec A3 [Newly Updated]\\\" description:\\\"Powerful Robot Arm Spec A3.\\\" price: \\\"19999.99\\\" created: \\\"2012-04-23T18:25:43.511Z\\\" }){value {id name description price created}}}\",\"variables\":{}}");
Request request = new Request.Builder()
  .url("https://32c3bb24-e2df-4db3-b993-14707303e57c-us-east1.apps.astra.datastax.com/api/graphql")
  .method("POST", body)
  .addHeader("accept", "*/*")
  .addHeader("Content-Type", "application/json")
  .addHeader("X-Cassandra-Token", "e85b3021-fb89-4f43-9ba6-a64a49ba5f68")
  .build();
Response response = client.newCall(request).execute();

and C#!

var client = new RestClient("https://32c3bb24-e2df-4db3-b993-14707303e57c-us-east1.apps.astra.datastax.com/api/graphql");
client.Timeout = -1;
var request = new RestRequest(Method.POST);
request.AddHeader("accept", "*/*");
request.AddHeader("Content-Type", "application/json");
request.AddHeader("X-Cassandra-Token", "e85b3021-fb89-4f43-9ba6-a64a49ba5f68");
request.AddParameter("application/json", "{\"query\":\"mutation {superarms: updateProducts(value: {id:\\\"65cad0df-4fc8-42df-90e5-4effcd221ef7\\\" name:\\\"Arm Spec A3 [Newly Updated]\\\" description:\\\"Powerful Robot Arm Spec A3.\\\" price: \\\"19999.99\\\" created: \\\"2012-04-23T18:25:43.511Z\\\" }){value {id name description price created}}}\",\"variables\":{}}",
           ParameterType.RequestBody);
IRestResponse response = client.Execute(request);
Console.WriteLine(response.Content);

With that short tour done, check out your free database today @ https://astra.datastax.com/register! Feel free to ping me on Twitter @Adron or here in the comments; I’d love to discuss your experience!

Career Update: Back to Engineering!

Since the inception of my software engineering career, decades ago, I have enjoyed creating resilient software systems and building efficient engineering teams. I’ve found that one of the most important aspects affecting the success of any project is how closely engineering teams align with user and community needs. This is one of the reasons it’s crucial for engineering to understand users.

During the past year and a half I’ve been deeply focused on distributed systems and advocacy around Apache Cassandra at DataStax. I chose to work at the company for its commitment to building extremely scalable and performant products. Another reason was my respect for the people who work at the company, many of whom are active contributors and committers to Apache Cassandra, Spark, and other important open-source projects. DataStax maintains deep experience in the area of distributed databases, and I am happy to have been able to contribute to improving products and educational materials around Apache Cassandra. Having gotten to work with many engineering teams within DataStax, I am excited about our future efforts!

Join the Apollo Beta for FREE! Help the Databass!

Hello to all the data curious, database lovers, and sciency datamungers! I have a small favor to ask of you all. At DataStax we just opened up our Apollo service, i.e. our “Apache Cassandra as a Service” DBaaS offering, and I’m looking for people who want to test drive the database! Now, you don’t have to actually tell me you’re using it or anything, but I’d love to know if you are. Maybe we could even chat about your experience using it.

To get started:

  1. Sign up here.
  2. Create a database here.
  3. Pick a driver here.  [C#/F#, Node.js/JavaScript, Java, C++, and Python] – I added F# cuz ya know, that’s how F# works and all, you just use the C# driver and BOOM, you’ve got F# access!!
  4. Write CQL and execute it against the database!
  5. Profit!

Alright, the profit part is where you let me know what works for you and what doesn’t. Feel free to comment here, ping me via Twitter @adron, use the response form here, or message me however you like. I’d be super stoked to chat!

Getting Started Specifics

To create a database, once you’ve got an account, just navigate to https://apollo.datastax.com/createDatabase and you’ll get prompted with the following screen.

apollo-create-database

Currently, during the beta, AWS is the only provider option, and you can choose between Developer, Startup, Standard, and Enterprise tiers, each offering various configurations and, prospectively, future SLAs and such.


Once you have the database name, keyspace, user name, and your password set, click on Launch Database and the spin-up of the multi-node database will begin. You’ll be greeted with a message notifying you that it’ll take a little bit of time for the database to spin up and that an email will be sent once it’s done. Enjoy a coffee in the meantime.

prompt

Once the database spins up there are two key sections on the database page. First, there are the connection details, located in the bottom left of the page.

database-connection-details

If you click on “Learn How” you’ll be linked directly to the docs pages, with multiple examples of how to connect to the database you’ve just created. You can also reset your password here and retrieve the security bundle (it’s a tar/zip file) that you’ll need to authenticate any applications with.

The other part that can be really helpful, especially as you do development or testing against your database, is the Grafana dashboard. It’s on the Health tab of the database page.

grafana-metrics

A trick I used to get an easier, full-screen view of all the metrics is to inspect the page right at the metrics; within that you’ll find the iframe containing the direct link to the Grafana dashboard. The metrics look pretty nice broken out of the frame! As you work through queries and such, keep an eye on this for extra insight.

broken-out

Any other thoughts, contemplation, or otherwise do get in touch!


Development Workspace with Terraform on Azure: Part 4 – DSE w/ Packer + Importing State 4 Terraform

The next thing I wanted set up for my development workspace is a DataStax Enterprise cluster. This will give me all of the Apache Cassandra database features plus a lot of additional features around search, OpsCenter, analytics, and more. I’ll elaborate on that in some future posts. For now, let’s get an image built that we can use to add nodes to our cluster and set up some other elements.

1: DataStax Enterprise

The general installation instructions for the process I’m stepping through in this article can be found in this documentation. To do this I started with a Packer template like the one I set up in the second part of this series. With the installation steps taken out, it looks just like the code below.

[code language="javascript"]
{
  "variables": {
    "client_id": "{{env `TF_VAR_clientid`}}",
    "client_secret": "{{env `TF_VAR_clientsecret`}}",
    "tenant_id": "{{env `TF_VAR_tenant_id`}}",
    "subscription_id": "{{env `TF_VAR_subscription_id`}}",
    "imagename": "",
    "storage_account": "adronsimagestorage",
    "resource_group_name": "adrons-images"
  },

  "builders": [{
    "type": "azure-arm",

    "client_id": "{{user `client_id`}}",
    "client_secret": "{{user `client_secret`}}",
    "tenant_id": "{{user `tenant_id`}}",
    "subscription_id": "{{user `subscription_id`}}",

    "managed_image_resource_group_name": "{{user `resource_group_name`}}",
    "managed_image_name": "{{user `imagename`}}",

    "os_type": "Linux",
    "image_publisher": "Canonical",
    "image_offer": "UbuntuServer",
    "image_sku": "18.04-LTS",

    "azure_tags": {
      "dept": "Engineering",
      "task": "Image deployment"
    },

    "location": "westus2",
    "vm_size": "Standard_DS2_v2"
  }],
  "provisioners": [{
    "execute_command": "chmod +x {{ .Path }}; {{ .Vars }} sudo -E sh '{{ .Path }}'",
    "inline": [
      ""
    ],
    "inline_shebang": "/bin/sh -x",
    "type": "shell"
  }]
}
[/code]

In the section marked "inline" I set up the steps for installing DataStax Enterprise.

[code language="javascript"]
"apt-get update",
"apt-get install -y openjdk-8-jre",
"java -version",
"apt-get install libaio1",
"echo \"deb https://debian.datastax.com/enterprise/ stable main\" | sudo tee -a /etc/apt/sources.list.d/datastax.sources.list",
"curl -L https://debian.datastax.com/debian/repo_key | sudo apt-key add -",
"apt-get update",
"apt-get install -y dse-full"
[/code]

For the first part of this process the machine image needs OpenJDK installed; I opted for the required version, 1.8. For more information about OpenJDK, check out this material:

The next thing I needed to do was get everything set up so that I could use this Azure image to build an actual virtual machine. Since this process is built outside of the main Terraform build process, however, I need to import the various assets created for Packer image creation, along with the actual Packer images. By importing these resources into Terraform’s state I can then write configuration and code around them as if I’d created them within the main Terraform build process. This might sound a bit confusing, but step through the process below and it should make more sense. If it’s still confusing, do let me know: ping me on Twitter @adron and I’ll elaborate or edit things so they read better.

Verification Checklist

  • At this point there now exists a solidly installed and baked image available for use to create a Virtual Machine.

2: Terraform State & Terraform Import Resources

Ok, if you check out Part 1 of this series, I set up the Azure CLI, Terraform, and the pertinent configuration and parts to build out infrastructure as code using HCL (HashiCorp Configuration Language), with a little bit of Bash as glue here and there. Then in Part 2 and Part 3 I set up Packer images and some Terraform resources like Kubernetes and such. All of that is great, but these two parts of the process now exist in two entirely separate, mutually unknown states. The two pieces are:

  1. Packer Images
  2. Terraform Infrastructure

The Terraform infrastructure doesn’t know the Packer images exist, but they are sitting there in a resource group in Azure. The way to make Terraform aware of these images is to import the resources that back them. To import these resources into the Terraform state, before doing an apply, run the terraform import command.

In order to get all of the resources we need to operate against and build images from, the following import commands need to be issued. I wrote a script file to help me out with each of these, and used jq to make retrieval of the Packer-created Azure image IDs a bit easier. That code looks like this:

[code language="bash"]
BASECASSANDRA=$(az image list | jq 'map({name: "basecassandra", id})' | jq -r '.[0].id')
BASEDSE=$(az image list | jq 'map({name: "basedse", id})' | jq -r '.[0].id')
[/code]

Breaking down the jq commands above, the following actions are being taken. First, the Azure CLI command az image list is issued, which is then piped | into the jq command jq 'map({name: "theimagenamehere", id})'. This takes the results of the Azure CLI command and builds, for each image, an object with the given name and that image's id. That result is then piped into another command, jq -r '.[0].id', which returns just the value of the first id. The -r switch tells jq to return the raw data, without enclosing double quotes.

I also needed to import the resource group all of these are in. Following a similar style of piping one command's results into another, I issued this command to get the resource group ID: RG_IMPORT=$(az group show --name adronsimages | jq -r '.id'). With those three IDs there is one more element needed to be able to import these into Terraform state.
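Before getting to that, here is roughly what the little helper script I mentioned looks like when pulled together. This is a sketch: the image and resource group names are the ones used throughout this series, and this variant uses jq's select() to match the image names explicitly rather than relying on list order.

[code language="bash"]
#!/usr/bin/env bash
# Collect the Azure IDs needed for the terraform import commands:
# the two Packer-built images and the resource group they live in.
BASECASSANDRA=$(az image list | jq -r '.[] | select(.name == "basecassandra") | .id')
BASEDSE=$(az image list | jq -r '.[] | select(.name == "basedse") | .id')
RG_IMPORT=$(az group show --name adronsimages | jq -r '.id')
[/code]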

The Terraform resources that these imported pieces of state will map to need to be declared, which means the Terraform HCL itself needs to be written out. For that, there are the two images and the resource group. I added the images in an images.tf file, and the resource group goes in the resource_groups.tf file.

[code language="javascript"]
resource "azurerm_image" "basecassandra" {
  name                = "basecassandra"
  location            = "West US"
  resource_group_name = azurerm_resource_group.imported_adronsimages.name

  os_disk {
    os_type  = "Linux"
    os_state = "Generalized"
    blob_uri = "{blob_uri}"
    size_gb  = 30
  }
}

resource "azurerm_image" "basedse" {
  name                = "basedse"
  location            = "West US"
  resource_group_name = azurerm_resource_group.imported_adronsimages.name

  os_disk {
    os_type  = "Linux"
    os_state = "Generalized"
    blob_uri = "{blob_uri}"
    size_gb  = 30
  }
}
[/code]

Then the Resource Group.

[code language="javascript"]
resource "azurerm_resource_group" "imported_adronsimages" {
  name     = "adronsimages"
  location = var.locationlong

  tags = {
    environment = "Development Images"
  }
}
[/code]

Now, issuing these Terraform commands will pull the current state of those resources into Terraform’s state, from which we can then issue further Terraform commands and applies.

[code language="bash"]
terraform import azurerm_image.basedse $BASEDSE
terraform import azurerm_image.basecassandra $BASECASSANDRA
terraform import azurerm_resource_group.imported_adronsimages $RG_IMPORT
[/code]

Running those commands, the results come back something like this.

terraform-imports
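A quick way to double-check that the imports landed, beyond the command output above, is to list what Terraform now tracks; the listing should include azurerm_image.basecassandra, azurerm_image.basedse, and azurerm_resource_group.imported_adronsimages.

[code language="bash"]
terraform state list
[/code]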

Verification Checklist

  • At this point there now exists a solidly installed and baked image available for use to create a Virtual Machine.
  • Now there is also state in Terraform, that understands where and what these resources are.

Summary, for now.

This post is shorter than I’d like it to be, but it was taking too long for the next steps to get written up. Fear not, they’re on the way! In the coming post I’ll cover other resource elements we’ll need to import, what’s next for getting a virtual machine built off of the image that is now available, some Terraform HCL refactoring, and most importantly, putting together the actual DataStax Enterprise / Apache Cassandra clusters! So stay tuned, subscribe to the blog, and of course follow me on the Twitters @Adron.

TRIP REPORT: Accelerate 2019 in Washington DC, I mean National Harbor!

Trip Time.

Today’s trip is care of Alaska Airlines Flight 2 out of SEATAC Airport (Seattle & Tacoma’s airport) to National (Reagan) in Alexandria, Virginia. I’ll be staying there and commuting daily across the Potomac River to the Gaylord Resort and Convention Center (at National Harbor). I decided I’d write up something about this trip for a few specific reasons:

  1. I finally purchased a Brompton bicycle, which I’ve been wanting to get and use for trips that require air travel or don’t have enough space for a proper bicycle.
  2. The adventure is entirely new to me, I’ve not been to these locations at any point in my life. New for me, new for those reading this (or adventuring along with me on my Twitch channel).
  3. I also picked up a number of new things that I want to see how they’ll work for streaming while on the go. These include an Android phone, a new dual GoPro + phone mount for the bike, and, alongside these, a few existing devices like my trusty set of GoPro cameras.
  4. I flew over via first class for various reasons. I thus wanted to share some of the advantages and why I think it’s more than worth it to fly first class vs. coach, and why companies should rethink their ideas around this when positions require frequent travel and working on the go.

Leaving Cascadia

The first thing I did was pack up the Brompton. I got a hardshell case to go along with it since I’d read during my research that airlines will sometimes snap off parts of the bike when a softshell case is used. The other advantage: the hardshell case has wheels! Inside this I also put my front mount messenger bag and some bungee cords so I can mount this stuff up to the bike upon arrival.

Once that was packed it was time to get the Mission Workshop ARKIV backpack I have locked and loaded. In my pack, which is the larger of the two sizes, go all my clothes, toothbrush, razors, and related amenities. In the side pouches that I mount up specifically for longer trips, I put my power brick and the other electric plugs I’d need regularly in the quickest-to-access spots. The other things go in assorted pockets here and there. Since this is such a short trip, I also skip the outer backpack laptop pouch and just put the laptop in the inner sleeve.

Stats

Backpack w/ Laptop: 22 lbs.
Hardshell w/ Bike: 32 lbs.

All in all, a fairly heavy load, but the cool thing is that with the configuration and post-arrival setup I have there isn’t actually much to carry. The backpack goes on my back and the hardshell case rolls along like a carry-on. What makes it even easier, I’ve got an express bus with plenty of space and light rail with special areas specifically for luggage like this. My 17x Express arrives on time, I board and ride off with my pack and hardshell sitting right next to me.

When I arrive downtown I merely pack up and roll downstairs to the Sound Transit LINK, board the train and off to the airport I go. No need to mess with a driver, no need for chatter or worrying about the implications of social anxiety or evils of clicking “don’t talk to me uber driver”. Just board and go. Then, read a book, check your phone, or whatever comes to mind. That’s what I do.

At the airport I strolled and rolled into the first class lounge, which I attempted to record via my new Android with the Twitch app. It… went oddly I’m assuming. Let’s take a look here.

Once I got situated in the lounge I made some pancakes (a tradition I have now) and sat down for some coding. The seats are comfortable, the views are great, and along with the coding I get to nerd out on all the planes taking off and landing. At least, when one is flying in and out of the C Gates at SEATAC. The N Gates are kind of “meh”.

Eventually I left the relaxing lounge and headed into the boarding area of C Gates. The Alaska Air 737-900 arrived and started deplaning. With deplaning, boarding, and refueling done for the trip back east to DC we headed back out on the tarmac to queue up 15th in line to take off. Check that out, total plane traffic jam!

IMG_20190520_140122

Once in the air we flew through some piddly turbulence and into more clouds. Clearing 10,000 feet, laptops came out and a little more coding resumed. In addition I started this post, took a few pictures, and knocked out a few other things I needed to do.

After a while food and drink service began. In first class, on anything over an hour you can safely assume a meal will be served. This time it was tortellini or a sandwich of some sort; I got the tortellini. The meal is served in three parts: starting with a little salad and soup, then the entree, and wrapped up with a dessert.

The soup was tasty; I was somewhat surprised by this. Whereas the salad was merely a salad with some cherry tomatoes, carrots, and greens. Nothing real special, but then of course it’s a salad, so it’s not like there’s much expectation.

The tortellini was pretty good. Even in comparison to other food outside of the airlines. A little salt and pepper brought it up just slightly to something I’d even have been happy with in an actual restaurant!

Finally we wrapped up with some Salt & Straw for dessert. Considering this is an airplane I was kind of amazed they’d get Salt & Straw, but then again, Alaska Airlines does like to play to the local products and all!

After food, a couple more hours of coding and prep for the oncoming days of Accelerate.

Arrival in the District of Columbia

I arrived in DC, retrieved my Brompton, racked up the case it packs in, and threw my bag on the front. Then it was a 26-minute bike ride from the airport to Alexandria.

the-path.png

On the way, the setting was magnificent, with honeysuckle providing a divine fragrance as I rode the bike trail along the Potomac River. The moon shone down, almost full, in spectacular fashion!

Eventually I arrived at my new home for the week. The ride was a success, experiment that it was.

Bootcamp!

NOTE: I am an employee at DataStax, just so you know, in case you didn’t know. I always do my best to give you the direct details, but just so you don’t think I’m being a shill here. Some people don’t seem to be able to determine how people and occupations are correlated, so I like to keep things on the up and up.

First day, or maybe it’s zero day on account of zero based indexes and all, bootcamp kicked off!

In the bootcamp we covered a lot of material to get attendees up to speed on Apache Cassandra. To boot, Patrick McFadin announced that everybody would get to use DataStax Constellation, our new Cassandra-as-a-Service offering, currently in test. The awesome part about this whole bootcamp was that we provided Constellation for everybody without a blip on the radar! No system issues came up, albeit a few programmatic network wires got crisscrossed, but that was remedied in seconds. With that all wrapped up, released, with a bow on top, the bootcamp went off without a hitch. Also a huge shout out to the dozens of team members who provided support throughout the room of 300+ attendees!

Good times in success!

Day 1 – Announcing DataStax Constellation

The first day, based on our zero-based index numbering of conference days, started with Billy Bosworth, CEO of DataStax, giving keynote number one.

In the keynote Billy talks about the direction of DataStax, the upcoming releases, and the current releases as of Accelerate 2019. Then Chelsea Navo joins Billy to do a LIVE (emphasis on LIVE) demo of DataStax Enterprise (i.e. Apache Cassandra and all the goodies) running multi-cloud in Azure, AWS, and GCP.

9:23 – Demo of DataStax Enterprise – Multi-cloud in real life. “Not a pretend demo!”

15:17 – Chelsea shows how we introduced a little chaos into the mix, and introduces the ability to simply and easily bring a datacenter down. In real time, as the related reads and writes are occurring. Nothing stops, not even a blip… whoops, did I spoil it? Give it a watch, it’s a solid keynote demo!

At the 20-minute mark, Billy introduced DataStax Constellation. Watch it, learn more, etc. Following that, Billy talks about Insights, which will provide built-in, services-based AI, system health, and related capabilities within the cloud offering.

After the keynote, everybody broke out into technical sessions on a wide, very wide range of topics. From Apache Cassandra to DataStax to Kafka to Vue.js! Great day!

Day 2 – Apache Cassandra v4.0

On day two Billy starts off the keynotes and introduces others, including Nate McCall, the Apache Cassandra PMC Chair and a committer to the project. Nate dove into the new features, capabilities, and changes in v4.0.

Next up is DataStax CTO (and founder!) and Apache Cassandra committer of yore, and more, Jonathan Ellis! (The video is time-point linked below so you can dive right into the talk.)

After the keynotes came more technical sessions. I attended some architecture discussions around graph and related technology. Lots of good conversations. I really enjoyed it, and to wrap it all up that evening we had a closing keynote with Keren Elazari.

Departure

I had a great time, and as always I like a little lagniappe, so here are some photos from the trip back. If you’d like to join DataStax Accelerate in 2020, give the upcoming conference a good look next year!