I’ve been organizing conferences for a long while now (with other awesome organizers of course, it’s never a singular person getting that work done!) and they are what they are. Then along came the pandemic and splat, in-person conferences became extinct. I’m sure they’ll be back, but I’m not entirely sure they ought to come back. At least, they ought not come back in the same way they existed pre-pandemic.
Mississippi River in New Orleans along the ole’ Crescent
There’s another type of get-together that I’ve been thinking about that I’m really excited for. I was fortunate enough to have this experience a bunch of years ago in New Orleans with an awesome group of folks. To add a little context: I lived in New Orleans for a good while and grew up about 45 minutes from the city, across the state line in Mississippi. With that, I feel like I’ve got a little bit of context for living the New Orleans lifestyle. I must add, it is a distinctly unique lifestyle; living a New Orleans life is like nothing else in these United States, not even remotely!
When I lived in the area I loved many aspects of the city, and there were aspects I was not happy with. The city has a few parts that make the famous south side of Chicago seem like a peaceful hippy village, but on the other side of the spectrum New Orleans has an intense passion and love among its people. The city is amazing, beautiful, and honestly a marvel of engineering (it’s below sea level). That passion for music, love, and life itself stands prominent throughout the city, and it’s a positive among positives that, in the end, vastly outweighs any of the negatives.
A Dose of That NOLA Life
It’s that famous street y’all!
This adventure I experienced a number of years ago went something like this. In 2010 I had a conference to attend where I was going to speak about various data analysis techniques, coding project ideas, and related technologies around web and data analytics. At the time I worked for a company called WebTrends with a solid bunch. The conference was all set and would be a great time, but it wasn’t the key experience of this trip.
Some friends with a business startup, who were also attending the conference, decided to rent a house down near Decatur Street and turned it into a coder’s house for a full week! It was an entertaining, enjoyable, unique, and worthwhile experience. We were also wildly productive: implementing a number of features, swarming on ideas, and writing up plans for future implementation while thinking through the design in a thorough way. It was spectacular!
But there was more, much more to this truly excellent trip. We had access to New Orleans, after all, which is well known for truly epic food – arguably some of the best options anywhere – to explore flavors, tastes, and expansive ideas in foodie explorations! The local Creole food, the surrounding southern food, and the combinations therein are not comparable to any other part of the United States. And no, New York, San Francisco, Portland, or anywhere else doesn’t even come close in food comparisons, and I’m not even going to engage in that silliness. New Orleans food is a culinary delight in its own world ranking! As can be seen below…
In addition, since I knew the city well, there were streets to walk and places to explore: Jax Brewery, the markets, the levees along the riverfront, a great riverwalk, and steamship paddle wheelers that traverse the Mississippi River for some amazing explorations, views, and food too!
Ok, ok, ok, so that’s a lot of me telling you about the awesomeness of New Orleans. If you’re not into the idea of exploring or visiting the city I can’t really do much more to sell you on the trip. But in the next part of this post I’ll detail an idea: forming a krewe to head south to the city of New Orleans, build awesome software, eat wonderful food, and generally live the relaxed life for a solid week or so. The idea is this krewe will be a parade of its own that’ll set up shop and live this for the escape, the celebration, and the experience of it all! If this sounds interesting to you, read on, here’s the details.
How This Would Work
For some of us the option of choice may be to fly into Louis Armstrong Airport; others would join onboard a train: the City of New Orleans out of Chicago, Memphis, or points between, the Crescent out of Washington DC, Charlotte, Atlanta, Montgomery, or elsewhere, or the Sunset Limited from the west. Upon arriving we’d converge at the house or houses we’d chosen for this adventure, where we’d live for the week and get set up for the projects we’d work on. That night we’d gather for a grand dinner at our first excellent destination.
Day one dinner at Lil Dizzy’s Cafe & Coding Plans
The first day we’d all get breakfast at Lil Dizzy’s Cafe or somewhere thereabouts. There we’d get fueled up with a most epic food win and then depart to plot what we’ll create for the week. This is when we’d get a full plan and some goals together as a group, and decide if we want to break out further into smaller groups (depending on our overall group size). We’d find a good place (likely organized well before the trip) and gather there, post-wicked-awesome-amazing breakfast, and get into all this. This one goal would be the goal for day one!
Looking at that sinking (yes, by almost an inch per year!) Central Business District in New Orleans!
Day two rolling in… later rise, more good food, and coding time
We’d rise a bit later, and get some piping hot coffee and maybe a kicker at Cafe Du Monde for the start of day two. Once collected we’d gather for some day hacking or maybe checking out the brewery blocks (it’s more than just breweries, just sayin’). Then we’d get in some evening coding, building, and creating, then head back into some food and entertainment of whatever sort for the evening. Possibly some jazz at Julius Kimbrough’s Prime Example, the Little Gem Saloon, or the Spotted Cat. Either way, a good time and a good evening however we want to slice and dice it up.
Day three, onward and forward and advance!
Day three and onward would continue along this theme: dynamic organization with a loosely coupled, loosely designed schedule, mostly to keep it flexible enough to live NOLA while we’re there. All the while we get to build something as a krewe (team, crew, cohort, however you’d call the group)!
This would continue for the rest of the week. I’ll have more ideas, more to this proposal, and more to this trip coming in subsequent blog posts. This post has one purpose: to introduce the idea to you, dear reader, and to start the conversation about getting this event put together. If you’d be interested in this idea, please reach out to me via Twitter @Adron, or message me via my Contact Form, or if you have some other means – txt me, sms me, slack me, or whatever – that’ll work too. Whatever the medium, let’s get a conversation started about traveling down to the Crescent City for an EPIC week of food, life, music, and hacking together a solution for whatever it is we create!
For more on this, follow me on Twitter, stay tuned here on the blog, and eventually we’ll get an organizing krewe together and start putting together more specifics, like dates and travel times, core ideas, and more.
Cheers!
References:
New Orleans skyline as featured image above is from Wikipedia Commons.
I did try to make sure there wasn’t rights issues with all those glorious food pictures, but will fix if anything is contested.
You may not have known this yet, dear reader, but I’ve joined the amazing team at Hasura! Over the last few years I’ve gotten back into a number of data-oriented development efforts, often related to my own interest in database systems and web development. From this angle Hasura has a superb technology solution and a solid team, with founders @tanmaigo and @rajoshighosh leading the crew, and I’ll introduce many of them to you all in the coming weeks and months! 👍🏻
Hasura is a GraphQL API server that can be serverless via Hasura Cloud, deployed to any cloud provider, or run locally on your own infrastructure. It is open source too, so you can dig in and check out how things work via the GitHub repo.
For my first step after joining, I put together a deployment using infrastructure as code and wrote a blog entry, “Setup Postgres and a GraphQL API with Hasura on Azure”, using Terraform. It’s a fairly thorough post, albeit always open to critique, and will have a follow-up real soon! Some of the next steps will include further data modeling, covering various relational database paradigms and how those map to Hasura, and lots of additional information. If there’s something you’d like to see, or technologies you’d like to see me put together, do reach out to me @Adron and I’ll see about getting some customized content put together!
One of the first endeavors I’ve started tackling is coordinating new learning material around each of the features, capabilities, patterns, practices, and ideas behind GraphQL, development around and uses of GraphQL, the Hasura product, and how all of these technologies fit together to make software development [pick awesome adjectives here: better, faster, etc] when one is working toward an idea!
See you deep in the code, data science, data extraction, transformation, loading, and all the software development around it all! 🚀
For JavaScript, Go, Python, Terraform, and more: infrastructure, infrastructure as code, web dev, data management, data science, data extraction, transformation, loading, and coding around all of this, I stream regularly on Twitch at https://twitch.tv/adronhall and post the VODs to YouTube, along with entirely new tech and metal content, at https://youtube.com/c/ThrashingCode. I’ll be regularly participating in, scheduling, and adding content at Hasura’s Twitch & YouTube channels too, so follow and subscribe over there as well.
Last, another way to get updates on just the bare minimum of content and coding I’m doing: register for the Thrashing Code Newsletter!
I created a data model to store railroad systems, services, schedules, time points, and related information, detailing the schema in “Beyond CRUD n’ Cruft Data-Modeling” with a few tweaks. I originally created it for Apache Cassandra and have since switched to Postgres, which gives me the option of primary and foreign keys, relations, and the related connections for the model.
In this post I’ll use that schema to build out an infrastructure as code solution with Terraform, utilizing Postgres and Hasura (OSS).
For the Docker Compose file, I placed it in the root of the repository: I added a docker-compose.yaml file and then added services. The first service I set up was the Postgres/PostgreSQL database, using the standard Postgres image on Docker Hub. I opted for version 12, I want it to always restart if it gets shut down or crashes, and the last of the obvious settings is the port, which maps from 5432 to 5432.
For the volume, since I might want to back up or tinker with it, I set the db_data location under my own Codez directory. I tend to set up all my databases like this in case I need to debug things locally.
The POSTGRES_PASSWORD is an environment variable, thus the syntax ${PPASSWORD}. This way no passwords go into the repo. I can then load the environment variable via a standard export POSTGRES_PASSWORD="theSecretPasswordHere!" line in my system startup script or via other means.
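A minimal sketch of that service, assuming the standard Postgres data path inside the container (treat the exact wiring as my reconstruction), looks about like this:

services:
  postgres:
    image: postgres:12          # standard Postgres image from Docker Hub, pinned to v12
    restart: always             # restart on crash or shutdown
    ports:
      - "5432:5432"             # host port mapped to container port
    environment:
      POSTGRES_PASSWORD: ${PPASSWORD}       # loaded from the shell environment, never committed
    volumes:
      - db_data:/var/lib/postgresql/data    # named volume, declared at the bottom of the file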
For the db_data volume, toward the bottom I add the key value setting to reference it.
volumes:
  db_data:
Next I added the GraphQL solution with Hasura. The image pinned to v1.1.0 probably needs to be updated (I believe we’re on version 1.3.x now) so I’ll do that soon, but I got the example working with v1.1.0. The ports are mapped to open 8080 to 8080, this service depends on the postgres service already detailed, and restart is set to always, just as with the postgres service. Finally there are two environment variables for the container, with a sketch of the whole service after the list:
HASURA_GRAPHQL_DATABASE_URL – this variable is the base postgres URL connection string.
HASURA_GRAPHQL_ENABLE_CONSOLE – this is the variable that enables the console user interface. We’ll definitely want this for the development environment; however, in production I’d likely want it turned off.
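Here’s a minimal sketch of the service, where the service name and the specifics of the connection string (default postgres user and database, the postgres service name as host) are my assumptions:

  graphql-engine:                          # nested under the same services: key as postgres
    image: hasura/graphql-engine:v1.1.0    # pinned version, due for an update
    ports:
      - "8080:8080"                        # console and API served here
    depends_on:
      - postgres                           # start after the database service
    restart: always
    environment:
      # assumed: default postgres user/database, host is the compose service name
      HASURA_GRAPHQL_DATABASE_URL: postgres://postgres:${PPASSWORD}@postgres:5432/postgres
      HASURA_GRAPHQL_ENABLE_CONSOLE: "true"    # turn off in production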
At this point the commands to start this are relatively minimal, but in spite of that I like to create a start and stop shell script. My start script and stop script simply look like this:
Starting the services.
docker-compose up -d
For the first execution of the services you may want to skip the -d and instead watch the startup just to become familiar with the events and connections as they start.
Stopping the services.
docker-compose down
🚀 That’s it for the basic development environment; we’re launched and ready for development. With the services started, navigate to http://localhost:8080/console to start working with the user interface. I’ll have more details on the “Beyond CRUD n’ Cruft Data-Modeling” swap to Hasura and Postgres in an upcoming blog post.
For the Terraform files I created a folder and added a main.tf file. I always create a folder to work in, generally, to keep the state files and initial prototyping of the infrastructure in a singular place. Eventually I’ll set up a location to store the state and fully automate the process through continuous integration (CI) and continuous delivery (CD). For now though, just a singular folder to keep it all in.
For this I know I’ll need a few variables and add those to the file. These are variables that I’ll use to provide values to multiple resources in the Terraform templating.
variable "database" {
type = string
}
variable "server" {
type = string
}
variable "username" {
type = string
}
variable "password" {
type = string
}
One other variable I’ll want, so that it’s a little easier to verify what my Hasura connection information is, looks like this.
output "hasura_url" {
value = "postgres://${var.username}%40${azurerm_postgresql_server.logisticsserver.name}:${var.password}@${azurerm_postgresql_server.logisticsserver.fqdn}:5432/${var.database}"
}
Let’s take this one apart a bit. There are a lot of concatenated and interpolated variables being wedged together here. This is basically the Postgres connection string that Hasura will need to make a connection, including the username and password and all of the pertinent parsed and string-escaped values. Note specifically the %40 between the ${var.username} and ${azurerm_postgresql_server.logisticsserver.name} variables: Azure expects the login in username@servername form, so the @ inside the username portion has to be URL-encoded as %40, while the later @ sign that separates the credentials from the host is left as-is. When constructing this connection string it is very important to be mindful of exactly which values get escaped. But, I did the work for you so it’s a pretty easy copy and paste now!
Next I’ll need the Azure provider information.
provider "azurerm" {
version = "=2.20.0"
features {}
}
Note that there is a features block that is just empty; the provider now requires this block to be declared even when it’s empty.
Next up is the resource group that everything will be deployed to.
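A minimal sketch of that resource group, with the name and region matching what shows up in the apply output further down, looks like this:

resource "azurerm_resource_group" "adronsrg" {
  name     = "adrons-rg"   # the group everything below deploys into
  location = "westus2"     # matches the region in the container FQDN later
}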
Now the Postgres server itself. Note the location and resource_group_name simply map back to the resource group. Another thing I found a little confusing, as I wasn’t sure if it was a Terraform name, a resource name tag, or the server name itself, is the “name” key-value pair in this resource. It is in fact the server name, which I’ve assigned from var.server. The next value assigned, “B_Gen5_2”, is the Azure SKU designator, which is a bit cryptic (Basic tier, Gen5 hardware, 2 vCores). More on that in a future post.
After that, the storage is set to, I believe if I RTFM’ed correctly, 5 gigs of storage. For what I’m doing this will be fine. The backup is set up for 7 days of retention. This means I’ll be able to fall back to a backup from any of the last seven days, but after 7 days the backups are rolled and the oldest day is deleted to make space for the newest backup. The geo_redundant_backup_enabled setting is set to false because, with Postgres’ excellent reliability and my desire not to pay for extra reliability insurance, I don’t need geographic redundancy. Last, I set auto_grow_enabled to true, albeit I do need to determine the exact flow of logic this takes for this particular implementation and deployment of Postgres.
The last chunk of details for this resource are simply the username and password, which are derived from variables, which are in turn derived from environment variables to keep the actual username and password out of the repository. The last two bits enable SSL enforcement and set the version of Postgres to v9.5.
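Pulled together, a sketch of the server resource along those lines (attribute names per the 2.x azurerm provider; treat it as a starting point rather than gospel) looks roughly like this:

resource "azurerm_postgresql_server" "logisticsserver" {
  name                = var.server                                 # the actual server name
  location            = azurerm_resource_group.adronsrg.location   # maps back to the resource group
  resource_group_name = azurerm_resource_group.adronsrg.name

  sku_name = "B_Gen5_2"   # Basic tier, Gen5 hardware, 2 vCores

  storage_mb                   = 5120    # ~5 gigs of storage
  backup_retention_days        = 7       # rolling seven days of backups
  geo_redundant_backup_enabled = false   # skipping the extra insurance
  auto_grow_enabled            = true

  administrator_login          = var.username   # fed from environment variables
  administrator_login_password = var.password   # never committed to the repo
  version                      = "9.5"
  ssl_enforcement_enabled      = true
}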
Since the database server is all set up, now I can confidently add an actual database to that server. Here the resource_group_name pulls from the resource group resource and the server_name pulls from the server resource. The name, being the database name itself, I derive from a variable too. Then the character set is UTF8 and collation is set to US English, which are generally standard settings when Postgres is installed for use within the US.
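A sketch of the database resource, where the collation string is my assumption of the Azure-style US English identifier, looks like this:

resource "azurerm_postgresql_database" "logisticsdb" {
  name                = var.database                                   # the database name itself
  resource_group_name = azurerm_resource_group.adronsrg.name
  server_name         = azurerm_postgresql_server.logisticsserver.name
  charset             = "UTF8"
  collation           = "English_United States.1252"   # US English, assumed identifier
}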
The next thing I discovered, after some trial and error and a good bit of searching, is the Postgres-specific firewall rule. It appears this is specific to the Postgres service in Azure; through a number of trials and many errors I attempted to use the standard firewalls and firewall rules available in virtual networks. My understanding now is that Postgres servers exist outside of that paradigm and consequently have their own firewall rules.
This firewall rule basically attaches the firewall to the resource group, then the server itself, and allows internal access between the Postgres Server and the Hasura instance.
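A sketch of that rule (the rule name here is my own pick) looks like this:

resource "azurerm_postgresql_firewall_rule" "logisticsfirewall" {
  name                = "allow-azure-internal"   # hypothetical name
  resource_group_name = azurerm_resource_group.adronsrg.name
  server_name         = azurerm_postgresql_server.logisticsserver.name
  # The 0.0.0.0 to 0.0.0.0 range is Azure's convention for allowing
  # Azure-internal services (like the Hasura container instance) to
  # reach the server, rather than opening it to the public internet.
  start_ip_address = "0.0.0.0"
  end_ip_address   = "0.0.0.0"
}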
The last and final step is setting up the Hasura instance to work with the Postgres Server and the designated database now available.
To set up the Hasura instance I decided to go with the container service that Azure has. It provides a relatively inexpensive, easier, and more concise way to stand up the server than building an entire VM or a full Kubernetes environment just to run a singular instance.
The first section sets up a public IP address, which of course I’ll need to change as the application is developed and I’ll need to provide an actual secured front end. But for now, to prove out the deployment, I’ve left it public, setup the DNS label, and set the OS type.
In the next section of this resource I outline the container details. The name of the container can be pretty much whatever you want; it’s your designator. The image however is specifically hasura/graphql-engine. I’ve set the CPU and memory pretty low, at 0.5 and 1.5 respectively, as I don’t suspect I’ll need a ton of horsepower just to test things out.
Next I set the available port to port 80, point the environment variable HASURA_GRAPHQL_SERVER_PORT at that port, and set HASURA_GRAPHQL_ENABLE_CONSOLE to true so the console displays there. Then finally there’s that wild concatenated, interpolated connection string that I have set up as an output variable – again, specifically for testing – assigned to HASURA_GRAPHQL_DATABASE_URL.
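Assembled, a sketch of the container group (the inner container name is my own pick; the group name and DNS label match the apply output and FQDN further down) looks about like this:

resource "azurerm_container_group" "adronshasure" {
  name                = "adrons-hasura-logistics-data-layer"
  location            = azurerm_resource_group.adronsrg.location
  resource_group_name = azurerm_resource_group.adronsrg.name
  ip_address_type     = "public"              # public for now, to prove out the deployment
  dns_name_label      = "logisticsdatalayer"  # yields the FQDN shown later
  os_type             = "Linux"

  container {
    name   = "hasura"                 # hypothetical designator
    image  = "hasura/graphql-engine"
    cpu    = "0.5"                    # low horsepower, just testing things out
    memory = "1.5"

    ports {
      port     = 80
      protocol = "TCP"
    }

    environment_variables = {
      HASURA_GRAPHQL_SERVER_PORT    = "80"
      HASURA_GRAPHQL_ENABLE_CONSOLE = "true"
      # the same connection string as the hasura_url output variable
      HASURA_GRAPHQL_DATABASE_URL   = "postgres://${var.username}%40${azurerm_postgresql_server.logisticsserver.name}:${var.password}@${azurerm_postgresql_server.logisticsserver.fqdn}:5432/${var.database}"
    }
  }
}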
To run this, similarly to how I set up the dev environment, I’ve created startup and shutdown scripts. The startup script, named prod-start.sh, has the following commands. Note the $PUSERNAME and $PPASSWORD are derived from environment variables, whereas the other two values are just inline.
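A sketch of what prod-start.sh contains, with the inline server and database values matching the apply output below, goes like this:

#!/usr/bin/env bash
# prod-start.sh: apply the Terraform plan, passing the four variables.
# $PUSERNAME and $PPASSWORD come from the shell environment; the server
# and database names are inline.
terraform apply -auto-approve \
  -var "server=logisticscoresystemsdb" \
  -var "database=logistics" \
  -var "username=$PUSERNAME" \
  -var "password=$PPASSWORD"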
Executing that script gives me results that, if everything goes right, look something like this.
./prod-start.sh
azurerm_resource_group.adronsrg: Creating...
azurerm_resource_group.adronsrg: Creation complete after 1s [id=/subscriptions/77ad15ff-226a-4aa9-bef3-648597374f9c/resourceGroups/adrons-rg]
azurerm_postgresql_server.logisticsserver: Creating...
azurerm_postgresql_server.logisticsserver: Still creating... [10s elapsed]
azurerm_postgresql_server.logisticsserver: Still creating... [20s elapsed]
...and it continues.
Do note that the timing will vary; it’s completely normal for this process to take ~3 or more minutes. Once the server is done building, a lot of the other activities start to happen very quickly. Once it’s all done, toward the end of the output I get my hasura_url output variable so that I can confirm it is indeed put together correctly! Now that this is done I can take next steps: remove that output variable, start to tighten security, and more, which I’ll detail in a future blog post once more of the application is built.
... other output here ...
azurerm_container_group.adronshasure: Still creating... [40s elapsed]
azurerm_postgresql_database.logisticsdb: Still creating... [40s elapsed]
azurerm_postgresql_database.logisticsdb: Still creating... [50s elapsed]
azurerm_container_group.adronshasure: Still creating... [50s elapsed]
azurerm_postgresql_database.logisticsdb: Creation complete after 51s [id=/subscriptions/77ad15ff-226a-4aa9-bef3-648597374f9c/resourceGroups/adrons-rg/providers/Microsoft.DBforPostgreSQL/servers/logisticscoresystemsdb/databases/logistics]
azurerm_container_group.adronshasure: Still creating... [1m0s elapsed]
azurerm_container_group.adronshasure: Creation complete after 1m4s [id=/subscriptions/77ad15ff-226a-4aa9-bef3-648597374f9c/resourceGroups/adrons-rg/providers/Microsoft.ContainerInstance/containerGroups/adrons-hasura-logistics-data-layer]
Apply complete! Resources: 5 added, 0 changed, 0 destroyed.
Outputs:
hasura_url = postgres://postgres%40logisticscoresystemsdb:theSecretPassword!@logisticscoresystemsdb.postgres.database.azure.com:5432/logistics
Now if I navigate over to logisticsdatalayer.westus2.azurecontainer.io I can view the Hasura console! But where in the world is this fully qualified domain name (FQDN)? Well, the quickest way to find it is to navigate to the Azure portal and take a look at the details page of the container itself. In the upper right the FQDN is available as well as the IP that has been assigned to the container!
Navigating to that FQDN URI will bring up the Hasura console!
Next Steps
From here I’ll take up next steps in a subsequent post. I’ll get the container secured, map the user interface or CLI or whatever the application is that I build lined up to the API end points, and more!
For JavaScript, Go, Python, Terraform, and more infrastructure, web dev, and coding in general I stream regularly on Twitch at https://twitch.tv/adronhall and post the VODs to YouTube, along with entirely new tech and metal content, at https://youtube.com/c/ThrashingCode.
It’s happening! It’s really happening y’all! People have opinions and things to say!
I’m starting a new segment for my Twitch channel – and, by proxy, a prospective future podcast that will go along with it as lagniappe – and am looking for folks who have something they’d like to converse about!
If you’re in the Seattle area, visiting, living, or otherwise, and would like to join me on a live stream sometime, this is your invite! If you’ve just gotten into programming, started handling infrastructure, or are dealing with big data in those databases, let’s talk. I want to hear about your interest in what you do, what use cases you have, what the mission is, and how you aim to accomplish innovative ways to solve the problems you and/or your organization are working to solve.
As I was saying, programming, infrastructure, and databases are all open topics for the live stream, though there are a few topics I have an extra interest in. Come join me and have a conversation with me, and by proxy the audience, about database tech, how your company is using databases and managing all of its data, and of course especially if that database happens to be Apache Cassandra, DataStax Enterprise, or some other large-scale distributed or multi-model database system. I want to hear about what you’re building, so let’s get together, have a conversation, and let the audience pull up a chair to the table for questions, comments, and more!