This is a continuation of my posts on relational databases, started here. Previous posts on this theme: "Data Modeling", "Let's Talk About Database Schema", "The Exasperating Topic of Database Indexes", and "The Keys & Relationships of Relational Databases".
Query optimization refers to the process of selecting the most efficient way to execute a SQL query. The goal is to retrieve the requested data as quickly and efficiently as possible, making the best use of system resources. Given that there can be multiple strategies to retrieve the same set of data, the optimizer’s role is to choose the best one based on factors like data distribution, statistics, and available resources.
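If you want to see which plan the optimizer actually picked for a statement, most databases will show you. Here's a minimal Go sketch, assuming a local PostgreSQL instance, the lib/pq driver, and a made-up accounts table, that prints the chosen plan via EXPLAIN:

[sourcecode language="go"]
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/lib/pq" // driver choice is an assumption; any database/sql driver works
)

func main() {
	// Connection string is a placeholder; adjust for your environment.
	db, err := sql.Open("postgres", "postgres://localhost:5432/example?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// EXPLAIN asks the optimizer to report the plan it chose without executing the query.
	rows, err := db.Query("EXPLAIN SELECT * FROM accounts WHERE email = 'a@example.com'")
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	// Each row is one line of the plan, e.g. a sequential scan or index scan node.
	for rows.Next() {
		var line string
		if err := rows.Scan(&line); err != nil {
			log.Fatal(err)
		}
		fmt.Println(line)
	}
}
[/sourcecode]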
This is a getting started guide for MariaDB SkySQL. Let's start with two prerequisite definitions:
Some key features of MariaDB include:
To elaborate further on the specifics of MariaDB SkySQL, here are some of the features of the DBaaS (Database as a Service):
Over the past few months I’ve picked up a number of libraries in the Go ecosystem to help me get work done around database engineering. These libraries are ones that I have used to do a range of work primarily around Apache Cassandra, DataStax Enterprise, PostgreSQL, and to a lesser degree MS SQL Server, MySQL, and others. The following is a survey of libraries that I’ve found to be pretty solid for getting the job done.

I’ve broken the following tooling libraries out into these categories:

Veneur – Largely used by and originating from Stripe, this library works as a distributed, fault-tolerant pipeline for observability data emitted at runtime from systems and services throughout your environment. It has server implementations of the DogStatsD protocol and SSF (Sensor Sensibility Format) for aggregating metrics and sending them on for storage, or via sinks to various other systems. It can also work up histograms, sets, and counters as a global aggregator.
TLDR;
Veneur is a convenient sink for various observability primitives with lots of outputs!
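Since Veneur implements the DogStatsD protocol, any standard DogStatsD client can emit to it. Here's a minimal sketch using the DataDog datadog-go client; the address, metric names, and tags are placeholders, and you'd point the address at wherever your Veneur instance is listening:

[sourcecode language="go"]
package main

import (
	"log"
	"time"

	"github.com/DataDog/datadog-go/statsd"
)

func main() {
	// The address is a placeholder for a local Veneur's DogStatsD listener.
	client, err := statsd.New("127.0.0.1:8126")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// A counter and a timer; Veneur aggregates these and forwards them to its sinks.
	client.Incr("requests.count", []string{"service:example"}, 1)
	client.Timing("requests.duration", 42*time.Millisecond, []string{"service:example"}, 1)
}
[/sourcecode]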
Honeycomb.io – I did some work for Honeycomb back in February of 2018 and gotta say I loved the team. Charity @mipsytipsy, Christine @cyen, Ben @maplebed, and crew are tops! Friendly, wildly smart, and humble, all thrown in for good measure. With that said, I'm also a fan of the product. It's a solid high-cardinality query and event intake system for observability. There are libraries for Go as well as other languages, and it's pretty easy to use the Go library to set up ingest for appropriately instrumented applications.
TLDR;
Honeycomb.io is a SaaS tool with available libraries for Go to provide observability insight and data collection for your applications!
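To give a sense of what instrumenting with the Go library looks like, here's a minimal sketch using libhoney-go; the write key, dataset name, and fields are all placeholders:

[sourcecode language="go"]
package main

import (
	"log"

	libhoney "github.com/honeycombio/libhoney-go"
)

func main() {
	// WriteKey and Dataset are placeholders; use your own team key and dataset.
	err := libhoney.Init(libhoney.Config{
		WriteKey: "YOUR_WRITE_KEY",
		Dataset:  "example-dataset",
	})
	if err != nil {
		log.Fatal(err)
	}
	defer libhoney.Close() // flush any pending events on shutdown

	// Events are wide, structured records; add whatever fields matter to you.
	ev := libhoney.NewEvent()
	ev.AddField("method", "GET")
	ev.AddField("status", 200)
	ev.AddField("duration_ms", 153.12)
	if err := ev.Send(); err != nil {
		log.Println("send failed:", err)
	}
}
[/sourcecode]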
OpenCensus – This framework and toolset provides ways to get telemetry out of your services. Currently there are libraries for a number of languages that let you capture, manipulate, and export metrics and distributed traces to the data store of your choice. The key idea is that OpenCensus traces through the course of events in an application, and that data is logged for awareness, insight, and thus observability of your systems.
TLDR;
OpenCensus is a library that provides ways to gather telemetry for your services and store it in the location of your choice.
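Here's a minimal tracing sketch with the OpenCensus Go library; the span names and attribute are made up for illustration, and in a real setup you'd also register an exporter with trace.RegisterExporter to ship spans somewhere useful:

[sourcecode language="go"]
package main

import (
	"context"
	"time"

	"go.opencensus.io/trace"
)

func main() {
	// Sample everything for the sake of the example; production would sample less.
	trace.ApplyConfig(trace.Config{DefaultSampler: trace.AlwaysSample()})

	ctx, span := trace.StartSpan(context.Background(), "main.doWork")
	defer span.End()

	doWork(ctx)
}

func doWork(ctx context.Context) {
	// Child spans pick up the parent span from the context.
	_, span := trace.StartSpan(ctx, "main.doWork/step")
	defer span.End()

	span.AddAttributes(trace.StringAttribute("step", "one"))
	time.Sleep(10 * time.Millisecond) // stand-in for real work
}
[/sourcecode]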
RxGo – This library is the reactive extensions implementation for Go. This one is as much a programming concept as it is a way to enhance, and specifically focus on, observability, so let's take a look at the intro example from the repo's README.md itself.
ReactiveX, or Rx for short, is an API for programming with observable streams. This is a ReactiveX API for the Go language.
ReactiveX is a new, alternative way of asynchronous programming to callbacks, promises, and deferred. It is about processing streams of events or items, with events being any occurrences or changes within the system.
In Go, it is simpler to think of an observable stream as a channel which can Subscribe to a set of handler or callback functions. The pattern is that you Subscribe to an Observable using an Observer:

subscription := observable.Subscribe(observer)

An Observer is a type that consists of three EventHandler fields: the NextHandler, ErrHandler, and DoneHandler. These handlers can be evoked with the OnNext, OnError, and OnDone methods, respectively. The Observer itself is also an EventHandler, which means all of these types can be subscribed to an Observable.

[sourcecode language="go"]
// Only the next item will be handled.
nextHandler := func(item interface{}) {
	if num, ok := item.(int); ok {
		nums = append(nums, num)
	}
}
sub := observable.Subscribe(handlers.NextFunc(nextHandler))
[/sourcecode]
TLDR;
RxGo provides the reactive extensions that make it easier to go full spectrum on observability, gaining significantly greater insight into your applications over time and the events they execute.

Go-Migrate – This library is written in Go and handles database schema migrations for a significant number of databases: PostgreSQL, MySQL, SQLite, Redshift, Neo4j, CockroachDB, and that's just a few.
Example:
migrate -source file://path/to/migrations -database postgres://localhost:5432/database up 2
TLDR;
Go-Migrate is an open source library that can be used via CLI or in code to manage all your schema migration needs.
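For the in-code route, here's a minimal sketch, assuming the golang-migrate import path and a local PostgreSQL database; the migrations path and connection string are placeholders. It applies the next two migrations, mirroring the CLI example above:

[sourcecode language="go"]
package main

import (
	"log"

	"github.com/golang-migrate/migrate/v4"
	_ "github.com/golang-migrate/migrate/v4/database/postgres" // database driver
	_ "github.com/golang-migrate/migrate/v4/source/file"       // file:// source
)

func main() {
	m, err := migrate.New(
		"file://path/to/migrations",
		"postgres://localhost:5432/database?sslmode=disable",
	)
	if err != nil {
		log.Fatal(err)
	}

	// Apply the next two migrations, the same as `up 2` on the CLI.
	if err := m.Steps(2); err != nil && err != migrate.ErrNoChange {
		log.Fatal(err)
	}
}
[/sourcecode]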
Gocqlx Migrate – This library primarily provides extensions to the gocql Go CQL driver library, and one of those extensions is data schema migration functionality.
Example:
[sourcecode language="go"]
package main

import (
	"context"

	"github.com/scylladb/gocqlx/migrate"
)

const dir = "./cql"

func main() {
	// CreateSession is assumed to return an initialized gocql session.
	session := CreateSession()
	defer session.Close()

	ctx := context.Background()
	if err := migrate.Migrate(ctx, session, dir); err != nil {
		panic(err)
	}
}
[/sourcecode]
TLDR;
Gocqlx Migrate is a feature of the Gocqlx extensions library that can be used for schema migrations from within code.

Pachyderm – (Open Source Repo) A pachyderm is
a very large mammal with thick skin, especially an elephant, rhinoceros, or hippopotamus.
So it is kind of a fitting name for this library. The project itself has found funding and bills itself as "Scalable, Reproducible Data Science". I've used it minimally myself, but find it continually popping up on my "use this tool because you'll need a ton of the features" list.
TLDR;
Pachyderm is an open source library, paired with a capital-funded company, that does indeed provide scalable, reproducible data science, in addition to being a great tool for your ETL and related data management needs.
Reflow – This library provides incremental data processing in the cloud. It gives scientists and engineers the ability to put tools together, packaged in Docker images, using programming constructs. The library then evaluates the programs, transparently parallelizing the work and memoizing results – i.e. using goroutines and caching data appropriately to speed up tasks. Reflow was created at GRAIL to manage their NGS (next generation sequencing) bioinformatics workloads on AWS, but has also been used for many other applications, including model training and ad hoc data analyses. Several of Reflow's key features include:
TLDR;
Reflow provides a way for data scientists, and by proxy database administrators, data programmers, and anybody who needs to work through ETL or related tasks, to write programs against their data in the cloud or locally.

Restic (Github) – Restic is a backup CLI and Go library that will back up to a number of targets, including: a local directory, SFTP, HTTP REST, S3, Google Cloud Storage, Azure Blob Storage, and others.
Restic follows several objectives:

For each of these databases there's a particular single driver that I use, except in the case of Apache Cassandra and DataStax Enterprise, where I've also picked up gocqlx to add to my gocql usage. A minimal connection sketch follows the feature lists below.
PostgreSQL – Features:
Gocql Features:
Gocqlx Features:
Go-MSSQLDB – Features:
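As mentioned above, here's a minimal connection sketch with gocql; the host, keyspace, table, and query are placeholders, and gocqlx layers named parameter binding and struct scanning on top of queries like this:

[sourcecode language="go"]
package main

import (
	"log"

	"github.com/gocql/gocql"
)

func main() {
	// Host and keyspace are placeholders; point these at your own cluster.
	cluster := gocql.NewCluster("127.0.0.1")
	cluster.Keyspace = "example"
	cluster.Consistency = gocql.Quorum

	session, err := cluster.CreateSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	// A simple parameterized read against a hypothetical users table.
	var name string
	err = session.Query(`SELECT name FROM users WHERE id = ?`, gocql.TimeUUID()).Scan(&name)
	if err != nil && err != gocql.ErrNotFound {
		log.Fatal(err)
	}
	log.Println("name:", name)
}
[/sourcecode]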
So that's just a few of the libraries I use, have worked with, and suggest checking out if you're delving into database work, especially building systems around databases for reliability and related efforts.
If you’ve got other libraries that you’ve used, or really like, definitely leave a comment and let me know and I’ll update the post to include new libraries for Go. Subscribe to the blog too as I’ve got more posts in the cooker for database work, Go libraries and usage with databases, and a lot more. Happy thrashing code!
I’ve downloaded and installed MySQL recently. I was doing a few things to make it easier to work with and thought, “I ought to document this, it isn’t real intuitive without pages of documentation being read.” So here’s some tips.
1. Make sure you can view all of your files in OS X, especially if you intend to do development. What I've found to be the easiest way to do this is to set up a script application on the desktop. Open up the AppleScript Editor; the quickest way is to press ⌘ + Space Bar, type in applescript, and hit enter. In the editor enter the following code:
[sourcecode language=”bash”]
set dotVisible to do shell script "defaults read com.apple.Finder AppleShowAllFiles"
if dotVisible = "0" then
do shell script "defaults write com.apple.Finder AppleShowAllFiles 1"
else
do shell script "defaults write com.apple.Finder AppleShowAllFiles 0"
end if
tell application "Finder" to quit
delay 1
tell application "Finder" to activate
[/sourcecode]
Then save the file with the following options selected in the Save As dialog.

The only thing you really need to set is the File Format to "Application"; be sure nothing is checked. Now whenever you double-click the file on the desktop, Finder will automatically be restarted with all files visible, and double-clicking again will hide them from view.
2. Set up MySQL and, when done, make sure to add the appropriate aliases to the .bashrc file for the bash shell. These include setting the following two aliases:
[sourcecode language=”bash”]
alias mysql=/usr/local/mysql/bin/mysql
alias mysqladmin=/usr/local/mysql/bin/mysqladmin
[/sourcecode]
That’s it for now, more tidbits to come and a write up of my extended weekend PDX hacking sessions with Geoloqi.