C# Array to Phone Number String Conversion & Testing with NUnit

Programming Problems & Solutions: “How to Format Arrays as Phone Numbers with NUnit Testing”. The introduction to this series is here and includes links to every post in the series. If you’d like to watch the video (just below this) or see the AI code it up (at the bottom of the post), both are available. But if you just want to work through the problem, keep reading; I cover most of what’s in the video, plus a slightly different path, down below.

The AI continuation and lagniappe are at the bottom of this post.

In the world of software development, sometimes a seemingly simple task has lessons to teach about language features and problem-solving. Today, I’m diving into a fun coding challenge that does exactly that: writing a method in C# that takes an array of 10 integers and returns those numbers formatted as a phone number. This exercise is perfect for understanding array manipulation, string formatting, and how to effectively use testing frameworks like NUnit to verify the solution.
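The post itself solves this in C# with NUnit, but the core logic is language-neutral. As a rough sketch of one way the formatting step might work (the `formatPhone` name and the `(123) 456-7890` layout are my assumptions, not necessarily the post’s), here it is in Go:

```go
package main

import (
	"fmt"
	"strings"
)

// formatPhone joins ten single-digit integers into the common
// US layout "(123) 456-7890", validating each digit along the way.
func formatPhone(digits [10]int) (string, error) {
	var b strings.Builder
	for _, d := range digits {
		if d < 0 || d > 9 {
			return "", fmt.Errorf("digit out of range: %d", d)
		}
		fmt.Fprintf(&b, "%d", d)
	}
	s := b.String()
	return fmt.Sprintf("(%s) %s-%s", s[0:3], s[3:6], s[6:10]), nil
}

func main() {
	out, _ := formatPhone([10]int{1, 2, 3, 4, 5, 6, 7, 8, 9, 0})
	fmt.Println(out) // (123) 456-7890
}
```

The same shape translates directly to C#: concatenate the validated digits, then slice the result into area code, prefix, and line number.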

Continue reading “C# Array to Phone Number String Conversion & Testing with NUnit”

Finding the Maximum Sum Path in a Binary Tree

Programming Problems & Solutions: “Finding the Maximum Sum Path in a Binary Tree” is the first in this series. The introduction to this series is here and includes links to every post in the series. If you’d like to watch the video (just below this) or see the AI code it up (at the bottom of the post), both are available. But if you just want to work through the problem, keep reading; I cover most of what’s in the video, plus a slightly different path, down below.

Scroll to the bottom to see the AI work-through of the code base.

Doing a little dive into the world of binary trees. If you’re in college, just got out, or are having flashbacks to algorithms and data structures classes, this will be familiar territory. It’s been a hot minute since I’ve toiled through these, so I figured it’d be fun to dive in again for a more modern purpose. Over the course of the next few weeks, I’m going to work through a number of algorithm and data structure problems, puzzles, katas, or what have you, and then, once complete, work through them with various AI tools. I’ll measure the results in whatever ways best characterize each system and post what I find. So join me on this journey into algorithms, data structures, and a dose of AI (actually LLMs and ML, but…) systems and services.
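For reference before diving in, the classic recursive approach to this problem tracks, for each node, the best “contribution” it can pass up to its parent, while updating a running best for paths that bend through the node. A hypothetical Go sketch (not the post’s actual solution) looks like this:

```go
package main

import "fmt"

// Node is a simple binary tree node.
type Node struct {
	Val         int
	Left, Right *Node
}

// maxPathSum returns the largest sum along any path in the tree.
// gain(n) is the best contribution n can make to a path continuing
// through its parent; negative contributions are dropped.
func maxPathSum(root *Node) int {
	best := root.Val
	var gain func(n *Node) int
	gain = func(n *Node) int {
		if n == nil {
			return 0
		}
		l := maxInt(gain(n.Left), 0)
		r := maxInt(gain(n.Right), 0)
		// A path may bend through n, using both subtrees.
		if n.Val+l+r > best {
			best = n.Val + l + r
		}
		// But only one side can continue upward to the parent.
		return n.Val + maxInt(l, r)
	}
	gain(root)
	return best
}

func maxInt(a, b int) int {
	if a > b {
		return a
	}
	return b
}

func main() {
	//   -10
	//   /  \
	//  9    20
	//      /  \
	//     15   7
	root := &Node{Val: -10, Left: &Node{Val: 9},
		Right: &Node{Val: 20, Left: &Node{Val: 15}, Right: &Node{Val: 7}}}
	fmt.Println(maxPathSum(root)) // 42 (15 + 20 + 7)
}
```

The single post-order traversal keeps this O(n) in time and O(height) in stack space.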

Continue reading “Finding the Maximum Sum Path in a Binary Tree”

What’s The Practice with Rider & .NET Solutions/Projects? How to resolve IDE errors?

Alright, I’ve fought with these kinds of things since the inception of .NET; the key difference now is the variance across .NET versions and IDEs. The question is, how should I set up my solution and its respective projects?

First – .NET Solution

Usually what I do is create an empty solution: name it, then check the options to create a directory and make it a Git repository.

Then, as a kind of general practice, I create a class library to build out the logic and whatnot, add a test project, and then whatever the interface is going to be. That might be a web app, a GUI, or a console app, as I’ve got selected here. Sometimes, if useful, I go ahead and create a Dockerfile.

At this point, there are a few key problems I always bump into. Since I have a repo, I need a .gitignore file, preferably a README.md, and maybe even a license file, right? How does one add extra files at the solution level so that the IDE is aware of them? What I usually do here is use the terminal to build out the solution-level files, but then the IDE doesn’t know about them. Similar problems come up in the other .NET IDEs, like Visual Studio.

Second – Open a Folder Solution

The second type of solution I find myself creating is started by opening a repo folder.

Empty, with nothing else, it looks like this.

Creating a .gitignore and the other respective files is easy at this point, since a right click will allow me to do just that. I’ve also added some projects.

At this point everything works great. As you can see below, the test runner is wired up, the IDE is aware of the tests, and runners are available for them.

One problem I’ve run into with this technique, however, is that sometimes I close this type of project – one started by merely opening the folder – and all this awareness just disappears. Same machine, same folder, no deletion of the .idea folder or anything like that. It just loses all the awareness.

So my question is: which method do you use to start a project, and how do you, fellow .NET coders, work around these types of issues?

Cassie Schema Migrator >> CaSMa

A few weeks back I started working on a schema migration tool for Apache Cassandra and DataStax Enterprise. Just for context, here are the short definitions of what each of the elements of CaSMa are.

  • Apache Cassandra
    • Definition: Apache Cassandra is a free and open-source, distributed, wide column store, NoSQL database management system designed to handle large amounts of data across many commodity servers, providing high availability with no single point of failure.
    • History: Avinash Lakshman, one of the authors of Amazon’s Dynamo, and Prashant Malik initially developed Cassandra at Facebook to power the Facebook inbox search feature. Facebook released Cassandra as an open-source project on Google code in July 2008. In March 2009 it became an Apache Incubator project. On February 17, 2010 it graduated to a top-level project. Facebook developers named their database after the Trojan mythological prophet Cassandra, with classical allusions to a curse on an oracle.
  • DataStax Enterprise
    • Definition: DataStax Enterprise, or routinely just referred to as DSE, is an extended version of Apache Cassandra with multi-model capabilities around graph, search, analytics, and other features like security capabilities and a core data engine 2x speed improvement.
    • History: DataStax was formed in 2009 by Jonathan Ellis and Matt Pfeil and was originally named Riptano. In 2011 Riptano changed its name to DataStax. For more history, check out the Wikipedia page or the company page for a timeline of events.
  • Schema Migration
    • Definition: In software engineering, schema migration (also database migration, database change management) refers to the management of incremental, reversible changes to relational database schemas. A schema migration is performed on a database whenever it is necessary to update or revert that database’s schema to some newer or older version. Migrations are generally performed programmatically by using a schema migration tool. When invoked with a specified desired schema version, the tool automates the successive application or reversal of an appropriate sequence of schema changes until it is brought to the desired state.
    • Additional references and related materials:
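To make the definition above concrete, here’s a minimal, hypothetical sketch of the planning step a migration tool performs: given a current and a desired schema version, it selects the ordered “up” or “down” changes to apply. The `Migration` type and `plan` function are illustrative only, not CaSMa’s actual API (migrations are assumed sorted by version):

```go
package main

import "fmt"

// Migration pairs an "up" change with the "down" change that reverses it.
type Migration struct {
	Version int
	Up      string // e.g. a CQL statement to apply
	Down    string // e.g. the CQL statement that reverses it
}

// plan returns the ordered statements needed to move the schema from
// the current version to the target version: Ups going forward, or
// Downs in reverse order when rolling back.
func plan(ms []Migration, current, target int) []string {
	var out []string
	if target >= current {
		for _, m := range ms {
			if m.Version > current && m.Version <= target {
				out = append(out, m.Up)
			}
		}
	} else {
		for i := len(ms) - 1; i >= 0; i-- {
			if ms[i].Version <= current && ms[i].Version > target {
				out = append(out, ms[i].Down)
			}
		}
	}
	return out
}

func main() {
	ms := []Migration{
		{1, "CREATE TABLE users (id uuid PRIMARY KEY)", "DROP TABLE users"},
		{2, "ALTER TABLE users ADD email text", "ALTER TABLE users DROP email"},
	}
	fmt.Println(plan(ms, 0, 2)) // both Ups, in order
	fmt.Println(plan(ms, 2, 1)) // just the Down for version 2
}
```

A real tool layers versioned state tracking and execution against the database on top of exactly this kind of ordered selection.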

Over the next dozen weeks or so, as I work on this application via the DataStax Devs Twitch stream (next coding session events list), I’ll also be posting blog posts in parallel about schema migration and my intent to expand the notion of schema migration specifically for multi-model databases and larger-scale NoSQL systems, namely Apache Cassandra and DataStax Enterprise. Here’s a shortlist of the next three episodes:

The other important pieces include the current code base on GitHub, the continuous integration build, and the tasks and issues.

Alright, now that all the collateral and context is listed, let’s get into at a high level what this is all about.

CaSMa’s Mission

Schema migration is a powerful tool for getting a project on track, with consistent deployments and development working against the core database(s). However, it’s largely entrenched in the relational database realm. This means it’s almost entirely focused on schemas built around primary and foreign keys, the complexities of many-to-many relationships, indexes, and the other details that need to be built consistently for a relational database. Many of those things need to be built for a distributed wide column store, key-value, graph, time series, or a million other possibilities too. However, in our current data schema world, that tooling isn’t always readily available.

The mission of CaSMa is to resolve this gap around schema migration, first and foremost for Apache Cassandra, prospectively in turn for DataStax Enterprise, and then onward for other database systems. The mission will then continue with multi-model systems that can and ought to take advantage of schema migration for graph and related schema modeling. At some point the mission will expand to include other schema, data, and state management focused around software development and the data needs within that state.

As progress continues I’ll publish additional posts here on the different data model concepts behind various multi-model database options. These modeling options will put us in a position to work consistently, contextually, and seamlessly with ongoing development efforts. In addition to all this, there will be the weekly Twitch sessions, where I’ll code live and review the coding I’ve done off camera. Check those out on the DataStax Devs Channel.

If you’d like to get into the project and help out just ping me via Twitter @Adron or message me here.

Thrashing Code Twitch Schedule September 19th-October 2nd

I’ve got everything queued back up with some extra Thrashing Code Sessions and will have some on the rails travel streams. Here’s what the schedule looks like so far.

Today at 3pm PST (UPDATED: Sep 19th 2018)

UPDATED: Video available https://youtu.be/NmlzGKUnln4

I’m going to get back into the swing of things this session after last week’s travels. In this session I’m aiming to do several things:

  1. Complete the next steps toward getting a DataStax Enterprise Apache Cassandra cluster up and running via Terraform in Google Cloud Platform. My estimate is that I’ll get to the point of having three instances that launch, with the installation of Cassandra automated on each. Later I’ll aim to expand this, but for now I’m just going to deploy 3 nodes and take it from there. Another future option is to bake the installation into a Packer-built image and use that image for the Terraform execution. Tune in to find out the steps and what I decide to go with.
  2. I’m going to pull up the InteroperabilityBlackBox and start to flesh out some objects for our model. The idea is based on something I stumbled into last week during travels; the thread on that is here.
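For the Terraform step in item 1, a minimal configuration for three instances might look like the following. Every name here, the machine type, image, and the startup script path, is an assumption for illustration; it isn’t the stream’s actual configuration:

```hcl
# Hypothetical sketch: three GCP instances for a Cassandra cluster,
# with node installation handled by a startup script.
resource "google_compute_instance" "cassandra" {
  count        = 3
  name         = "cassandra-node-${count.index}"
  machine_type = "n1-standard-4"
  zone         = "us-west1-a"

  boot_disk {
    initialize_params {
      image = "ubuntu-os-cloud/ubuntu-1804-lts"
    }
  }

  network_interface {
    network = "default"
    access_config {}
  }

  # The Packer alternative mentioned above would replace this script
  # with a pre-baked image in boot_disk.
  metadata_startup_script = file("install-cassandra.sh")
}
```

The `count` argument is what makes scaling from 3 nodes to more a one-line change later.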

Friday (Today) @ 3pm PST

This Friday I’m aiming to cover some Go basics before moving further into the Colligere CLI app. Here are the highlights of the plan.

  1. I’m going to cover some of the topics around program structure, including: type declarations, tuple assignment, variable lifetime, pointers, and other variable-related topics.
  2.  I’m going to cover some basics on packages, initialization of packages, imports, and scope. This is an important aspect of ongoing development with Colligere since we’ll be pulling in a number of packages for generation of the data.
  3. Setting up configuration and schema for the Colligere application using Viper and related tooling.
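As a tiny taste of the program-structure topics in item 1, this sketch touches type declarations, tuple assignment, and pointers in a few lines (the names are mine, not from the Colligere code base):

```go
package main

import "fmt"

// Celsius is a type declaration: a named type over float64.
type Celsius float64

// swap returns its arguments reversed, demonstrating Go's
// multiple return values.
func swap(a, b int) (int, int) {
	return b, a
}

func main() {
	x, y := 1, 2
	x, y = swap(x, y) // tuple assignment: both variables at once

	p := &x // p is a pointer to x
	*p = 10 // writing through the pointer changes x

	fmt.Println(x, y)          // 10 1
	fmt.Println(Celsius(21.5)) // 21.5
}
```

Variable lifetime follows from reachability rather than scope alone: because `p` refers to `x`, the compiler may move `x` to the heap if the reference escapes the function.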

Tuesday, October 2nd @ 3pm PST

This session I’m aiming to get some more Terraform work done around the spin-up and shutdown of the cluster. I’ll dig into more specific points depending on where I get to in the sessions before this one. But it’s on the schedule, so I’ll update this one in the coming days.