Terraform “Invalid JWT Signature.”

I ran into this issue recently: the “Invalid JWT Signature.” error while running some Terraform. It occurred whenever I was setting up a bucket in Google Cloud Platform to use as a back end for storing Terraform’s state. In the console, here’s the exact error.

(Screenshot: terraform-jwt-invalid.png)
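
For context, the backend wiring that kicks this off is pretty minimal. Here’s a hedged sketch of the sort of config involved – the bucket name is made up, and the exact attribute names vary a bit across Terraform versions, so check the docs for yours:

# Write out a GCS backend config and initialize against it.
# Bucket name and key file name here are hypothetical.
cat > backend.tf <<'EOF'
terraform {
  backend "gcs" {
    bucket      = "my-terraform-state-bucket"
    prefix      = "state"
    credentials = "terraform-key.json"
  }
}
EOF
terraform init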

My first quick searches uncovered some GitHub issues that looked curiously familiar. The first was Invalid JWT Token when using Service Account JSON #3100, which was closed without any particular resolution; further searching didn’t help much, though I’d be curious what the fix actually was. The second was Creating GCP project in terraform #13109, which sounded much closer to my issue. Still, it looked like I should probably just start from scratch, since this setup had already worked on one machine and only failed on the machine I’d shifted to. (Grumble grumble, what’d I miss?)

The Solution(s)?

In the end the message is this: if you work across multiple machines with multiple cloud accounts, you can get the keys mixed up. In this particular case I reset my NIC (you can also just reboot, which is often easier, especially on Windows) and everything started working again. In other cases, though, the JSON file with the gcloud/GCP keys needs to be regenerated because the old key was rolled or otherwise invalidated.
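
If the key does need regenerating, a minimal sketch with the gcloud CLI looks roughly like this – the service account and file paths here are hypothetical, so substitute your own:

# Hypothetical service account; swap in your own.
SA=terraform@my-project.iam.gserviceaccount.com

# List existing keys - handy for spotting one that's been rolled.
gcloud iam service-accounts keys list --iam-account=$SA

# Mint a fresh JSON key and point the Google provider at it.
gcloud iam service-accounts keys create ~/terraform-key.json --iam-account=$SA
export GOOGLE_APPLICATION_CREDENTIALS=~/terraform-key.json
terraform init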

 

Let’s Really Discuss Lock-In

For too long, lock-in has carried an almost entirely negative connotation, even though it shows up in both positive and negative situations. The reality is that there’s a much more nuanced and balanced range of benefits and disadvantages to lock-in. Often it’s simply called a dependency on this or that, but a dependency is just another form of lock-in. Weighing those trade-offs and finding the right balance for your projects can make lock-in a positive game changer, or at least a stable basis on which to work and operate. Sometimes lock-in even provides a way to remove lock-in, by opening up choices elsewhere that in turn carry their own variant of lock-in.

Concrete Lock-in Examples

The JavaScript Lock-In

Take the language we choose to build an application in. JavaScript is a great example. It has become the singular language of the web, at least on the client side. Long ago, that was a form of lock-in: browser makers (and standards bodies) chose the language that dictated how, and in which direction, the web – at least web pages – would progress.

JavaScript has since become a prominent language on the server side too, thanks to Node.js. It has even moved in as a first-class language in serverless technology like AWS’s Lambda. JavaScript is a perfect example of a language that began as a source of very specific lock-in – it was required on the client – but eventually expanded to allow programming in a number of other environments, reducing JavaScript’s lock-in while displacing it, through abstractions, into other spaces such as the server side and serverless functions.

The .NET, Windows, and SQL Server Lock-In

JavaScript is merely one example, and a relatively positive one that expands one’s options more than it limits one’s efforts. But say the decision is made to build a high-speed trading platform, and the choices are SQL Server, .NET C#, and Windows Server. This is a technology combination that has notoriously illustrated in the past * how dangerous lock-in can be.

Say the application was built out on this set of technology platforms: it used stored procedures in SQL Server, locking the application into that specific database; it used proprietary Windows-specific libraries in the .NET C# code; and on Windows it leaned on IIS-specific features to make the application faster. When first built it seemed plenty fast and scaled just right for the demand at the time.

Fast forward to today. The application’s database got sharded when it hit a mere 8 terabytes, loaded onto two super pumped up – at least for today – servers with many cores, many CPUs, GPUs, and all that jazz. They came in around $240k each! The application is tightly coupled to a middle tier, which in turn is tightly coupled to those famous stored procedures, and the application of course gets its turbo capability from those IIS servers.

But today it’s slow. Looking at benchmarks and query times, the database is having a hard time keeping up as is, and the application has outages on a routine basis for a whole variety of reasons. Sometimes tracing and debugging solve the problems quickly; other times the servers just oversubscribe resources and sit thrashing.

Where does this application go? How does one resolve the database loading issues? They’ve already sunk half a million on servers that are pegged out, horizontal scaling isn’t an option, and the tight coupling to Windows Servers running IIS removes the possibility of effectively scaling out the application tier via container technologies, among other issues. This is the type of lock-in that will kill the company unless something changes in a massive way, very soon.

To add to that, this is the description of an actual company that is now defunct. I phrased it in the present tense only to make the point. The hard reality is the company went under, almost entirely because of the costs of maintaining an unsustainable architecture that imposed exorbitant lock-in to very specific tools – largely because the company drank the Kool-Aid and used the tools exactly as suggested. They developed the product into a corner. That mistake was so expensive that it decimated the finances of the company. Not a good scenario, not a happy outcome, and something to be avoided in every way! This is truly the epitome of negative lock-in.

Of course there’s this distinct kind of lock-in we have to steer clear of, but there’s also the lock-in associated with languages and other technology capabilities that will help your company move forward faster, more easily, and with increasing capability. Those are the choices – the ties to technology and capabilities – that decision makers can really leverage with fewer negative consequences.

The “Lock-In” That Enables

A common statement is, “use the right tool for the job.” That, of course, is for an ideal world where ideal decisions can be made all the time. That world doesn’t exist, so we have to strive for balance between decisions that will wreck the ship and decisions that will give us clear waters ahead.

For databases, we need to choose the right database for where we want to go, not just where we are today – not to gold-plate the solution, but to have intent and a clear focus on what we want our future technology to hold for us. If we intend to expand our data and want to maintain the ability to query it effectively – let’s take the massive SQL Server example – what could we have done to prevent that choice from becoming debilitating?

A solution that could have effectively come into play would have been not to shard the relational database, but instead to export or split the data in a more horizontal way and put it into a distributed database store, then build the application so that this system could be used instead of being limited by the relational database. As the queries were built out and the tight coupling to SQL Server removed, the new distributed database could easily add nodes to compensate for the ever-growing size of the data stored. The options are numerous, and all of them are a form of lock-in – but not the kind that eventually killed this company, which had detrimentally locked itself into a relational database.

At the application tier, another option would have been to remove the ties to IIS and start figuring out a way to containerize the application. Years ago one way would have been to move away from .NET, but let’s say that wasn’t really an option for other reasons. Containerization could have been mimicked by shifting to a self-contained web server on Windows, letting the .NET application run under a single service that spins off the application as needed. This would decouple the app from IIS and enable spreading the load quickly across a set number of machines – and eventually, when .NET Core was released, offer the ability to actually containerize and shift entirely off Windows Server to a more cost-efficient solution on Linux.

These are just some ideas. The solutions would of course vary and obviously produce different results. Above all, there are pathways away from negative lock-in and toward the positive lock-in that enables. Realize there’s a balance, and find the choices that leverage lock-in positively.

Nuanced Pedantic Notes:

  • * Note I didn’t say all examples – just that this combo has left more than a few companies out on a limb over the years. Other technologies have of course put companies (people, actually) in awkward situations too; I’m just using this combo as an example. For instance, probably some of the most notorious lock-in comes from the legal ramifications of using Oracle products and being tied into their sales agreements. On the opposite end of the spectrum, Stack Overflow is a great example of how choosing .NET, SQL Server, and related technologies and scaling with them can work just fine.

Kubernetes Networking Explained & The Other Projects

Still stumbling through figuring out what Kubernetes does for networking? Here’s a good piece by Mark Betz titled “Understanding Kubernetes Networking: Pods”. Mark’s latest on Kubernetes is great reading, but definitely take a look at his other writing too – it’s a steady stream of really solid material that is insightful, helpful, and well thought out. Good job Mark.
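
If you want to poke at what Mark describes on a live cluster yourself, here’s a quick hedged sketch with kubectl – the pod name is made up, and the ip commands assume the container image ships iproute2:

# Each pod gets its own IP; -o wide shows it along with the node.
kubectl get pods -o wide

# Inside a pod, eth0 carries that same IP (pod name is hypothetical).
kubectl exec some-pod -- ip addr show eth0

# The pod's default route heads out through the node's bridge.
kubectl exec some-pod -- ip route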

I’ve got two more blog entries coming on getting Kubernetes deployed and what you get with default Terraform configuration setups in Azure and AWS (the Google Cloud Platform write-up is posted here). Once those are complete, that’s a wrap for the series. Then I’m going to shift gears again and start working on a number of elements around application and services (ala microservices) development.

Always staying nimble means always jumping around to the specific details of whatever needs doing! This particular jump actually feels more like a return to familiar territory. After all, the vast majority of my work over the last many years has been writing code against various environments to ensure reliable data access and available services for customers – customers being web front-end devs, nurses in hospitals, GIS workers resolving mapping conflicts, veterinarians, video-watching patrons on the internet, or any host of others using the software I’ve built.

In light of that, here’s a few extra thoughts and tidbits about what’s in the works next.

  • Getting the Data Diluvium Project running, a core product implemented, and usable live out there on the wild web.
  • Getting blue-land-app (a Go service), blue-world-noding (a Node.js service), and blue-world-making (the infrastructure the two run on) all working and in usable states for prospective tutorials, sample usage, and speaking material for presentations. They’re going to be, in the end, solid examples of how to get up and running with those particular stacks + Kubernetes – a kind of from-zero-to-launch set of examples.

Other Links of Note:

Truly Excellent People and Coding Inspiration…

.NET Fringe took place this last week. It’s been a rather long time since a conference I actually got to attend fully – meeting people and talking with them about all the different projects, aspirations, goals, and ideas about what’s next for the future. This conference was perfect to jump into. First and foremost, I knew it was an effort at being inclusive of the existing community and newcomers alike. We’d reached out to many brave souls to come and attend this conference about pushing technology into the future.

I met some truly excellent people – smart, focused, intent – and a whole lot of great conversations followed. Here are a few people you’ll want to keep an eye on based on the technology they’re working on. I got to sit down and talk with every one of these coders, and they’re in top form: smart, inventive, witty, and full of great humor to boot!

Maria Naggaga @Twitter

I met Maria, and one of the first things I saw was her crafty and most excellent art sketches around lifestyles, heroes, and more. I love art like this, and was really impressed with what Maria had done with hers.

Maria giving us the info.

I was able to hang out with Maria a bit more and had some good conversation talking about evangelism, tech fun, and nonsense all around. I also attended her talk “Legacy… What?”, which was excellent. The question posed in its description captures a common one: “When students think about .Net they think: legacy, enterprise, retired, and what is that?” – which I too find to be a valid observation. Is .NET purely legacy these days? For many getting into the field it generally isn’t the landscape of greenfield applications, and is far more commonly associated with legacy applications. Hearing her vantage point on this as an evangelist was eye opening. I gained more ideas and thoughts, and was pushed to really get that question answered for students in a different way… which I’ll add to sometime in the future in another blog entry.

Kathleen Dollard @Twitter && @Github

I spoke to Kathleen while we took a break across the street from the conference at Grendal’s Coffee Shop. We talked a lot about education and what is effective training, diving heavily into what works around video, samples, and related things. You see, we’re both authors at Pluralsight too and spend a lot of time thinking about these things. It was great to be able to sit down and really discuss these topics face to face.

We also dived into a discussion about city livability and how Portland’s transit system works, what is and isn’t working in the city and what it’s like to live here. I was, of course, more than happy to provide as much information as I could.

We also discussed her interest in taking legacy shops (i.e. pre-C# even, maybe Delphi or whatever might exist) and helping them modernize their shop. I found this interesting, as it could be a lot of fun figuring out large gaps in technology like that and helping a company to step forward into the future.

Kathleen gave two presentations at the conference – excellent presentations. One was the “Your Code, Your Brain” presentation, talking about exactly the topic of legacy shops moving forward without disruption.

If you’re interested in Kathleen’s courses, give a look here.

Amy Palamountain @Twitter && @Github

Amy had wicked great slides and samples – probably the most flawless I’ve seen in a while. Matter of fact, a short while after the conference Amy put together a blog entry about those great slides and samples: “Super Smooth Technical Demoes“.

An intent and listening audience.

Amy’s talk at the conference was titled “Space, Time, and State“. It almost sounds like we could just turn that into an acronym. The talk was great, touching on the aspects of reactiveness and the battle with state that we developers fight every day while building solutions.

We also got to talk a little after the presentation about the horror of time zones, and had a slew of good conversation.

Tomasz Janczuk @Twitter && @Github

AAAAAaggghhhhhh! I missed half of Tomasz’s talk! It happens at every conference, right? You get talking to people, excited about this topic or that, and BOOM – you’ve missed half of a talk you fully intended to attend. But hey, the good part is I still got to see half the talk!

If you’re not familiar with Tomasz’s work and you do anything with Node.js, you should pay close attention. Tomasz has been largely responsible for the great work behind Edge.js and for influencing the effort to get Node.js running (and running damn well, might I add) on Windows. For more on Edge.js check out Act I and Act II and the Github repository.

The Big Hit for Me, Distributed Systems

First some context. About four years ago I left the .NET community almost entirely. Even though I was still doing a little work with C#, I primarily switched stacks to push forward with Riak, distributed systems usage, devops deployment of client apps, and a whole host of other things. At the time I had gotten pretty burned out on where the .NET community had ended up worldwide. While some pushed onward with the technologies I loved to work with, I was tired of waiting, so I dived into some esoteric stuff, learned strange programming techniques in JavaScript, Ruby, and Erlang, and dove deeper into distributed technologies for use in application construction.

Some in the community didn’t stop moving the ball forward, though, and at this conference I got a great view into some of that progress! I’m stoked to see this technology and where it is now, because there is a LOT of potential for a number of things. Here are the two talks, and two more great people I got to see speak. One I knew already (great to see you again and hang out, Aaron!) and one I had the privilege and honor to meet (it was most excellent hanging out and seeing your presentation, Lena).

Aaron Stannard @Twitter && @Github

Aaron I’d met back when Troy & I put together the first Node PDX. Aaron had swung into Portland to present on “Building Node.js Applications on Windows Azure“. At .NET Fringe, however, Aaron was diving into a topic that was super exciting to me. The first line of the talk description really says it all: “Distributed computing in .NET isn’t something you often hear about, but it’s becoming an increasingly important area for growing .NET businesses around the globe. And frankly it’s an area where .NET has lagged behind other runtimes and platforms for years – but this is changing!“ Yup, that’s my exact pain point, and it’s awesome to know Aaron & Petabridge are kicking ass in this space now.

Aaron’s presentation was solid, as to be expected. We also had some good conversations before and after the presentation about the state of distributed compute and systems within the Microsoft and Windows ecosystem. To check out more about the Akka .NET work from Aaron & Andrew Skotzko… follow @AkkaDotNet, @aaronontheweb, @petabridge, and @askotzko.

Akka .NET

Alena Dzenisenka @Twitter && @Github

…Lena traveled all the way from Kiev in the Ukraine to provide the .NET Fringe crowd with some serious F# distributed and parallel compute knowledge in “Embracing the Cloud“!  (Slides here)

Here’s a short dive into F# if you’re unfamiliar – you can install it on OS X, Windows, or whatever, so don’t use the “well, I don’t use Windows” excuse to not give it a try! Here’s info about MBrace, which Lena also used in her demo. Also dive into brisk from elastacloud…

In addition to the excellent talk Lena gave, I also got to hang out with her, Phil Haack, Ryan Riley, and others over food at Biwa on the last day of the conference. After speaking with Lena about the Ukraine, computing, coding, and other topics around hacking and the OSS community, she really inspired me to take a dive into these tools for some of the work I’m doing now and will be doing in the near future.

All The Things

Now of course, there were a ton of other people I got to meet, people I got to catch up with whom I haven’t seen in ages, and others I didn’t get to write about. It was a really great conference with great content. I’m looking forward to round 2 and spending more time with everybody in the future!

The whole bunch of us at the end of the conference!

Cheers everybody!   \m/

An Aside of Blog Entries on .NET Fringe

Here are some additional blog entries that others wrote about the event. I’ll be updating this entry with any additional entries I see pop up – so if you post one, let me know – and I’ll also update the talks discussed above with videos once they’re posted.

PDX Cloud – A Question Posed.

I attended the PDX Cloud meeting to present, but more to ask a question. Here’s how I posed that question (slide deck at the bottom of this blog entry): I frame the scenario of the distributed development world of cloud computing, dive into the vertical world of enterprise dev, and then throw down the big question…

This is a situational report on the current state of the somewhat bipolar condition that exists in software development right now. It reflects my train of thought around a number of aspects of the industry, and the questions that have come up time and time again while working with fellow coders and technologists.

The first segment of the industry is the one we often hear about: it’s the hip and cool thing to do, as well as the obvious path into the future right now. It’s not that building things as distributed systems is a new idea; it’s that doing so has become more important and more capable than it ever has been.

A lot of this has to do with the advent of key technologies around virtualization, cloud computing, large scale object storage, and network capabilities. Sitting alone at home, we can spool up enough compute to rival a supercomputer and store more data than we can imagine, with zero theoretical limit to that storage. All of it is networked together behind load balancers, switches, and programmable devices that a mere half dozen years ago would have taken more resources than any reasonably sized small business could afford. All of these capabilities are literally at our fingertips now.

I’ve spooled up a thousand EC2 instances for a demo before – and that was two years ago! Now I, like many others, host applications and databases entirely in memory. SSD-backed storage options at AWS and elsewhere bring these devices into a world where they can be utilized immediately. Blink an eye and you’ll have the resources.
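
For a sense of how little ceremony that takes, here’s a hedged sketch with the AWS CLI – the AMI ID and key name are placeholders:

# Spool up a pile of instances in one call (placeholder AMI & key).
aws ec2 run-instances \
  --image-id ami-12345678 \
  --count 1000 \
  --instance-type t2.micro \
  --key-name demo-key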

In the storage realm, costs are falling through the floor with Glacier, alongside operationally effective options like S3, EBS, Table Store, and other object storage, making our junk trunks limitless. The option of throwing away any data at all seems less and less appealing.
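
As a concrete example of that junk-trunk economics, here’s a hedged sketch of an S3 lifecycle rule that ages everything into Glacier – the bucket name is made up, and the exact rule fields have shifted across API versions, so treat this as a shape rather than gospel:

# Transition all objects to Glacier after 30 days (bucket is hypothetical).
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "age-into-glacier",
      "Status": "Enabled",
      "Prefix": "",
      "Transitions": [ { "Days": 30, "StorageClass": "GLACIER" } ]
    }
  ]
}
EOF
aws s3api put-bucket-lifecycle-configuration \
  --bucket my-junk-trunk \
  --lifecycle-configuration file://lifecycle.json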

Many developers – though definitely not all – have seized the opportunity to alter the way they work and what they’re able to accomplish using these new capabilities. From the now common asynchronous approach to development, to shifting languages and stacks, to the invention of new paradigms that merge development and operations into a devops practice, leadership has stepped up to this changing game.

Vertical systems have held the main position in the enterprise for the past twenty years as the go-to architectures: client-server, three-tier, or whatever one may call it. With a synchronous mindset, the vertical implementation of systems produced several benefits.

We gained the ability, through diligent documentation and widget-style architecture, to build CRUD (Create, Read, Update, Delete) and LOB (Line of Business) applications at a rapid rate. With a simplified approach like this, businesses spent a lot of time focusing on the business itself, not particularly on efficient utilization of resources, processing, or reliability. And who could blame them? With Moore’s Law in effect, it seemed the only real ways to scale vertical systems were writing faster code or buying a bigger computer – and for a while that seemed to work fine.

Most of what I’ll call the “vertical revolution” happened with the GSD mindset – GSD meaning Get Shit Done. Again, an idea that worked pretty well as long as Moore’s Law was in effect. But things have started to change, with Moore’s Law faltering.

Management practices also became a complete TLA soup during this time. The last 20 years continued the standard of “let’s cookie-cut people into widget producers.” It never works as well as it could or should, but the industry – and really, all humanity – keeps trying to do it anyway. That’s fine; we’ve got to try. The vertical stack, however, brought this to the extreme forefront as the industry tried to shoehorn all sorts of development into singular types of management practices.

Overall, as long as things stayed simple and we stuck to our KISS principles as software craftspeople, the architecture stayed straightforward enough and the stack stayed easy. But there are voluminous limitations, and massive management and project issues, with all of this.

Many parts of the industry are screaming for the future. As it stands, some agree on certain aspects of what that future should be, and others agree on other aspects.

We have some bright spots amid the confusion that are making the distributed world much easier, and the technology continues to improve on this.

Some want convergence. That may work well in some ways, but in others it is converging into a clustered mess. As with the roadways of the 50s and the effervescent ideas of 50s planners, we’re finding the idea of the superhighways isn’t working either. The same is starting to appear for some types of device convergence. So where does this really leave us? Where are our weak spots as an industry? It seems like right now we’re stuck in that traffic jam, trying to get to the next step.

Things are looking a little like this freakingnews.com MAV. Multifunction and not functional at all.

So to gain clarity on direction I pose the question…

  • How do we change the latter world (the vertical enterprise realm) to work as well as the new world of distributed systems?

…and a few follow ups.

  • What do developers in the industry need to make true distributed computing advances while drawing on the known elements of the vertical computing realm?
  • What do we need as developers and leaders to more reliably advance the industry without setbacks?
  • What do we need as leaders to move the industry forward to the next steps, stages and developments in converging technology?
  • Are these even valid questions? What would you propose to ask?

TeamCity Setup for Junction Build, Plus Implosions

I wanted to get a continuous delivery process set up for Junction that could give everybody involved a clear and quick status of the project. The easiest way to do this for a Windows 8 .NET project is to set up a TeamCity CI server.

This article covers what I went through to get the server up and running. In the next part I’ll cover the troubleshooting I went through to get a Visual Studio 2012 Windows 8 C# project building correctly on the server.

Finally, the last part is a small surprise, but suffice it to say I’ll be getting a completely different language and tech stack up and running which you’ll likely not guess (or maybe you will).  😉

Setting up TeamCity 8.0.3 (build 27540) using Tier 3 and a Windows 2008 Server, or not…

Setting up a Windows 2008 Server with Tier 3 is super easy, as you’d expect from a cloud service provider. Log into your account and click on “Create Server” to bring up the create server dialog.

Create a New Server screen. Click for full size image.

Click image for full size.

Next enter the information and select a Standard server.

Select how much horsepower you want the build server to have. Click for a full size image.

Click next and then make the last few selections.

Server Tasks. No need to change the defaults here. Click for full size image.

Click Create Server and then sit tight for a few while the server is created. Once the server is created navigate back to the server information screen (I’ll leave you to get back to this screen).

Server information screen. Click for full size image.

On this screen click on the add public ip button to bring up the IP & port selection screen.

Adding a public IP Address. Click for full size image.

On the public IP screen select the HTTP (80) and RDP (3389) ports to open up. Click the add ip address button and again sit tight for a few. Once the server has the IP set, we can log in using RDP (Remote Desktop; on Mac try CoRD).
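
Before remoting in, it’s worth a quick sanity check that the ports really did open; a small sketch from any *nix shell, with a made-up IP:

# Substitute the public IP you just attached.
nc -vz 203.0.113.25 80     # HTTP, for the TeamCity web UI later
nc -vz 203.0.113.25 3389   # RDP, for remoting into the box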

Next install the .NET 4.5 SDK – it’s best to install the latest Windows SDK available for Windows Server 2008 as well.

TeamCity install

In the instructions below you’ll notice everything is now Windows Server 2012. That’s because, after installing everything on a Windows 2008 Server, I stumbled on a very important fact: I’m putting together a build for a Windows 8 Store application, which requires a Windows Server 2012 (or Windows 8) operating system to build on.

I got a sudden flashback to OS X and iOS land there for a second, but leapt in and wiped out the image I’d just built. Since it was in a cloud environment, that merely meant spending a few seconds getting a new OS instance built up. So after a few clicks, just like the instructions above for building a Windows 2008 Server, I had a Windows 2012 Server instead. Once you have a good Windows Server 2012 install up and running, it should have a public IP and some memory, compute, and storage capabilities. In the image below I didn’t give it a huge amount of horsepower, for a few reasons:

  1. It’s just doing builds, not computing the singularity.
  2. If it can build on this, I’m doing good keeping the project clean.
  3. I want to keep the build fast, keeping it on a weak machine and still having it fast also reinforces that I have a clean project.
  4. I don’t need a successful build every second, the server gets used only during pushes by devs. If we get up to dozens of devs hacking on this, I can easily spool up and get a faster, more hard core heavier horsepower option up and running.
Windows Server 2012 w/ Public IP, 1 Proc, 1 GB RAM and 40 GB Storage. Click for full size image.

When Windows Server 2012 boots up the first thing that will launch is the Server Manager. We don’t really need that yet, so just ignore it, close it or move it to the side.

Windows Server 2012 Server Manager. Click for full size image.

The first thing we need is Internet Explorer, so we can download Chrome or Firefox. Internet Explorer is wired up with high security, so the first thing it will do is explode with messages about sites not being in the right zone. It is hugely annoying. So add each site to the zone and head out to the web to pick up Chrome or Firefox.

Internet Explorer security configuration explosions. Click for full size.

In the following screenshots I didn’t actually download Chrome or Firefox first, but instead downloaded TeamCity. I advise getting Chrome or Firefox FIRST and then downloading TeamCity with one of those browsers. Life is dramatically simpler that way.

Team City – add another site to the site list for security clearance. Click for full size.

Team City downloading. Click for full size image.

I know one can turn off the security settings in IE, but it’s just dramatically easier to use one of the other browsers. Trust me on this one – if you want to turn off the security features in IE, be my guest, but I’d recommend just getting a different browser to work with.

Once you’ve got your browser of choice and Team City downloaded, run the installer executable.

Installer Downloaded w/ Security Scan in IE. Click for full size image.

Executable downloaded.

Installing Team City.

Leave the components checked unless you have some specific goal for your server and build agents.

Server & Build Agents Options.

In one of the subsequent dialogs there is the option to run the server under the SYSTEM account or under a user account. Since this is a single purpose machine and I don’t really want to manage Windows users, I’m opting for the SYSTEM account.

SYSTEM Account.

After everything is installed navigate in a browser to http://localhost. This will automatically direct you to the TeamCity First Start page.

TeamCity First Start Page. Click for full size image.

At this point you’ll be prompted to accept the EULA.

Signing one of those famous EULAs. Click for full size image.

Then you’ll be prompted to create the first Administrator user.

Creating the administrator user.

From there you’ll be sent to the TeamCity interface, ready to create a new build project.

TeamCity Tools is marked by a giant pink arrow – great ways to integrate TeamCity into your workflow. Click for full size clarity!

Click on Projects at the top left of the screen and you’ll navigate to the Create a Project dialog. Click on the Create a Project link to start the process.

Creating a project. Click for full size image.

Once you’ve entered the name, project ID and description click on Create. This will bring you to the next step, and to the general tab of the project. On this screen click on Create build configuration.

Project Setup. Click for full size image.

Now create a name, enter the config id, and click the VCS Settings >> button to move on to the next step of the process.

Build Configuration. Click for full size image.

In VCS Settings leave everything as default and click on the Add Build Step >> button.

Click for full size image.

Now select the Visual Studio (sln) option from the Runner type and give the dialog a moment to render the options below. Once they appear, enter the Step Name, set Visual Studio type to Microsoft Visual Studio 2012, and click Save.

Setting up the Build Type. Click for full size image.

From there you’ll be navigated back to the project’s Build Steps screen, where you’ll see the build step listed. We’ll have one more to add in a moment, but for now click on Version Control Settings again.

Build Step displayed; click on Version Control Settings again. Click for full size image.

On this page click on Create and attach a new VCS root.

Attach a new VCS root. Click for full size image.

Now select Git from the dialog and wait for the page to populate the form settings and options.

VCS Root Options. Click for full size image.

Now enter the correct Fetch URL for the Git repo (which on GitHub looks something like https://github.com/username/gitrepo.git and is available to copy and paste from the right hand side of the repo page), enter the appropriate default branch to build, and give it an appropriate VCS root name and VCS root ID. Once that’s done, click on the Test connection button.
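
If Test connection balks, it can help to confirm the fetch URL works outside TeamCity first. A quick hedged check from any machine with git, using the same placeholder URL pattern as above:

# Lists the remote's refs without cloning; a clean listing means the
# URL (and any credentials) are good. username/gitrepo are placeholders.
git ls-remote https://github.com/username/gitrepo.git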

Test Connection. Click for full size image.

Click save and now navigate back to the Build Triggers screen by clicking the #5 option on the right hand side of the page. You’ll be navigated back to the magical Version Control Settings screen, where you now have a few more options available and a VCS root attached.

Version Control Settings. Click for full size.

Now an Add New Build Trigger dialog appears to add the trigger. I set it to trigger a new build on each new check-in; the TeamCity server checks frequently to see if a commit has been made and will initiate a build. Another way to set this up is to skip the trigger and instead go to GitHub (if you’re using GitHub) and set up a push trigger from GitHub itself. That way every commit initiates a build rather than waiting on the TeamCity server – which knows nothing about the actual status of the repo until it polls – giving a more timely build process for your commits and dev workflow.
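
If you go the GitHub-side route, the push hook can be created through the repo’s settings page, or via the GitHub API as sketched below – the owner, repo, token, and especially the TeamCity payload URL are all hypothetical, since the receiving endpoint depends on your server and plugin setup:

# Create a push webhook via the GitHub v3 API (all values are placeholders).
curl -u username:api_token \
  -X POST https://api.github.com/repos/username/gitrepo/hooks \
  -d '{"name": "web", "active": true, "events": ["push"],
       "config": {"url": "http://teamcity.example.com/hook", "content_type": "json"}}'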

Build Trigger. Click for full size image.

The added build trigger. Click for full size image.

Now, one more build step. Add the NuGet Installer (which is included with the TeamCity build server – check the TeamCity 8.x docs on the NuGet Installer and NuGet for more information). Once you’ve ensured the NuGet Installer you need is available, add a new build step, select NuGet Installer from the Runner Type, and the respective form will populate below.
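
Under the hood this step automates roughly what you’d run by hand on a build box – something like the following, where the solution name is hypothetical:

# Roughly what the NuGet Installer step does (solution name is made up).
nuget.exe restore Junction.sln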

NuGet Installer. Click for full size image.

Once the step is added, click on Reorder Build Steps under the Build Steps list, and a dialog specifically for reordering the build steps will appear.

Reordered Build Steps. Click for full size image.

Reorder the steps so that Getting NuGetty (the name I’ve given it; click for a full size image) runs first.

The NuGet Settings. Under the NuGet.exe is where to add the Nuget executable if it isn’t already installed and available. Click the NuGet settings for options. Click for full size image.

At this point you have all of the steps you actually need. You’ll be able to go back to the main projects screen and build the project.

When you do this, however, if you’ve set this up to build a Windows 8 Store project, you’ll get a build failure – which is a total bummer, but it makes for a great follow-up blog that I’ll have posted real soon! For now, these are great steps for getting a modern ASP.NET, Java, Maven, or a whole host of other builds up and running. For the solution around the Windows 8 Store project, keep reading (subscribe to the RSS on the top right hand side!) and I’ll have that posted up real soon.

Until next entry, Cheers!  > Adron

Using Bosh to Bootstrap Cloud Foundry via Stark & Wayne Consulting

I finally sat down and really started to take a stab at Cloud Foundry Bosh. Here’s the quick lowdown on installing the necessary bits and getting an initial environment built. Big thanks out to Dr Nic @drnic, Luke Bakken & Brian McClain @brianmmcclain for the initial pointers to where the good content is. With their guidance and help I’ve put together this how-to. Enjoy… boshing.

Prerequisites

Step: Get an instance/machine up and running.

To make sure I had a totally clean starting point, I started with an AWS EC2 instance to work from – a micro instance loaded with Ubuntu. You can use your local workstation or whatever; it really doesn’t matter. The one catch, of course, is that you’ll need a supported *nix based operating system.

Step: Get things updated for Ubuntu.

sudo apt-get update

Step: Get cURL to make life easy.

sudo apt-get install curl

Step: Get Ruby, in a proper way.

\curl -L https://get.rvm.io | bash -s stable
source ~/.rvm/scripts/rvm
rvm autolibs enable
rvm requirements

Enabling autolibs sets things up so that rvm will install all the requirements when the ‘rvm requirements’ command runs. It used to just show you what you needed, and then you’d have to go through and install them yourself. This requirements phase includes specifics such as git, gcc, sqlite, and other tools needed to build, execute, and work with Ruby via rvm – really helpful things overall, which will come in handy later when using this instance for other purposes.

Finish up the Ruby install and set it as our default ruby to use.

rvm install 1.9.3
rvm use 1.9.3 --default
rvm rubygems current

Step: Get bosh-bootstrap.

bosh-bootstrap is the easiest way to get started with a sample bosh deployment. For more information check out Dr Nic’s Stark and Wayne repo on Github (also check out the Cloud Foundry Bosh repo).

gem install bosh-bootstrap
gem update --system

Git was installed a little earlier in the process, so now set the default user name and email so that when bosh clones repositories it knows what identity to use.

git config --global user.name "Adron Hall"
git config --global user.email plzdont@spamme.bro

Step: Launch a bosh deploy with the bootstrap.

bosh-bootstrap deploy

You’ll receive a prompt, and here’s what to hit to get a good first deploy.

Stage 1: I select AWS, simply because I have no OpenStack environment. One day maybe I can try out the other option; until then I went with the tried and true AWS. Here you’ll need to enter the access & secret key from the AWS security settings for your AWS account.

For the region, I selected #7, which is west 2 – the data center in Oregon. Why did I select Oregon? Because I live in Portland and that data center is about 50 miles away. Otherwise it doesn’t matter which region you select; any region can spool up almost any type of bosh environment.

Stage 2: In this stage, select default by hitting enter. This will choose the default bosh settings. The default uses a medium instance to spool up a good default Cloud Foundry environment. It also sets up a security group specifically for Cloud Foundry.

Stage 3: At this point you’ll be prompted to select what to do; choose to create an inception virtual machine. After a while – sometimes a few minutes, sometimes an hour or two, depending on internal and external connections – you should receive the “Stage 6: Setup bosh” results.

Stage 6: Setup bosh

setup bosh user
uploading /tmp/remote_script_setup_bosh_user to Inception VM
Initially targeting micro-bosh...
Target set to `microbosh-aws-us-west-2'
Creating initial user adron...
Logged in as `admin'
User `adron' has been created
Login as adron...
Logged in as `adron'
Successfully setup bosh user
cleanup permissions
uploading /tmp/remote_script_cleanup_permissions to Inception VM
Successfully cleanup permissions
Locally targeting and login to new BOSH...
bosh -u adron -p cheesewhiz target 54.214.0.15
Target set to `microbosh-aws-us-west-2'
bosh login adron cheesewhiz
Logged in as `adron'
Confirming: You are now targeting and logged in to your BOSH

ubuntu@ip-yz-xyz-xx-yy:~$

If you look in your AWS Console you should also see a box with a key pair named “inception” and one under the “microbosh-aws-us-west-2” name. The inception instance is an m1.small while the microbosh instance is an m1.medium.
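
You can verify the same thing from the CLI instead of the console; a hedged sketch with the AWS CLI:

# Show the instances bosh-bootstrap created, filtered by key pair name.
aws ec2 describe-instances \
  --filters "Name=key-name,Values=inception,microbosh-aws-us-west-2" \
  --query "Reservations[].Instances[].[InstanceId,InstanceType,State.Name]" \
  --output table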

That should get you going with bosh. In my next entry on bosh I’ll dive into some of Dr Nic & Brian McClain’s work before diving into what exactly Bosh actually is. As one may expect from Stark & Wayne, we can expect some pretty cool stuff, so keep an eye over there.