Windows 10’s Path of Destruction and Garbage Support for Linux from Dell

At this point, I'd say do not buy a Dell XPS 15 unless you are only going to use Windows 10 apps and such on the machine. Linux runs like garbage on it. The generic drivers that let one sort of load Linux onto the machine work mediocre at best, with lots of crashes, kernel panics, and related wrecks within seconds or minutes (if you get that far) of booting into Linux. So that's the situation there.

Docker, Hyper-V, and VirtualBox Conflict

As I've used Windows 10 I keep hitting hard roadblocks to getting things done. After years of MacOS and Linux usage it's kind of an ongoing, slow motion, insulting train wreck how monopolistically a lot of their applications are still built to run. For example, the latest application I've tried to use that is effectively disabled is VirtualBox. Upon further research I realized I can't use VirtualBox if I have Hyper-V installed. That means if I want to use the latest and greatest Docker software I either have to enable Hyper-V and use Docker, or disable Docker and Hyper-V and use VirtualBox. I could go back a version or two of this and that and use docker-machine with VirtualBox, but, seriously, that's ridiculous.

From what I understand so far, they've built Hyper-V's operation on Windows 10 in a way that effectively prevents VirtualBox, and I suppose VMware, from actually operating. So one has to go and disable Hyper-V in order to run VirtualBox.

Enabling / Disabling VirtualBox and Hyper-V

Ok, here’s one way to do this with Powershell. Open up Powershell with the “Run As Administrator” option. You’ll have to do that by clicking the start menu (or whatever it’s called these days) and typing “power” which brings up the Powershell application icon. Right click on it and select “Run as Administrator”.


# Remember, all of these commands need to be executed via Powershell that is started/opened with "Run As Administrator".
# Disabling Hyper-V
Disable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V-Hypervisor
# Enabling Hyper-V
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All
# You can also do this with DISM, here's that detail.
DISM /Online /Enable-Feature /All /FeatureName:Microsoft-Hyper-V


Here's the dialog (the "Turn Windows features on or off" panel) where one can also disable Hyper-V to run VirtualBox. Once Hyper-V is disabled, Windows 10 will likely want you to restart your computer. You'll actually need to do this, because VirtualBox will still die out without a clean launch of the base underlying operating system.

Once the restart is complete however, VirtualBox should start up and run virtual machines just fine. Emphasis as usual for Windows on the keyword of should.
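
A quick sanity check after the reboot, if you want one, is to ask VirtualBox's own CLI to list your machines (on Windows, VBoxManage lives in the VirtualBox install directory, e.g. under Program Files\Oracle\VirtualBox, so run it from there; the VM name below is just a placeholder):

[sourcecode language="bash"]
# List registered VMs; if this returns without a VT-x/hypervisor error, the conflict is resolved.
VBoxManage list vms
# Optionally start one headless as a further test ("SomeVM" is a placeholder name).
VBoxManage startvm "SomeVM" --type headless
[/sourcecode]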

Virtual Machine SITREP Resolved Poorly

At this point I'm moderately ok with this solution. It just means that I need to re-create all of my images in Hyper-V and nuke VirtualBox. I'm really indifferent about which one I use, but I would have liked to have RTFM'ed the docs beforehand so I'd have realized that upgrading to Windows 10 and using Hyper-V for Docker would kill off the VirtualBox images I've been using for months. Oh well, onward!

The other thing that I find horrifying about this, though, is that as I've learned why and how Hyper-V disables VirtualBox from running side by side, it seems like 90's era monopolistic Microsoft building something in a way that disables competition. Ok, I know it isn't really because of that, but it's close enough that it just seems dirty, nasty, and all around kind of disingenuous on Microsoft's part. Especially considering they neglected Hyper-V for so many years and of course now are all like, "hey, Hyper-V is the way of the future, nobody uses it outside of the Windows camp, but it's the way of the future so bow down and use it Windows users!!" But whatever, I'm at least back in business to get some things done and get some development images put together in Hyper-V.

Hyper-V plus Docker for Windows 10

At this point, since Linux has such poor support on the Dell XPS 15 I’m just sticking with Windows 10 and going the Hyper-V + Docker + Virtual Machines path for what I’m working on. More on this soon, and I’ll be mentioning various things on Twitch soon too. When I get back around to getting the Linux + XPS 15 situation sorted out and tried out again I’ll be sure to blog that too!


Docker Tips n' Tricks for Devs – #0001 – 3 Seconds to Postgresql

The easiest implementation of a Docker container with Postgresql that I've found recently lets you pull and run a Postgresql server with the following commands.

[sourcecode language="bash"]
docker pull postgres:latest
docker run -p 5432:5432 postgres
[/sourcecode]
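
If you'd rather have it running in the background, here's a variant I tend to use; the container name and password are just placeholders, and note that newer versions of the official postgres image want a password set explicitly (if you set one this way, use it instead of the blank password listed below):

[sourcecode language="bash"]
# Run detached with a name so it's easy to stop and remove later.
# POSTGRES_PASSWORD is required by newer versions of the official image.
docker run -d --name dev-postgres -e POSTGRES_PASSWORD=devpass -p 5432:5432 postgres
# Tear it down completely when you're done.
docker rm -f dev-postgres
[/sourcecode]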

Then you can just connect to the Postgresql Server by opening up pgadmin with the following connection information:

  • Host: localhost
  • Port: 5432
  • Maintenance DB: postgres
  • Username: postgres
  • Password:

With that information you’ll be able to connect and use this as a development database that only takes about 3 seconds to launch whenever you need it.
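
If you'd rather skip pgadmin, something like this works from the command line, assuming you have the psql client installed locally or borrow the one inside the container (container name from the sketch above):

[sourcecode language="bash"]
# Connect with a locally installed psql client.
psql -h localhost -p 5432 -U postgres postgres
# Or use the psql client bundled inside the running container.
docker exec -it dev-postgres psql -U postgres
[/sourcecode]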

The Question of Docker, The Future of OS Virtualization

In this article I'm going to take a look at Docker and OS virtualization independently of each other. There's a reason, which will unfold as I dig through some data and provide this look into what is and isn't happening in the virtualization space.

It's important to also note what methods were used to attain the information provided in this article. I have obtained information through speaking with Docker employees and key executives, including Ben Golub and founder Solomon Hykes, over the years since the founding of Docker (and its previous incarnation dotCloud, before the pivot and name change to Docker).

Beyond communicating directly with the Docker team and gaining insight from them I have also done a number of interviews over the course of 4 days. These interviews have followed a fairly standard set of questions and conversation about the Docker technology, including but not limited to the following questions.

  • What is your current use of Docker virtualization technologies?
  • What is your future intended use of Docker technologies?
  • What is the general current configuration and setup of your development team(s) and the tooling that they use (i.e. stack: .NET, Java, Python, Node.js, etc.)?
  • Do you find it helps you to move forward faster than without?

The History of OS-Level Virtualization

First, let’s take a look at where virtualization has been, then I’ll dive into where it is now, and then I’ll take a look at where it appears to be going in the future and derive some information from the interviews and discussions that I’ve had with various teams over the last 4 days.

The Short of It

OS-level virtualization is an approach that allows software to be installed into a complete file system, just as on a hypervisor based virtualization server, but with dramatically faster installation and prospectively better speed overall, because the virtualized environments share the host OS kernel. This cuts down on excess redundancies between the core system and the respective virtual clients on the host.
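
A quick way to see that kernel sharing in action, assuming Docker is installed on a Linux host (on Windows or Mac the "host" here is Docker's own Linux VM): the container reports the host's kernel rather than one of its own.

[sourcecode language="bash"]
# The Alpine container ships no kernel of its own; uname reports the host kernel.
docker run --rm alpine uname -a
# Compare with the host itself.
uname -a
[/sourcecode]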

Virtualization as a concept has been around since the 1960s, with IBM being heavily involved at the Cambridge Scientific Center. Developments continued over time, but the real breakthrough in pushing virtualization into the market was VMware in 1999 with their virtual platform. This hypervisor level virtualization grew into a huge industry with the help of VMware.

However OS-level virtualization, which is what Docker is based on, didn't take off immediately when it was introduced. Many product options came out over time around OS-level virtualization, but nothing made a splash in the industry like Docker has. Fast forward to today: Docker was released in 2013 and has seen ever increasing developer demand and usage.

Timeline of Virtualization Development

Docker really brought OS-level virtualization to the developer community at the right time, in regard to the demands around web development and new ways to implement effective continuous delivery of applications. Docker has been one of the most extensively used OS-level virtualization tools for implementing immutable infrastructure; for continuous build, integration, and deployment environments; and as a general virtual environment to spool up resources as needed for development.

Where we Are With Virtualization

Currently Docker holds a pretty dominant position in the OS-level virtualization market space. Let’s take a quick review of their community statistics and involvement from just a few days ago.

The Stats: Docker on Github -> https://github.com/docker/docker

Watchers: 2017
Starred: 22941
Forks: 5617

16,472 Commits
3 Branches
102 Releases
983 Contributors

Just from that data we can ascertain that the Docker community is active. We can also take a deeper look into the forks, pull requests, acceptance rates, and related data to find that the overall codebase is healthy and has real involvement. This is good to know, since at one point there were questions about whether Docker had the capability to manage the open source legions pushing the product forward while maintaining the integrity, reputation, and quality of the product.

Now let's take a look at what that position is based on, considering the interviews I've had in the last 4 days. Out of the 17 people I spoke with, all knew what Docker is. That's a great position to be in compared to just a few years ago.

Out of the 17 people I spoke with, 15 are working on teams that have implemented Docker, are implementing it, or are somewhere in between in their respective environments.

Of the 17, only 13 said they were concerned in some significant way about Docker security. All of these individuals were working on teams attempting to figure out a way to use Docker in production, instead of only in development or related uses.

The uses that the 17 have, or want to have, for Docker vary as much as the individual work each of them is currently doing. There are however some core similarities in what they're working on where Docker comes into play.

The most common similarity among Docker uses is simply as a platform to build out development testing environments or test servers. This is routinely a database server, or a simple distributed database like Cassandra or Riak, that can be built immutably, then destroyed and recreated whenever it is needed again for test and development. Some of the build outs are done with Docker specifically to work up a mock distributed database environment for testing. Mind you, I'm probably hearing about and seeing this because of my past work with Basho and other distributed systems programmers, companies, and efforts around this type of technology. It's still interesting and very telling nonetheless.
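
As a rough sketch of that pattern (the image and port here are just the official Cassandra defaults; swap in whatever database you actually test against):

[sourcecode language="bash"]
# Spin up a throwaway Cassandra node for a test run.
docker run -d --name test-cassandra -p 9042:9042 cassandra
# ...run the test suite against localhost:9042...
# Destroy it completely; the next run starts from a clean state.
docker rm -f test-cassandra
[/sourcecode]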

The second most common usage is for Docker somewhere in the continuous delivery chain. The push to move the continuous integration and delivery process to a more immutable, repeatable, and reliable process has been a perfect marriage between Docker and these needs. The ability to spin up entire environments in a matter of seconds, destroy them on a whim, and create them again a few moments later has made continuous delivery more powerful and more possible than it has ever been.
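
Roughly, one build step in that chain might look like the following; the image name, registry, and test command are hypothetical placeholders, not anything from a specific team I spoke with:

[sourcecode language="bash"]
# Build an image for this commit, run the tests inside it, then throw it away.
docker build -t myapp:ci-test .
docker run --rm myapp:ci-test ./run-tests.sh
# If the tests pass, tag and push the same image on toward deployment.
docker tag myapp:ci-test myregistry.example.com/myapp:latest
docker push myregistry.example.com/myapp:latest
[/sourcecode]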

Some of the less common, yet still key, uses of Docker that came up during the interviews included: in-memory cache servers, network virtualization, and distributed systems.

Virtualization’s Future

Pathing

With the history covered and the core uses of Docker discussed, let's put those on the table alongside the acquisitions. Docker's acquisitions have provided some insight into the future direction of the company. The acquisitions so far include: Kitematic, SocketPlane, Koality, and Orchard.

From a high level strategic play, the path Docker is pushing forward into is a future of continued virtualization of, as the hipsters might say, "all the things". The purchases of Kitematic and SocketPlane will both help Docker expand past OS virtualization alone and push more toward systemic virtualization of network environments, with programmatic capabilities and more. These are capabilities that are needed to move past the legacy IT environments of yesteryear, which will open up more enterprise possibilities too.

To further their core use case that exists today, Docker has purchased Koality. Koality provides parallelizable continuous integration, deployment, and related services. This enables Docker to provide more built out services around this very important part of the workflow.

The other acquisition was Orchard (orchardup.com), a startup that provides a Docker host in the cloud, instantly. This is a similar purchase to the Koality one. It bulks up capabilities that Docker already had at some level. It also pushes them forward with two branches of capabilities: SaaS on the web and prospectively offering something behind the firewall, where the Koality acquisition might have some part to play as well.

Threat Vectors

Even though the pathways toward the future seem clear for Docker in many ways, in other ways they seem dramatically less clear. For one, there are a number of competitive options in play now, gaining momentum, or on the horizon. One big threat is that Google's lack of interest in Docker has led them to build competing tooling. If they push hard into the OS-level virtualization space they could become a substantial threat.

The other threat vector is the simple unknown of what could become a threat. Something like Mesos might explode in popularity, decide it doesn't want to use Docker, and focus on another virtualization path. In the same sense, Mesos could commoditize Docker to the point that the value add at that level of virtualization doesn't retain a business market value that would sustain Docker.

The invisible threat around this area right now is fairly large. There's no better way to gauge this than to just get into a conversation with some developers about Docker. In one sense they love what it allows them to do, but their laundry list of things they'd like would allow a disruptor to come in and steal the Docker thunder pretty easily. To put it simply, there isn't a magical allegiance to Docker; developers will pick whatever helps them move the ball forward the fastest and easiest.

Another prospective threat is a massive purchase by a legacy software company like Oracle, Microsoft, or someone else. This could effectively destabilize the OSS aspects of the product and slow down development and progress, yet it could increase corporate adoption many times over what it is now. So this possibility is something that shouldn’t be ruled out.

Summary

Docker has two major threats: direct competitors, and the prospect of being leapfrogged by another level of virtualization. The other prospective threat to part of the company is acquisition of Docker itself, though that could mean a huge increase in enterprise penetration. On the path the company and technology are moving forward on, there will be continued growth in usage and capabilities. That growth will hold among the leading technology startups and companies of this kind, while mid-size and larger corporate environments will continue to adopt and deploy at a slower pace.

A Question for You

I've put together what I've noticed, and I'd love to see the things that you, dear reader, might notice about the Docker momentum machine. Do you see networking as a strength, or other levels of virtualization, deployment of machines, integration or delivery, or some other part of this space as the way forward into the future? Let me know what your thoughts are on Twitter or whatever medium you feel like reaching out on. Of course, I'd also love to know if you think I'm wrong about anything I've written here.

Docker, Red Hat and Containerization Wreck Virtualization

Conversation has popped up around a few tweets from Alex Williams regarding virtualization at the Red Hat Summit. Here's one of the tweets that started the conversation:

https://twitter.com/alexwilliams/status/456134531821887489/

Paraphrased, the discussion has been shaped around asking,

"Why has OS-level virtualization via containers (namely Docker) become such a massive hot topic?"

With that, it got me thinking about some of the shifts around containerization and virtualization at the OS level versus at the hypervisor level. Here are some of my thoughts, which seemed to match more than a few thoughts at Red Hat.

  1. Virtualization at the hypervisor level is starting to die off at the application usage level, in favor of app deployment via OS-level virtualization. Virtualization at the OS level is dramatically more effective in almost every scenario that virtualization is used for in application development today. I'm *kind of* seeing this; it's interesting that RH is, I suppose, seeing this much more.
  2. Having a known and identified container, such as what Docker works with, provides dramatically improved speed and a better method for deployment over traditional hypervisor based virtualization or pure OS based deployment, as in the quick sketch below.
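
For instance, deploying a known, tagged image is a one-liner that usually takes seconds rather than the minutes a full VM image would; the nginx image here is just a stand-in for whatever app container you'd actually ship:

[sourcecode language="bash"]
# Pull a known, versioned container image and have it serving traffic in seconds.
docker pull nginx:latest
docker run -d --name web -p 8080:80 nginx:latest
# Verify it's up.
curl -s http://localhost:8080 | head -n 5
[/sourcecode]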

There are other points that have been brought up, but this also got me thinking on a slight side track: what are the big cloud providers doing now? Windows Azure, AWS, Rackspace or GCE, you name it, they're all using a slightly different implementation of virtualized environments. Not always ideally efficient, fast, or effective, but they're always working on them.

…well, random thoughts aside, I’m off to the next session and some hacking in between. Feel free to throw your thoughts into the fray on twitter or in the comments below.

Docker Course, Ubuntu, WordPress, Angular.js, Notes, Rich Hickey, Datomic…

Updates, updates, updates…

Docker Course @ Pluralsight

I added a new course on Docker to my Pluralsight list of courses today. It joins my other course on Riak, and I'm aiming to have more added to that list in the future! Check those out and let me know what you think, how I could improve, what I did right and what you learned (or already knew). I'd greatly appreciate it!

Rich Hickey, Datomic, Clojure, Angular.js and Notes

I started a section on the blog here for notes on topics I’m studying. The first two I’ve hit on are Angular.js and Rich Hickey, Clojure and Hammock Driven Development. I’ll be adding to these over time and will likely report whenever I add good chunks of info or helpful tutorials, how-to docs or just whatever I deem worth mentioning. Simply put I won’t broadcast it much, unless I add some real goodies that are worth it.  😉

Ubuntu & WordPress

I needed a kind of WordPress workstation to hack around with for testing some WordPress things, so I put together quick notes on the fastest and cleanest way to set up a WordPress VM from scratch. The rough shape of it is below.
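
For the curious, on a fresh Ubuntu VM it looks something like this; package names can shift between Ubuntu releases, and you'll still create the database and run the web installer by hand afterward:

[sourcecode language="bash"]
# Install the LAMP pieces WordPress needs.
sudo apt-get update
sudo apt-get install -y apache2 mysql-server php libapache2-mod-php php-mysql
# Drop the latest WordPress into the web root.
cd /var/www/html
sudo wget https://wordpress.org/latest.tar.gz
sudo tar -xzf latest.tar.gz
sudo chown -R www-data:www-data wordpress
# Then create a database and user in MySQL and finish via http://<vm-ip>/wordpress in a browser.
[/sourcecode]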

Until later, happy coding, have a metal \m/ \m/ Friday!