The next thing I wanted set up for my development workspace is a DataStax Enterprise cluster. This will give me all of the Apache Cassandra database features plus a lot of additional features around search, OpsCenter, analytics, and more. I'll elaborate on those in some future posts. For now, let's get an image built that we can use to add nodes to our cluster and set up some other elements.
1: DataStax Enterprise
The general installation instructions for the process I'm stepping through here in this article can be found in this documentation. To do this I started with a Packer template like the one I set up in the second part of this series. It looks, with the installation steps taken out, just like the code below.
For the first part of this process the machine image needs OpenJDK installed; I opted for the required version, 1.8. For more information about OpenJDK there's plenty of material available on the OpenJDK site.
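In the Packer provisioner that amounts to a couple of shell steps. Here's a rough sketch, assuming the same Ubuntu base image and root-run shell provisioner used elsewhere in this series:

[code]
# Install the OpenJDK 8 (1.8) packages required for DataStax Enterprise.
apt-get update
apt-get install -y openjdk-8-jdk

# Confirm the version that landed on the image.
java -version
[/code]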
The next thing I needed to do was to get everything set up so that I could use this Azure image to build an actual Virtual Machine. Since this process, however, is built outside of the primary Terraform build process, I need to import the various assets that are created for Packer image creation, along with the actual Packer images themselves. By importing these asset resources into Terraform's state I can then write configuration and code around them as if I'd created them within the main Terraform build process. This might sound a bit confusing, but check out the process below and it might make more sense. If it is still confusing do let me know, ping me on Twitter @adron and I'll elaborate or edit things so that they read better.
Verification Checklist
At this point there now exists a solidly installed and baked image, available for use in creating a Virtual Machine.
2: Terraform State & Terraform Import Resources
Ok, if you check out part 1 of this series, I set up the Azure CLI, Terraform, and the pertinent configuration and parts to build out infrastructure as code using HCL (HashiCorp Configuration Language), with a little bit of Bash as glue here and there. Then in part 2 and part 3 I set up Packer images and some Terraform resources like Kubernetes and such. All of that is great, but these two parts of the process are now in two entirely different, disconnected states. The two pieces are:
Packer Images
Terraform Infrastructure
The Terraform Infrastructure doesn’t know the Packer Images exist, but they are sitting there in a resource group in Azure. The way to make Terraform aware that these images exist is to import the various things that store the images. To import these resources into the Terraform state, before doing an apply, run the terraform import command.
In order to get all of the resources we need to operate against and build images from, the following import commands need to be issued. I wrote a script file to help me out with each of these, and used jq to make retrieval of the Packer-created Azure image IDs a bit easier. That code looks something like this:
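(A sketch of that part of the script; the image names here are assumptions on my part.)

[code]
# Grab the IDs of the Packer-built images from Azure.
IMAGE_BASE_ID=$(az image list | jq '.[] | select(.name == "baseimage")' | jq -r '.id')
IMAGE_CASSANDRA_ID=$(az image list | jq '.[] | select(.name == "basecassandra")' | jq -r '.id')
[/code]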
Breaking down the jq commands above, the following actions are being taken. First, the Azure CLI command az image list is issued, which is then piped | into jq, where select(.name == "theimagenamehere") finds the element whose name matches the image we're after and returns it, id and all. That result is then piped into another command that returns just the value of the id, jq -r '.id'. The -r is a switch that tells jq to just return the raw data, without enclosing double quotes.
I also needed to import the resource group all of these live in. Following a similar style of piping one command's results into another, I issued this command to get the Resource Group ID: RG_IMPORT=$(az group show --name adronsimages | jq -r '.id'). With those three IDs in hand, there is one more element needed before these can be imported into Terraform state.
The Terraform resources that these imported pieces of state will map to need to be declared, which means the Terraform HCL itself needs to be written out. For that, there are the two images that are needed plus the Resource Group. I added the images in an images.tf file and the Resource Group goes in the resource_groups.tf file.
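A sketch of those declarations; the Terraform resource names, image names, and location are assumptions on my part:

[code language="javascript"]
# resource_groups.tf
resource "azurerm_resource_group" "imported_adronsimages" {
  name     = "adronsimages"
  location = "westus2" # assumed location
}

# images.tf
resource "azurerm_image" "baseimage" {
  name                = "baseimage" # assumed image name
  location            = azurerm_resource_group.imported_adronsimages.location
  resource_group_name = azurerm_resource_group.imported_adronsimages.name
}

resource "azurerm_image" "basecassandra" {
  name                = "basecassandra" # assumed image name
  location            = azurerm_resource_group.imported_adronsimages.location
  resource_group_name = azurerm_resource_group.imported_adronsimages.name
}
[/code]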
Now, issuing these Terraform commands will pull the current state of those resources into Terraform's state, which we can then issue further Terraform commands and applies against.
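Using the IDs captured by the script earlier, and the resource addresses from the sketch above, the imports look something like this:

[code]
terraform import azurerm_resource_group.imported_adronsimages $RG_IMPORT
terraform import azurerm_image.baseimage $IMAGE_BASE_ID
terraform import azurerm_image.basecassandra $IMAGE_CASSANDRA_ID
[/code]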
Running those commands, the results come back something like this.
Verification Checklist
At this point there now exists a solidly installed and baked image, available for use in creating a Virtual Machine.
Now there is also state in Terraform that understands where and what these resources are.
Summary, for now.
This post is shorter than I'd like it to be. But it was taking too long for the next steps to get written up, so fear not, they're on the way! In the coming post I'll cover more of the other resource elements we'll need to import, what is next for getting a virtual machine built off of the image that is now available, some Terraform HCL refactoring, and most importantly putting together the actual DataStax Enterprise / Apache Cassandra clusters! So stay tuned, subscribe to the blog, and of course follow me on the Twitters @Adron.
In part 1 of this series I set up Terraform and put together a basic setup for ongoing use. In part 2 I set up Packer and got a template started that installs Apache Cassandra 3.11.4.
In this part there's one more key piece, a really key piece for iterating and moving quickly with development needs on a day-to-day basis: I need some development love with Kubernetes. Terraform is extremely well suited to spin this up in Azure! Since I set up Terraform in part 1, I'll leave those specifics linked here.
1: Terraform ❤️ Kubernetes
There are a couple of different, and very important, aspects to how and what can be done with Kubernetes via Terraform. First, which I'll cover right here, is the creation of a Kubernetes cluster. Later I'll cover more material related to working with the cluster itself and managing the resources within the cluster. To get a Kubernetes cluster running with Terraform there's a single resource we'll need to use.
I opened up the project – same as in the previous two blog articles in this series – went straight into the main.tf file in the root, and added the following Kubernetes resource.
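A sketch of that resource; the names, node count, and resource group here are assumptions of mine, while the VM size and service principal variables match what's described just below:

[code language="javascript"]
resource "azurerm_kubernetes_cluster" "test" {
  name                = "adrons-dev-k8s"       # assumed name
  location            = "westus2"              # assumed location
  resource_group_name = "adrons-dev-workspace" # assumed resource group
  dns_prefix          = "adronsdevk8s"

  agent_pool_profile {
    name    = "default"
    count   = 3
    vm_size = "Standard_D1_v2"
    os_type = "Linux"
  }

  service_principal {
    client_id     = "${var.client_id}"
    client_secret = "${var.client_secret}"
  }
}
[/code]

Alongside that resource sit two outputs that expose the cluster credentials: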
[code language="javascript"]
output "client_certificate" {
  value = "${azurerm_kubernetes_cluster.test.kube_config.0.client_certificate}"
}

output "kube_config" {
  value = "${azurerm_kubernetes_cluster.test.kube_config_raw}"
}
[/code]
With that I immediately applied the changes to build the environment with terraform apply and confirmed with a yes.
The agent pool profile is what sets up the virtual machines that will make up our Kubernetes cluster. In this case I've selected the Standard_D1_v2 just because that's the default example, but depending on use case this may need to change. For more information about the various virtual machine sizes in Azure, check out the Azure documentation on virtual machine sizes.
The Service Principal, as shown, is pulled from variables, as set up in part 1 and part 2 of this series.
The two output variables in the section of configuration above will print out the client cert and raw configuration which we’ll need for other uses in the future. Be sure to put both of those somewhere that can be retrieved for future use. Ideally, keep them secure! I’ll speak to this more in future posts, but for now I’m going to focus on this Kubernetes Cluster.
Verification Checklist
Based on the infrastructure in part 1 and part 2 plus this addition, there is now the Packer-built Apache Cassandra 3.11.4 image, the Service Principal, Resource Group, and related collateral in Azure, and now a Kubernetes cluster to work with.
2: Kubernetes ❤️ Terraform
Alright, with a Kubernetes (K8s) cluster done, we can now add some more elements to it via Terraform to work with. Specifically, let's add a pod and then get an Nginx container up and running.
The first thing we need however is a connection to the cluster that was created in the previous step. In the HCL (HashiCorp Configuration Language) above we set up two output variables. The data in those variables is needed for our ongoing connection to Kubernetes; however, we don't particularly need to pass them via output variables. The reason is that Terraform can handle the creation of a Kubernetes cluster and the post-creation work of creating pods and related collateral in the order it needs to occur. Since Terraform knows how to do this, there is a way to set up the connection information for the Kubernetes provider that prevents us from needing to post output variables. Before moving on, those should be removed. Once that is done, add the following provider for Kubernetes.
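A sketch of that provider block, wired up to the kube_config attributes of the test cluster created above:

[code language="javascript"]
provider "kubernetes" {
  host                   = "${azurerm_kubernetes_cluster.test.kube_config.0.host}"
  username               = "${azurerm_kubernetes_cluster.test.kube_config.0.username}"
  password               = "${azurerm_kubernetes_cluster.test.kube_config.0.password}"
  client_certificate     = "${base64decode(azurerm_kubernetes_cluster.test.kube_config.0.client_certificate)}"
  client_key             = "${base64decode(azurerm_kubernetes_cluster.test.kube_config.0.client_key)}"
  cluster_ca_certificate = "${base64decode(azurerm_kubernetes_cluster.test.kube_config.0.cluster_ca_certificate)}"
}
[/code]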
In this section of configuration the host is set from the previously created Kubernetes Cluster, then the username and password, then the certificates, and key. Each of these, as you can see, is pulled from the Kubernetes Cluster resource (azurerm_kubernetes_cluster) and then from the test cluster that was created. With this addition run terraform init to get the provider downloaded and ready for use.
With the connection now set, the provider downloaded, the next step is to add the Kubernetes Pod that we’ll need to run the Nginx container. Hat tip to the HashiCorp docs for this specific example.
[code language="javascript"]
resource "kubernetes_pod" "nginx" {
  metadata {
    name = "nginx-example"
    labels = {
      App = "nginx"
    }
  }
  spec {
    container {
      image = "nginx:1.7.8"
      name  = "example"
      port {
        container_port = 80
      }
    }
  }
}
[/code]
This will set up a pod that runs an nginx:1.7.8 container image and makes it available on port 80. To make this container available as a service however there is one more step: creating a Kubernetes Service that makes port 80 available and mapped to the container within Kubernetes.
[code language="javascript"]
resource "kubernetes_service" "nginx" {
  metadata {
    name = "nginx-example"
  }
  spec {
    selector = {
      App = kubernetes_pod.nginx.metadata[0].labels.App
    }
    port {
      port        = 80
      target_port = 80
    }
    type = "LoadBalancer"
  }
}
[/code]
Alright, now the setup is almost 100% complete. The last step is to create another output variable that'll provide a way for us to navigate to and make requests against the Nginx service; for that we'll need either the IP or the hostname. Depending on the cloud provider, one or the other can be retrieved by asking for the load_balancer_ingress[0].ip value or the load_balancer_ingress[0].hostname. For Azure and GCP one can retrieve the IP; for AWS you'd want to specifically get the hostname.
In the end the HCL looks like this.
[code language="javascript"]
provider "azurerm" {
  version = "=1.27.0"
}

# The azurerm_kubernetes_cluster "test" resource shown earlier goes here as well.

provider "kubernetes" {
  host                   = azurerm_kubernetes_cluster.test.kube_config.0.host
  username               = azurerm_kubernetes_cluster.test.kube_config.0.username
  password               = azurerm_kubernetes_cluster.test.kube_config.0.password
  client_certificate     = base64decode(azurerm_kubernetes_cluster.test.kube_config.0.client_certificate)
  client_key             = base64decode(azurerm_kubernetes_cluster.test.kube_config.0.client_key)
  cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.test.kube_config.0.cluster_ca_certificate)
}

resource "kubernetes_pod" "nginx" {
  metadata {
    name = "nginx-example"
    labels = {
      App = "nginx"
    }
  }
  spec {
    container {
      image = "nginx:1.7.8"
      name  = "example"
      port {
        container_port = 80
      }
    }
  }
}

resource "kubernetes_service" "nginx" {
  metadata {
    name = "nginx-example"
  }
  spec {
    selector = {
      App = kubernetes_pod.nginx.metadata[0].labels.App
    }
    port {
      port        = 80
      target_port = 80
    }
    type = "LoadBalancer"
  }
}

output "lb_ip" {
  value = kubernetes_service.nginx.load_balancer_ingress[0].ip
}

output "lb_hostname" {
  value = kubernetes_service.nginx.load_balancer_ingress[0].hostname
}
[/code]
You can also check out this specific iteration of my developer workspace project on Github at the example-nginx-pod-on-kubernetes branch.
Verification Checklist
Based on the infrastructure in part 1 and part 2 plus this addition, there is now the Packer-built Apache Cassandra 3.11.4 image, the Service Principal, Resource Group, and related collateral in Azure, and now a Kubernetes cluster to work with.
With this second segment done, there is now a pod running an nginx container. The container is then running as a service with a port mapping for port 80.
Summary
At this point in the series there are enough elements to really start to get some work done deploying, building some applications, and getting some databases deployed! So subscribe here to the blog, follow me at @Adron on Twitter, @adronhall on Twitch (for even more coding), and subscribe to my YouTube channel here. More material coming about how to get all of this wired together and running, until next post, cheers!
Packer is another great HashiCorp tool for building virtual machine and related images for a wide range of platforms. Specifically this article is going to cover Azure, but the list is long: AWS, GCP, Alicloud, Cloudstack, Digital Ocean, Docker, Hyper-V, VirtualBox, VMware, and others!
Download & Setup
To download Packer navigate over to the HashiCorp download page. Currently it is at version 1.4.2. I installed this executable the same way I installed Terraform, by untarring or unzipping the executable into a path I have set up for all my CLI executables. As shown, I simply opened up the compressed file and pulled the executable over into my Apps directory, making it immediately executable, just like Terraform, from anywhere and any path on my system. If you want it somewhere specific and need to set up a path, refer to the previous blog entry for details and links to where and how to set that up.
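For reference, on Linux that amounts to something like the following sketch, assuming the amd64 build and an ~/Apps directory that's already on the PATH:

[code]
cd ~/Downloads
wget https://releases.hashicorp.com/packer/1.4.2/packer_1.4.2_linux_amd64.zip
unzip packer_1.4.2_linux_amd64.zip -d ~/Apps

# Confirm it's executable from anywhere.
packer version
[/code]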
If the packer command is executed, which I’ve done in the following image, it shows the basic commands as a good CLI would do (cuz Hashicorp makes really good CLI tools!)
The command we’ll be working with mostly to get images setup is the build command. The inspect, validate, and version commands however will become mainstays of use when one really gets into Packer, they’re worth checking out further.
Verification Checklist
Packer is now set up and can be executed from any path on the system.
2: Packer Azure Setup
The next thing that needs to be done is to set up Packer to work with Azure. To get these first few images building, Device Login will be used for authorization. Eventually, with further automation, I'll change this to a Service Principal and likely recreate the images accordingly, but for now Device Login is going to be fine for these examples. Most of the directions are available via the HashiCorp docs located here. As always, I'm going to elaborate a bit beyond the docs.
The Device Login will need three pieces of information: a SubscriptionID, a Resource Group, and a Storage Account. The Resource Group is used in Azure to group the resources that share a particular life-cycle of work. The Storage Account is just a place to store the images that will be created. The SubscriptionID is the ID of your account with Azure.
NOTE: To enable this mode, simply don’t set client_id and client_secret in the Packer build provider.
To get your SubscriptionID just type az login at the terminal. The response will include this value, designated simply as "id". Another value that you'll routinely need is displayed here too: the "tenant_id". Once logged in you can also get a list of accounts and respective SubscriptionID values for each of the accounts you might have, as I've done here.
Here you can see I’ve authenticated against and been authorized in two Azure Accounts. Location also needs to be determined, and can be done so with az account list-locations.
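Something like this covers both; the table output just keeps things readable:

[code]
# Show the accounts you're authorized against, with their subscription IDs.
az account list --output table

# List the available locations to choose one for the image storage.
az account list-locations --output table
[/code]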
This list will continue and continue and continue. From it, a singular location will be chosen for the storage.
In a bash script, I go ahead and assign these values that I’ll need to build this particular image.
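(A sketch; the location and storage account name here are examples of my own.)

[code]
GROUPNAME="adronsimages"          # resource group the images will live in
LOCATION="westus2"                # location picked from az account list-locations
STORAGENAME="adronsimagestorage"  # storage account for image storage
[/code]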
Now the next step is to create the Resource Group and Storage. With a few echo commands to print out what is going on to the console, I add these commands as shown.
[code]
echo 'Creating the managed resource group for images.'
az group create --name $GROUPNAME --location $LOCATION

echo 'Creating the storage account for image storage.'
az storage account create --name $STORAGENAME --resource-group $GROUPNAME --location $LOCATION --sku Standard_LRS
[/code]
The final command here is another echo just to identify what is going to be built and then the Packer build command itself.
[code]
echo 'Building Apache cluster node image.'
packer build node.json
[/code]
The script is now ready to be run. I’ve placed the script inside my packer phase of my build project (see previous post for overall build project and the repository here for details). The last thing needed is a Packer template to build.
Verification Checklist
Packer is now set up and can be executed from any path on the system.
The build has been set up for use with Device Login for this first build.
A script is now available that executes Packer for a build without needing to pass parameters every single time, and it ensures the respective storage and resource groups are available for creation of the image.
3: Azure Images
The next step is getting an image or some images built that are needed for further work. For my use case I want several key images built around various servers and their respective services that I want to use to deploy via Terraform. Here’s an immediate shortlist of images we’ll create:
Apache Cassandra Node – An image that is built with the latest Apache Cassandra installed and ready for deployment into a cluster. In this particular case that would be Apache Cassandra v4, though I'm going to go with 3.11.4 first and then work on getting v4 installed in a subsequent post. The installation instructions we'll mostly be following can be found here.
Gitlab Server – This is a product I like to use, especially for pre-rolled build services and all of those needs. For this workspace it takes care of all source control, build services, and any related work that needs to be done from inside the workspace itself that I'm building, i.e. it's a necessary component for an internal corporate-style continuous build or even continuous integration setup. It just happens this is all getting set up for use internally, but via a public cloud provider on Azure! It can be done in parallel to other environments that one would prospectively control and manage independent of any cloud provider. The installation instructions we'll largely be following are also available via Gitlab here.
DataStax Enterprise 6.7 – DataStax Enterprise is built on Apache Cassandra and extends the capabilities of that database with multi-model options for graph, analytics, search, security, and many other capabilities. For download and installation, most of the instructions I'll be using are located here.
Verification Checklist
Packer is now set up and can be executed from any path on the system.
The build has been set up for use with Device Login for this first build.
A script is now available that executes Packer for a build without needing to pass parameters every single time, and it ensures the respective storage and resource groups are available for creation of the image.
Now there is a list of images we need to create, which we can work from to create the images in Azure.
4: Building an Azure Image with Packer
The first image I want to create is going to be used for an Apache Cassandra 3.11.4 Cluster. First a basic image test is a good idea. For that I’ve used the example below to build a basic Ubuntu 16.04 image.
In the code below there are also two environment variables set up, which I've included in my bash profile so they're available on my machine whenever I run this Packer build or any of the Terraform builds. You can see they're set up in the variables section with a "{{env `TF_VAR_tenant_id`}}". Note that TF_VAR_tenant_id is prefaced with TF_VAR per Terraform convention, which in Terraform makes the variable just "tenant_id" when used. Also note that things that might look like single quotes are indeed backticks, not single quotes, around TF_VAR_tenant_id. Sometimes the blog formats those oddly so I wanted to call that out! (For an example of the environment variables, I set up all of them for the Service Principal setup below, just scroll further down.)
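A minimal template along those lines might look like the following sketch; the image name, resource group, location, and VM size are assumptions of mine, the tenant and subscription IDs come in via the environment variables just described, and client_id and client_secret are left out since Device Login is being used:

[code language="javascript"]
{
  "variables": {
    "tenant_id": "{{env `TF_VAR_tenant_id`}}",
    "subscription_id": "{{env `TF_VAR_subscription_id`}}"
  },
  "builders": [
    {
      "type": "azure-arm",
      "tenant_id": "{{user `tenant_id`}}",
      "subscription_id": "{{user `subscription_id`}}",
      "managed_image_resource_group_name": "adronsimages",
      "managed_image_name": "baseimage",
      "os_type": "Linux",
      "image_publisher": "Canonical",
      "image_offer": "UbuntuServer",
      "image_sku": "16.04-LTS",
      "location": "westus2",
      "vm_size": "Standard_DS2_v2"
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "inline": [
        "apt-get update",
        "apt-get -y upgrade"
      ],
      "execute_command": "chmod +x {{ .Path }}; {{ .Vars }} sudo -E sh '{{ .Path }}'"
    }
  ]
}
[/code]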
During this build, when Packer begins there will be several prompts during the build to authorize the resources being built. Because earlier in the post Device Login was used instead of a Service Principal this step is necessary. It looks something like this.
This will happen a few times and eventually the build will complete and the image will be available. What we really want to do however is get a Service Principal setup and use that so the process can be entirely automated.
Verification Checklist
Packer is now set up and can be executed from any path on the system.
The build has been set up for use with Device Login for this first build.
A script is now available that executes Packer for a build without needing to pass parameters every single time, and it ensures the respective storage and resource groups are available for creation of the image.
We have one base image building, to prove out that the build template we'll start with is indeed working. It is always a good idea to get a base build image template working to provide something to work from.
5: Azure Service Principal for Automation
Ok, a Service Principal is needed now. This is a singular command, but it has to be very specifically this command from what I can tell. Before running it though, know where you are going to store, or how you will pass, the client id and client secret that will be provided by the principal when it is created.
The command for this is az ad sp create-for-rbac -n "Packer" --role contributor --scopes /subscriptions/00000000-0000-0000-0000-000000000 where all those zeros are the Subscription ID. When this executes and completes, it returns all the peripheral values that are needed for authorization via the Service Principal.
One of the easiest ways to keep all of the bits out of your repositories is to setup environment variables. If there’s a secrets vault or something like that then it would be a good idea to use that, but for this example I’m going to setup use of environment variables in the template.
Another thing to notice, which is important when building against these systems, is the "Retrying role assignment creation: 1/36" message, which points to the fact that there are 36 retries built in because of timing and other irregularities in working with cloud systems. For various reasons, this means when coding against such systems we routinely have to put in timeouts, waits, and other errata to ensure we get the messages we want or mark things disabled as needed.
After running that, just for clarity, here’s what my .bashrc/bash_profile file looks like with the added variables.
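(A sketch with placeholder values; the variable names are my assumption, following the TF_VAR_ convention, and the values map to the appId, password, tenant, and subscription from the Service Principal output.)

[code]
export TF_VAR_client_id="00000000-0000-0000-0000-000000000000"      # appId from the Service Principal
export TF_VAR_client_secret="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"  # password from the Service Principal
export TF_VAR_tenant_id="00000000-0000-0000-0000-000000000000"
export TF_VAR_subscription_id="00000000-0000-0000-0000-000000000000"
[/code]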
With that set, a quick source ~/.bashrc or source ~/.bash_profile and the variables are all set for use.
Verification Checklist
Packer is now set up and can be executed from any path on the system.
The build has been set up for use with Device Login for this first build.
A script is now available that executes Packer for a build without needing to pass parameters every single time, and it ensures the respective storage and resource groups are available for creation of the image.
We have one base image building, to prove out that the build template we'll start with is indeed working. It is always a good idea to get a base build image template working to provide something to work from.
The Service Principal is now set up so the image can be built with full automation.
Environment variables are set up so that they won't be checked into the code and configuration repository.
6: Apache Cassandra 3.11.4 Image
Ok, all the pieces are in place. With confirmation that the image builds, that Packer is installed correctly, and with the Azure Service Principal, Managed Resource Group, and related collateral set up, building an actual image with installation steps for Apache Cassandra 3.11.4 can now begin!
First add the client_id and client_secret environment variables to the variables section of the template.
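Assuming environment variable names that follow the same TF_VAR_ convention, the variables section ends up looking something like this:

[code language="javascript"]
"variables": {
  "client_id": "{{env `TF_VAR_client_id`}}",
  "client_secret": "{{env `TF_VAR_client_secret`}}",
  "tenant_id": "{{env `TF_VAR_tenant_id`}}",
  "subscription_id": "{{env `TF_VAR_subscription_id`}}"
}
[/code]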
Now the image can be built, but let's streamline the process a little bit more. Since I only want one image at any particular time from this template, and I want to use the template in a way where I can create images and pass in a few more pertinent pieces of information, I'll tweak that in the Packer build script.
Below I've added the variable name for the image, and dubbed in Cassandra so that I can specifically reference this image in the bash script with IMAGECASSANDRA="basecassandra". Next I added a command to delete an existing image of that name with the az image delete -g $GROUPNAME -n $IMAGECASSANDRA line of script. Finally, toward the end of the file, I've added the variable to be passed into the template with packer build -var 'imagename='$IMAGECASSANDRA node-cassandra.json. Note the odd way of concatenating imagename and the value passed in from the bash script. It isn't super clear which way to do this, but after some troubleshooting this at least works on Linux! I'm assuming it works on MacOS; if anybody else tries it and it doesn't, please let me know.
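Pulled together, those additions to the build script look roughly like this:

[code]
IMAGECASSANDRA="basecassandra"

echo 'Deleting the existing image if it exists.'
az image delete -g $GROUPNAME -n $IMAGECASSANDRA

echo 'Building Apache Cassandra node image.'
packer build -var 'imagename='$IMAGECASSANDRA node-cassandra.json
[/code]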
With that done, the build can be run without needing to manually delete the image each time, since that is part of the script now. The next part to add to the template is the rest of the needed installation steps for Apache Cassandra. These steps can be found on the Apache Cassandra site here.
Under the provisioners section of the Packer template I’ve added the installation steps and removed the sudo part of the commands. Since this runs as root there’s really no need for sudo. The inline part of the provisioner when I finished looks like this.
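A sketch of that inline section, following the Apache install docs for the 3.11 series, with sudo stripped since the provisioner runs as root:

[code language="javascript"]
"inline": [
  "echo 'deb http://www.apache.org/dist/cassandra/debian 311x main' | tee -a /etc/apt/sources.list.d/cassandra.sources.list",
  "curl https://www.apache.org/dist/cassandra/KEYS | apt-key add -",
  "apt-get update",
  "apt-get -y install cassandra"
]
[/code]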
With that completed we now have the full workable template to build a node for starting, or joining, an Apache Cassandra cluster. All the key pieces are there. The finished template is below, with the build script just below that.
(Gist: node-cassandra.json, the finished Packer template.)
(Gist: the Packer build script.)
Verification Checklist
Packer is now set up and can be executed from any path on the system.
The build has been set up for use with Device Login for this first build.
A script is now available that executes Packer for a build without needing to pass parameters every single time, and it ensures the respective storage and resource groups are available for creation of the image.
We have one base image building, to prove out that the build template we'll start with is indeed working. It is always a good idea to get a base build image template working to provide something to work from.
The Service Principal is now set up so the image can be built with full automation.
Environment variables are set up so that they won't be checked into the code and configuration repository.
Packer adds the package repository and installs Cassandra 3.11.4 on Ubuntu 18.04 LTS in Azure.
I’ll get to the next images real soon, but for now, go enjoy the weekend and the next post will be up in this series in about a week and a half!
So I've been fighting through getting some Packer images built in Azure again. I haven't done this in almost a year, but as always I've just walked in and BOOM, I've got an error already. This first error I've gotten is after I create a Resource Group and then a Service Principal. I've gotten all the variables set, validated my JSON for the node I'm trying to build, and then tried to build, which is when the error occurs.
Now this leaves me with a number of questions. What is this 169.254.169.254 and why is that the IP for the attempt to communicate? That seems familiar when using HashiCorp tooling. Also, why is there no route to the host? This example (as I've pasted below) is the same thing as the example in the Microsoft docs here.
The JSON Template for Packer
(Gist: the Packer JSON template in question.)
Big shout out of thanks to Jamie Phillips @phillipsj73 for the direct assist and Patrick Svensson @firstdrafthell for the ping connection via the Twitter. Big props for the patience to dig through my template for Packer and figuring out what happened. On that note, a few things actually were wrong, here’s the run down.
1: Environment Variables, Oops
Jamie noticed it after we took a good look: I hadn't used the actual variable names of the environment variables nor assigned them correctly. Notice at the top, the first block is for variables. Each of the variables there is correct for declaring the environment variables themselves. These of course are then set via my bash startup scripts.
What this does, in turn, is take the environment variables and pass them in as user variables for the builders block to make use of. At this point you may see what I did wrong. These variables are named client_id, client_secret, tenant_id, and subscription_id, which means in the builders block they need to be assigned as such.
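In other words, inside the builders block the assignments should reference the user variables, roughly like so:

[code language="javascript"]
"client_id": "{{user `client_id`}}",
"client_secret": "{{user `client_secret`}}",
"tenant_id": "{{user `tenant_id`}}",
"subscription_id": "{{user `subscription_id`}}"
[/code]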
Notice in the original code above from the Github gist I re-assigned them back to the actual environment variables which won’t work. With that fixed the build almost worked. I’d made one other mistake.
2: Resource Groups != Managed Resource Groups
Ok, so I've been realizing Microsoft has an old way and a new way of how they want images to be built. Has anyone else realized they've had to reinvent their entire cloud offering from the back end up more than once? I'm… well, let's just go with speechless. More on that later.
So creating a resource group by selecting resource groups in the interface and then creating a resource group had not appeared to work. Something was still amiss, so I went and used the CLI command. Yes, it may be the case that the Azure CLI is not in sync with the Azure Console UI. But alas, this got it to work. I had to make sure that the Resource Group created is indeed a Managed Resource Group per this. It also left me pondering that maybe, even though they're very minimalist and to the point, the instructions here might need some updating with actual detailed specifics.
I got the correct Managed Resource Group, changed the managed_image_resource_group_name property to what I had just created, and commenced a build again. It ran for many minutes (I didn't count), and at the end BOOM! Another big error.
This error actually stated that it was likely Packer itself. I hadn’t had one of these before! With that I’m wrapping up for today (the 6th of August) but will be back at it tomorrow to get this resolved.
UPDATED @ 18:34 Aug 8th – Day 2 of A Solution?
Notice the title for this update is a question, because it's a solution but it isn't. I've fought through a number of things trying to get this to work, but so far it seems like this: the Packer template finishes the build on the creation side of things, but when it is cleaning up it comes crashing down. It then requests I file a bug, which I did. The bug is listed here on the Packer repo on Github in the issues.
Here is the big catch to this bug though. I went in and could build a VM from the image that is created even though Packer is throwing an error. I went ahead and created the crash.log gist and requisite details so this bug could be further researched, but it honestly seems like Azure just isn’t reporting a cleanup of the resources correctly back to Packer. But at this point I’m not entirely sure. The current state of the Packer template file I’m trying to build is available via a gist too.
So at this point I’m kind of in a holding pattern waiting to figure out how to clean up this automation step. I need two functional elements, one is this to create images that are clean and primarily have the core functionality and capabilities for the server software – i.e. the Apache Cassandra or DSE Cluster nodes. The second is to get a Kubernetes Cluster built and have that run nodes and something like a Cassandra operator. With the VM route kind of on hold until this irons itself out with Azure, I’m going to spool up a Kubernetes cluster in Azure and start working on that tomorrow.
UPDATED @ 16:02 Aug 14th – Solution is in place!
I filed issue #7961, listed previously in the last update, which was different but effectively a duplicate of #7816, which is fixed with the patch in #7837. I rolled back to Packer version 1.4.1 and also got it to work, which points to the issue being something rooted in version 1.4.2. With that, everything is working again and I've got a good Packer build. The respective completed blog entry where I detail what I'm putting together will be coming out very soon now that things are cooking appropriately!
Also note that the last bug, filed in #7961, wasn't the bug that originally started this post, but was where I ended up. The build of the image however, being the important thing, is working just fine now! Whew!