Category Archives: Kubernetes

KubeCon 2018 Mission Objectives: Is the developer story any better?

This is my second year at KubeCon. This year I had a very different mission than I did last year. Last year I wanted to learn more about service meshes, find out what the status of various features around Kubernetes like stateful sets was, and overall get a better idea of the shape of the project and where it was headed.

This year, I wanted to find out two specific things.

  1. Developer Story: Has the developer story gotten any better?
  2. Database Story: Do any databases, and their respective need for storage, have a good story on Kubernetes yet?

Well, I took my trusty GoPro camera, mounted it up on my shoulder like the cannon the Predator uses, and departed. I was going to attend this conference with a slightly different plan of attack. I wanted to have video, and not only take a few notes, attend some sessions, and generally try to grok the information I collected that way. My thinking went along the lines of: with additional resources, I'd be able to recall and use even more of the information available. With that, here's the video I shot while perusing the showroom floor, along with some general observations. Below the fold and video I'll have additional commentary, links, and updates, along with more sorting of the cruft and garbage from the good news!

Gloo-01

Gloo – Ok, this looks kind of interesting. I stopped to look just because the booth had interesting characters. When I strolled up I listened for a few minutes, but eventually realized I needed to just dig into what docs and collateral existed on the web. Here's what's out there, with some quick summaries.

Gloo exists as an application gateway – kind of a mesh of meshes or something, it wasn't immediately clear. But I RTFMed the Github repo here and snagged this high level architecture diagram, which makes it more interesting and prospectively offers insight into its use.

high_level_architecture

Gloo has some, I suppose, "sub" projects too. Here's a screenshot of the set of 'em. Click it to navigate to solo.io, which appears to be the parent organization. Some pretty cool software there, lots of open source goodness. It also leads me to think that maybe this is part of that first point above that I'm looking for: where is the improved developer story?

gloo.png

More on that later. For now, I want to touch on one more thing before moving on to the next blog posts about the KubeCon details I'm keen to tell you about.

ballerina-logo

Ballerina – Ok, when I approached the booth I wasn't 100% sure this was going to be what I wanted it to be. Upon getting a demo (in the video too), then returning to the web – as ya do – and digging into the details and RTFMing a bit, I have become hopeful. This stack of technology looks good. Let's review!

The website describes Ballerina as,

A compiled, transactional, statically and strongly typed programming language with textual and graphical syntaxes. Ballerina incorporates fundamental concepts of distributed system integration and offers a type safe, concurrent environment to implement microservices with distributed transactions, reliable messaging, stream processing, and workflows.

which sounds like something pretty solid that could really help developers build on – let's say – Kubernetes in a very meaningful way. It could also expand far beyond just Kubernetes, which is something I've wanted to see, and help developers expand and expedite their processes and development around line of business applications. That space is still the same old schtick, from the now ancient RAD tools all the way to today's React and web tools, without a good way to develop with understanding of, or integration with, modern tooling like Kubernetes. It's always: jam it on top, config a bunch of YAML, and toss it over the wall.

A few more key points on how Ballerina is described on the website. Here's the stated philosophy,

Ballerina makes it easy to write cloud native applications while maintaining reliability, scalability, observability, and security.

Observability and security eh? Ok, I’ll be checking into this further, along with finally diving into Rust in the coming weeks. It looks like 2019 is going to be the year I delve into more than one new language! Yikes, that’s gonna be intense!

TiDB – Clearly the team at PingCap didn’t listen to the repeated advice of “don’t write your own database, it’ll…” from Charity Majors and a zillion other people who have written their own databases! But hey, this one, regardless of the advice being unheeded, looks kind of interesting. Right in the TiDB repo they’ve got an architecture diagram which is…  well, check out the diagram first.

tidb-architecture

So applications speak the MySQL protocol to the TiDB cluster, which then uses a DistSQL API (??) and a KV API to connect to TiKV (which stands for Ti Key Value), another cluster, which in turn uses the DistSQL API in the other direction to connect to a Spark cluster, where Spark SQL can be used. It appears the running theme is SQL all the things.

Above this, to manage the clusters and such, there's a "PD Cluster" which I also need to read about. If you watched the video above, you'll notice the reference to it being the ZooKeeper of the system. This "PD Cluster ZooKeeper" thing manages the metadata, TSO, and data location, including the data location pertinent to the Spark cluster. Overall, that's four clusters to manage in the system.

Just for good measure (also in the video), TiDB is built in Go, while TiKV is built in Rust, and some of the data location or part of the Spark comms are handled on the Java Virtual Machine – I think – I might have misunderstood some of the explanation. So how does all this work? I've no idea at this point, but I'm curious to find out. Before that, though, in the next few weeks and months I'm going to be delving into building applications in Node.js, Java, and C# against Cassandra and DataStax Enterprise, so I might add some cross-comparisons against TiDB.

Also, even though I didn't get to have a conversation with anybody from FoundationDB, I'm interested in how it's working these days too, especially considering its somewhat storied history. But hey, what project doesn't have a storied history these days, right? Stay tuned, subscribe here on the blog, and I'll have updates when that work and other videos, Twitch streams, and the like are published.

After all those conversations and running around the floor, at this point I had to take a coffee break. So with that, enjoy this video on how to appropriately grab good coffee real quick and an amazing cookie treat. Cheers!

Yes, I misspelled "dummy" – it's ok, I don't want to re-render it. I also know that the cookie name is kind of vulgar, LOLz, but you know what, welcome to Seattle, we love ya even when your mind is in the gutter!

Kubernetes 101 Workshop

Today I TA'd (Teacher's Assistant) a course with Bridget at the GOTO Chicago Conference. There were a number of workshops besides just the Kubernetes 101, like: Working Effectively with Legacy Code with Michael Feathers (@mfeathers), Estimates or NoEstimates with Woody Zuill (@WoodyZuill), Testing Faster with Dan North (@tastapod), Data Science and Analytics for Developers (Machine Learning) with Phil Winder (@DrPhilWinder), and so many others that I'd love to have multi-processed all at the same time! Digging through Kubernetes from a 101 course level was interesting, as I've never formally tried to educate myself about Kubernetes – I just dove in. My knowledge is pretty random in terms of what I do or don't know, and a 101 course fills in some of the gaps for me.

The conference is located in a cool and sort of strange place for a conference, out kind of in the lake, called the Navy Pier. Honestly, I dig it, it’s a cool place for a conference. I enjoyed the ~15 minute walk from the hotel to the location too, because it’s right there on the tip of the pier, as shown in the fancy map below.

chicagonavypier

The workshop is going well. Bridget is filling up students' brains and I'm going to dork around Kuberneting some Cassandra and Node.js for my talk this Thursday. I'm pretty stoked, as the talk has given me a good excuse to delve into some Node.js again – from a nodal systemic viewpoint: "Node Systems for Node.js Services on Nodes of Systemic Nodal Systems".

Good Morning KubeCon?

kubelogo-wide

I hope you're having a good morning so far. KubeCon has kicked off in full force like the pro-conference that it is. With 4k+ people in attendance the crowds are distinctive, even in a city like Austin. The day lit off with an absurdly early registration time of 7am, and a continental breakfast of some fruit and pastries.

Keynote(s)

Dan Kohn

The keynotes this morning kicked off at 9am. Dan Kohn, the executive director of the Cloud Native Computing Foundation (CNCF), led the keynote session. He started off with a simple story about community building, outlining a quote from Tim Hockin that really wraps up so much of everything happening in this space these days:

“Exciting time for boring infrastructure.”

Dan continued, pointing out that Linux is one of the largest projects on Github and probably one of the, if not the, most important projects on Github. Kubernetes is in the top 10 on Github too – as Jim Zemlin said, "Kubernetes is the Linux of the Cloud". Dan also covered the commit and member involvement in Kubernetes, the growth of the KubeCon event, and how this event in Austin is 4x the size of the last 4 KubeCon events! Huge growth.

“Kubernetes is the Linux of the Cloud” – Jim Zemlin

Dan then introduced an unexpected speaker from Alibaba that elaborated on the massive scale of Alibaba, its leading position in China, and other projects Alibaba is doing with Kubernetes.

NOTE: Diversity Scholarship

The US failed to provide visas for 4 of the diversity scholarship recipients. However, they've been offered attendance at the Copenhagen event instead, so in the end the US has taken yet another hit because of our poor immigration and border control rules. Something that needs to be modernized, and I'm sure we'll keep losing out for years to come until national leadership has the spine to fix this. But I digress, onward with the keynote.

Michelle Noorali

Michelle Noorali, Senior Software Engineer at Microsoft, was then introduced and came out to provide information on Special Interest Groups (SIGs), the deep dive sessions, technical salons, hallway tracks, and more.

She continued to give us the story of KubeCon, Kubernetes, and the distinct history of how the rise of microservices, cloud, and container technology has changed the landscape of infrastructure and related technology. This is something important as the story isn’t always clear, but the story is a fundamental detail that informs and provides a more clearly defined path to where we’re all going with this technology.

Next Michelle dove into some specifics about the Workloads API coming in Kubernetes 1.9. For one, it'll be stable. Another is Windows Server container support in beta – I haven't used it and am curious who has, I'd love to talk!

Michelle then introduced Tom Wilkie to talk about observability and Prometheus.

Next up she introduced Eduardo Silva to discuss Fluentd and announce the v1.0 release. Highlights include multi-process workers, sub-second time resolution, native TLS/SSL, and optimized protocol buffers. He continued and elaborated on the data streaming options, and flowing data to technology like Kafka. Also, Fluentd has finally been ported to Windows, so Windows Server users can now natively use it.

Reliability? What's that? Oliver Gould was then introduced by Michelle to talk about Linkerd and give us some insight into the progress of the tool. Oliver also introduced Conduit, and then gave us a demo of it working.

Michelle then came back around to wrap up her section of the keynote and cover some additional projects within the CNCF: gRPC, Envoy, and such.

Imad Sousou

Next up, Imad Sousou came out to introduce container runtime work at Intel.

NOTE: Just for your information, Intel isn’t just processor chips.

Intel has thousands of software developers working on a wide range of software projects, and is also a company with a fairly large number of people working on open source projects.

Imad then elaborated on Kata Containers, which are hardware-accelerated containers that use virtualization technology.

Dianne Marsh

Dianne Marsh of Netflix came out next to discuss the importance of culture in building these tools, and what happens with them. The core of the talk focused on Spinnaker and Asgard – namely continuous delivery – and other Netflix OSS, and how culture plays a part. She detailed how Netflix culture affected how people accepted and were able to use Asgard or Spinnaker. For instance, the culture at Netflix is one of freedom, and many companies don't have this level of maturity. Netflix has this level of freedom, to deploy, because of a very strong level of trust.

Adrian Cockcroft

Finally, to wrap up the morning keynote, Adrian Cockcroft came out to show us a few things, leading off with cloud native principles: immutable code deployments; high utilization with aggressive efficiency by turning things off; pay as you go; no waiting; and globally deployed and distributed models. At AWS he's working to increase the contributions AWS makes, working with the CNCF (which AWS has joined), pushing cloud native, and integrating CNCF components (all those projects) into AWS.

Some of the projects AWS is involved in include: containerd, Kubernetes installers and security, and CNI, the Container Networking Interface.

Adrian also noted, importantly, that all AWS work with Kubernetes is upstreamed to Kubernetes itself and is not a fork of the project. They're working toward integration at all levels within AWS. Some of the work even includes partnerships with Heptio around authentication for IAM within AWS. Lots of good things, and a lot of high integrity work!

Beyond Kubernetes and other elements, Adrian's teams are working on integrations with SPIFFE, HashiCorp Vault, and other open source tooling. I for one am pretty excited about these tools coming online at AWS, as it'll make my life easier to get some great things deployed and enable the customers and groups I work with! "Much excite, much wow" as the internet would say.

Adrian then dove into Fargate, discussed how it folds in with EKS, and how the integration is going to work.

For now, that’s that for the keynote.

KubeCon – Arrival, Flight, and Scheduling

I'm on another plane departing Seattle via SEATAC (SEA) – an Alaska Air Boeing 737-900 to be specific. The flight is currently en route to Austin, Texas, and the vast majority of people aboard are going to KubeCon. The seats, as they always are, aren't built for any mortal, normal, reasonably sized human being. So we're all cuddled up annoyingly, but making the best of it we can. Seriously though, I'd rather be on an overnight train. I'd rather spend another 24+ hours comfortably studying some Netflix infrastructure and chilling out instead of flying, but that isn't really an option in this giant country, so onward I go as the dream of comfort in transportation eludes me.

I'm set up and aiming to provide coverage of numerous events, topics, and the like while at the conference. To boot, after the conference I'll be writing up some coverage of open source projects and companies that are at KubeCon.

cncf_kubecon_color_new-1

There are a few new practices, techniques, and related things I'm trying out so I can cover even more of the event with useful material. We'll see how that works. As always, much of my coverage will be on the various mediums I post to. The rest may appear in other sources, which I'll tweet about and summarize in an email via my Thrashing Code News at the end of the conference.

My Schedule -> https://kccncna17.sched.com/adronhall
Here are a few of the specific things I’ve got on the roster.

Wednesday

  • Pancake Breakfast with The New Stack cuz’ they’re freakin’ awesome!
  • Dan, Michelle, Imad, Dianne, and Adrian’s keynote!
  • Panel: Kubernetes, Cloud Native and the Public Cloud [B] – Moderated by Dan Kohn, Cloud Native Computing Foundation
  • Full Stack Visibility with Elastic: Logs, Metrics, and Traces
  • The Art of Documentation and Readme.md for Open Source Projects

…and a bunch more to come! Subscribe to Thrashing Code News, follow me on the twitters @Adron, and I’ll have more coverage coming soon!

Build a Kubernetes Cluster on Google Cloud Platform with Terraform

terraform

In this blog entry I'm going to detail the exact configuration and cover, in some additional detail, the collateral resources you can expect to find once the configuration is executed with Terraform. The repository for this write up, our_new_world, is available on Github.

First things first, locally you’ll want to have the respective CLI tools installed for Google Cloud Platform, Terraform, and Kubernetes.
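A quick way to sanity check that those tools are installed and authenticated is to ask each for its version and point gcloud at your project. This is just a sketch – the install steps themselves vary by OS, so check each tool's docs – and the project name here is the one used later in this post, so swap in your own.

# Verify the Google Cloud SDK, Terraform, and Kubernetes CLI tools are on the path.
gcloud version
terraform version
kubectl version --client

# Authenticate gcloud and set the default project used in this post.
gcloud auth login
gcloud config set project thrashingcorecode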

Now that all the prerequisites are covered, let’s dive into the specifics of setup.

gcp

If you take a look at the Google Cloud Platform Console, it's easy to get a before and after view of what is and will be built in the environment. Infrastructure for Kubernetes will be built out in the following areas, which I've taken a few screenshots of just to show what the empty console looks like. A before and after view helps to understand all the pieces being put into place.

The first view is of the Google Compute Engine page, where currently, on this account and in this organization, I have no instances running.

gcp-01

This view shows the container engines running. Basically this screen will show any Kubernetes clusters running; Google just opted for the super generic Google Container Engine as a title, with Kubernetes nowhere to be seen. Yet.

gcp-02

Here I have one ephemeral IP address, which honestly will disappear in a moment once I delete that forwarding rule.

gcp-03

These four firewall rules are the default. The account starts out with these, and there isn’t any specific reason to change them at this point. We’ll see a number of additional firewall settings in a moment.

gcp-04

Load balancers, again, currently empty but we’ll see resources here shortly.

gcp-05

Alright, that’s basically an audit of the screens where we’ll see the meat of resources built. It’s time to get the configurations built now.

Time to Terraform Our New World

Using Terraform to build a Kubernetes cluster is pretty minimalistic. First, as I always do, I add a few files the way I like to organize my Terraform configuration projects. These files include:

  • .gitignore – for the requisite things I won’t want to go into the repository.
  • connections.tf – for the connection to GCP.
  • kubernetes.tf – for the configuration defining the characteristics of the Kubernetes cluster I’m working toward getting built.
  • README.md – cuz docs first. No seriously, I don’t jest, write the damned docs!
  • terraform.tfvars – for assigning variables created in variables.tf.
  • variables.tf – for declaring and adding doc/descriptions for the variables I use.

In the .gitignore I add just a few items. Some are specific to my setup and IntelliJ. The contents of the file look like this. I've included comments in my .gitignore so that one can easily make sense of what I'm ignoring.

# A silly MacOS/OS-X hidden file that is the bane of all repos.
.DS_Store

# .idea is the user setting configuration directory for IntelliJ, or more generally Jetbrains IDE Products.
.idea
.terraform

The next file I write up is the connections.tf file.

provider "google" {
  credentials = "${file("../secrets/account.json")}"
  project     = "thrashingcorecode"
  region      = "us-west1"
}

The path ../secrets/account.json is where I place my account.json file with keys and such, to keep it out of the repository.
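If you don't already have a key file to drop in there, one way to generate one is with gcloud. This is a sketch with a placeholder service account name; grant the account whatever IAM roles your org requires before using it with Terraform.

# Create a service account and export a JSON key for Terraform to use.
gcloud iam service-accounts create terraform-k8s
gcloud iam service-accounts keys create ../secrets/account.json \
  --iam-account terraform-k8s@thrashingcorecode.iam.gserviceaccount.com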

The project in GCP is called thrashingcorecode, which whatever you’ve named yours you can always find right up toward the top of the GCP Console.

console-bar

Then the region is set to us-west1, whose data centers are located, most reasonably to my current geographic area, in The Dalles, Oregon. These data centers also tend to have a lot of the latest and greatest hardware, so they provide a little bit more oompf!

The next file I setup is the README.md, which you can just check out in the repository here.

Now I setup the variables.tf and the terraform.tfvars files. The variables.tf includes the following input and output variables declared.

// General Variables

variable "linux_admin_username" {
  type        = "string"
  description = "User name for authentication to the Kubernetes linux agent virtual machines in the cluster."
}

variable "linux_admin_password" {
  type ="string"
  description = "The password for the Linux admin account."
}

// GCP Variables
variable "gcp_cluster_count" {
  type = "string"
  description = "Count of cluster instances to start."
}

variable "cluster_name" {
  type = "string"
  description = "Cluster name for the GCP Cluster."
}

// GCP Outputs
output "gcp_cluster_endpoint" {
  value = "${google_container_cluster.gcp_kubernetes.endpoint}"
}

output "gcp_ssh_command" {
  value = "ssh ${var.linux_admin_username}@${google_container_cluster.gcp_kubernetes.endpoint}"
}

output "gcp_cluster_name" {
  value = "${google_container_cluster.gcp_kubernetes.name}"
}

In the terraform.tfvars file I have the following assigned. Obviously you wouldn't want to keep your production Linux username and password in this file, but for this example I've set them up here since the repository sample code can only be run against your own GCP org service. So remember, if you run this, you've got public facing default Linux account credentials exposed right here!

cluster_name = "ournewworld"
gcp_cluster_count = 1
linux_admin_username = "frankie"
linux_admin_password = "supersecretpassword"

Now for the meat of this effort: the kubernetes.tf file. The way I've set this file up is as shown.

resource "google_container_cluster" "gcp_kubernetes" {
  name               = "${var.cluster_name}"
  zone               = "us-west1-a"
  initial_node_count = "${var.gcp_cluster_count}"

  additional_zones = [
    "us-west1-b",
    "us-west1-c",
  ]

  master_auth {
    username = "${var.linux_admin_username}"
    password = "${var.linux_admin_password}"
  }

  node_config {
    oauth_scopes = [
      "https://www.googleapis.com/auth/compute",
      "https://www.googleapis.com/auth/devstorage.read_only",
      "https://www.googleapis.com/auth/logging.write",
      "https://www.googleapis.com/auth/monitoring",
    ]

    labels {
      this-is-for = "dev-cluster"
    }

    tags = ["dev", "work"]
  }
}

With all that set up I can now run the three commands to get everything built. The first command is terraform init. This is new with the latest releases of Terraform; it pulls down any of the respective providers that a Terraform execution will need. In this particular project it pulls down the GCP provider. This command only needs to be run before the first terraform plan or terraform apply, after you've deleted your .terraform directory, or after you've added configuration for something like Azure, Amazon Web Services, or Github that needs a new provider.

terraform-init

Now to ensure and determine what will be built, I’ll run terraform plan.

terraform-plan

Since everything looks good, it's time to execute with terraform apply. This will display output similar to the terraform plan command, but then actually create the resources, and you'll see the countdown begin as it waits for instances to start up and networking to be configured and routed.

terraform-apply
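For reference, the whole workflow from a fresh clone of the repository looks something like this – just a recap of the three commands covered above.

# Pull down the providers the configuration needs (the GCP provider in this case).
terraform init

# Review what will be created before touching anything.
terraform plan

# Build the cluster and related resources.
terraform apply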

While waiting for this to build you can also click back and forth and watch firewall rules, networking, external IP addresses, and instances start to appear in the Google Cloud Platform Console. When it completes, we can see the results, which I'll step through here with some added notes about what is or isn't happening, and then wrap up with a destruction of the Kubernetes cluster. Keep reading until the end, because there are some important caveats about things that might or might not be destroyed during clean up. It's important to have a plan to review the project after the cluster is destroyed, to make sure resources – and their respective costs – aren't still there.

Compute Engine View

In the console click on the compute engine option.

gcp-console-compute-engine

I’ll start with the Compute Engine view. I can see the individual virtual machine instances here and their respective zones.

gcp-console-01

Looking at the Terraform file configuration I can see that the initial zone to create the cluster in was used, which is us-west1-a inside the us-west1 region. The next two instances are in the respective additional_zones that I marked up in the Terraform configuration.

additional_zones = [
  "us-west1-b",
  "us-west1-c",
]

You could even add additional zones here too. During creation, Terraform will create an additional virtual machine instance to add to the Kubernetes cluster for each increment that initial_node_count is set to. Currently I set mine to a variable so I could set it, and other things, in my terraform.tfvars file. Right now I have it set to 1, so one virtual machine instance will be created in the initial zone and in each of the designated additional_zones.
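Just as a sketch – if you want to experiment with that count without editing terraform.tfvars, Terraform also accepts variable overrides on the command line, using the variable name from this post.

# Override the count on the command line instead of editing terraform.tfvars.
# With one initial zone plus two additional_zones, a count of 2 yields 6 instances total.
terraform apply -var="gcp_cluster_count=2"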

Beyond the VM instances view, click on Instance groups, Instance templates, and Disks to see more items set up for each of the instances in the respective deployed zones.

If I bump my virtual machine instance count up to 2, I get 6 virtual machine instances. I did this, and took a screenshot of those instances running. You can see that there are two instances in each zone now.

gcp-console-2instances-01

Instance groups

Note that an instance group is set up for each zone, so each group kind of organizes all the instances in that zone.

gcp-console-02

Instance Templates

Like the instance groups, there is one template per zone. Whether I set up 1 virtual machine instance or 10 in the zone, I'll have one template that describes the instances that are created.

gcp-console-03

gcp-console-04

To SSH into any of these virtual machine instances, the easiest way is to navigate into one of the views for the instances, such as under the VM instances section, and click on the SSH button for the instance.

gcp-console-05

Then a screen will pop up showing the session starting. This can sometimes take 10-20 seconds, so don't assume it's broken. Then a browser-based, standard looking SSH terminal will be running against the instance.

ssh-window-01

ssh-window-02

This comes in handy if any of the instances end up having issues down the line. Of all the providers, GCP has made connecting to instances extremely easy and quick with this and tools like gcloud.
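If you'd rather stay in the terminal, gcloud can open the same SSH session. A quick sketch – the instance name below is just an example placeholder, so use whatever names show up in your VM instances list.

# List the instances Terraform created, then SSH into one by name and zone.
gcloud compute instances list
gcloud compute ssh gke-ournewworld-example-node --zone us-west1-a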

Container Engine View

In this view we have cluster specific information to check out.

gcp-console-container-clusters

Once the cluster view comes up, there sits the single cluster that was built. If there were additional clusters, they'd display here just like instances or other resources on other screens. It's all pretty standard and well laid out in Google Cloud Platform fashion.

gcp-console-05

The first thing to note, in my not so humble opinion, is the Connect button. This, like on so many other areas of the console, has immediate, quick, easy ways to connect to the cluster.

gcp-console-06

Gaining access to the newly created cluster with the commands provided is quick. The little button in the top right hand corner copies the command to the copy-paste buffer. The two commands execute as shown.

gcloud container clusters get-credentials ournewworld --zone us-west1-a --project thrashingcorecode

and then

kubectl proxy

gcp-console-07

With the URI posted after execution of kubectl proxy I can check out the active dashboard rendered for the container cluster at 127.0.0.1:8001/ui.

IMPORTANT NOTE: If the kubectl client version isn't at an appropriate parity with the server version, it may not render this page correctly. To ensure the versions are at parity, run kubectl version to see what versions are in place. I recently went through troubleshooting this scenario, which rendered a blank page; after trial and error it came down to version differences between the server and client kubectl.
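Here's the sort of check I mean – just a sketch; the point is to compare the client and server versions and update whichever one is lagging.

# Compare the local client version against the cluster's server version.
kubectl version

# If kubectl was installed through the Google Cloud SDK, this updates installed SDK components.
gcloud components update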

Kubernetes_Dashboard

I’ll dive into more of the dashboard and related things in a later post. For now I’m going to keep moving forward and focus on the remaining resources built, in networking.

VPC Network

gcp-console-09

Once the networking view renders there are several key tabs on the left hand side; External IP addresses, Firewall rules, and Routes.

External IP Addresses

Setting and using external IP addresses allows for routing to the various Kubernetes nodes. Several ephemeral IP addresses are created and displayed in this section for the Kubernetes nodes. For more information check out the documentation on reserving a static external IP address and reserving an internal IP address.

gcp-console-11
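As a side note, if you want an address that survives instance churn, a static external IP can be reserved ahead of time – a minimal sketch with a made-up address name.

# Reserve a regional static external IP address, then list addresses to confirm.
gcloud compute addresses create our-new-world-ip --region us-west1
gcloud compute addresses list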

Firewall Rules

In this section there are several new rules added for the cluster. For more information specific to GCP firewall rules check out the documentation about firewall rules.

gcp-console-12

Routes

Routes are used to set up paths mapping an IP range to a destination. They tell a VPC network where to send packets destined for a particular IP address. For more information, check out the documentation on route details.

gcp-console-13

Each of these sections has new resources built and added, as shown above. More than a few convention-based assumptions are made by Terraform.
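And since I mentioned tearing things down earlier: when you're done poking around, clean up looks roughly like this. Just remember to double check the console afterward for anything – like reserved addresses or forwarding rules – that may linger and keep incurring costs.

# Tear down everything this configuration created.
terraform destroy

# Spot check for leftovers that might still incur costs.
gcloud compute instances list
gcloud compute forwarding-rules list
gcloud compute addresses list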

Next steps…

In my next post I'll dive into some things to set up once you've got your Kubernetes cluster: setting up users, getting a continuous integration and delivery build started, and more. I'll also be writing up similar entries for the AWS and Azure cloud providers. If you'd like to see a Kubernetes setup and tour with Terraform beyond the big three, let me know and I'll add that to the queue. Once we get past that, there are a number of additional Kubernetes, containers, and dev specific posts coming up. Stay tuned, subscribe to the blog feed or follow @ThrashingCode for new blog posts.

Resources: