The gist from the video:
Ways To Cleanup Docker Containers & Images
There’s likely a million introductions to Minikube, but I wanted one of my own. Thus, here you go! Minikube is basically a lightweight Kubernetes that runs on your own machine. It does this similarly to how Docker used to, via a virtual machine. You can do quite a bit with it, but if you want to get serious you’ll still need to spool up a proper cluster somewhere, as heavy workloads will start to bog down your machine.
Linux Direct:
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 \
&& sudo install minikube-linux-amd64 /usr/local/bin/minikube
Linux Debian:
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube_1.4.0.deb \
&& sudo dpkg -i minikube_1.4.0.deb
Linux Red Hat:
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-1.4.0.rpm \
&& sudo rpm -ivh minikube-1.4.0.rpm
minikube start
will start a minikube instance, pulling the images and components it needs: the kubelet, kubeadm, the dashboard, and related resources.
minikube stop
brings the minikube service to a stop, allowing for restart later.
minikube delete
will delete the minikube instance, along with any of the content or related collateral that was running in it.
minikube start
this is the way to restart a minikube instance after you’ve stopped the instance. It’s also the way to start a minikube in the first place, as shown above.
If you want a named minikube instance, use the -p switch, with a command like minikube start -p adrons-minikube.
To check out the dashboard, that pretty Google dashboard for Kubernetes, run minikube dashboard to bring it up.
To get a quick update on the current state of the minikube instance, just run minikube status.
This is, albeit I may be mistaken, a Linux-only feature. Run minikube start --vm-driver=none and it’ll kick off a minikube right there on your local machine, with no virtual machine in between.
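Pulling those commands together, here’s a sketch of a full session using the adrons-minikube profile name from above (the name is just an example), guarded so it degrades gracefully on machines without minikube installed:

```bash
# Sketch: full lifecycle of a named minikube instance.
if command -v minikube >/dev/null 2>&1; then
  minikube start -p adrons-minikube    # create or restart the instance
  minikube status -p adrons-minikube   # quick check of the current state
  minikube stop -p adrons-minikube     # stop it, keeping state for later
  minikube delete -p adrons-minikube   # wipe the instance and its collateral
  lifecycle="ran"
else
  lifecycle="skipped"                  # minikube isn't installed here
fi
```

The -p/--profile switch works on all of these subcommands, so several named instances can coexist side by side.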
May the database deluge begin, it’s time for “Bunches of Databases in Bunches of Weeks”. We’ll get into looking at databases similar to how they’re approached in “7 Databases in 7 Weeks“. In this session I take a hard look at PostgreSQL, or as some refer to it, just Postgres. This is the first of a few sessions on PostgreSQL, in which I get the database installed locally on Ubuntu. That process is transferable to just about any other operating system; PostgreSQL is awesome like that. Then, after installing and getting pgAdmin 4, the user interface for PostgreSQL, working against that, I go the Docker route, again pointing pgAdmin 4 at the container and creating a database and an initial table.
Below the video here I’ve added the timeline and other details, links, and other pertinent information about this series.
0:00 – The intro image splice and metal intro with tunes.
3:34 – Start of the video database content.
4:34 – Beginning the local installation of Postgres/PostgreSQL on the local machine.
20:30 – Getting pgAdmin 4 installed on local machine.
24:20 – Taking a look at pgAdmin 4, a stroll through setting up a table, getting some basic SQL from and executing with pgAdmin 4.
1:00:05 – Installing Docker and getting PostgreSQL setup as a container!
1:00:36 – Added the link to the stellar post at Digital Ocean’s Blog.
1:00:55 – My declaration that if Digital Ocean just provided documentation I’d happily pay for it, their blog entries, tutorials, and docs are hands down some of the best on the web!
1:01:10 – Installing PostgreSQL on Ubuntu 18.04.
1:06:44 – Signing in to Docker Hub and finding the official PostgreSQL Docker image.
1:09:28 – Starting the container with Docker.
1:10:24 – Connecting to the Docker PostgreSQL container with pgAdmin 4.
1:13:00 – Creating a database and working with SQL, tables, and other resources with pgAdmin 4 against the Docker container.
1:16:03 – The hacker escape outtro. Happy thrashing code!
For each of these sessions of the “Bunches of Databases in Bunches of Weeks” series I’ll follow the sequence below. I’ll go through each database in my list of top 7 databases for day 1 (see below), then go through each database again for day 2, and so on, accumulating additional days similarly to “7 Databases in 7 Weeks”.
“Day 1” of the Database, I’ll work toward building a development installation of the particular database. For example, in this session I setup PostgreSQL by installing it to the local machine and also pulled a Docker image to run PostgreSQL.
“Day 2” of the respective database, I’ll get into working against the database with CQL, SQL, or whatever that one would use to work specifically with the database directly. At this point I’ll also get more deeply into the types, inserting, and storing data in the respective database.
“Day 3” of the respective database, I’ll get into connecting an application with C#, Node.js, and Go. Implementing a simple connection, prospectively a test of the connection, and do a simple insert, update, and delete of some sort against the respective database built on the previous day 2 of the same database.
“Day 4” and onward I’ll determine the path and layout of the topic later, so subscribe on YouTube and Twitch, and tune in. The events are scheduled, with the option to be notified when a particular episode is coming on that you’d like to watch here on Twitch.
When Linux starts up (or most Unix variants, or OS X for that matter, which is after all a kind of Unix variant) there are particular scripts that execute. The key two are ~/.bash_profile and ~/.bashrc. When you log in, ~/.bash_profile executes, and when you start up a shell, ~/.bashrc executes.
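A common pattern, not part of my setup above but worth knowing, is to have ~/.bash_profile simply source ~/.bashrc, so login shells and regular interactive shells end up with the same configuration:

```bash
# In ~/.bash_profile: pull in ~/.bashrc if it exists, so login shells
# pick up the same setup as interactive non-login shells.
if [ -f "$HOME/.bashrc" ]; then
  . "$HOME/.bashrc"
fi
```

With that in place, everything else can live in ~/.bashrc and only needs to be maintained once.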
These two files are standard executable script files, so any bash will do. For instance, some of the bash script that ends up in my ~/.bash_profile includes a git prompt, as shown below.
if [ -f "$(brew --prefix)/opt/bash-git-prompt/share/gitprompt.sh" ]; then
source "$(brew --prefix)/opt/bash-git-prompt/share/gitprompt.sh"
fi
Another few lines of code load nvm, the Node.js version manager.
export NVM_DIR="/Users/axh6454/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh" # This loads nvm
I also have a few functions I’ve created, that load and are ready for my use at any location I open the terminal at.
gimmedocker() { if [ "$1" ]; then
docker-machine start "$1"
docker-machine env "$1"
eval $(docker-machine env "$1")
docker ps -a
fi };
cleandocker() {
# Wipe out the images and containers.
docker rm $(docker ps -a -q)
docker rmi $(docker images -q)
};
The first function executes simply by entering gimmedocker nameOfDockerVirtualMachineImage. It then checks that the virtual machine image parameter (the $1) was passed, and executes various docker-machine commands against that image. It ends with the evaluation and execution of the docker-machine environment setup for the terminal connection.
The second function deletes my Docker containers and then deletes my images. This way I can start fresh without deleting an entire Docker virtual machine (sometimes the latter may actually be easier). It’s a quick way to start fresh with Docker images and containers when working through a lot of minor changes.
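One wrinkle with cleandocker as written: docker rm and docker rmi will complain when there are no containers or images to remove. Here’s a slightly more defensive variant of my own, same behavior otherwise, that skips each step when the list comes back empty:

```bash
cleandocker() {
  # Wipe out the containers and images, skipping each step when the
  # list is empty so docker rm/rmi aren't called with no arguments.
  local containers images
  containers=$(docker ps -a -q)
  [ -n "$containers" ] && docker rm $containers
  images=$(docker images -q)
  [ -n "$images" ] && docker rmi $images
  return 0
}
```

Modern Docker also ships docker system prune, which covers much of the same ground in a single command.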
The last thing I’ll cover real quick that is commonly located in these startup scripts is some environment variables being set. For instance, I use Terraform to build out infrastructure. For that, I sometimes set up some Terraform variables, which Terraform picks up automatically when they’re prefixed with TF_VAR_. So my variables look something like this when set in script.
export TF_VAR_username="root"
export TF_VAR_password="someSecretPassword"
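For those exports to actually land, matching variable declarations have to exist in the Terraform configuration; Terraform maps an environment variable named TF_VAR_username onto var.username. A minimal sketch (the file name and variable names here are hypothetical):

```hcl
# variables.tf — Terraform fills these from TF_VAR_username / TF_VAR_password
variable "username" {}
variable "password" {}
```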
So that’s some examples and the basic gist of things you might see, and what you might want to run with your ~/.bash_profile or ~/.bashrc files. Happy bash hacking!
The easiest implementation of a Docker container with PostgreSQL that I’ve found recently allows the following commands to pull and run a PostgreSQL server for you.
[sourcecode language="bash"]
docker pull postgres:latest
docker run -p 5432:5432 postgres
[/sourcecode]
Then you can just connect to the PostgreSQL server by opening up pgAdmin 4 with the following connection information (the image’s defaults, if memory serves):
Host: localhost
Port: 5432
Username: postgres
Password: (blank by default)
With that information you’ll be able to connect and use this as a development database that only takes about 3 seconds to launch whenever you need it.
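One caveat: newer versions of the official postgres image refuse to start unless a superuser password is supplied (or explicitly opted out of), so a run command along these lines is more durable. The container name and password here are just placeholders, and the whole thing is guarded so it’s a no-op where Docker isn’t installed:

```bash
# Run PostgreSQL with an explicit superuser password, as newer official
# images require POSTGRES_PASSWORD to be set.
if command -v docker >/dev/null 2>&1; then
  docker pull postgres:latest
  docker run --name dev-postgres \
    -e POSTGRES_PASSWORD=someSecretPassword \
    -p 5432:5432 -d postgres
  started="attempted"
else
  started="skipped"   # Docker isn't installed here
fi
```

With a password set, use it in the pgAdmin 4 connection dialog instead of leaving the password blank.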