
26 September, 2020

Back to learning about Docker

A few months ago, I started learning about Docker. I got derailed with work projects and personal projects. Now that I have a bit of time, I want to get back to learning about Docker. 
I came across the book "Learn Docker in a Month of Lunches" by Elton Stoneman. Elton is posting a YouTube video each workday.

Day 1 - Understanding Docker and running Hello, World

Docker Commands covered in "Learn Docker in a Month of Lunches: Day 1"
  • docker version
  • docker-compose version
  • docker container run {name of image}
  • docker container run --interactive --tty {name of image}
  • docker container ls
  • docker container top {container id}
  • docker container logs {container id}
  • docker container inspect {container id}
  • docker container ls --all
  • docker container run --detach --publish {local port}:{container port} {name of image}
  • docker container stats {container id}
  • docker container rm --force $(docker container ls --all --quiet)

Notes from Day 1

  • The Docker Engine is the thing that runs the containers.
  • A container wraps the application and defines the resources that the container can access.
  • Containers are like virtual machines in concept. One major difference is that a container relies on the host machine's resources.
  • Containers stop automatically when the code in the container stops.
  • The Docker CLI can be configured to run against a local Docker server or against a remote Docker server.
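
Putting the Day 1 commands together, here is a minimal sketch of the workflow, using the public nginx image as a stand-in (the book uses its own sample images):

# run a web server detached, publishing host port 8080 to container port 80
docker container run --detach --publish 8080:80 nginx
# list the running containers and note the container id
docker container ls
# look at the container's output and live resource usage
docker container logs {container id}
docker container stats {container id}
# clean up every container, running or not
docker container rm --force $(docker container ls --all --quiet)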

Day 2 - Building your own Docker images

Docker Commands covered in "Learn Docker in a Month of Lunches: 02"
  • docker image pull {name of image}
  • docker container run -d --name {name I want the container called} {name of image}
  • docker container logs {name of container}
  • docker container rm {name of container}
  • docker container run --env {key=value pair} {name of image}
  • docker image build --tag {name} .
  • docker image ls
  • docker image history {image name}
  • docker system df

Notes from Day 2

  • A Dockerfile contains the steps needed to build a Docker image. It is kind of like installation instructions or a runbook.
  • Docker commands follow a pattern
    • docker {command} {subcommand} - with Docker 1.13 the CLI was restructured
  • We talk about an image as a singular thing, but an image is built up of many layers or parts. An image's parts can be shared among other images.
  • Dockerfile
    • Dockerfile commands are capitalized (e.g. FROM, ENV, WORKDIR)
    • The FROM line is required and must be the first line
    • WORKDIR creates a working directory
    • CMD is the startup command for a container
  • The size listed from the command 'docker image ls' is the logical size.
  • A good practice is to structure the Dockerfile in such a way that the cache is used. This helps reduce the amount of time it takes to rebuild an image. 
    • The build process creates a hash for each step in the Dockerfile. This hash is used to help determine if that step needs to be rebuilt. Once the build encounters a step that needs to be rebuilt then all steps from that point to the last step will be rebuilt.
    • Put the commands that are least likely to change first
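
As an illustration of that ordering, here is a sketch of a cache-friendly Dockerfile for a hypothetical Node.js app (the file name server.js is just a placeholder):

# base image: changes rarely, so it comes first
FROM node:14-alpine
WORKDIR /app
# the dependency list changes occasionally; copy it and install before the source
COPY package.json .
RUN npm install
# application source changes most often, so it goes last
COPY . .
CMD ["node", "server.js"]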

Day 3 - Packaging apps from source code into Docker images


Docker commands covered in "Learn Docker in a Month of Lunches: 03"
  • docker image build -t { name of image } .
  • docker container run { name of image }
  • docker network create { name of network }
  • docker container run --name { name of container } -d -p { host port }:{ container port } --network { name of network } { name of image }
  • docker image ls -f reference={first image} -f reference={second image}
  • docker system prune

Notes from Day 3

  • You can run commands inside a Dockerfile
    • examples would be expanding zip files, running Windows installers, or running a git command
  • A Dockerfile can contain multiple stages
    • the final stage should contain only what is needed to run the application
    • the image size can change while the image is building
  • With Docker, it is possible to build an image from source code without having an SDK installed
  • Dockerfile
    • EXPOSE - port the container should listen on
  • Multistage Dockerfiles are important and useful
    • Helps standardize your project across various OS
    • Helps standardize processes across projects. Any project using a Dockerfile basically needs two commands to get the project running
      • docker image build
      • docker container run
    • The use of layers within an image reduces the amount of time spent in rebuilding images
      • The first time an image is built all the layers need to be pulled, processed, and cached. After the first build, the cached layers can be used.
        • The order of the commands in the Dockerfile is important
      • The use of layers and cache comes at the expense of hard drive space
        • Docker doesn't auto clean the cache
    • Separates the build & package tools from what is needed for deployment
  • Within a multistage Dockerfile, the steps in each stage should be organized so that steps that change rarely, or whose results are unlikely to change, come first.
    • You can only optimize so far. If one step relies on the results of a previous step, then the steps must appear in that order.
      • An example would be a RUN command that depends on files copied in an earlier step.
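
To make the multistage idea concrete, here is a minimal sketch for a hypothetical Go app; only the final stage ends up in the shipped image:

# build stage: has the SDK and build tools
FROM golang:1.15 AS builder
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /out/app .

# final stage: only the compiled binary ships
FROM alpine:3.12
COPY --from=builder /out/app /app
EXPOSE 8080
ENTRYPOINT ["/app"]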



13 September, 2020

Updating the version of Docker on my primary machine

I noticed that the version of Docker that I am running is a little behind the current stable version. I am thinking it is time to update.

After reading the 'Install Docker Engine on Ubuntu' page, I needed to make a few changes to my system.

My first step was to remove the source 'https://download.docker.com/linux/ubuntu disco stable' from my APT sources. The steps were simple to follow.

Ran the following commands

  • sudo apt-get remove docker docker-engine docker.io containerd runc
    • This made sure that any older versions were removed from my computer.
  • sudo apt-get update
  • sudo apt-get install \
        apt-transport-https \
        ca-certificates \
        curl \
        gnupg-agent \
        software-properties-common
    • This allows apt to use a repository over HTTPS
  • curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
    • Added Docker's official GPG key
  • sudo apt-key fingerprint 0EBFCD88
    • Verified that I have the key with the correct fingerprint
  • sudo add-apt-repository \
       "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
       $(lsb_release -cs) \
       stable"
    • Set up the stable repository
  • sudo apt-get update
  • sudo apt-get install docker-ce docker-ce-cli containerd.io
    • installed the latest version of Docker Engine and containerd
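
A quick sanity check along these lines confirms the new version is installed and working:

# print the client/server version to confirm the upgrade
docker --version
# run the smoke-test image end to end
sudo docker run hello-world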

16 February, 2020

Rebuilding my Docker lab setup

Background

Over the last few weeks, I have rebuilt my Docker lab and Home Assistant setup a few times. The heart of the setup is a Raspberry Pi 4. Finally, all the hardware I ordered has arrived, and I have a bit of time to rebuild my setup. Hopefully, I won't need to rebuild it for a while.

Plan

This time I want to set up a Raspberry Pi 4 Model B (2GB) with Raspbian Buster, then configure the Pi to use a 1 TB SSD. Since the Raspberry Pi 4 doesn't currently support booting from an SSD, I will use the workaround from James Chambers. Once the Raspberry Pi is set up and a basic configuration is done, I will install IOTstack from Graham Garner. This will help me set up Docker, Hass.io, Portainer, Mosquitto, and Tasmoadmin.

What I am hoping to get out of this is a stable platform for learning and experimenting with Docker, home automation, the Internet of Things, and electronics.

Notes

Setting up the Raspberry Pi

  • Used balenaEtcher to flash Raspbian Buster with desktop to an 8 GB SD card
  • Used balenaEtcher to flash Raspbian Buster with desktop to a 1 TB SSD
  • Plugged in a monitor, keyboard, mouse, network cable and applied power
    • I waited for about 5 minutes for the system to run through its first-time power-up.
  • created an empty file named ssh and placed it in the boot folder for both SD and SSD
  • created the file wpa_supplicant.conf and placed it in the boot folder for both SD and SSD
    • Set basic config values for keyboard
    • changed the pi default password
    • skipped the update
  • sudo fdisk /dev/sda
    • set the SSD PARTUUID to d34db33f
  • sudo blkid
  • made a backup of /boot/cmdline.txt
  • edited /boot/cmdline.txt to use the PARTUUID that was set earlier
  • sudo reboot
  • verified the SSD is being used
    • findmnt -n -o SOURCE /
  • updated /etc/fstab 
    • changed the PARTUUID to d34db33f
  • sudo reboot
  • resizing filesystem
    • delete the partition /dev/sda2
    • create a new partition with a first sector at 532480
    • sudo resize2fs /dev/sda2
      • This takes a while
    • df -h 
      • verify /dev/root shows a size value close to the size of the drive
  • sudo reboot
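
For reference, the relevant edits look something like the sketch below, assuming the usual Raspbian layout where boot is partition 1 and root is partition 2:

# /boot/cmdline.txt - point the kernel at the SSD's root partition
... root=PARTUUID=d34db33f-02 ...

# /etc/fstab - mount the SSD partitions by the new PARTUUID
PARTUUID=d34db33f-01  /boot  vfat  defaults          0  2
PARTUUID=d34db33f-02  /      ext4  defaults,noatime  0  1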

Configuring the Raspberry Pi

  • set Hostname
  • boot to CLI
  • auto login: unchecked
  • sudo reboot
  • verified Raspberry Pi configuration
  • verified that I can SSH into the Raspberry Pi
  • sudo apt update
  • sudo apt upgrade

Install IOTstack

  • ensure git is installed
    • sudo apt install git
  • clone IOTstack
    • git clone https://github.com/gcgarner/IOTstack.git ~/IOTstack
  • run menu
    • cd IOTstack
    • ./menu.sh
      • installed docker
      • installed Hass.io
      • installed Portainer
      • installed Mosquitto
      • installed tasmoadmin
  • docker-compose up -d
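
Once the stack is up, a quick check that everything is running (Portainer's web UI defaults to port 9000):

# list the containers with their status and ports
docker ps --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"
# then browse to http://{pi ip address}:9000 for Portainer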




References

GitHub: gcgarner/IOTstack (https://github.com/gcgarner/IOTstack)







26 January, 2020

Setting up Home Assistant on Raspberry Pi using generic Linux host method



Recently my installation of Home Assistant died. It appears that I burned up the SD card; I can no longer read it. When I run gparted against the SD card, it mounts in read-only mode. After a bit of research, I came up with a plan.

Rough Plan


  • Install the latest Raspbian Buster image
  • Reconfigure the Pi to run from a hard drive
  • Install Home Assistant using the generic Linux host method

Steps

  • Downloaded Raspbian Buster with desktop from the Raspberry Pi website
  • Used balenaEtcher to flash Buster to an SD card (8GB)
  • Booted the Pi for the first time
  • Set up the language, keyboard, and timezone.
    • Changed the pi user's password
    • Selected the WiFi network
    • Updated the software
  • Updated the packages
    • sudo apt update
    • sudo apt upgrade
    • sudo reboot
  • Followed the directions from the blog post 'Quick SD to SSD on the pi 4'
    • use the utility 'SD Card Copier' to copy the SD card to the hard drive
    • run 'sudo blkid' to get the UUID of the SD card and the hard drive
    • backup the file /boot/cmdline.txt
    • edit the file /boot/cmdline.txt, changing the PARTUUID from the SD card's to the hard drive's
    • reboot
    • Note: must leave the SD card in the Raspberry Pi
  • Changed the hostname of the Pi
  • Enabled SSH on the Pi
  • Set up an SSH key on my main PC
    • ssh-keygen
    • ssh-copy-id -i ~/.ssh/key-name pi@ip.address
    • ssh-add
  • Disabled Password Authentication on the Pi
    • sudo nano /etc/ssh/sshd_config
      • Uncommented the line 'PasswordAuthentication yes'
      • changed yes to no
  • Set a static IP for the Pi
    • edit /etc/dhcpcd.conf
      • added
        • interface wlan0
        • static ip_address=XXX.XXX.XXX.XXX
        • static routers=XXX.XXX.XXX.XXX
        • static domain_name_servers=XXX.XXX.XXX.XXX
    • reboot
  • Changed Pi Configuration to boot to cli
  • Followed the directions from the blog post 'Install Docker and Docker Compose on Raspberry pi 4(Raspbian Buster)'
    • curl -sSL get.docker.com | sh
    • sudo usermod -aG docker pi
    • sudo apt-get install libffi-dev libssl-dev
    • sudo apt install python3-dev
    • sudo apt-get install -y python3 python3-pip
    • sudo pip3 install docker-compose
  • Followed the installer instructions for a generic Linux system
    • sudo apt install bash
    • sudo apt install jq
    • sudo apt install curl
    • sudo apt install avahi-daemon
    • sudo apt install dbus
    • sudo apt install apparmor-utils
  • curl -sL https://raw.githubusercontent.com/home-assistant/hassio-installer/master/hassio_install.sh | bash -s -- -m raspberrypi4-64
  • Setup Home Assistant
There was an issue with Home Assistant connecting to the internet. I removed the changes to /etc/dhcpcd.conf. This seems to have helped.

23 December, 2019

Third stop on the Docker learning train - Quickstart: Compose and ASP.NET Core with SQL Server

The third stop on my Docker learning train is a recommendation from the Docker Samples. The tutorial is titled 'Quickstart: Compose and ASP.NET Core with SQL Server'. This is going to be a whistle-stop.

This tutorial shows how to use the Docker Engine running on Linux to set up and run an ASP.NET Core application using a .NET Core SDK image and a SQL Server on Linux image. The tutorial makes use of Docker Compose for defining and running multi-container applications.

The tutorial references an older version (2.1) of the .NET Core libraries. My computer is running a newer (3.1) version of the .NET Core libraries. I wanted to see what error messages and problems would come from running this setup.

The first change I made was to the docker-compose.yml file. I changed the port from 8000 to 9090 because I already had other containers using ports 8000 and 9000. The second change was in the "startup.cs" file: I commented out the setting of the connection variable and moved the value to the "DefaultConnection" entry in the "appsettings.json" file. I needed to make a few more changes to the rest of the "ConfigureServices" method.


public void ConfigureServices (IServiceCollection services) {
    // Database connection string.
    // Make sure to update the Password value below from "Your_password123" to your actual password.
    // var connection = @"Server=db;Database=master;User=sa;Password=Your_password123;";

    services.Configure<CookiePolicyOptions> (options => {
    // This lambda determines whether user consent for non-essential cookies is needed for a given request.
        options.CheckConsentNeeded = context => true;
        options.MinimumSameSitePolicy = SameSiteMode.None;
        });

    services.AddDbContext<ApplicationDbContext> (options =>
        options.UseSqlServer (
            Configuration.GetConnectionString ("DefaultConnection")));
    services.AddDefaultIdentity<IdentityUser> ()
        .AddEntityFrameworkStores<ApplicationDbContext> ();

    services.AddMvc ().SetCompatibilityVersion (CompatibilityVersion.Version_2_1);
}
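
For reference, the "DefaultConnection" entry I moved into "appsettings.json" looks something like this sketch ('db' is the database service name from the tutorial's docker-compose.yml, and the password is the tutorial's placeholder):

{
  "ConnectionStrings": {
    "DefaultConnection": "Server=db;Database=master;User=sa;Password=Your_password123;"
  }
}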

On my computer, I encountered an issue with running 'docker-compose build'. The error message was
"ERROR: Couldn't connect to Docker daemon at http+docker://localhost - is it running?
If it's at a non-standard location, specify the URL with the DOCKER_HOST environment variable."
After a bit of research, I came to find out that this is a fairly generic and common message that could have several possible causes. For me, it turned out that I needed to change the permissions on the "docker.sock" file. I fixed the issue by running
sudo chmod 666 /var/run/docker.sock
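
As a side note, chmod 666 opens the Docker socket to every local user. The more conventional fix, I believe, is to add your user to the docker group and start a new session:

sudo usermod -aG docker $USER
newgrp docker    # or log out and back in for the group change to take effect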

Running the command 'docker-compose up' wrote many messages to the console that give the appearance that the application and database are running. I can also see a couple of active containers. When I navigate to localhost:9090, the browser displays a "This site can't be reached" message. I am not going to take the time to figure out the problem at this time.



References 

Docker Compose - "Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration."

The article on Medium "If you faced an issue like 'Couldn't connect to Docker daemon at http+docker://localunixsocket -- is it running?'..." was helpful in dealing with an error from running 'docker-compose build'





19 December, 2019

Second stop on the Docker learning train - Dockerize an ASP.NET Core application

The second stop on my Docker learning train is a recommendation from the Docker Samples. The tutorial is titled 'Dockerize an ASP.NET Core application'.

When I first read through the tutorial, I figured this was going to be quick and easy. Because I made a few mistakes in creating the Dockerfile, I ended up with a few lessons learned.

The first lesson I learned was to read the output from running the docker build command carefully. It was interesting to see that the Docker daemon creates and removes intermediate containers.

The next lesson was that when a container reports error code 145, carefully double-check the Dockerfile. Pay close attention to which .NET Core image is being used: the 'sdk' image should be used for the restore and publish commands, while the 'aspnet' image should be used for the runtime.

Another lesson learned is to use a '.dockerignore' file to keep unnecessary files from being included in the image.

The last lesson was to use interactive mode. The tutorial did not go through the steps, so I needed to do a bit of research to figure it out. The Alpine image does not include bash. I modified the Dockerfile to pull in bash and a few other tools, rebuilt the image, and then used the interactive mode switches to get a command prompt.

Dockerfile:
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-alpine AS build
WORKDIR /app/aspnetapp

# copy csproj and restore as distinct layers
COPY myWebApp/*.csproj ./
RUN dotnet restore

# copy everything else and build app
COPY myWebApp/. ./
RUN dotnet publish -c Release -o out

# Build runtime image
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-alpine AS runtime
WORKDIR /app
COPY --from=build /app/aspnetapp/out .

# Install bash and other tools useful for debugging
RUN apk add --no-cache bash coreutils grep sed

ENTRYPOINT ["dotnet", "myWebApp.dll"]

Command to build the image.

docker build -t {image name} .

Command to start the container in interactive mode and to override the default entry point

docker run -it --entrypoint /bin/bash {image name}

Reference

.dockerignore file
"Before the docker CLI sends the context to the docker daemon, it looks for a file named .dockerignore in the root directory of the context. If this file exists, the CLI modifies the context to exclude files and directories that match patterns in it. This helps to avoid unnecessarily sending large or sensitive files and directories to the daemon and potentially adding them to images using ADD or COPY." - https://docs.docker.com/engine/reference/#dockerignore-file

14 December, 2019

First stop on the Docker learning train - Docker for beginners

The first stop on my Docker learning train is from the Docker Samples with the tutorial Docker for beginners.

The tutorial has four chapters:
  • Setup
  • 1.0 Running your first container
  • 2.0 Webapps with Docker
  • 3.0 Deploying an app to a Swarm

Setup

The Setup chapter makes sure that Docker is installed and running on my machine. The tutorial points back to the Docker documentation for the installation steps. To verify that Docker is working, run the following:
docker run hello-world

1.0 Running your first container

The key topics for this chapter include:
  • How to run a docker container.
  • How to list containers.
  • How to inspect a container.

Docker CLI commands

  • docker pull - pulls an image or repository from a registry
  • docker run - runs a container
    • This command can also pull the image if it is not present locally
    • By default, the interactive shell will exit after running a script. Use the '-it' switch to keep the shell open.
      • Example: 
        docker run -it alpine /bin/sh
        
  • docker ps - list containers
    • By default, the command lists the running containers. 
    • To list all the containers use the switch '-a'
      • Example:
        docker ps -a
  • docker inspect - returns low-level information about a docker container

Terms

image - Docker images are the basis of containers. An Image is an ordered collection of root filesystem changes and the corresponding execution parameters for use within a container runtime. An image typically contains a union of layered filesystems stacked on top of each other. An image does not have state and it never changes. - Docker Glossary
container - A container is a runtime instance of a docker image.
A Docker container consists of
  • A Docker image
  • An execution environment
  • A standard set of instructions
The concept is borrowed from Shipping Containers, which define a standard to ship goods globally. Docker defines a standard to ship software. - Docker Glossary

2.0 Webapps with Docker

This chapter is broken into a few sections.

2.1 Run a static website in a container

Key topics for the section include:
  • How to run a container in detached mode.
  • How to publish the ports the container exposes.
  • How to name a container.
  • How to pass environment variables to the container.
  • How to stop a container.
  • How to remove a container.
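
A sketch tying these together, using the tutorial's static-site image (the flags are explained in the CLI notes below):

# detached, random published ports, an environment variable, and a name
docker run --name static-site -e AUTHOR="Your Name" -d -P dockersamples/static-site
# find which host ports were assigned
docker port static-site
# stop and remove the container
docker stop static-site
docker rm static-site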

2.2 Docker Images

Key topics for this section include:
  • What is a Docker image?
  • How to use a 'tag' to pull a specific version of an image.

2.3 Create your first image

Key topics for this section include:
  • How to write a Dockerfile.
  • How to build the image.
  • How to run the image.
Notes about Dockerfiles

  • By convention, an instruction is written in UPPERCASE. This makes it easier to distinguish an instruction from an argument. 
  • Docker runs instructions in order.
  • A Dockerfile must begin with a 'FROM' instruction. 
  • Lines that begin with '#' are treated as comments.
  • Guide: Best practices for writing Dockerfiles
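
A minimal sketch that follows those conventions, for a hypothetical Python web app along the lines of the tutorial's Flask example:

# a Dockerfile must begin with a FROM instruction
FROM python:3-alpine
WORKDIR /usr/src/app
# install dependencies, then copy the application code
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 5000
CMD ["python", "app.py"]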

Terms

base image - A base image has no parent image specified in its Dockerfile. It is created using a Dockerfile with the FROM scratch directive.
Child images are images that build on base images and add additional functionality. 
Official images are Docker sanctioned images. 
User images are images created and shared by users like you.
Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image.

Docker CLI commands

  • docker run
    • -d - detached mode
    • -P - publish exposed container ports to random ports on the host
    • -e - pass environment variables to the container.
    • --name - allows a name to be specified for the container
  • docker stop - stops one or more containers
  • docker rm - removes one or more containers
    • -f - Force removal of a running container
  • docker port - List port mapping or a specific mapping for a container
  • docker images - list images
  • docker pull 
    • by default, pull gets the latest version
    • to get a specific version include the tag 
      • Example: 
        docker pull ubuntu:12.04
        
  • docker search - search the Docker Hub for images
  • docker build - build an image from a Dockerfile
    • -t - Name and optionally a tag in the 'name:tag' format
  • docker login - Log into a Docker registry
  • docker push - Push an image or a repository to a registry

3.0 Deploy an app to a Swarm

Key topics for this chapter include:

  • How to create a swarm.
  • How to deploy. 

I was unable to make the example work properly on my machine. The docker swarm and docker stack commands executed without reporting any errors. The command 'docker ps' reports multiple active containers. I will need to come back to this tutorial at a later date to investigate further.

Terms

swarm - A swarm is a cluster of one or more Docker Engines running in swarm mode
Docker Swarm - Do not confuse Docker Swarm with the swarm mode features in Docker Engine.
Docker Swarm is the name of a standalone native clustering tool for Docker. Docker Swarm pools together several Docker hosts and exposes them as a single virtual Docker host. It serves the standard Docker API, so any tool that already works with Docker can now transparently scale up to multiple hosts.
Also known as : docker-swarm
swarm mode - Swarm mode refers to cluster management and orchestration features embedded in Docker Engine. When you initialize a new swarm (cluster) or join nodes to a swarm, the Docker Engine runs in swarm mode.

Docker CLI Commands
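
A minimal sketch of the swarm workflow, assuming a single-node swarm ({stack name} is a placeholder):

# turn this engine into a one-node swarm
docker swarm init
# deploy the app described by the compose file as a stack
docker stack deploy -c docker-compose.yml {stack name}
# check on the stack's services
docker stack services {stack name}
# tear everything down
docker stack rm {stack name}
docker swarm leave --force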







07 December, 2019

Learning a bit about Docker


What is Docker?

Docker is a tool to develop, ship and run applications. The heart of Docker is the container. A Docker container contains all the parts needed to run an application.

Install

I am currently running Pop!_OS 19.10, which made installing Docker a bit more interesting. Pop!_OS 19.10 is based on Ubuntu 19.10, but the official Docker documentation shows support for Ubuntu only up through 19.04. For the most part, I followed the official documentation to install. When it came time to add the source, I used the following.

sudo add-apt-repository "deb https://download.docker.com/linux/ubuntu disco stable"

The rest of the instructions were easy to follow and didn't require any modifications.

I also ran the 'Optional Linux post-installation steps'. These steps should allow me to run docker commands from a terminal without prefacing each command with the 'sudo' keyword. For some reason, this is not working for me. I have verified that the group exists and my user account is part of the group. Additional research is needed.
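
If I had to guess, my session simply predates the group change, since group membership is only picked up at login. A quick check along these lines should confirm it:

# confirm 'docker' is listed among my groups
groups $USER
# start a shell with the new group active, then test
newgrp docker
docker run hello-world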

Reading

I spent a bit of time reading through the official Docker documentation. There is a lot of information to absorb, far too much to summarize in a single blog post. I am going to need to break the information down into smaller chunks.

Plan

From the Docker Samples, work the following tutorials:

From the Play with Docker Classroom, work the following tutorials:
From Docker Curriculum, work the following tutorials:
