21 June, 2020

Restarting my 3D Printing hobby

Recently, I decided that I missed playing with, working with, and modifying my 3D printer. After a bit of research, I settled on purchasing the Ender 5 Plus.

While I was waiting for the printer to arrive, I spent time watching YouTube videos on how to assemble the printer and picked up several helpful tips and tricks. Assembly of the printer was quick and easy. 

My first challenge came in leveling the bed, and the blame is all mine: I didn't read the documentation carefully and made a bad assumption about how the software worked and what steps I needed to perform. Once I did read the documentation and watched a couple of videos, it took about 20 minutes to level the bed. I went through the AUX leveling steps twice just to make sure the bed was fairly level. For leveling, I had the build plate heated to 70 C and the hot-end to 190 C, then ran the auto-leveling sequence.

For the slicing program, I am using Cura 4.6.1. For my first print, I chose the calibration cube. I took the defaults that Cura has for the Ender 5 Plus and sent the gcode to the SD card. 

Here is where my second challenge appeared. When I put the card into the SD slot, the printer didn't find the file. It turns out that the gcode needs to be placed at the root level of the SD card and the file name can't be longer than 15 characters. 
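
A quick check before copying would have caught this; a minimal sketch (the file name is hypothetical, and the 15-character limit comes from the observation above, not from any Creality spec):

```shell
# Refuse to copy a gcode file whose basename exceeds the 15-character limit
# the Ender 5 Plus appears to enforce for files on the SD card.
f="CCube.gcode"            # hypothetical file name
name="${f##*/}"
if [ "${#name}" -le 15 ]; then
  echo "OK to copy: $name"
else
  echo "Too long (${#name} chars): $name" >&2
fi
```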

Once the file was renamed, the printer recognized the file and I was able to get the print started. When the hot-end started moving from the home position to the starting point of the priming line, the extruder sent filament to the hot-end. The priming line printed without issue, as did the skirt and the cube. 

I noticed during the print that the heated bed got turned off or set to 0, even though I didn't manually adjust the bed's temperature settings. I also noted a strange noise while the infill was being printed. 



I took several measurements on the X, Y, and Z faces and got readings within ±0.05 mm for most of the cube. The exception was the first two layers, which were -0.2 mm in the X and Y directions. 
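
For future prints, the deviation bookkeeping can be scripted; a small sketch with made-up measurements, assuming the usual 20 mm calibration cube:

```shell
# Print the signed deviation of each hypothetical measurement (mm)
# from the cube's 20.00 mm nominal size.
for m in 20.04 19.97 20.02; do
  awk -v m="$m" 'BEGIN { printf "measured %.2f, deviation %+.2f mm\n", m, m - 20 }'
done
```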

This seems like a reasonable first print. 



ToDo
  • Examine the gcode for the purge/prime sequence to determine if I can get rid of the line that prints from the home position to the start of the purge/prime line.
  • Examine the gcode for changes to the bed temperature to determine whether the gcode or the firmware is turning off the bed. 
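
For the second ToDo item, grepping the sliced file for bed-temperature commands is a quick start: M140 and M190 set the bed target, so an 'M140 S0' in the middle of the file would explain the bed switching off. A sketch against a stand-in file (the real file comes from Cura):

```shell
# Stand-in gcode fragment for illustration.
printf 'M190 S70\nG28\nM140 S0\nG1 X10 Y10 F3000\n' > /tmp/sample.gcode

# List every bed-temperature command with its line number.
grep -n 'M1[49]0' /tmp/sample.gcode
```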

16 February, 2020

Rebuilding my Docker lab setup

Background

Over the last few weeks, I have rebuilt my Docker lab and Home Assistant setup a few times. The heart of the setup is a Raspberry Pi 4. Finally, all the hardware I ordered has arrived and I have a bit of time to rebuild my setup. Hopefully, I won't need to rebuild for a while.

Plan

This time I want to set up a Raspberry Pi 4 Model B (2GB) with Raspbian Buster, then configure the Pi to use a 1 TB SSD. Since the Raspberry Pi 4 doesn't currently support booting from an SSD, I will use the workaround from James Chambers. Once the Raspberry Pi is set up and a basic configuration is done, I will install IOTstack from Graham Garner. This will help me set up Docker, Hass.io, Portainer, Mosquitto, and TasmoAdmin.

What I am hoping to get out of this is a stable platform for learning and experimenting with Docker, Home Automation, the Internet of Things, and electronics.

Notes

Setting up the Raspberry Pi

  • Used balenaEtcher to flash Raspbian Buster with desktop to an 8 Gig SD Card
  • Used balenaEtcher to flash Raspbian Buster with desktop to a 1 TB SSD
  • Plugged in a monitor, keyboard, mouse, network cable and applied power
    • I waited for about 5 minutes for the system to run through its first-time power-up.
  • created an empty file named ssh and placed it in the boot folder for both SD and SSD
    • created the file wpa_supplicant.conf and placed it in the boot folder for both SD and SSD
    • Set basic config values for keyboard
    • changed the pi default password
    • skipped the update
  • sudo fdisk /dev/sda
    • set the SSD PARTUUID to d34db33f
  • sudo blkid
  • made backup of /boot/cmdline.txt
  • edited /boot/cmdline.txt to use the PARTUUID that was set earlier
  • sudo reboot
  • verified the SSD is being used
    • findmnt -n -o SOURCE /
  • updated /etc/fstab 
    • changed the PARTUUID to d34db33f
  • sudo reboot
  • resizing filesystem
    • delete the partition /dev/sda2
    • create a new partition with a first sector at 532480
    • sudo resize2fs /dev/sda2
      • This takes a while
    • df -h 
      • verify /dev/root shows a size value close to the size of the drive
  • sudo reboot
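
The PARTUUID swap in cmdline.txt and fstab boils down to a couple of substitutions; a sketch on scratch copies with a made-up old PARTUUID ('sudo blkid' shows the real values, and the live files should never be edited without a backup):

```shell
# Scratch copies standing in for /boot/cmdline.txt and /etc/fstab.
# 738a4d67 is a made-up SD-card PARTUUID; d34db33f is the value set in fdisk.
echo 'console=tty1 root=PARTUUID=738a4d67-02 rootfstype=ext4 rootwait' > /tmp/cmdline.txt
printf 'PARTUUID=738a4d67-01  /boot  vfat  defaults          0  2\nPARTUUID=738a4d67-02  /      ext4  defaults,noatime  0  1\n' > /tmp/fstab

# Point the kernel's root= argument and both fstab mounts at the SSD.
sed -i 's/PARTUUID=738a4d67/PARTUUID=d34db33f/g' /tmp/cmdline.txt /tmp/fstab
grep -o 'root=PARTUUID=[^ ]*' /tmp/cmdline.txt
```

After the real edit and a reboot, 'findmnt -n -o SOURCE /' confirms which device the root filesystem is on, as in the step above.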

Configuring the Raspberry Pi

  • set Hostname
  • boot to CLI
  • auto login: uncheck mark
  • sudo reboot
  • verified Raspberry Pi configuration
  • verified that I can ssh into the raspberry pi
  • sudo apt update
  • sudo apt upgrade
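
Setting the hostname by hand amounts to editing two files (raspi-config does the same thing); a sketch on scratch copies with a made-up name:

```shell
# Scratch copies standing in for /etc/hostname and /etc/hosts.
echo 'raspberrypi' > /tmp/hostname
printf '127.0.0.1\tlocalhost\n127.0.1.1\traspberrypi\n' > /tmp/hosts

# Replace the default name with the new one in both files.
sed -i 's/raspberrypi/dockerpi/' /tmp/hostname /tmp/hosts
cat /tmp/hostname
```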

Install IOTstack

  • ensure git is installed
    • sudo apt install git
  • clone IOTstack
    • git clone https://github.com/gcgarner/IOTstack.git ~/IOTstack
  • run menu
    • cd IOTstack
    • ./menu.sh
      • installed docker
      • installed Hass.io
      • installed Portainer
      • installed Mosquitto
      • installed tasmoadmin
  • docker-compose up -d




References

GitHub: gcgarner/IOTstack







26 January, 2020

Setting up Home Assistant on Raspberry Pi using generic Linux host method



Recently my installation of Home Assistant died. It appears that I burned up the SD card; I can no longer read it. When I run gparted against the SD card, it mounts in read-only mode. After a bit of research, I came up with a plan.

Rough Plan


  • Install the latest Raspbian Buster image
  • Reconfigure the Pi to run from a hard drive
  • Install Home Assistant using the generic Linux host method

Steps

  • Downloaded Raspbian Buster with desktop from the Raspberry Pi website
  • Used balenaEtcher to flash Buster to an SD card (8GB)
  • Booted the Pi for the first time
  • Set up the language, keyboard, and timezone. 
    • Changed the pi user's password
    • Selected the WiFi network
    • Updated the software
  • Updated the packages
    • sudo apt update
    • sudo apt upgrade
    • sudo reboot
  • Followed the directions from the blog post 'Quick SD to SSD on the pi 4'
    • use the utility 'SD Card Copier' to copy the SD card to the hard drive
    • run 'sudo blkid' to get the UUID of the SD card and the hard drive
    • backup the file /boot/cmdline.txt
    • edit the file /boot/cmdline.txt. Change the partuuid from using the SD card to the hard drive
    • reboot
    • Note: must leave the SD card in the Raspberry Pi
  • Changed the hostname of the Pi
  • Enabled SSH on the Pi
  • Set up an ssh key on my main PC
    • ssh-keygen
    • ssh-copy-id -i ~/.ssh/key-name pi@ip.address
    • ssh-add
  • Disable Password Authentication on the Pi
    • sudo nano /etc/ssh/sshd_config
      • Uncommented the line 'PasswordAuthentication yes'
      • changed yes to no
  • Set a static IP for the Pi
    • edit /etc/dhcpcd.conf
      • added
        • interface wlan0
        • static ip_address=XXX.XXX.XXX.XXX
        • static routers=XXX.XXX.XXX.XXX
        • static domain_name_servers=XXX.XXX.XXX.XXX
    • reboot
  • Changed Pi Configuration to boot to cli
  • Followed the directions from the blog post 'Install Docker and Docker Compose on Raspberry pi 4(Raspbian Buster)'
    • curl -sSL get.docker.com | sh
    • sudo usermod -aG docker pi
    • sudo apt-get install libffi-dev libssl-dev
    • sudo apt install python3-dev
    • sudo apt-get install -y python3 python3-pip
    • sudo pip3 install docker-compose
  • Followed installer for a generic Linux system
    • sudo apt install bash
    • sudo apt install jq
    • sudo apt install curl
    • sudo apt install avahi-daemon
    • sudo apt install dbus
    • sudo apt install apparmor-utils
  • curl -sL https://raw.githubusercontent.com/home-assistant/hassio-installer/master/hassio_install.sh | bash -s -- -m raspberrypi4-64
  • Setup Home Assistant
There was an issue with Home Assistant connecting to the internet. I removed the changes to /etc/dhcpcd.conf. This seems to have helped. 
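
The static-IP stanza is easy to add and to strip back out again if it is wrapped in markers; a sketch on a scratch copy of dhcpcd.conf with illustrative addresses:

```shell
# Scratch copy standing in for /etc/dhcpcd.conf.
printf 'hostname\nclientid\n' > /tmp/dhcpcd.conf

# Append the static configuration between markers so it is easy to remove.
cat >> /tmp/dhcpcd.conf <<'EOF'
# BEGIN static wlan0
interface wlan0
static ip_address=192.168.1.50/24
static routers=192.168.1.1
static domain_name_servers=192.168.1.1
# END static wlan0
EOF

# Reverting the change (what resolved the connectivity issue above):
sed -i '/^# BEGIN static wlan0$/,/^# END static wlan0$/d' /tmp/dhcpcd.conf
cat /tmp/dhcpcd.conf
```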

23 December, 2019

Third stop on the Docker learning train - Quickstart: Compose and ASP.NET Core with SQL Server

The third stop on my Docker learning train is a recommendation from the Docker Samples. The tutorial is titled 'Quickstart: Compose and ASP.NET Core with SQL Server'. This is going to be a whistle-stop.

This tutorial shows using the Docker Engine running on Linux to set up and run an ASP.NET Core application using a .NET Core SDK image and a SQL Server on Linux image. The tutorial makes use of Docker Compose for defining and running multi-container applications.

The tutorial references an older version (2.1) of the .NET Core libraries, while my computer is running a newer version (3.1). I wanted to see what error messages and problems would come from running this setup.

The first change I made was to the docker-compose.yml file: I changed the port from 8000 to 9090 because I already had other containers using ports 8000 and 9000. The second change was in the "startup.cs" file. I commented out the setup of the connection variable and moved the connection value to the "DefaultConnection" entry in the "appsettings.json" file. I also needed to make a few more changes to the rest of the "ConfigureServices" method.


public void ConfigureServices (IServiceCollection services) {
    // Database connection string.
    // Make sure to update the Password value below from "Your_password123" to your actual password.
    // var connection = @"Server=db;Database=master;User=sa;Password=Your_password123;";

    services.Configure<CookiePolicyOptions> (options => {
    // This lambda determines whether user consent for non-essential cookies is needed for a given request.
        options.CheckConsentNeeded = context => true;
        options.MinimumSameSitePolicy = SameSiteMode.None;
        });

    services.AddDbContext<ApplicationDbContext> (options =>
        options.UseSqlServer (
            Configuration.GetConnectionString ("DefaultConnection")));
    services.AddDefaultIdentity<IdentityUser> ()
        .AddEntityFrameworkStores<ApplicationDbContext> ();

    services.AddMvc ().SetCompatibilityVersion (CompatibilityVersion.Version_2_1);
}
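
The docker-compose.yml port change touches only the host side of the mapping; a sketch on a stand-in fragment (the real compose file defines more than this):

```shell
# Minimal stand-in for the quickstart's docker-compose.yml ports entry.
printf 'services:\n  web:\n    ports:\n      - "8000:80"\n' > /tmp/docker-compose.yml

# Change the host port to 9090; the container still listens on 80.
sed -i 's/"8000:80"/"9090:80"/' /tmp/docker-compose.yml
grep '9090' /tmp/docker-compose.yml
```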

On my computer, I encountered an issue with running 'docker-compose build'. The error message was
"ERROR: Couldn't connect to Docker daemon at http+docker://localhost - is it running?
If it's at a non-standard location, specify the URL with the DOCKER_HOST environment variable."
After a bit of research, I found that this is a fairly generic and common message with several possible causes. For me, it turned out that I needed to change the permissions on the "docker.sock" file. I fixed the issue by running
sudo chmod 666 /var/run/docker.sock
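
The effect of the chmod can be sanity-checked; here on a scratch file standing in for the real socket:

```shell
# Demonstrate the mode change on a scratch file standing in for
# /var/run/docker.sock.
touch /tmp/docker.sock
chmod 600 /tmp/docker.sock
chmod 666 /tmp/docker.sock   # the fix from above: read/write for everyone
stat -c '%a' /tmp/docker.sock
```

Worth noting: the socket is recreated when the Docker daemon restarts, so this fix does not persist, and world-writable is a blunt setting; adding the user to the docker group (as in the 07 December post) is the longer-term fix.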

Running 'docker-compose up' wrote many messages to the console that give the appearance that the application and database are running, and I can also see a couple of active containers. When I navigate to localhost:9090, though, the browser displays a "This site can't be reached" message. I am not going to take the time to figure out the problem at this time.



References 

Docker Compose - "Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration."

The article on Medium "If you faced an issue like 'Couldn't connect to Docker daemon at http+docker://localunixsocket -- is it running?'..." was helpful in dealing with an error from running 'docker-compose build'





19 December, 2019

Second stop on the Docker learning train - Dockerize an ASP.NET Core application

The second stop on my Docker learning train is a recommendation from the Docker Samples. The tutorial is titled 'Dockerize an ASP.NET Core application'.

When I first read through the tutorial, I figured this was going to be quick and easy. Due to making a few mistakes in creating the Dockerfile, I ended up with a few lessons learned.

The first lesson I learned was to read the output from running the docker build command carefully. It was interesting to see that the Docker daemon creates and removes intermediate containers.

The next lesson: when a container reports error code 145, carefully double-check the Dockerfile. Pay close attention to which image is being used for the .NET Core runtime. The 'sdk' image should be used for the restore and publish commands, while the 'aspnet' image should be used for the runtime.

Another lesson learned is to use a '.dockerignore' file to keep unnecessary files from being included in the image.
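
A minimal .dockerignore for a .NET Core project might look like this (the file list is illustrative, and it is written to /tmp here just for demonstration; it belongs next to the Dockerfile):

```shell
# Typical entries: build output, VCS data, and docs that the image never needs.
cat > /tmp/.dockerignore <<'EOF'
bin/
obj/
.git/
*.md
EOF
cat /tmp/.dockerignore
```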

The last lesson was to use interactive mode. The tutorial did not go through the steps, so I needed to do a bit of research to figure it out. The Alpine image does not include bash, so I modified the Dockerfile to pull in bash and a few other tools, rebuilt the image, and then used the interactive mode switches to get a command prompt.

Dockerfile:
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-alpine AS build
WORKDIR /app/aspnetapp

# copy csproj and restore as distinct layers
COPY myWebApp/*.csproj ./
RUN dotnet restore

# copy everything else and build app
COPY myWebApp/. ./
RUN dotnet publish -c Release -o out

# Build runtime image
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-alpine AS runtime
WORKDIR /app
COPY --from=build /app/aspnetapp/out .

# Install bash and other tools useful for debugging
RUN apk add --no-cache bash coreutils grep sed

ENTRYPOINT ["dotnet", "myWebApp.dll"]

Command to build the image.

docker build -t {image name} .

Command to start the container in interactive mode and to override the default entry point

docker run -it --entrypoint /bin/bash {image name}

Reference

.dockerignore file
"Before the docker CLI sends the context to the docker daemon, it looks for a file named .dockerignore in the root directory of the context. If this file exists, the CLI modifies the context to exclude files and directories that match patterns in it. This helps to avoid unnecessarily sending large or sensitive files and directories to the daemon and potentially adding them to images using ADD or COPY." - https://docs.docker.com/engine/reference/#dockerignore-file

14 December, 2019

First stop on the Docker learning train - Docker for beginners

The first stop on my Docker learning train is from the Docker Samples with the tutorial Docker for beginners.

The tutorial has four chapters:
  • Setup
  • 1.0 Running your first container
  • 2.0 Webapps with Docker
  • 3.0 Deploying an app to a Swarm

Setup

The Setup chapter makes sure that Docker is installed and running on my machine. The tutorial points back to the Docker documentation for the installation steps. To verify that Docker is working, run the following:
docker run hello-world

1.0 Running your first container

The key topics for this chapter include:
  • How to run a docker container.
  • How to list containers.
  • How to inspect a container.

Docker CLI commands

  • docker pull - pulls an image or repository from a registry
  • docker run - runs a container
    • This command will also pull the image if it is not present locally
    • By default, the interactive shell will exit after running a script. Use the '-it' switch to keep the shell open.
      • Example: 
        docker run -it alpine /bin/sh
        
  • docker ps - list containers
    • By default, the command lists the running containers. 
    • To list all the containers use the switch '-a'
      • Example:
        docker ps -a
  • docker inspect - returns low-level information about a docker container

Terms

image - Docker images are the basis of containers. An Image is an ordered collection of root filesystem changes and the corresponding execution parameters for use within a container runtime. An image typically contains a union of layered filesystems stacked on top of each other. An image does not have state and it never changes. - Docker Glossary
container - A container is a runtime instance of a docker image.
A Docker container consists of
  • A Docker image
  • An execution environment
  • A standard set of instructions
The concept is borrowed from Shipping Containers, which define a standard to ship goods globally. Docker defines a standard to ship software. - Docker Glossary

2.0 Webapps with Docker

This chapter is broken into a few sections.

2.1 Run a static website in a container

Key topics for the section include:
  • How to run a container in detached mode.
  • How to publish the ports the container exposes.
  • How to name a container.
  • How to pass environment variables to the container.
  • How to stop a container.
  • How to remove a container.

2.2 Docker Images

Key topics for this section include:
  • What is a Docker image?
  • How to use 'tag' to pull a specific version of a container.

2.3 Create your first image

Key topics for this section include:
  • How to write a Dockerfile.
  • How to build the image.
  • How to run the image.
Notes about Dockerfiles

  • By convention, an instruction is written in UPPERCASE. This makes it easier to distinguish an instruction from an argument. 
  • Docker runs instructions in order.
  • A Dockerfile must begin with a 'FROM' instruction. 
  • Lines that begin with '#' are treated as comments.
  • Guide: Best practices for writing Dockerfiles

Terms

base image - A base image has no parent image specified in its Dockerfile. It is created using a Dockerfile with the FROM scratch directive.
Child images are images that build on base images and add additional functionality. 
Official images are Docker sanctioned images. 
User images are images created and shared by users like you.
A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image. 

Docker CLI commands

  • docker run
    • -d - detached mode
    • -P - publish exposed container ports to random ports on the host
    • -e - pass environment variables to the container.
    • --name - allows a name to be specified for the container
  • docker stop - stops one or more containers
  • docker rm - removes one or more containers
    • -f - Force removal of a running container
  • docker port - List port mapping or a specific mapping for a container
  • docker images - list images
  • docker pull 
    • by default, pull gets the latest version
    • to get a specific version include the tag 
      • Example: 
        docker pull ubuntu:12.04
        
  • docker search - search the Docker Hub for images
  • docker build - build an image from a Dockerfile
    • -t - Name and optionally a tag in the 'name:tag' format
  • docker login - Log into a Docker registry
  • docker push - Push an image or a repository to a registry

3.0 Deploy an app to a Swarm

Key topics for this chapter include:

  • How to create a swarm.
  • How to deploy. 

I was unable to make the example work properly on my machine. The 'docker swarm' and 'docker stack' commands executed without reporting any errors, and 'docker ps' reports multiple active containers. I will need to come back to this tutorial at a later date to investigate further.

Terms

swarm - A swarm is a cluster of one or more Docker Engines running in swarm mode
Docker Swarm - Do not confuse Docker Swarm with the swarm mode features in Docker Engine.
Docker Swarm is the name of a standalone native clustering tool for Docker. Docker Swarm pools together several Docker hosts and exposes them as a single virtual Docker host. It serves the standard Docker API, so any tool that already works with Docker can now transparently scale up to multiple hosts.
Also known as : docker-swarm
swarm mode - Swarm mode refers to cluster management and orchestration features embedded in Docker Engine. When you initialize a new swarm (cluster) or join nodes to a swarm, the Docker Engine runs in swarm mode.

Docker CLI Commands







07 December, 2019

Learning a bit about Docker


What is Docker?

Docker is a tool to develop, ship and run applications. The heart of Docker is the container. A Docker container contains all the parts needed to run an application.

Install

I am currently running Pop!_OS 19.10, which made installing Docker a bit more interesting. Pop!_OS 19.10 is based on Ubuntu 19.10, while the official Docker documentation shows support only up to Ubuntu 19.04. For the most part, I followed the official documentation to install. When it came time to add the source, I used the following.

sudo add-apt-repository "deb https://download.docker.com/linux/ubuntu disco stable"

The rest of the instructions were easy to follow and didn't require any modifications.

I also ran the 'Optional Linux post-installation steps'. These steps should allow me to run docker commands from a terminal without prefacing them with 'sudo'. For some reason, this is not working for me. I have verified that the group exists and my user account is part of the group. Additional research is needed.
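
For reference, the membership check itself, run here against a stand-in group file (on the real machine, grep /etc/group directly):

```shell
# Stand-in for /etc/group with a hypothetical docker group entry.
printf 'sudo:x:27:pi\ndocker:x:998:pi\n' > /tmp/group

# Confirm the user appears in the docker group's member list.
grep '^docker:' /tmp/group | grep -q 'pi' && echo "pi is in the docker group"
```

One thing worth trying: group changes only take effect in new login sessions, so logging out and back in (or running 'newgrp docker') may be all that's missing, which might explain the behavior I saw.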

Reading

I spent a bit of time reading through the official Docker documentation. There is a lot of information to absorb, way too much to summarize in a single blog post. I am going to need to break the information down into smaller chunks.

Plan

From the Docker Samples, work the following tutorials:

From the Play with Docker Classroom, work the following tutorials:
From the Docker Curriculum, work the following tutorials:

Resources
