Instructor Guide

What to Do During the Demo

This demo is presenter-led and is intended to be delivered during a 75-90 minute session. The slide deck sets the stage for the presentation, then has 5 “demo stops” where functionality is showcased to the audience.

Checking On Demo Services

There are a few different services involved in this demo. They should all be up and available after provisioning, but if they need convincing, use the information below. Remember to connect to the appropriate node when attempting to interact with services.

Non-containerized services:

| Workshop Service | Description | Network Port | Restart Command |
| --- | --- | --- | --- |
| Cockpit WebUI | Web interface for system administration, accessing the Image Builder webUI | tcp/9090 | sudo systemctl restart cockpit.socket cockpit.service |
| rpm-ostree Web Server | Apache web server hosting /var/www/html/rhde-image ostree repo | tcp/11080 | sudo systemctl restart httpd |
| Ansible Controller | Ansible controller for automation against edge devices | tcp/10443 | sudo automation-controller-service restart |
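Before reaching for a restart, a quick status check can confirm whether a unit is actually down. A minimal sketch, assuming you are connected to the node hosting these services (unit names match the restart commands above):

# Check the non-containerized services
systemctl status cockpit.socket cockpit.service
systemctl status httpd

# Confirm the expected ports are listening
sudo ss -tlnp | grep -E ':9090|:11080|:10443'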

Containerized services:

| Workshop Service | Description | Network Port | Pod |
| --- | --- | --- | --- |
| Dnsmasq | Provides DHCP and DNS for the workshop network | udp/53 | demo-summit_connect_2023-priv |
| Reverse Proxy | Nginx reverse proxy for workshop services with a valid wildcard certificate | tcp/80, tcp/443 | demo-summit_connect_2023 |
| iPXE | Hosts the iPXE menu file | tcp/8081 | demo-summit_connect_2023 |
| Database | PostgreSQL database for Gitea | tcp/15432 | demo-summit_connect_2023 |
| Gitea | Source control server for students | tcp/3001 | demo-summit_connect_2023 |
| TFTP | Hosts the installation environment files | udp/69 | demo-summit_connect_2023 |
| OSTree Repo | Hosts the ostree repo used to provision devices | tcp/8080 | demo-summit_connect_2023 |
| Attendance | Node.js web server with student pages/information | tcp/3000 | demo-summit_connect_2023 |

If these services need to be restarted, restarting the whole pod is safe:

# Restart priv pod
sudo podman pod restart demo-summit_connect_2023-priv

# Restart non-priv pod
podman pod restart demo-summit_connect_2023

In addition, individual containers can be restarted.
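For example, to restart a single container rather than the whole pod, list the containers first and then restart by name; the container name placeholder below is whatever podman reports in your environment:

# List containers grouped by pod (use sudo for the priv pod)
sudo podman ps --pod
podman ps --pod

# Restart a single container by name or ID
podman restart <container-name>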

Provisioning Ahead of Time

If you plan to provision a device and then transport it to the demo location, run the start_workshop.yml playbook to bring the demo back up after the device has been shut down and moved:

ansible-navigator run provisioner/start_workshop.yml -e @your.extra-vars.yml -i your.local-inventory.yml -m stdout -v

If using virtual machines, it’s easiest to simply suspend them, then resume them when ready to present.
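If the virtual machines are managed with libvirt, for instance, the suspend/resume could look like the following sketch (the domain name is a placeholder for your VM):

# Pause the VM before moving/closing up
virsh suspend <vm-name>

# Resume it when ready to present
virsh resume <vm-name>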

Note:

The internal interface of the device should remain the same after the move. If it does not, the dnsmasq container will need to be rebuilt.
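A quick way to confirm the interface names and addressing after the move, before assuming the dnsmasq container needs a rebuild:

# Show interfaces, state, and addresses at a glance
ip -br addr show

# Or, if NetworkManager is managing the device
nmcli device status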

Moving the Active Image in the rpm-ostree Repo

There are 5 images in the rpm-ostree repo; when starting the demo, the active image version should be 1.0.0. As you progress through the slides, you’ll want to move the active image to 2.0.0, 3.0.0, and so on, so you can update the edge device.

To update the active image, use the following command:

ansible-navigator run provisioner/set-active-image.yml -e @your.extra-vars.yml -i your.local-inventory.yml -e "desired_image_verion=1.0.0" -m stdout -v

Note:

Image version 1.0.0 is used as an example here; update accordingly.

Demo Sections

Note:

When moving between sections where the image version changes, it’s a good idea to update the edge device in place to showcase the functionality. The most useful commands are: sudo rpm-ostree status, sudo rpm-ostree upgrade, and sudo systemctl reboot.
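A typical in-place update sequence on the edge device might look like this, assuming the active image in the repo has already been moved forward with set-active-image.yml:

# Check the currently booted deployment
sudo rpm-ostree status

# Pull and stage the new image, then reboot into it
sudo rpm-ostree upgrade
sudo systemctl reboot

# After the reboot, confirm the new deployment is booted
sudo rpm-ostree status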

Note:

The application WebUI is available at $(edge-device-ip):1881 for demonstration purposes when deployed via podman. When deploying via microshift, use DNS (probably via an /etc/hosts file) to access: http://$(edge-device-dns-name).
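For the microshift case, a hosts-file entry along these lines is usually enough; the IP address and hostname below are placeholders for your edge device:

# Map the edge device hostname locally (adjust IP and name to your environment)
echo "192.168.1.50 edge-device.example.com" | sudo tee -a /etc/hosts

# Then browse to http://edge-device.example.com, or sanity-check with curl
curl -I http://edge-device.example.com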

Section 1

| Image Version | Presentation Slides |
| --- | --- |
| 1.0.0 | 18-22 |

Access the Image Builder webUI in Cockpit and walk through the options for building an image. It is not necessary to actually build an image.

Section 2

| Image Version | Presentation Slides |
| --- | --- |
| 1.0.0 | 23-27 |

Showcase zero-touch provisioning of an RHDE system. Highlight that TFTP/PXE/iPXE are used here, but multiple deployment options exist, such as flash drives, HTTP boot, and imaging devices before shipping to the field.

Section 3

| Image Version | Presentation Slides |
| --- | --- |
| 2.0.0 | 28-35 |

Showcase an image with the application data built in. Recommend logging in and running sudo podman images as well as sudo systemctl status process-control to show that systemd started the application.
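On the edge device, that could look like the following (process-control is the application service named above; the image list reflects whatever was embedded in the 2.0.0 image):

# Show the container image(s) baked into the image
sudo podman images

# Confirm systemd started the application and the container is running
sudo systemctl status process-control
sudo podman ps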

Section 4

| Image Version | Presentation Slides |
| --- | --- |
| 3.0.0 | 36-40 |

Add a custom health check for greenboot to run. Highlight greenboot functionality, and cat out /etc/greenboot/check/required.d/application-check.sh to show the health check script.
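On the device, showing the script and greenboot’s boot tracking could look like this sketch (the grubenv variables are the ones greenboot normally maintains; names may vary by version):

# Show the custom health check script
cat /etc/greenboot/check/required.d/application-check.sh

# Inspect greenboot's boot status tracking in the GRUB environment
sudo grub2-editenv list | grep -E 'boot_counter|boot_success'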

Section 5

| Image Version | Presentation Slides |
| --- | --- |
| 4.0.0 | 36-40 |

Using the same set of slides, upgrade to a “bad” image, and allow the device to roll back automatically. Simply update the device, then wait for the 3 failed startup attempts leading to an automatic rollback.
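Once the device settles, you can show that the rollback actually happened; the greenboot unit name below is an assumption and may differ between greenboot versions:

# The previously active (known-good) deployment should be booted again
sudo rpm-ostree status

# Review health check output from the failed boots, if the unit name matches
sudo journalctl -u greenboot-healthcheck.service --no-pager | tail -n 50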

Section 6

| Image Version | Presentation Slides |
| --- | --- |
| 5.0.0 | 41-45 |

Upgrade in place, adding Microshift functionality with an application definition pre-staged. Recommend running some form of sudo watch -d "oc get all -A" to view Microshift starting up. This may take a few minutes on conference/hotel WiFi.

Note:

The generated kubeconfig is located at /var/lib/microshift/resources/kubeadmin/kubeconfig and is accessible only by root. Feel free to copy it to a different location, or simply become root and export the path as KUBECONFIG: export KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig.
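A minimal sketch of that flow, assuming the oc client is available on the device:

# Become root and point the client at the generated kubeconfig
sudo -i
export KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig

# Verify Microshift and the pre-staged application are coming up
oc get pods -A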