Four total images are built by the provisioner, and all are added to the same rpm-ostree repo on the edge_manager node. As students progress through the workshop, you'll need to move the ref within the repo to the correct version. The provisioner automatically sets the active image to version 1.0.0 during provisioning. Once everyone's devices have been kickstarted and students approach Exercise 4.1, set the active image to version 2.0.0:
ansible-playbook provisioner/set-active-image.yml --inventory /path/to/workshop/inventory.ini --extra-vars "desired_image_version=2.0.0"
Repeat this process according to the following:
| Students at Exercise | Active Image Version |
|---|---|
| Exercise 3.1 | 1.0.0 |
| Exercise 4.1 | 2.0.0 |
| Exercise 5.1 | 3.0.0 |
| Exercise 6.1 | 4.0.0 |
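The mapping above can be wrapped in a small hypothetical shell helper that prints the exact playbook invocation for a given exercise. This is just a sketch: the function name is invented, and the inventory path is the same placeholder used above, so adjust both to your environment.

```shell
# Hypothetical helper: print the set-active-image command for an exercise.
# The exercise-to-version mapping mirrors the table above.
set_image_for_exercise() {
  case "$1" in
    3.1) version="1.0.0" ;;
    4.1) version="2.0.0" ;;
    5.1) version="3.0.0" ;;
    6.1) version="4.0.0" ;;
    *)   echo "unknown exercise: $1" >&2; return 1 ;;
  esac
  echo "ansible-playbook provisioner/set-active-image.yml" \
       "--inventory /path/to/workshop/inventory.ini" \
       "--extra-vars \"desired_image_version=${version}\""
}

set_image_for_exercise 4.1
```

Piping the output to `sh` (or copying it into your terminal) runs the same command shown earlier.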
The provisioner generates an SSH keypair and an inventory file for you within the provisioner/ directory by default. If you also have a local system, the command above can take multiple inventories:
ansible-playbook provisioner/set-active-image.yml --inventory /path/to/workshop/inventory.ini --inventory /path/to/local/inventory.yml --extra-vars "desired_image_version=2.0.0"
There are a few different services involved in this workshop. They should all be up and available after provisioning, but if they need convincing, use the information below. Remember to connect to the appropriate node when attempting to interact with services.
sudo automation-controller-service restart
sudo systemctl restart httpd
podman pod start gitea
sudo systemctl restart nginx php-fpm
sudo systemctl restart dnsmasq
Gitea is not running as root; it runs as whatever user Ansible authenticates to the system as, so you don't need sudo.
dnsmasq likes to think it's running, but won't pick up on interface changes, so if you connect to a wireless network or a new ethernet connection, restart the service.
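For convenience, the per-service commands above could be collected in a small lookup function. This is a sketch; the function name and the short service labels are invented for illustration, and each printed command still needs to be run on the appropriate node.

```shell
# Hypothetical lookup: print the restart/start command for each workshop
# service, mirroring the list above. Gitea uses podman without sudo.
restart_cmd() {
  case "$1" in
    controller) echo "sudo automation-controller-service restart" ;;
    httpd)      echo "sudo systemctl restart httpd" ;;
    gitea)      echo "podman pod start gitea" ;;
    web)        echo "sudo systemctl restart nginx php-fpm" ;;
    dnsmasq)    echo "sudo systemctl restart dnsmasq" ;;
    *)          echo "unknown service: $1" >&2; return 1 ;;
  esac
}

restart_cmd gitea
# → podman pod start gitea
```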
After the workshop is up and running, a list of students currently signed in will be available at ec2_name_prefix.workshop_dns_zone/list.php. Enter the admin_password to view the list.
A local edge manager can be built for on-site edge device management. Right now, the requirements for this device are:
What’s installed and configured on the system depends on group membership within the inventory.
| Ansible Group | Installed components |
|---|---|
| controller | Ansible Controller installed and populated |
| edge_management | Image Builder installed, images composed, rpm-ostree repo hosted on port 80, Gitea installed and populated |
| local | Some non-AWS specific conditionals set |
| local/dns | dnsmasq installed, configured, and started, with some DNS records configured |
For example: in this inventory file, the host edge-manager-local is a member of edge_management, controller, and local/dns.
all:
  children:
    edge_management:
      hosts:
        edge-manager-local:
    controller:
      hosts:
        edge-manager-local:
    local:
      hosts:
        edge-manager-local:
      children:
        dns:
          hosts:
            edge-manager-local:
          vars:
            local_domains:
              controller:
                domain: "controller.your-workshop-domain.lcl"
              cockpit:
                domain: "cockpit.your-workshop-domain.lcl"
              gitea:
                domain: "gitea.your-workshop-domain.lcl"
              edge_manager:
                domain: "edge-manager.your-workshop-domain.lcl"
  vars:
    ansible_host: 192.168.200.10
    ansible_user: ansible
    ansible_password: your-password
    ansible_become_password: your-password
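A quick sanity check on an inventory like the one above is to confirm each expected group is actually defined. The sketch below writes the example group layout to a temp file and greps for the group names; in practice you would point it at your real inventory, and a YAML-aware tool such as `ansible-inventory --list` would be more robust than this grep smoke test.

```shell
# Write the example group layout (from above) to a temp file, then check
# that every group the provisioner keys off of is present.
inventory=$(mktemp)
cat > "$inventory" <<'EOF'
all:
  children:
    edge_management:
      hosts:
        edge-manager-local:
    controller:
      hosts:
        edge-manager-local:
    local:
      hosts:
        edge-manager-local:
      children:
        dns:
          hosts:
            edge-manager-local:
EOF

missing=0
for group in edge_management controller local dns; do
  if grep -q "^ *${group}:" "$inventory"; then
    echo "${group}: defined"
  else
    echo "${group}: MISSING"
    missing=$((missing + 1))
  fi
done
rm -f "$inventory"
```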