Workshop Exercise 4.2 - Applying Configuration to Controller

Step 1 - Creating a Job to Configure Controller

These next two steps are the crux of this workshop: merging declarative and procedural tooling to achieve a true end-to-end deployment, using the tools provided on top of a single platform.

Two key items will be used to handle this integration: Kubernetes concepts (jobs and configmaps) paired with ArgoCD sync hooks, and a shim container image built on ansible-runner.

Container Image using ansible-runner

Execution environments are container images, meaning they can be leveraged by Controller to execute automation, or run as simple containers in other environments.

To act as a shim between our declarative tooling (ArgoCD) and the procedural tooling (Controller), a container image has been built on top of an execution environment that specifically looks for variables to apply to Ansible Controller. Built into the image is a simple playbook:

---
- name: Configure Ansible Automation Platform
  hosts:
    - all
  gather_facts: false
  connection: local
  vars:
    controller_configuration_secure_logging: false
    aap_configuration_secure_logging: false
  pre_tasks:
    - name: Import variables from /runner/variables
      ansible.builtin.include_vars:
        dir: /runner/variables
  tasks:
    - name: Include license role
      ansible.builtin.include_role:
        name: redhat_cop.controller_configuration.license
      when:
        - controller_license is defined
        
    - name: Include organizations role
      ansible.builtin.include_role:
        name: redhat_cop.controller_configuration.organizations
      when:
        - controller_organizations is defined

    - name: Include users role
      ansible.builtin.include_role:
        name: redhat_cop.controller_configuration.users
      when:
        - controller_users is defined

    - name: Include roles role
      ansible.builtin.include_role:
        name: redhat_cop.controller_configuration.roles
      when:
        - controller_roles is defined

    - name: Include credentials role
      ansible.builtin.include_role:
        name: redhat_cop.controller_configuration.credentials
      when:
        - controller_credentials is defined

    - name: Include projects role
      ansible.builtin.include_role:
        name: redhat_cop.controller_configuration.projects
      when:
        - controller_projects is defined

    - name: Include inventories role
      ansible.builtin.include_role:
        name: redhat_cop.controller_configuration.inventories
      when:
        - controller_inventories is defined

    - name: Include hosts role
      ansible.builtin.include_role:
        name: redhat_cop.controller_configuration.hosts
      when:
        - controller_hosts is defined

    - name: Include groups role
      ansible.builtin.include_role:
        name: redhat_cop.controller_configuration.groups
      when:
        - controller_groups is defined

    - name: Include job_templates role
      ansible.builtin.include_role:
        name: redhat_cop.controller_configuration.job_templates
      when:
        - controller_templates is defined

    - name: Include workflow_job_templates role
      ansible.builtin.include_role:
        name: redhat_cop.controller_configuration.workflow_job_templates
      when:
        - controller_workflows is defined

    - name: Include workflow_launch role
      ansible.builtin.include_role:
        name: redhat_cop.controller_configuration.workflow_launch
      when:
        - controller_workflow_launch_jobs is defined

This playbook loads variables from the /runner/variables directory, then runs various roles that apply those variables to Controller.

Since this will run as a Kubernetes job, we can mount items into that directory, and the playbook will automatically pick them up for us.
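
For example, a file dropped into that directory might look like the following. This is only an illustrative sketch: the hostname and credentials are placeholders, and any of the variables the playbook above checks for can be supplied the same way.

---
# /runner/variables/controller-configuration.yaml (illustrative placeholder values)
controller_hostname: https://controller.example.com
controller_username: admin
controller_password: changeme
controller_validate_certs: 'false'

# Any variable checked by the playbook can be added, for example:
controller_organizations:
  - name: Team 1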

Jobs, ConfigMaps, and Sync Hooks

Since Controller doesn’t take configuration declaratively the way other Kubernetes resources do, we’ll need to run something that applies our desired state to Controller.

Within Kubernetes, we can use a job and a configmap to handle this, along with the image from above. In addition, we’ll apply an annotation to signify when ArgoCD should trigger the job.

---
apiVersion: batch/v1
kind: Job
metadata:
  name: configure-network-automation
  annotations:
    argocd.argoproj.io/hook: Sync
spec:
  template:
    spec:
      containers:
        - name: configure-network-automation
          image: quay.io/device-edge-workshops/configure-controller:latest
          volumeMounts:
            - name: controller-vars
              mountPath: /runner/variables
            - name: tmp
              mountPath: /tmp
      restartPolicy: OnFailure
      volumes:
        - name: controller-vars
          configMap:
            name: configure-network-automation-configmap
        - name: tmp
          emptyDir:
            sizeLimit: 100Mi

This job definition provides a few things:

- An argocd.argoproj.io/hook: Sync annotation, so ArgoCD runs the job as part of each application sync
- A container built from the shim image described above
- A volume that mounts a configmap into /runner/variables, where the embedded playbook expects to find its variables
- A writable /tmp directory backed by an emptyDir

The next steps will wire these elements up to provide our desired experience of merging declarative and procedural tooling.

Step 2 - Creating a Job and ConfigMap

Return to the network-automation helm chart we created earlier, and create a new directory named templates. Within that new directory, we’ll create a few files that define the resources we want created.
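
With both files in place, the chart layout will look roughly like this (assuming the chart was scaffolded with a standard Chart.yaml in the earlier exercise):

network-automation/
├── Chart.yaml
└── templates/
    ├── job.yaml
    └── configmap.yaml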

First, create job.yaml in the templates/ directory with the following contents:

---
apiVersion: batch/v1
kind: Job
metadata:
  generateName: configure-network-automation-
  annotations:
    argocd.argoproj.io/hook: Sync
spec:
  template:
    spec:
      containers:
        - name: configure-network-automation
          image: quay.io/device-edge-workshops/configure-controller:latest
          volumeMounts:
            - name: controller-vars
              mountPath: /runner/variables
            - name: tmp
              mountPath: /tmp
      restartPolicy: OnFailure
      volumes:
        - name: controller-vars
          configMap:
            name: configure-network-automation-configmap
        - name: tmp
          emptyDir:
            sizeLimit: 100Mi

Additionally, create a file named configmap.yaml. This is where we’ll leverage the variables from our configure_controller_for_network_automation.yaml file, with a bit of customization to match the configmap spec:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: configure-network-automation-configmap
data:
  controller-configuration.yaml: |
    # Ensure you replace these with the correct information!
    controller_hostname: REPLACE_WITH_CONTROLLER_URL_FROM_STUDENT_PAGE
    controller_username: REPLACE_WITH_CONTROLLER_USERNAME
    controller_password: REPLACE_WITH_CONTROLLER_PASSWORD
    controller_validate_certs: 'false'

    # Remember to modify for your team if you're not Team 1!
    controller_hosts:
      - name: cisco-8000v
        inventory: team1 Network Infrastructure
        variables:
          ansible_host: cisco-8000v-ssh.team1.svc.cluster.local
    
    # Remember to modify for your team if you're not Team 1!
    controller_groups:
      - name: cisco_ios
        inventory: team1 Network Infrastructure
        hosts:
          - cisco-8000v

    # Remember to modify for your team if you're not Team 1!
    controller_credentials:
      - name: Network Appliance Credentials
        organization: Team 1
        credential_type: Machine
        inputs:
          username: ansible
          password: PASSWORDSETEARLIER

    # Remember to modify for your team if you're not Team 1!
    # Also, remember to enter the correct information from Gitea!

    controller_projects:
      - name: Code Repository
        organization: Team 1
        scm_branch: main
        scm_type: git
        scm_url: "YOUR_GIT_URL_HERE"
        update_project: true
        credential: team1 Code Repository Credentials

    controller_templates:
      - name: Configure NTP
        organization: Team 1
        project: Code Repository
        inventory: team1 Network Infrastructure
        credentials:
          - Network Appliance Credentials
        playbook: playbooks/ntp.yaml
      - name: Setup SNMPv2
        organization: Team 1
        project: Code Repository
        inventory: team1 Network Infrastructure
        credentials:
          - Network Appliance Credentials
        playbook: playbooks/snmpv2.yaml
      - name: Set System Hostname
        organization: Team 1
        project: Code Repository
        inventory: team1 Network Infrastructure
        credentials:
          - Network Appliance Credentials
        playbook: playbooks/hostname.yaml
      - name: Configure VLAN Interfaces
        organization: Team 1
        project: Code Repository
        inventory: team1 Network Infrastructure
        credentials:
          - Network Appliance Credentials
        playbook: playbooks/vlan-interfaces.yaml
      - name: Configure Static Routes
        organization: Team 1
        project: Code Repository
        inventory: team1 Network Infrastructure
        credentials:
          - Network Appliance Credentials
        playbook: playbooks/static-routes.yaml
      - name: Configure OSPF
        organization: Team 1
        project: Code Repository
        inventory: team1 Network Infrastructure
        credentials:
          - Network Appliance Credentials
        playbook: playbooks/ospf.yaml

    controller_workflows:
      - name: Run Network Automation
        organization: Team 1
        simplified_workflow_nodes:
          - identifier: Configure NTP
            unified_job_template: Configure NTP
            success_nodes:
              - Setup SNMPv2
            lookup_organization: Team 1
          - identifier: Setup SNMPv2
            unified_job_template: Setup SNMPv2
            success_nodes:
              - Set System Hostname
            lookup_organization: Team 1
          - identifier: Set System Hostname
            unified_job_template: Set System Hostname
            success_nodes:
              - Configure VLAN Interfaces
            lookup_organization: Team 1
          - identifier: Configure VLAN Interfaces
            unified_job_template: Configure VLAN Interfaces
            lookup_organization: Team 1
            success_nodes:
              - Configure Static Routes
          - identifier: Configure Static Routes
            unified_job_template: Configure Static Routes
            lookup_organization: Team 1
            success_nodes:
              - Configure OSPF
          - identifier: Configure OSPF
            unified_job_template: Configure OSPF
            lookup_organization: Team 1

This configmap takes the desired Controller configuration we built in the previous exercises and mounts it into a file called controller-configuration.yaml, located at /runner/variables within the container. The embedded automation then reads it in and applies our desired configuration.

Ensure you’ve replaced the variables at the top with the correct values, so the automation knows how to authenticate with Controller.

Note:

Team 1 is used as an example here; replace it with your team number.

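As an optional refinement (not a required workshop step), the team number could be parameterized through Helm values instead of hand-editing each reference. This sketch assumes a team key added to the chart's values.yaml; because configmap.yaml lives in templates/, Helm renders the reference when ArgoCD syncs the chart:

# values.yaml (hypothetical addition to the chart)
team: 1

# templates/configmap.yaml (excerpt, using the value above)
    inventory: team{{ .Values.team }} Network Infrastructure
    organization: Team {{ .Values.team }}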

Navigation

Previous Exercise | Next Exercise

Click here to return to the Workshop Homepage