8. Upgrading an Existing Tower Installation

You can upgrade your existing Tower installation to the latest version easily. Tower looks for existing configuration files and recognizes when an upgrade should be performed instead of an installation.

As with installation, the upgrade process requires that the Tower server be able to access the Internet. The upgrade process takes roughly the same amount of time as a Tower installation, plus any time needed for data migration.

This upgrade procedure assumes that you have a working installation of Ansible and Tower.

Note:
You cannot convert an embedded-database Tower to an Active/Passive Redundancy mode installation as part of an upgrade. Users who want to deploy Tower in a Redundant configuration should back up their Tower database, install a new Redundant configuration on a different VM or physical host, and then restore the database. A primary or secondary instance can be added later to a Tower that is already operating on an external database. Refer to the Active/Passive Redundancy chapter of the Ansible Tower Administration Guide.

If Tower is on a version of RHEL older than RHEL 8 and you want to upgrade to Ansible Tower on RHEL 8, follow the sequence outlined below:

  1. Obtain the Ansible Automation Platform Installation Program and upgrade to Ansible Tower 3.8 on RHEL 7

  2. Run Tower Backup included in the Tower setup playbook. See Backing Up and Restoring Tower in the Ansible Tower Administration Guide for details.

  3. Obtain the Ansible Automation Platform Installation Program and install a fresh version of Ansible Tower 3.8 on RHEL 8

  4. Run Tower Restore included in the Tower setup playbook. See Backing Up and Restoring Tower in the Ansible Tower Administration Guide for details.

This process ensures that the PostgreSQL database is also properly migrated to the latest version if you are upgrading an embedded-database Tower. Depending on the size of your Ansible Automation Platform installation, this may take some time. Note that if you upgrade a Tower with an external database, the client libraries will be upgraded as well, but you will need to upgrade your external PostgreSQL server manually. Be sure to check the release notes to see if this applies to you before upgrading.
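
For reference, steps 2 and 4 above use the backup and restore options of the Tower setup playbook described in The Setup Playbook section below. A minimal sketch of those invocations, assuming the default inventory file and default backup location, might look like the following:

# On the existing RHEL 7 host running Tower 3.8, from the installer directory:
./setup.sh -b

# Copy the resulting backup to the new RHEL 8 host, then run the restore
# from the Tower 3.8 installer directory on that host:
./setup.sh -r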

If the upgrade of Tower fails or if you need assistance, please contact Ansible via the Red Hat Customer portal at https://access.redhat.com/.

8.1. Requirements

Before upgrading your Tower installation, refer to Requirements to ensure you have enough disk space and RAM as well as to review any software needs. For example, you should have the latest stable release of Ansible installed before performing an upgrade.

Note:
All upgrades should be no more than two major versions behind what you are currently upgrading to. For example, in order to upgrade to Ansible Tower 3.6.x, you must first be on version 3.4.x; i.e., there is no direct upgrade path from version 3.3.x. Refer to the recommended upgrade path article off your customer portal. In order to run Ansible Tower 3.8 on RHEL 8, you must also have Ansible 2.9 or later installed.

8.2. Back Up Your Tower Installation

It is advised that you create a backup before upgrading the system. After the backup process has completed, proceed with the OS/Ansible/Tower upgrades.

Refer to Backing Up and Restoring Tower in the Ansible Tower Administration Guide.

8.3. The Setup Playbook

The Tower setup playbook script uses the inventory file and is invoked as ./setup.sh from the path where you unpacked the Tower installer tarball.

root@localhost:~$ ./setup.sh

The setup script takes the following arguments:

  • -h – Show this help message and exit

  • -i INVENTORY_FILE – Path to Ansible inventory file (default: inventory)

  • -e EXTRA_VARS – Set additional Ansible variables as key=value or YAML/JSON (e.g., -e bundle_install=false forces an online installation)

  • -b – Perform a database backup in lieu of installing

  • -r – Perform a database restore in lieu of installing (a default restore path is used unless EXTRA_VARS are provided with a non-default path, as shown in the code example below)

./setup.sh -e 'restore_backup_file=/path/to/nondefault/location' -r

9. Usability Analytics and Data Collection

Usability data collection is included with Tower to better understand how users interact with Tower, to help enhance future releases, and to continue streamlining your user experience.

Only users installing a trial of Tower or a fresh installation of Tower are opted-in for this data collection.

If you want to change how you participate in this analytics collection, you can opt out or change your settings using the Configure Tower user interface, accessible from the Settings icon in the left navigation bar.

Ansible Tower collects user data automatically to help improve the Tower product. You can control the way Tower collects data by setting your participation level in the User Interface tab in the settings menu.

[Image: the Configure Tower user interface showing the User Analytics Tracking State setting]

  1. Select the desired level of data collection from the User Analytics Tracking State drop-down list:

  • Off: Prevents any data collection.

  • Anonymous: Enables data collection without your specific user data.

  • Detailed: Enables data collection including your specific user data.

  2. Click Save to apply the settings or Cancel to abandon the changes.

For more information, see the Red Hat privacy policy at https://www.redhat.com/en/about/privacy-policy.

10. Supported Inventory Plugin Templates

Configuring inventory plugins in Ansible Tower 3.8 is different from previous versions. On upgrade, existing configurations will be migrated to the new format, which produces backwards-compatible inventory output. Use the templates below to aid in migrating your inventories to the new-style inventory plugin output.
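
To preview the output a given configuration produces before putting it into Tower, you can run it through the ansible-inventory command on a machine where the relevant collection is installed. A minimal sketch, assuming the Amazon Web Services template below is saved as my.aws_ec2.yml (a hypothetical file name; the aws_ec2 plugin expects names ending in aws_ec2.yml) and AWS credentials are available in the environment:

ansible-inventory -i my.aws_ec2.yml --list
ansible-inventory -i my.aws_ec2.yml --graph    # show the resulting group hierarchy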

10.1. Amazon Web Services EC2

compose:
  ansible_host: public_ip_address
  ec2_account_id: owner_id
  ec2_ami_launch_index: ami_launch_index | string
  ec2_architecture: architecture
  ec2_block_devices: dict(block_device_mappings | map(attribute='device_name') | list | zip(block_device_mappings | map(attribute='ebs.volume_id') | list))
  ec2_client_token: client_token
  ec2_dns_name: public_dns_name
  ec2_ebs_optimized: ebs_optimized
  ec2_eventsSet: events | default("")
  ec2_group_name: placement.group_name
  ec2_hypervisor: hypervisor
  ec2_id: instance_id
  ec2_image_id: image_id
  ec2_instance_profile: iam_instance_profile | default("")
  ec2_instance_type: instance_type
  ec2_ip_address: public_ip_address
  ec2_kernel: kernel_id | default("")
  ec2_key_name: key_name
  ec2_launch_time: launch_time | regex_replace(" ", "T") | regex_replace("(\+)(\d\d):(\d)(\d)$", ".\g<2>\g<3>Z")
  ec2_monitored: monitoring.state in ['enabled', 'pending']
  ec2_monitoring_state: monitoring.state
  ec2_persistent: persistent | default(false)
  ec2_placement: placement.availability_zone
  ec2_platform: platform | default("")
  ec2_private_dns_name: private_dns_name
  ec2_private_ip_address: private_ip_address
  ec2_public_dns_name: public_dns_name
  ec2_ramdisk: ramdisk_id | default("")
  ec2_reason: state_transition_reason
  ec2_region: placement.region
  ec2_requester_id: requester_id | default("")
  ec2_root_device_name: root_device_name
  ec2_root_device_type: root_device_type
  ec2_security_group_ids: security_groups | map(attribute='group_id') | list |  join(',')
  ec2_security_group_names: security_groups | map(attribute='group_name') | list |  join(',')
  ec2_sourceDestCheck: source_dest_check | default(false) | lower | string
  ec2_spot_instance_request_id: spot_instance_request_id | default("")
  ec2_state: state.name
  ec2_state_code: state.code
  ec2_state_reason: state_reason.message if state_reason is defined else ""
  ec2_subnet_id: subnet_id | default("")
  ec2_tag_Name: tags.Name
  ec2_virtualization_type: virtualization_type
  ec2_vpc_id: vpc_id | default("")
filters:
  instance-state-name:
  - running
groups:
  ec2: true
hostnames:
  - network-interface.addresses.association.public-ip
  - dns-name
  - private-dns-name
keyed_groups:
  - key: image_id | regex_replace("[^A-Za-z0-9\_]", "_")
    parent_group: images
    prefix: ''
    separator: ''
  - key: placement.availability_zone
    parent_group: zones
    prefix: ''
    separator: ''
  - key: ec2_account_id | regex_replace("[^A-Za-z0-9\_]", "_")
    parent_group: accounts
    prefix: ''
    separator: ''
  - key: ec2_state | regex_replace("[^A-Za-z0-9\_]", "_")
    parent_group: instance_states
    prefix: instance_state
  - key: platform | default("undefined") | regex_replace("[^A-Za-z0-9\_]", "_")
    parent_group: platforms
    prefix: platform
  - key: instance_type | regex_replace("[^A-Za-z0-9\_]", "_")
    parent_group: types
    prefix: type
  - key: key_name | regex_replace("[^A-Za-z0-9\_]", "_")
    parent_group: keys
    prefix: key
  - key: placement.region
    parent_group: regions
    prefix: ''
    separator: ''
  - key: security_groups | map(attribute="group_name") | map("regex_replace", "[^A-Za-z0-9\_]", "_") | list
    parent_group: security_groups
    prefix: security_group
  - key: dict(tags.keys() | map("regex_replace", "[^A-Za-z0-9\_]", "_") | list | zip(tags.values()
      | map("regex_replace", "[^A-Za-z0-9\_]", "_") | list))
    parent_group: tags
    prefix: tag
  - key: tags.keys() | map("regex_replace", "[^A-Za-z0-9\_]", "_") | list
    parent_group: tags
    prefix: tag
  - key: vpc_id | regex_replace("[^A-Za-z0-9\_]", "_")
    parent_group: vpcs
    prefix: vpc_id
  - key: placement.availability_zone
    parent_group: '{{ placement.region }}'
    prefix: ''
    separator: ''
plugin: amazon.aws.aws_ec2
use_contrib_script_compatible_sanitization: true

10.2. Google Compute Engine

auth_kind: serviceaccount
compose:
  ansible_ssh_host: networkInterfaces[0].accessConfigs[0].natIP | default(networkInterfaces[0].networkIP)
  gce_description: description if description else None
  gce_id: id
  gce_image: image
  gce_machine_type: machineType
  gce_metadata: metadata.get("items", []) | items2dict(key_name="key", value_name="value")
  gce_name: name
  gce_network: networkInterfaces[0].network.name
  gce_private_ip: networkInterfaces[0].networkIP
  gce_public_ip: networkInterfaces[0].accessConfigs[0].natIP | default(None)
  gce_status: status
  gce_subnetwork: networkInterfaces[0].subnetwork.name
  gce_tags: tags.get("items", [])
  gce_zone: zone
hostnames:
- name
- public_ip
- private_ip
keyed_groups:
- key: gce_subnetwork
  prefix: network
- key: gce_private_ip
  prefix: ''
  separator: ''
- key: gce_public_ip
  prefix: ''
  separator: ''
- key: machineType
  prefix: ''
  separator: ''
- key: zone
  prefix: ''
  separator: ''
- key: gce_tags
  prefix: tag
- key: status | lower
  prefix: status
- key: image
  prefix: ''
  separator: ''
plugin: google.cloud.gcp_compute
retrieve_image_info: true
use_contrib_script_compatible_sanitization: true

10.3. Microsoft Azure Resource Manager

conditional_groups:
  azure: true
default_host_filters: []
fail_on_template_errors: false
hostvar_expressions:
  computer_name: name
  private_ip: private_ipv4_addresses[0] if private_ipv4_addresses else None
  provisioning_state: provisioning_state | title
  public_ip: public_ipv4_addresses[0] if public_ipv4_addresses else None
  public_ip_id: public_ip_id if public_ip_id is defined else None
  public_ip_name: public_ip_name if public_ip_name is defined else None
  tags: tags if tags else None
  type: resource_type
keyed_groups:
- key: location
  prefix: ''
  separator: ''
- key: tags.keys() | list if tags else []
  prefix: ''
  separator: ''
- key: security_group
  prefix: ''
  separator: ''
- key: resource_group
  prefix: ''
  separator: ''
- key: os_disk.operating_system_type
  prefix: ''
  separator: ''
- key: dict(tags.keys() | map("regex_replace", "^(.*)$", "\1_") | list | zip(tags.values() | list)) if tags else []
  prefix: ''
  separator: ''
plain_host_names: true
plugin: azure.azcollection.azure_rm
use_contrib_script_compatible_sanitization: true

10.4. VMware vCenter

compose:
  ansible_host: guest.ipAddress
  ansible_ssh_host: guest.ipAddress
  ansible_uuid: 99999999 | random | to_uuid
  availablefield: availableField
  configissue: configIssue
  configstatus: configStatus
  customvalue: customValue
  effectiverole: effectiveRole
  guestheartbeatstatus: guestHeartbeatStatus
  layoutex: layoutEx
  overallstatus: overallStatus
  parentvapp: parentVApp
  recenttask: recentTask
  resourcepool: resourcePool
  rootsnapshot: rootSnapshot
  triggeredalarmstate: triggeredAlarmState
filters:
- runtime.powerState == "poweredOn"
keyed_groups:
- key: config.guestId
  prefix: ''
  separator: ''
- key: '"templates" if config.template else "guests"'
  prefix: ''
  separator: ''
plugin: community.vmware.vmware_vm_inventory
properties:
- availableField
- configIssue
- configStatus
- customValue
- datastore
- effectiveRole
- guestHeartbeatStatus
- layout
- layoutEx
- name
- network
- overallStatus
- parentVApp
- permission
- recentTask
- resourcePool
- rootSnapshot
- snapshot
- triggeredAlarmState
- value
- capability
- config
- guest
- runtime
- storage
- summary
strict: false
with_nested_properties: true

10.5. Red Hat Satellite 6

group_prefix: foreman_
keyed_groups:
- key: foreman['environment_name'] | lower | regex_replace(' ', '') | regex_replace('[^A-Za-z0-9_]', '_') | regex_replace('none', '')
  prefix: foreman_environment_
  separator: ''
- key: foreman['location_name'] | lower | regex_replace(' ', '') | regex_replace('[^A-Za-z0-9_]', '_')
  prefix: foreman_location_
  separator: ''
- key: foreman['organization_name'] | lower | regex_replace(' ', '') | regex_replace('[^A-Za-z0-9_]', '_')
  prefix: foreman_organization_
  separator: ''
- key: foreman['content_facet_attributes']['lifecycle_environment_name'] | lower | regex_replace(' ', '') | regex_replace('[^A-Za-z0-9_]', '_')
  prefix: foreman_lifecycle_environment_
  separator: ''
- key: foreman['content_facet_attributes']['content_view_name'] | lower | regex_replace(' ', '') | regex_replace('[^A-Za-z0-9_]', '_')
  prefix: foreman_content_view_
  separator: ''
legacy_hostvars: true
plugin: theforeman.foreman.foreman
validate_certs: false
want_facts: true
want_hostcollections: false
want_params: true

10.6. OpenStack

expand_hostvars: true
fail_on_errors: true
inventory_hostname: uuid
plugin: openstack.cloud.openstack

10.7. Red Hat Virtualization

compose:
  ansible_host: (devices.values() | list)[0][0] if devices else None
keyed_groups:
- key: cluster
  prefix: cluster
  separator: _
- key: status
  prefix: status
  separator: _
- key: tags
  prefix: tag
  separator: _
ovirt_hostname_preference:
- name
- fqdn
ovirt_insecure: false
plugin: ovirt.ovirt.ovirt

10.8. Ansible Tower

include_metadata: true
inventory_id: <inventory_id or url_quoted_named_url>
plugin: awx.awx.tower
validate_certs: <true or false>

11. Supported Attributes for Custom Notifications

This section describes the list of supported job attributes and the proper syntax for constructing the message text for notifications. The supported job attributes are:

  • allow_simultaneous – (boolean) indicates if multiple jobs can run simultaneously from the job template associated with this job

  • controller_node – (string) the instance that managed the isolated execution environment

  • created – (datetime) timestamp when this job was created

  • custom_virtualenv – (string) custom virtual environment used to execute job

  • description – (string) optional description of the job

  • diff_mode – (boolean) if enabled, textual changes made to any templated files on the host are shown in the standard output

  • elapsed – (decimal) elapsed time in seconds that the job ran

  • execution_node – (string) node the job executed on

  • failed – (boolean) true if job failed

  • finished – (datetime) date and time the job finished execution

  • force_handlers – (boolean) when handlers are forced, they will run when notified even if a task fails on that host (note that some conditions – e.g. unreachable hosts – can still prevent handlers from running)

  • forks – (int) number of forks requested for job

  • id – (int) database id for this job

  • job_explanation – (string) status field to indicate the state of the job if it wasn’t able to run and capture stdout

  • job_slice_count – (integer) if run as part of a sliced job, the total number of slices (if 1, job is not part of a sliced job)

  • job_slice_number – (integer) if run as part of a sliced job, the ID of the inventory slice operated on (if not part of a sliced job, attribute is not used)

  • job_tags – (string) only tasks with specified tags will execute

  • job_type – (choice) run, check, or scan

  • launch_type – (choice) manual, relaunch, callback, scheduled, dependency, workflow, sync, or scm

  • limit – (string) playbook execution limited to this set of hosts, if specified

  • modified – (datetime) timestamp when this job was last modified

  • name – (string) name of this job

  • playbook – (string) playbook executed

  • scm_revision – (string) scm revision from the project used for this job, if available

  • skip_tags – (string) playbook execution skips over this set of tag(s), if specified

  • start_at_task – (string) playbook execution begins at the task matching this name, if specified

  • started – (datetime) date and time the job was queued for starting

  • status – (choice) new, pending, waiting, running, successful, failed, error, canceled

  • timeout – (int) amount of time (in seconds) to run before the task is canceled

  • type – (choice) data type for this job

  • url – (string) URL for this job

  • use_fact_cache – (boolean) if enabled for job, Tower acts as an Ansible Fact Cache Plugin, persisting facts at the end of a playbook run to the database and caching facts for use by Ansible

  • verbosity – (choice) 0 through 5 (corresponding to Normal through WinRM Debug)

  • host_status_counts – count of hosts uniquely assigned to each status:
    • skipped (integer)

    • ok (integer)

    • changed (integer)

    • failures (integer)

    • dark (integer)

    • processed (integer)

    • rescued (integer)

    • ignored (integer)

    • failed (boolean)

  • summary_fields:
    • inventory
      • id – (integer) database ID for inventory

      • name – (string) name of the inventory

      • description – (string) optional description of the inventory

      • has_active_failures – (boolean) (deprecated) flag indicating whether any hosts in this inventory have failed

      • total_hosts – (deprecated) (int) total number of hosts in this inventory.

      • hosts_with_active_failures – (deprecated) (int) number of hosts in this inventory with active failures

      • total_groups – (deprecated) (int) total number of groups in this inventory

      • groups_with_active_failures – (deprecated) (int) number of groups in this inventory with active failures

      • has_inventory_sources – (deprecated) (boolean) flag indicating whether this inventory has external inventory sources

      • total_inventory_sources – (int) total number of external inventory sources configured within this inventory

      • inventory_sources_with_failures – (int) number of external inventory sources in this inventory with failures

      • organization_id – (id) organization containing this inventory

      • kind – (choice) either an empty string (indicating hosts have a direct link with the inventory) or ‘smart’

    • project
      • id – (int) database ID for project

      • name – (string) name of the project

      • description – (string) optional description of the project

      • status – (choices) one of new, pending, waiting, running, successful, failed, error, canceled, never updated, ok, or missing

      • scm_type – (choice) one of (empty string), git, hg, svn, insights

    • job_template
      • id – (int) database ID for job template

      • name – (string) name of job template

      • description – (string) optional description for the job template

    • unified_job_template
      • id – (int) database ID for unified job template

      • name – (string) name of unified job template

      • description – (string) optional description for the unified job template

      • unified_job_type – (choice) unified job type (job, workflow_job, project_update, etc.)

    • instance_group
      • id – (int) database ID for instance group

      • name – (string) name of instance group

    • created_by
      • id – (int) database ID of user

      • username – (string) username

      • first_name – (string) first name

      • last_name – (string) last name

    • labels
      • count – (int) number of labels

      • results – list of dictionaries representing labels (e.g. {“id”: 5, “name”: “database jobs”})

Information about a job can be referenced in a custom notification message using grouped curly braces {{ }}. Specific job attributes are accessed using dotted notation, for example {{ job.summary_fields.inventory.name }}. Any characters used in front or around the braces, or plain text, can be added for clarification, such as ‘#’ for job ID and single-quotes to denote some descriptor. Custom messages can include a number of variables throughout the message:

{{ job_friendly_name }} {{ job.id }} ran on {{ job.execution_node }} in {{ job.elapsed }} seconds.

In addition to the job attributes, there are some other variables that can be added to the template:

  • approval_node_name – (string) the approval node name

  • approval_status – (choice) one of approved, denied, or timed_out

  • url – (string) URL of the job for which the notification is emitted (this applies to start, success, fail, and approval notifications)

  • workflow_url – (string) URL to the relevant approval node. This allows the notification recipient to go to the relevant workflow job page to see what’s going on (i.e., This node can be viewed at: {{ workflow_url }}). In cases of approval-related notifications, both url and workflow_url are the same.

  • job_friendly_name – (string) the friendly name of the job

  • job_metadata – (string) job metadata as a JSON string, for example:

    {'url': 'https://towerhost/$/jobs/playbook/13',
     'traceback': '',
     'status': 'running',
     'started': '2019-08-07T21:46:38.362630+00:00',
     'project': 'Stub project',
     'playbook': 'ping.yml',
     'name': 'Stub Job Template',
     'limit': '',
     'inventory': 'Stub Inventory',
     'id': 42,
     'hosts': {},
     'friendly_name': 'Job',
     'finished': False,
     'credential': 'Stub credential',
     'created_by': 'admin'}
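
As a further illustration, an approval-related notification message could combine several of the variables above. This is only a sketch; the variables available depend on the notification type and trigger:

Approval node "{{ approval_node_name }}" was {{ approval_status }}.
This node can be viewed at: {{ workflow_url }}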

12. Glossary

Ad Hoc

Refers to running Ansible to perform some quick command, using /usr/bin/ansible, rather than the orchestration language, which is /usr/bin/ansible-playbook. An example of an ad hoc command might be rebooting 50 machines in your infrastructure. Anything you can do ad hoc can be accomplished by writing a Playbook, and Playbooks can also glue lots of other operations together.
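
For example, an ad hoc command that restarts a service across a group of hosts might look like the following (the group name, module arguments, and fork count are illustrative):

ansible webservers -b -m service -a "name=httpd state=restarted" -f 10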

Callback Plugin

Refers to some user-written code that can intercept results from Ansible and do something with them. Some supplied examples in the GitHub project perform custom logging, send email, or even play sound effects.

Control Groups

Also known as ‘cgroups’, a control group is a feature in the Linux kernel that allows resources to be grouped and allocated to run certain processes. In addition to assigning resources to processes, cgroups can also report actual resource usage by all processes running inside of the cgroup.

Check Mode

Refers to running Ansible with the --check option, which does not make any changes on the remote systems, but only outputs the changes that might occur if the command ran without this flag. This is analogous to so-called “dry run” modes in other systems, though the user should be warned that this does not take into account unexpected command failures or cascade effects (which is true of similar modes in other systems). Use this to get an idea of what might happen, but it is not a substitute for a good staging environment.
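
For example (site.yml is an illustrative playbook name):

ansible-playbook site.yml --check
ansible-playbook site.yml --check --diff    # also preview changes to templated files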

Container Groups

Container Groups are a type of Instance Group that specify a configuration for provisioning a pod in a Kubernetes or OpenShift cluster where a job is run. These pods are provisioned on-demand and exist only for the duration of the playbook run.

Credentials

Authentication details that may be utilized by Tower to launch jobs against machines, to synchronize with inventory sources, and to import project content from a version control system.

Credential Plugin

Python code that contains definitions for an external credential type, its metadata fields, and the code needed for interacting with a secret management system.

Distributed Job

A job that consists of a job template, an inventory, and slice size. When executed, a distributed job slices each inventory into a number of “slice size” chunks, which are then used to run smaller job slices.

External Credential Type

A managed credential type for Tower used for authenticating with a secret management system.

Facts

Facts are simply things that are discovered about remote nodes. While they can be used in playbooks and templates just like variables, facts are things that are inferred, rather than set. Facts are automatically discovered when running plays by executing the internal setup module on the remote nodes. You never have to call the setup module explicitly, it just runs, but it can be disabled to save time if it is not needed. For the convenience of users who are switching from other configuration management systems, the fact module also pulls in facts from the ‘ohai’ and ‘facter’ tools if they are installed, which are fact libraries from Chef and Puppet, respectively.
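
You can inspect the facts that would be gathered for a host by invoking the setup module directly (the host name is illustrative and must exist in your inventory):

ansible web01.example.com -m setup

Fact gathering can also be skipped for a play by setting gather_facts: false when facts are not needed.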

Forks

Ansible and Tower talk to remote nodes in parallel and the level of parallelism can be set several ways–during the creation or editing of a Job Template, by passing --forks, or by editing the default in a configuration file. The default is a very conservative 5 forks, though if you have a lot of RAM, you can easily set this to a value like 50 for increased parallelism.
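
For example, the fork count can be raised for a single run on the command line or changed in the Ansible configuration file (the value 50 is illustrative):

ansible-playbook site.yml --forks 50

# ansible.cfg
[defaults]
forks = 50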

Group

A set of hosts in Ansible that can be addressed as a set, of which many may exist within a single Inventory.

Group Vars

The group_vars/ files are files that live in a directory alongside an inventory file, with an optional filename named after each group. This is a convenient place to put variables that will be provided to a given group, especially complex data structures, so that these variables do not have to be embedded in the inventory file or playbook.
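
For example, variables for a webservers group can live in a file next to the inventory (names and values are illustrative):

inventory               # defines a [webservers] group
group_vars/
    webservers.yml      # applied to every host in the webservers group

# group_vars/webservers.yml
http_port: 8080
max_clients: 200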

Handlers

Handlers are just like regular tasks in an Ansible playbook (see Tasks), but are only run if the Task contains a “notify” directive and also indicates that it changed something. For example, if a config file is changed then the task referencing the config file templating operation may notify a service restart handler. This means services can be bounced only if they need to be restarted. Handlers can be used for things other than service restarts, but service restarts are the most common usage.
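
A minimal playbook sketch of a task notifying a handler (the paths and service name are illustrative):

- hosts: webservers
  become: true
  tasks:
    - name: Deploy nginx configuration
      template:
        src: nginx.conf.j2
        dest: /etc/nginx/nginx.conf
      notify: restart nginx
  handlers:
    - name: restart nginx
      service:
        name: nginx
        state: restarted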

Host

A system managed by Tower, which may include a physical, virtual, cloud-based server, or other device. Typically an operating system instance. Hosts are contained in Inventory. Sometimes referred to as a “node”.

Host Specifier

Each Play in Ansible maps a series of tasks (which define the role, purpose, or orders of a system) to a set of systems. This “hosts:” directive in each play is often called the hosts specifier. It may select one system, many systems, one or more groups, or even some hosts that are in one group and explicitly not in another.

Instance Group

A group that contains instances for use in a clustered environment. An instance group provides the ability to group instances based on policy.

Inventory

A collection of hosts against which Jobs may be launched.

Inventory Script

A very simple program (or a complicated one) that looks up hosts, group membership for hosts, and variable information from an external resource–whether that be a SQL database, a CMDB solution, or something like LDAP. This concept was adapted from Puppet (where it is called an “External Nodes Classifier”) and works more or less exactly the same way.
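
When invoked with --list, an inventory script is expected to print JSON describing groups, their hosts and variables, and (optionally) per-host variables under _meta. A minimal sketch with illustrative group and host names:

{
  "webservers": {
    "hosts": ["web01.example.com", "web02.example.com"],
    "vars": {"http_port": 80}
  },
  "_meta": {
    "hostvars": {"web01.example.com": {"ansible_user": "deploy"}}
  }
}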

Inventory Source

Information about a cloud or other script that should be merged into the current inventory group, resulting in the automatic population of Groups, Hosts, and variables about those groups and hosts.

Job

One of many background tasks launched by Tower, this is usually the instantiation of a Job Template; the launch of an Ansible playbook. Other types of jobs include inventory imports, project synchronizations from source control, or administrative cleanup actions.

Job Detail

The history of running a particular job, including its output and success/failure status.

Job Slice

See Distributed Job.

Job Template

The combination of an Ansible playbook and the set of parameters required to launch it.

JSON

Ansible and Tower use JSON for return data from remote modules. This allows modules to be written in any language, not just Python.

Metadata

Information for locating a secret in the external system once authenticated. The user provides this information when linking an external credential to a target credential field.

Notification Template

An instance of a notification type (Email, Slack, Webhook, etc.) with a name, description, and a defined configuration.

Notification

A manifestation of the notification template; for example, when a job fails a notification is sent using the configuration defined by the notification template.

Notify

The act of a task registering a change event and informing a handler task that another action needs to be run at the end of the play. If a handler is notified by multiple tasks, it will still be run only once. Handlers are run in the order they are listed, not in the order that they are notified.

Organization

A logical collection of Users, Teams, Projects, and Inventories. The highest level in the Tower object hierarchy is the Organization.

Organization Administrator

A Tower user with the rights to modify the Organization’s membership and settings, including making new users and projects within that organization. An organization admin can also grant permissions to other users within the organization.

Permissions

The set of privileges assigned to Users and Teams that provide the ability to read, modify, and administer Projects, Inventories, and other Tower objects.

Plays

A playbook is a list of plays. A play is minimally a mapping between a set of hosts selected by a host specifier (usually chosen by groups, but sometimes by hostname globs) and the tasks which run on those hosts to define the role that those systems will perform. There can be one or many plays in a playbook.

Playbook

An Ansible playbook. Refer to http://docs.ansible.com/ for more information.

Policy

Policies dictate how instance groups behave and how jobs are executed.

Project

A logical collection of Ansible playbooks, represented in Tower.

Roles

Roles are units of organization in Ansible and Tower. Assigning a role to a group of hosts (or a set of groups, or host patterns, etc.) implies that they should implement a specific behavior. A role may include applying certain variable values, certain tasks, and certain handlers–or just one or more of these things. Because of the file structure associated with a role, roles become redistributable units that allow you to share behavior among playbooks–or even with other users.
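
The file structure associated with a role follows a fixed layout, for example (the role and file names are illustrative):

roles/
    common/
        tasks/main.yml         # tasks the role runs
        handlers/main.yml      # handlers the role defines
        templates/             # templates referenced by the tasks
        defaults/main.yml      # default values for the role's variables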

Secret Management System

A server or service for securely storing and controlling access to tokens, passwords, certificates, encryption keys, and other sensitive data.

Schedule

The calendar of dates and times for which a job should run automatically.

Sliced Job

See Distributed Job.

Source Credential

An external credential that is linked to the field of a target credential.

Sudo

Ansible does not require root logins and, since it is daemonless, does not require root level daemons (which can be a security concern in sensitive environments). Ansible can log in and perform many operations wrapped in a sudo command, and can work with both password-less and password-based sudo. Some operations that do not normally work with sudo (like scp file transfer) can be achieved with Ansible’s copy, template, and fetch modules while running in sudo mode.
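
In current Ansible and Tower this is expressed through privilege escalation ("become"), which uses sudo by default. For example (a sketch only; site.yml is an illustrative playbook name):

ansible-playbook site.yml --become --ask-become-pass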

Superuser

An admin of the Tower server who has permission to edit any object in the system, whether or not it is associated with an organization. Superusers can create organizations and other superusers.

Survey

Questions asked by a job template at job launch time, configurable on the job template.

Target Credential

A non-external credential with an input field that is linked to an external credential.

Team

A sub-division of an Organization with associated Users, Projects, Credentials, and Permissions. Teams provide a means to implement role-based access control schemes and delegate responsibilities across Organizations.

User

A Tower operator with associated permissions and credentials.

Webhook

Webhooks allow communication and information sharing between apps. They are used to respond to commits pushed to SCMs and launch job templates or workflow templates.

Workflow Job Template

A set consisting of any combination of job templates, project syncs, and inventory syncs, linked together in order to execute them as a single unit.

YAML

Ansible and Tower use YAML to define playbook configuration languages and also variable files. YAML has a minimum of syntax, is very clean, and is easy for people to skim. It is a good data format for configuration files and humans, but is also machine readable. YAML is fairly popular in the dynamic language community and the format has libraries available for serialization in many languages (Python, Perl, Ruby, etc.).