Kinetic Service Orchestration
Automated Orchestration of OpenStack Infrastructure Services
To initiate a full orchestration of the OpenStack environment:
salt-run state.orch orch
This rebuilds the environment based on whether each [type] is enabled in kinetic-pillar, in the /environment/hosts.sls file.
Example:
hosts:
  cache: (1)
    style: virtual
    enabled: False (2)
    count: 1
    ram: 8192000
    cpu: 2
    os: ubuntu2004
    disk: 512G
    networks:
      management:
        interfaces: [ens3]
1 This is the [type] value.
2 If the enabled parameter is set to True, the virtual machine or physical host will be built or rebuilt. If enabled is set to False, it will not be.
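For example, to have this cache host rebuilt on the next salt-run state.orch orch run, flip the flag in /environment/hosts.sls (a minimal sketch; every other parameter stays as shown above):

hosts:
  cache:
    style: virtual
    enabled: True    # rebuild this host on the next orchestration run
    count: 1
    ram: 8192000
    cpu: 2
    os: ubuntu2004
    disk: 512G
    networks:
      management:
        interfaces: [ens3]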
Manual Orchestration of OpenStack Infrastructure Services
Destroying the entire environment is not always necessary during a rebuild. Make sure you are on the correct Salt master before proceeding with these steps.
Zeroize Storage Nodes
If not rebuilding storage, you will only need to destroy the Ceph pools.
Destroying Ceph Pools
#!/bin/bash
ceph tell 'mon.*' injectargs --mon_allow_pool_delete=true
ceph osd pool delete vms vms --yes-i-really-really-mean-it
ceph osd pool delete images images --yes-i-really-really-mean-it
ceph osd pool delete volumes volumes --yes-i-really-really-mean-it
ceph osd pool delete device_health_metrics device_health_metrics --yes-i-really-really-mean-it
ceph osd pool delete .rgw.root .rgw.root --yes-i-really-really-mean-it
ceph osd pool delete default.rgw.log default.rgw.log --yes-i-really-really-mean-it
ceph osd pool delete default.rgw.control default.rgw.control --yes-i-really-really-mean-it
ceph osd pool delete default.rgw.meta default.rgw.meta --yes-i-really-really-mean-it
ceph osd pool delete default.rgw.buckets.index default.rgw.buckets.index --yes-i-really-really-mean-it
ceph osd pool delete default.rgw.buckets.data default.rgw.buckets.data --yes-i-really-really-mean-it
ceph tell 'mon.*' injectargs --mon_allow_pool_delete=false
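Once the pools are deleted, an optional sanity check is to list whatever pools remain (ceph osd pool ls is a standard Ceph command):

ceph osd pool ls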
If rebuilding Ceph, this will power off all of the physical storage nodes as well as remove their public keys.
salt 'controller*' state.apply /orch/states/virtual_zero pillar='{"type":"cephmon"}'; salt-key -d 'cephmon*' -y
salt 'storage*' system.poweroff
salt-key -d 'storage*' -y
Zeroize Virtual OpenStack Service Nodes
All currently running OpenStack services must be zeroized. This will vary based on the currently deployed services:
#!/bin/bash
# These are the core OpenStack services used in Kinetic
salt 'controller*' state.apply /orch/states/virtual_zero pillar='{"type":"volume"}'; salt-key -d 'volume*' -y
salt 'controller*' state.apply /orch/states/virtual_zero pillar='{"type":"heat"}'; salt-key -d 'heat*' -y
salt 'controller*' state.apply /orch/states/virtual_zero pillar='{"type":"swift"}'; salt-key -d 'swift*' -y
salt 'controller*' state.apply /orch/states/virtual_zero pillar='{"type":"designate"}'; salt-key -d 'designate*' -y
salt 'controller*' state.apply /orch/states/virtual_zero pillar='{"type":"cinder"}'; salt-key -d 'cinder*' -y
salt 'controller*' state.apply /orch/states/virtual_zero pillar='{"type":"horizon"}' ; salt-key -d 'horizon*' -y
salt 'controller*' state.apply /orch/states/virtual_zero pillar='{"type":"neutron"}' ; salt-key -d 'neutron*' -y
salt 'controller*' state.apply /orch/states/virtual_zero pillar='{"type":"nova"}' ; salt-key -d 'nova*' -y
salt 'controller*' state.apply /orch/states/virtual_zero pillar='{"type":"glance"}' ; salt-key -d 'glance*' -y
salt 'controller*' state.apply /orch/states/virtual_zero pillar='{"type":"keystone"}' ; salt-key -d 'keystone*' -y
salt 'controller*' state.apply /orch/states/virtual_zero pillar='{"type":"bind"}' ; salt-key -d 'bind*' -y
salt 'controller*' state.apply /orch/states/virtual_zero pillar='{"type":"rabbitmq"}' ; salt-key -d 'rabbitmq*' -y
salt 'controller*' state.apply /orch/states/virtual_zero pillar='{"type":"memcached"}' ; salt-key -d 'memcached*' -y
salt 'controller*' state.apply /orch/states/virtual_zero pillar='{"type":"mysql"}' ; salt-key -d 'mysql*' -y
salt 'controller*' state.apply /orch/states/virtual_zero pillar='{"type":"etcd"}' ; salt-key -d 'etcd*' -y
salt 'controller*' state.apply /orch/states/virtual_zero pillar='{"type":"network"}'; salt-key -d 'network*' -y
salt 'controller*' state.apply /orch/states/virtual_zero pillar='{"type":"guacamole"}'; salt-key -d 'guacamole*' -y
salt 'controller*' state.apply /orch/states/virtual_zero pillar='{"type":"placement"}'; salt-key -d 'placement*' -y
salt 'controller*' state.apply /orch/states/virtual_zero pillar='{"type":"haproxy"}'; salt-key -d 'haproxy*' -y
salt 'controller*' state.apply /orch/states/virtual_zero pillar='{"type":"cache"}'; salt-key -d 'cache*' -y
# These are the optional OpenStack Services used in Kinetic
salt 'controller*' state.apply /orch/states/virtual_zero pillar='{"type":"mds"}' ; salt-key -d 'mds*' -y
salt 'controller*' state.apply /orch/states/virtual_zero pillar='{"type":"zun"}'; salt-key -d 'zun*' -y
# If using network-ovn as the network backend, the ovsdb node needs to be zeroized, otherwise disregard
salt 'controller*' state.apply /orch/states/virtual_zero pillar='{"type":"ovsdb"}' ; salt-key -d 'ovsdb*' -y
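After the service VMs are zeroized, a quick way to confirm their minion keys are gone is to grep the accepted key list (the pattern below is only illustrative):

salt-key -L | grep -E 'keystone|glance|nova|neutron' || echo "no service keys remain"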
Zeroize Compute Nodes
This will power off all of the physical compute and container nodes and remove their public keys.
salt 'compute*' system.poweroff
salt-key -d 'compute*' -y
salt 'container*' system.poweroff
salt-key -d 'container*' -y
Zeroize Controller Nodes
If rebuilding the entire environment, this will power off all of the physical controller nodes as well as remove their public keys.
salt 'controller*' system.poweroff
salt-key -d 'controller*' -y
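At this point only the infrastructure minions (for example the pxe and salt hosts used in the next step) should still have accepted keys; listing them is a quick sanity check:

salt-key -L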
The next step is to clear any cached data on the Salt master:
#!/bin/bash
# clear any cached assignments for physical pxe boots
salt pxe* cmd.run 'rm -f /var/www/html/assignments/*'
salt pxe* cmd.run 'rm -rf /srv/tftp/assignments/*'
rm -rf /var/cache/salt/master/*
# These service restarts ensure that Salt pulls up-to-date code
systemctl restart salt-master
systemctl restart salt-minion
# Ensure custom modules, grains, pillar data, and mine data are all up to date
salt '*' saltutil.sync_all
salt '*' saltutil.refresh_pillar
salt '*' mine.update
salt 'salt*' mine.get pxe* redfish.gather_endpoints
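After the restarts and sync it is worth confirming that every remaining minion still responds before provisioning anything (standard Salt test module):

salt '*' test.ping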
Provision OpenStack Services
Ensure everything has been highstated before building the services.
salt \* state.highstate
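If highstating every minion at once puts too much load on the master, Salt's built-in batching can stagger the run (the batch size below is only an example):

salt --batch-size 10 '*' state.highstate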
The order of operations matters in order to satisfy the dependency requirements built into the automation.
Phase 0
If this is an initial build, the physical controller nodes will need to be built first.
salt-run state.orch orch.generate pillar='{"type":"controller"}'
This step requires the cache service to be built first, to help save bandwidth for package installations:
salt-run state.orch orch.generate pillar='{"type":"cache"}'
salt-run state.orch orch.generate pillar='{"type":"haproxy"}'
Due to current TNSR development, your NAT configuration may need to be manually pointed to the correct internal address for the haproxy service.
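The build_phase grain referenced in the Troubleshooting section below can be checked at any point to see how far a service has progressed, for example:

salt 'cache*' grains.get build_phase
salt 'haproxy*' grains.get build_phase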
Phase 1
salt-run state.orch orch.generate pillar='{"type":"mysql"}'
If you are rebuilding Ceph, the cephmon nodes need to be rebuilt during this phase.
salt-run state.orch orch.generate pillar='{"type":"cephmon"}'
If using network-ovn as the network backend, the ovsdb node needs to be created:
salt-run state.orch orch.generate pillar='{"type":"ovsdb"}'
salt-run state.orch orch.generate pillar='{"type":"etcd"}'
salt-run state.orch orch.generate pillar='{"type":"rabbitmq"}'
salt-run state.orch orch.generate pillar='{"type":"memcached"}'
salt-run state.orch orch.generate pillar='{"type":"bind"}'
The four services above (etcd, rabbitmq, memcached, and bind) can be provisioned simultaneously.
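One way to launch them in parallel is to background each runner call from a shell on the Salt master (a sketch; running them one at a time works just as well):

#!/bin/bash
# Provision the four independent Phase 1 services concurrently
for svc in etcd rabbitmq memcached bind; do
  salt-run state.orch orch.generate pillar="{\"type\": \"$svc\"}" &
done
wait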
Phase 2
salt-run state.orch orch.generate pillar='{"type":"keystone"}'
If you are rebuilding Ceph, the storage nodes need to be rebuilt during this phase.
salt-run state.orch orch.generate pillar='{"type":"storage"}'
Phase 3
# Dependencies completed in prior phases
salt-run state.orch orch.generate pillar='{"type":"glance"}'
salt-run state.orch orch.generate pillar='{"type":"horizon"}'
salt-run state.orch orch.generate pillar='{"type":"guacamole"}'
salt-run state.orch orch.generate pillar='{"type":"heat"}'
salt-run state.orch orch.generate pillar='{"type":"designate"}'
salt-run state.orch orch.generate pillar='{"type":"swift"}'
salt-run state.orch orch.generate pillar='{"type":"zun"}'
# Dependency Order Group
salt-run state.orch orch.generate pillar='{"type":"cinder"}'
salt-run state.orch orch.generate pillar='{"type":"volume"}'
# Dependency Order Group
salt-run state.orch orch.generate pillar='{"type":"placement"}'
salt-run state.orch orch.generate pillar='{"type":"nova"}'
Optional Kinetic Services
The following services are optional integrated services that can be used within the environment for additional capabilities, but are not routinely validated for functionality through orchestration.
salt-run state.orch orch.generate pillar='{"type":"sahara"}'
salt-run state.orch orch.generate pillar='{"type":"barbican"}'
salt-run state.orch orch.generate pillar='{"type":"magnum"}'
salt-run state.orch orch.generate pillar='{"type":"share"}'
salt-run state.orch orch.generate pillar='{"type":"mds"}'
salt-run state.orch orch.generate pillar='{"type":"cyborg"}'
salt-run state.orch orch.generate pillar='{"type":"jproxy"}'
salt-run state.orch orch.generate pillar='{"type":"gpu"}'
If using network-ovn as the network backend:
salt-run state.orch orch.generate pillar='{"type":"neutron"}'
If using openvswitch as the network backend:
# Dependency Order Group
salt-run state.orch orch.generate pillar='{"type":"neutron"}'
salt-run state.orch orch.generate pillar='{"type":"network"}'
Troubleshooting
The following are just a few examples; this is not meant to be a full troubleshooting guide.
Dependency Errors
This error indicates that a dependency for the service was not met. This may happen when a service does not complete a build phase.
[ERROR ] {'return': {'ready': False, 'type': 'neutron', 'comment': ['ovsdb-b5111677-cd25-5af8-8f04-f9169bbd685c is install but needs to be configure', 'ovsdb-c3906691-96df-5818-a688-eac4edd3d939 is install but needs to be configure', 'ovsdb-e1346c3d-b25e-5ade-b539-a659d208af6c is install but needs to be configure']}}
Alternatively, this may happen if a service was started too early while troubleshooting a broken build. The build_phase grain can be set manually with the following commands:
salt '<service>' grains.setval build_phase configure
salt '<service>' mine.update
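For instance, to clear the ovsdb dependency error shown above (the target pattern here is illustrative):

salt 'ovsdb*' grains.setval build_phase configure
salt 'ovsdb*' mine.update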