Prerequisites and Guidelines
For all new deployments, we recommend using Cisco Application Services Engine instead, as described in the Deploying in Cisco Application Services Engine chapter. However, if you still want to deploy the Orchestrator cluster directly in VMware ESX VMs, you can follow the guidelines and procedures in this chapter.
This chapter covers deployment of a 3-node Multi-Site Orchestrator cluster. If you want to set up a single-node Orchestrator (for example, for testing purposes), follow the instructions in the Installing Single Node Orchestrator chapter instead.
Deployment Method
When deploying in ESX VMs, you can choose one of the following two approaches:
- Use Cisco-provided Python scripts to deploy the entire Multi-Site Orchestrator cluster. The scripts allow you to execute the deployment and later upgrades remotely, for example from your laptop, as long as you have access to the vCenter where the Orchestrator VMs are to be deployed.

  This is the preferred approach when deploying an Orchestrator cluster in ESX VMs because it automates a number of manual steps and allows remote execution of Cisco ACI Multi-Site Orchestrator installation and subsequent software upgrades.
- Use an OVA image to deploy each Orchestrator VM individually. In this case, you can choose to deploy the image either using vCenter or directly on the ESX server.
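If you choose the OVA approach, the deployment can also be driven from a command line. The following is a minimal sketch, not one of the Cisco-provided scripts: it assumes VMware's ovftool is installed, and the OVA file name, VM name, datastore, network mapping, and vCenter path are placeholders rather than values from this guide.

```python
# Hedged sketch: deploy one Orchestrator VM from an OVA using VMware's ovftool,
# invoked from Python. All names below (OVA file, VM name, datastore, network
# mapping, vCenter path) are placeholders, not values from this guide.
import subprocess

cmd = [
    "ovftool",
    "--acceptAllEulas",
    "--name=msc-node1",            # VM name for this Orchestrator node (placeholder)
    "--datastore=datastore1",      # target datastore (placeholder)
    "--net:VM Network=MSO-Mgmt",   # map the OVA's source network to a port group (placeholder)
    "msc.ova",                     # Orchestrator OVA image (placeholder filename)
    "vi://admin@vcenter.example.com/DC1/host/Cluster1",
]

# Repeat for each of the three cluster nodes, adjusting the VM name and settings.
subprocess.run(cmd, check=True)
```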
Docker Subnet Considerations
The Multi-Site Orchestrator application services run in Docker containers. When deployed, Docker uses a number of internal networks for its own application services (bridge, docker_gwbridge) as well as for the Orchestrator services (msc_msc).
You can configure custom networks for the Docker services during Orchestrator deployment. Two additional parameters are available in the Python configuration file or the OVA template:
Note: When configuring these networks, ensure that they are unique and do not overlap with any existing networks in the environment.
- Application overlay: The default address pool to be used for Docker internal bridge networks.

  The application overlay must be a /16 network. Docker then splits this network into two /24 subnets used for the internal bridge and docker_gwbridge networks.

  For example, if you set the application overlay pool to 192.168.0.0/16, Docker will use 192.168.0.0/24 for the bridge network and 192.168.1.0/24 for the docker_gwbridge network, as illustrated in the sketch after this list.

- Service overlay: The default Docker overlay network IP.

  The service overlay must be a /24 network and is used for the msc_msc Orchestrator Docker service network.
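As a quick illustration of the split described above, the following uses Python's standard ipaddress module; it is not part of the Orchestrator installer, and the example networks are placeholders.

```python
# Illustration only: how a /16 application overlay yields the two /24 subnets
# Docker uses, plus a simple overlap check against an existing network.
import ipaddress

app_overlay = ipaddress.ip_network("192.168.0.0/16")
bridge, docker_gwbridge = list(app_overlay.subnets(new_prefix=24))[:2]
print(bridge)            # 192.168.0.0/24 -> internal "bridge" network
print(docker_gwbridge)   # 192.168.1.0/24 -> "docker_gwbridge" network

# Verify the pool does not overlap an existing network in your environment.
existing = ipaddress.ip_network("10.0.0.0/24")   # placeholder existing network
print(app_overlay.overlaps(existing))            # False -> no conflict with this one
```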
Network Time Protocol (NTP)
Multi-Site Orchestrator uses NTP for clock synchronization, so you must have an NTP server configured in your environment. You provide NTP server information as part of the Orchestrator installation procedure.
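Before you begin the installation, it can be useful to confirm that the NTP server you plan to specify is reachable. This is an optional check, not part of the Cisco procedure; it assumes the third-party ntplib package and a placeholder server name.

```python
# Optional pre-check: query the NTP server you plan to use during installation.
# Assumes "pip install ntplib"; the server name is a placeholder.
import ntplib

client = ntplib.NTPClient()
response = client.request("ntp.example.com", version=3, timeout=5)

# Offset between the local clock and the NTP server, in seconds.
print(f"NTP server reachable, clock offset: {response.offset:.3f} s")
```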
Note: VMware Tools provides an option to synchronize a VM's time with the host; however, you should use only one type of periodic time synchronization in your VMs. Because you will enable NTP during Multi-Site Orchestrator deployment, ensure that VMware Tools periodic time synchronization is disabled for the Orchestrator VMs.
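If you manage the VMs programmatically, periodic time synchronization can also be turned off through the vSphere API. The following is a minimal sketch, assuming the pyVmomi library; the vCenter host, credentials, and VM name are placeholders, and this is not part of the Orchestrator installation procedure.

```python
# Hedged sketch: disable VMware Tools periodic time synchronization for an
# Orchestrator VM via pyVmomi. Host, credentials, and VM name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()   # lab only; validate certificates in production
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="password",
                  sslContext=context)
content = si.RetrieveContent()

# Find the Orchestrator VM by name (placeholder name).
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "msc-node1")

# Turn off host/guest time sync so NTP inside the VM is the only periodic source.
spec = vim.vm.ConfigSpec(tools=vim.vm.ToolsConfigInfo(syncTimeWithHost=False))
vm.ReconfigVM_Task(spec=spec)

Disconnect(si)
```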
VMware vSphere Requirements
The following table summarizes the VMware vSphere requirements for Multi-Site Orchestrator:
- You must not enable vMotion for Multi-Site Orchestrator VMs.

  vMotion is not supported with Docker swarm, which is used by the Multi-Site Orchestrator.
- You must ensure that the following vCPU, memory, and disk space requirements are reserved for each VM and are not part of a shared resource pool:
| Orchestrator Version | Requirements |
|---|---|
| Release 3.0(1) or later | |