Last year I presented at the local Cisco DCUG to a warm and receptive audience about deploying Cisco UCS Director on a global scale. At the time I was working for a global pharmaceutical company, and following some organisational changes the requirements of the business, and in turn IT, changed to match. A key part of the changes focused on global standardisation of IT infrastructure to ensure 24x7 operational support, and the best way to achieve that goal was to look at automation and orchestration. Cisco UCS Director was the tool chosen at the time. UCS Director is an absolute beast of a product, and it reflects badly on Cisco how poorly they have marketed and managed it, because it has the potential to be the one-stop shop for infrastructure management.
Create a global platform to enable physical and virtual automation based on standardised templates and processes.
- Drive standardisation across 14 global sites, reduce management overheads and complexities
- Put the company in a position to leverage follow-the-sun support for infrastructure and minimise out-of-hours support at each local site
- Provide a secure platform that could easily meet strict auditing guidelines
- Deliver a mechanism to allow end-users to quickly and easily request new virtual machines
- Streamline the request for infrastructure processes and remove existing bottlenecks
- Drive the business towards a Private Cloud architecture rather than individual silos
- Reduce licensing costs across the business for multiple existing automation and orchestration platforms.
- Provide a cost model and service catalog to quickly inform projects of their estimated potential costs
- Integration into the existing service management tool
- Integration into HP Quality Center for auditing and quality-control purposes, allowing installation verification scripts to be completed
Early in the project it was decided to leverage Cisco UCS Director. The product was already installed at the Australian site, and with the server hardware and general data center networking hardware being Cisco-aligned it made sense to leverage the existing licensing for these products to enable automation and orchestration. A number of products were considered, such as VMware vRealize Automation, but with the requirement for physical equipment automation and the financial benefits, UCS Director was the obvious choice. It must be said, however, that UCSD has a steep learning curve and can be quite complex, so that was also factored into the decision.
I reached out to our Cisco SEs and discussed the options available to us for deployment. The options were:
1. Install a separate instance at each site – this would mean manual transfer of data between sites and reduced overall manageability, but licensing would be locally financed, which followed the existing finance model
2. Install one instance in a centralised location – overall this was the best and least complex option, but latency across the WAN links between sites could cause problems
3. Install one instance but break the services out across multiple servers rather than running a single appliance for everything – this would be necessary for a huge infrastructure, but even taking all global infrastructure into account a single appliance could easily manage the load
The system was going to span three primary zones: APAC (Australia), EMEA (Germany) and US (East Coast). Latency was always going to be the killer for such a deployment, so a proof of concept was run to verify that no performance issues would be experienced. Two POCs were run. The key metric was the deployment time for VMs within the APAC region when requested from the other two zones. A number of self-provisioning requests were carried out and the timings recorded. Due to the time zone differences and backup windows there were times when a deployment was either excessively slow or failed, but overall it was decided that a centralised instance based on the US East Coast was the best option.
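For anyone wanting to run a similar POC, the timing side of it can be captured with a small harness like the sketch below. This is a minimal sketch: `submit_vm_request` is a hypothetical stand-in for whatever self-provisioning call your POC exercises, and only the timing and summary logic is shown.

```python
import statistics
import time

def time_deployments(submit_vm_request, runs):
    """Time a series of self-provisioning requests and summarise the results.

    submit_vm_request: callable taking (site, template), submitting one VM
        request and blocking until it completes; returns True on success.
        (Hypothetical stand-in for the real provisioning call.)
    runs: list of (site, template) pairs to request.
    """
    timings = []
    for site, template in runs:
        start = time.monotonic()
        ok = submit_vm_request(site, template)
        elapsed = time.monotonic() - start
        timings.append({"site": site, "template": template,
                        "seconds": elapsed, "succeeded": ok})
    # Summarise only the successful runs; failures are counted separately.
    succeeded = [t["seconds"] for t in timings if t["succeeded"]]
    summary = {
        "requests": len(timings),
        "failures": len(timings) - len(succeeded),
        "median_seconds": statistics.median(succeeded) if succeeded else None,
        "max_seconds": max(succeeded) if succeeded else None,
    }
    return timings, summary
```

Recording both the median and the worst case matters here, because it was the occasional excessively slow or failed deployment during backup windows, not the average, that drove the decision.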
Prior to the implementation there were some heated discussions around the best way to architect and design the platform. With UCSD it's not always possible to go back at a later date and modify all the configurations, so it was best to design correctly from the outset. Each site, or at least the primary sites within each region, would be set up as its own POD. The POD would in turn be configured with the relevant infrastructure items for that site. Standardised policies would then be leveraged for production, non-production and manufacturing across all sites, with each site having its own dedicated policy that could be updated independently if needed without impacting other locations. This allowed for changes to local legislation without impacting global requirements.
The steps to getting the sites up and running involved:
1. Set up the integration accounts for the physical equipment
2. Create a site
3. Create a POD
4. Add physical accounts (Storage, Compute)
5. Add Managed Network Elements (Network)
6. Add Virtual Accounts (VMware, Hyper-V)
7. Define your policies
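Much of the above can also be driven programmatically, since UCS Director exposes a REST API in which each call names an operation (`opName`) plus a JSON `opData` payload, authenticated with an `X-Cloupia-Request-Key` header. Below is a hedged sketch of building such a request; the host, key and `opData` shape are illustrative, so verify the exact operation names and payload schemas against the API guide for your UCSD release.

```python
import json
from urllib.parse import urlencode

def build_ucsd_request(host, api_key, op_name, op_data):
    """Build the URL and headers for a UCS Director REST API call.

    UCSD's REST endpoint takes formatType, opName and a JSON-encoded
    opData as query parameters; authentication is via the
    X-Cloupia-Request-Key header (the API key from the user's profile).
    """
    query = urlencode({
        "formatType": "json",
        "opName": op_name,
        "opData": json.dumps(op_data),
    })
    url = f"https://{host}/app/api/rest?{query}"
    headers = {"X-Cloupia-Request-Key": api_key}
    return url, headers

# Example: list the catalogs visible to the API user. The operation name
# is real, but the opData shape can vary between UCSD releases.
url, headers = build_ucsd_request(
    "ucsd.example.com", "0123456789abcdef",
    "userAPIGetAllCatalogs", {"param0": ""})
```

Separating request construction from transport like this also makes it easy to log or review exactly what would be sent before pointing it at a production instance.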
The policies are really the beating heart of UCS Director: they define what can and cannot be done within the various physical and virtual components of the infrastructure. I'm not going to go into the policies here, but if you would like to know more then look no further than Eric Shanks' guide over on The IT Hollow, which is an outstanding resource. At a high level you'll need to define your policies for each vDC (Virtual Data Center), which is a container for how you want to carve up the infrastructure within your POD. For example, you could have a Prod vDC and a non-Prod vDC within each site.

Following the core configuration pieces, the next step is to start building out the catalog of items you want to make available for consumption by the business. This is done by creating new workflows and publishing them via the Service Delivery mechanism. In the case of our deployment, a standardised workflow was created for a Windows Server template that could be deployed to all VMware vCenter instances globally and contained the required settings for service accounts, domain joins, NTP settings and so on. An individual workflow was created for each site, but the content of each workflow was the same; only the references to site-specific infrastructure differed. With each site initially managed independently it was going to take some time to standardise infrastructure from a global perspective, but in the long term the aim was for a minimal set of workflows deploying to multiple sites.
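The per-site workflow pattern described above can be sketched as building one `opData` payload per site for UCSD's `userAPISubmitWorkflowServiceRequest` operation, with only the infrastructure references varying. The site overrides, input names and workflow names here are invented for illustration; the operation exists in the UCSD REST API, but check your version's documentation for the exact `param0`/`param1`/`param2` layout before use.

```python
# Illustrative per-site infrastructure references (hypothetical values).
SITE_OVERRIDES = {
    "APAC": {"vcenter": "apac-vc01",   "vlan": "110"},
    "EMEA": {"vcenter": "emea-vc01",   "vlan": "210"},
    "US":   {"vcenter": "useast-vc01", "vlan": "310"},
}

def workflow_op_data(site, hostname):
    """Build the opData payload to submit one site's Windows Server workflow.

    The workflow content is identical across sites; only the vCenter and
    VLAN references injected as workflow inputs differ per site.
    """
    overrides = SITE_OVERRIDES[site]
    inputs = [
        {"name": "hostname", "value": hostname},
        {"name": "vcenter",  "value": overrides["vcenter"]},
        {"name": "vlan",     "value": overrides["vlan"]},
    ]
    return {
        "param0": f"Deploy Windows Server - {site}",  # per-site workflow name
        "param1": {"list": inputs},                   # workflow input values
        "param2": -1,                                 # no parent service request
    }
```

Keeping the shared inputs in one function and the differences in a small lookup table mirrors the long-term goal: collapsing the per-site workflows into a minimal set once the underlying infrastructure is standardised.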
The outcome of the project was a platform that could manage, interact with and configure the required physical and virtual infrastructure for each site from any other site, plus a standardised approach across all sites for VM deployment. That was the first phase of the project; additional phases were planned to incorporate Linux VM deployment, storage LUN/share configuration and deployment, and network infrastructure configuration. The intention was also to provide showback to other departments and assist in the decision-making process for business initiatives and projects.
For me personally, the outcome of presenting at the DCUG was a foundational confidence in presenting to small groups, which meant I put myself forward for presentations at other user groups and IT events.
I would highly recommend reading all the information over on The IT Hollow, as well as checking out the Cisco UCSD Workflow Index over at the Cisco Communities site. I have also provided a link to the presentation that was made at the DCUG, which will hopefully be of some use to someone.