
The implementation journey

While a big part of this book focuses on building and configuring the SDDC, it is important to mention that there are also non-technical aspects to consider. Creating a new way of operating and running your data center will always involve people. It is therefore worth briefly touching on this part of the SDDC as well. Basically, there are three major players when it comes to a fundamental change in any data center, as shown in the following image:

(Figure: The implementation journey)

There are three major topics relevant for every successful SDDC deployment. As with the tools principle, these three disciplines need to work together in order to enable the change and make sure that all benefits can be fully leveraged.

These three categories are:

  • People
  • Process
  • Technology

The process category

Data center processes are as established and settled as IT itself. Beginning with the first operator tasks, such as changing tapes or starting procedures, up to today's highly sophisticated processes that ensure service deployment and management work as expected, they have already come a long way. However, some of these processes might no longer be fit for purpose once automation is applied to a data center. To build an SDDC, it is very important to revisit data center processes and adapt them to work with the new automation tasks. The tools will offer integration points into processes, but it is equally important to remove bottlenecks from the processes themselves. Keep in mind that if you automate a bad process, the process will still be bad, just fully automated. So it is also necessary to revisit those processes so that they become lean and effective as well.

Remember Tom, the data center manager. He has successfully identified that they need an SDDC to fulfill the business requirements and has also mapped use cases to IT capabilities. While this mapping mainly describes what IT needs to deliver technically, it also implies that the current IT processes need to adapt to this new delivery model.

The process change example in Tom's organization

When the compute department works on a service involving OS deployment, they need to fill out an Excel sheet with IP addresses and server names and send it to the networking department. The network admins ensure that there is no double booking by reserving the IP address and approving the requested hostname. Once the uniqueness of this data has been confirmed, the name and IP are added to the organization's DNS server.

The manual part of this process is no longer feasible once the data center enters the automation era. Imagine that every time somebody orders a service involving a VM/OS deployment, the network department gets an e-mail containing the Excel sheet with the IP and hostname combination. The whole process would have to stop until this step is manually finished.

To overcome this, the process has to be changed to use an automated solution for IP address management (IPAM). The new process has to track IPs and hostnames programmatically to ensure there is no duplication within the entire data center. Also, after successfully checking the uniqueness of the data, the record has to be added to the Domain Name System (DNS).
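
To illustrate the idea, the following is a minimal Python sketch of such a programmatic check. The IPAM endpoints and hostnames are hypothetical placeholders rather than the API of any specific product; in a real SDDC this logic would be provided by the IPAM tool or the automation platform itself.

    import requests

    # Hypothetical IPAM REST endpoint -- not a real product API, just a sketch.
    IPAM_API = "https://ipam.example.com/api/v1"

    def reserve_address(hostname, ip):
        """Reserve an IP/hostname pair only if neither is already in use."""
        # Check for duplicates instead of relying on a manually maintained Excel sheet.
        existing = requests.get(f"{IPAM_API}/records",
                                params={"hostname": hostname, "ip": ip},
                                timeout=10)
        existing.raise_for_status()
        if existing.json():  # any hit means a double booking
            raise ValueError(f"{hostname}/{ip} is already reserved")

        # Reserve the pair in IPAM ...
        requests.post(f"{IPAM_API}/records",
                      json={"hostname": hostname, "ip": ip},
                      timeout=10).raise_for_status()

        # ... and register it in DNS (hypothetical endpoint; this could equally be
        # nsupdate, a PowerShell DNS cmdlet, or the DNS server's own API).
        requests.post(f"{IPAM_API}/dns/a-records",
                      json={"name": hostname, "address": ip},
                      timeout=10).raise_for_status()

    if __name__ == "__main__":
        reserve_address("web01.example.com", "10.10.20.15")

The important point is not the specific code, but that the uniqueness check and the DNS update happen without any human handover.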

While this is a simple example of one small process, there are normally a large number of processes involved which need to be reviewed for a fully automated data center. This is a very important task and should not be underestimated, since it can make the difference between success and failure of an SDDC.

Think about all the other processes in place that are used to control the deploy/enable/install mechanics in your data center. Here is a small example list of questions to ask about established processes:

  • What is our current IPAM/DNS process?
  • Do we need to consider a CMDB integration?
  • What is our current ticketing (ITSM) process?
  • What is our process to get resources from the network, storage, and compute teams?
  • What OS/VM deployment process is currently in place?
  • What is our process to deploy an application (handovers, steps, or departments involved)?
  • What does our current approval process look like?
    • Do we need a technical approval to deliver a service?
    • Do we need a business approval to deliver a service?
  • What integration process do we have for a service/application deployment?
    • DNS, Active Directory (AD), Dynamic Host Configuration Protocol (DHCP), routing, Information Technology Infrastructure Library (ITIL), and so on

The approval questions are normally an exception to the automation, since approvals (either technical or business) are meant to be manual in the first place. If the answers to all the other example questions involve human interaction as well, consider changing these processes so that they can be fully automated by the SDDC.

Since human intervention creates waiting times, it has to be avoided during service deployments in any automated data center. Think of the robotic assembly lines today's car manufacturers are using. The processes they have implemented, refined over years of experience, are all designed to stop the line only in case of an emergency.

The same holds true for the SDDC: design your processes to enable automated deployment, and stop the automation only in case of an emergency.

Identifying processes is the simple part; changing them is the tricky part. However, keep in mind that this is an all-new model of IT delivery, so there is no golden way of doing it. Once you have committed to changing those processes, keep monitoring whether they truly fulfill their purpose.

This leads to another process principle in the SDDC: Continual Service Improvement (CSI). Revisit what you have changed from time to time and make sure that those processes are still working as expected; if they are not, change them again.

The people category

Since every data center is run by people, it is important to consider that a change of technology will also impact those people. There are claims that an SDDC can be run with only half the staff, or save a couple of employees, since everything is automated.

The truth is that an SDDC will transform IT roles in a data center. This means that some classic roles might vanish, while new ones will be created by this change.

It is unrealistic to say that you can run an automated data center with half the staff you had before. But it is realistic to say that your staff can concentrate on innovation and development instead of spending 100% of their time keeping the lights on. This is the change an automated data center introduces: it opens up the possibility for current administrators to evolve into more architecture- and design-focused roles.

The people example in Tom's organization

Currently, there are two admins in the compute department working for Tom. They manage and maintain the virtual environment, which is largely VMware vSphere. They create VMs manually, deploy an OS via a network install routine (which was a requirement for physical installs, so they kept the process), and then hand the finished VMs over to the next department to install the service they are meant for.

Recently they have experienced a lot of demand for VMs, and each of them configures 10 to 12 VMs per day. Given this, they cannot concentrate on other aspects of their job, such as improving OS deployments or the handover process.

At a first look, it seems like the SDDC might replace these two employees since the tools will largely automate their work. But that is like saying a jackhammer will replace a construction worker.

Actually, their roles will shift towards architecture. They need to come up with a template for OS installations and a way to further automate the deployment process. They might also need to add new services and components to the SDDC in order to continuously fulfill the business needs.

So instead of creating all the VMs manually, they are now focused on designing a blueprint that can be replicated as easily and efficiently as possible.

While their tasks might have changed, their work is still important to operate and run the SDDC. And given that they now focus on design and architectural tasks, they also have the time to introduce innovative functions and additions to the data center.

Keep in mind that an automated data center affects all departments in an IT organization. This means that the tasks of the network and storage teams, as well as the application and database teams, will change too. In fact, in an SDDC it is practically impossible to keep operating the departments disconnected from each other, since a deployment will affect all of them.

This also implies that all of these departments will have admins shifting to higher-level functions in order to make the automation possible. In the industry, this shift is often referred to as Operational Transformation. It basically means that not only do the tools have to be in place, you also have to change the way the staff operates the data center. In most cases, organizations decide to form a so-called center of excellence (CoE) to administer and operate the automated data center.


This virtual group of admins in a data center is very similar to project groups in traditional data centers. The difference is that these people should be permanently assigned to the CoE for an SDDC. Typically you might have one champion from each department taking part in this virtual team.

Each person acts as an expert and ambassador for their department. This principle ensures that decisions and overlapping processes are well defined and work across the departments. Also, as an ambassador, each participant should advertise the new functionality within their department and enable their colleagues to fully support the new data center approach.

Each member of the CoE needs solid technical expertise as well as good communication skills.

The technology category

This is the third corner of the triangle for successfully implementing an SDDC in your environment. It is often the part that receives most of the attention, sometimes at the expense of the other two. However, it is important to note that all three topics need to be considered equally. Think of it like a three-legged chair: if one leg is missing, it cannot stand.

The term technology does not only refer to the new tools required to deploy services. It also refers to already established technology, which has to be integrated with the automation toolset (often referred to as third-party integration). This might be your AD, DHCP server, e-mail system, and so on.

There might also be technology that is not enabling or empowering data center automation, so instead of only thinking about adding tools, consider which tools should be removed or replaced. This is a normal IT lifecycle task that has gone through many iterations already. Think of the fax machine or the telex; you probably do not use them anymore, since they have been replaced by e-mail and messaging.

The technology example in Tom's organization

The team uses some tools to make their daily work easier when it comes to new service deployments. One of these tools is a small graphical user interface for quickly adding content to AD. The admins use it to set the hostname and organizational unit (OU), as well as to create the computer account. This was meant to save admin time, since they do not have to open all the various menus in the AD configuration to accomplish these tasks.

With automated service delivery, this has to be done programmatically. Once a new OS is deployed, the deployment tool has to add it to AD, including all requirements. Since AD offers an API, this can easily be automated and integrated into the deployment automation. Instead of painfully integrating the graphical tool, this is now done by interfacing directly with the organization's AD, ultimately replacing the old graphical tool.
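
As a rough illustration (the domain controller name, credentials, OU path, and attribute values are made-up placeholders, and the exact attributes depend on your AD policies), creating such a computer account programmatically could look like this using the open source ldap3 Python library:

    from ldap3 import Server, Connection, NTLM

    # All names and credentials below are placeholders for this sketch.
    server = Server("dc01.example.com", use_ssl=True)
    conn = Connection(server,
                      user="EXAMPLE\\svc-deploy",
                      password="change-me",
                      authentication=NTLM,
                      auto_bind=True)

    # Create the computer account in the target OU, as the old GUI tool did manually.
    dn = "CN=WEB01,OU=Servers,DC=example,DC=com"
    conn.add(dn,
             object_class=["top", "person", "organizationalPerson", "user", "computer"],
             attributes={
                 "sAMAccountName": "WEB01$",
                 "dNSHostName": "web01.example.com",
                 "userAccountControl": 4096,  # WORKSTATION_TRUST_ACCOUNT
             })
    print(conn.result)  # shows whether the domain controller accepted the object
    conn.unbind()

In a vRealize Orchestrator environment, the same step would typically be modeled as a workflow (for example, using its Active Directory plugin) rather than a standalone script.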

The automated deployment of a service across the entire data center requires a fair amount of communication. Not communication in the traditional sense, but machine-to-machine communication leveraging programmable interfaces. Using such APIs is another important aspect of the applied data center technologies. Most of today's data center tools, from backup all the way up to web servers, come with APIs. The better the API is documented, the easier the integration into the automation tool. In some cases, you might need the vendors to support you with the integration of their tools.

If you have identified a tool in the data center that does not offer any API or even a command-line interface (CLI), try to find a way around this software or consider replacing it with a new tool.

APIs are the equivalent of handovers in the manual world. The better the communication between the tools works, the faster and easier the deployment will be completed. To coordinate and control all this communication, you will need far more than a collection of scripts. This is a task for an orchestrator, which can run all the necessary integration workflows from a central point. The orchestrator acts like the conductor of a large orchestra and will form the backbone of your SDDC.
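
To make the conductor analogy concrete, here is a deliberately simplified sketch of a central deployment workflow. Each function is a hypothetical stand-in for a real API integration (IPAM, DNS, AD, and the hypervisor layer); an orchestrator such as vRealize Orchestrator would model the same chain as workflow elements instead of plain Python functions.

    # Minimal orchestration sketch: each step is a placeholder for a real API integration.

    def reserve_ip(hostname):          # e.g. call the IPAM REST API
        return "10.10.20.15"

    def register_dns(hostname, ip):    # e.g. create the A record via the DNS API
        print(f"DNS: {hostname} -> {ip}")

    def create_ad_account(hostname):   # e.g. add the computer object via LDAP
        print(f"AD: computer account for {hostname}")

    def deploy_vm(hostname, ip):       # e.g. clone a template via the hypervisor API
        print(f"VM: deploying {hostname} with {ip}")

    def provision_service(hostname):
        """Run every handover as machine-to-machine calls; stop only on real errors."""
        ip = reserve_ip(hostname)
        register_dns(hostname, ip)
        create_ad_account(hostname)
        deploy_vm(hostname, ip)

    if __name__ == "__main__":
        provision_service("web01.example.com")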

Why are these three topics so important?

The technology aspect closes the triangle and brings the people and process parts together. If the processes are not altered to fit the new deployment methods, automation will be painful and complex to implement. If the deployment stops at some point because a process requires manual intervention, the people will have to fill in this gap.

This means that they now have new roles, but also need to maintain some of their old tasks to keep the process running. With such an unbalanced implementation of an automated data center, the workload for people can actually increase, while the service delivery times may not decrease dramatically. This may lead to individual admins avoiding the automated tasks, since manual intervention might be seen as faster.

So it is very important to treat all three aspects as the main parts of the SDDC implementation journey. They all need to be addressed equally and thoughtfully to unveil the benefits and improvements an automated data center has to offer.

However, keep in mind that this truly is a journey. An SDDC is not implemented in days, but in months. Given this, the implementation team in the data center also has this time to adapt themselves and their processes to this new way of delivering IT services. In addition, all the departments involved and their leads need to be part of this procedure.

An SDDC implementation is always a team effort.
