Physical server connections

The number of interfaces needed per host is dependent on the purpose of the cloud, the security and performance requirements of the organization, and the availability of hardware.

A single interface per server, resulting in a combined control and data plane, is all that is needed for a fully operational OpenStack cloud. Many organizations choose to deploy their cloud this way, especially when port density is at a premium. A collapsed network may also be acceptable when the environment is used only for testing, or when a network failure at the node level would have little impact. It is my recommendation, however, that you split control and data traffic across multiple interfaces whenever possible.

Single interface

For hosts using a single interface, all traffic to and from instances, as well as internal OpenStack, SSH management, and API traffic, traverses the same interface. This configuration can result in severe performance degradation, as a single guest can effectively launch a denial-of-service attack against its host by consuming all available bandwidth.

The following diagram demonstrates the use of a single physical interface for all traffic when using the Open vSwitch driver. On the controller node, a single interface is connected to a bridge and handles external, guest, management, and API service traffic. On the compute node, a single interface handles guest and management traffic:

Figure 1.2: Single interface

In the preceding diagram, all OpenStack service and management traffic traverses the same physical interface as guest traffic.
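
To make the collapsed layout concrete, the following is a minimal sketch of what a single-interface host might look like on an Ubuntu system with Open vSwitch. The interface name eth0, the bridge name br-ex, and the addresses are illustrative assumptions rather than values prescribed by this book:

    # Create the provider bridge and attach the host's only physical interface.
    # Note: a mistake here can sever SSH access, since management traffic
    # shares this same link.
    ovs-vsctl add-br br-ex
    ovs-vsctl add-port br-ex eth0

    # /etc/network/interfaces -- the host address moves from eth0 to the
    # bridge, which now carries management, API, overlay, and external traffic.
    auto eth0
    iface eth0 inet manual

    auto br-ex
    iface br-ex inet static
        address 10.254.254.100
        netmask 255.255.255.0
        gateway 10.254.254.1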

Multiple interfaces

To reduce the likelihood of guest network bandwidth consumption impacting management traffic and to maintain a more robust security posture, segregation of traffic between multiple physical interfaces is recommended. At a minimum, two interfaces should be used: one that serves as a dedicated interface for management and API traffic and another that serves as a dedicated interface for external and guest traffic. Additional interfaces can be used to further segregate traffic. The following diagram demonstrates traffic split across two physical interfaces when using the Open vSwitch driver:

Figure 1.3: Multiple interfaces

In the preceding diagram, a dedicated physical interface on the controller node handles OpenStack API and management traffic. Another interface is connected to a bridge that handles external and overlay traffic to and from Neutron routers and other network resources. On the compute node, a dedicated physical interface is used for management traffic and another for overlay traffic to and from instances.
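
As a rough illustration of the two-interface split, assume eth0 retains the host's management and API address while eth1 is dedicated to external and guest traffic; the names are assumptions made for this sketch, not a prescribed layout:

    # eth0 keeps its management/API address and is left untouched.
    # Only the dedicated data interface is handed over to Open vSwitch:
    ovs-vsctl add-br br-ex
    ovs-vsctl add-port br-ex eth1

    # Confirm the bridge now contains the physical port
    ovs-vsctl show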

In this book, the environment will be built using three interfaces: one for management and API traffic, one for external or VLAN tenant network traffic, and another for overlay network traffic:

Figure 1.4: Multiple interfaces

In the preceding diagram, a dedicated interface on the controller node handles OpenStack API and management traffic. Another dedicated interface handles overlay traffic, and another is connected to a bridge that handles external traffic. On the compute node, a dedicated interface is used for management traffic, another dedicated interface handles overlay traffic, and another is connected to a bridge that handles VLAN traffic when VLAN tenant networks are in use.
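
The following is a hedged sketch of what such a three-interface layout might look like in /etc/network/interfaces on an Ubuntu host. The interface names, the bridge name br-eth2, and the addresses are placeholders chosen for illustration and are not necessarily those used in later chapters:

    # eth0: management and API traffic
    auto eth0
    iface eth0 inet static
        address 10.254.254.100
        netmask 255.255.255.0
        gateway 10.254.254.1

    # eth1: overlay (VXLAN/GRE) network traffic
    auto eth1
    iface eth1 inet static
        address 172.18.0.100
        netmask 255.255.255.0

    # eth2: external/VLAN provider traffic; no address of its own, it is
    # simply attached to a bridge (for example, ovs-vsctl add-port br-eth2 eth2)
    auto eth2
    iface eth2 inet manual

The overlay interface carries an IP address because tunnel endpoints terminate on the host itself, whereas the provider interface does not, since tagged or untagged tenant traffic is bridged straight through it.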

Bonding

NIC bonding offers users the ability to multiply available bandwidth by aggregating links. Two or more physical interfaces can be combined to create a single virtual interface, or bond, which can then be placed in a bridge or used as a regular interface. The technology most commonly used to accomplish this is the Link Aggregation Control Protocol, or LACP, defined in IEEE 802.3ad (now IEEE 802.1AX). The physical switching infrastructure must be capable of supporting this type of bond. Legacy switching devices required all links of a bond to be connected to the same switch, while modern devices support technologies such as vPC (virtual Port Channel) and MLAG (Multi-Chassis Link Aggregation) that allow the links of a bond to be split across two switches. This provides hardware redundancy between switches while giving users the full bandwidth of the bond under normal operating conditions, with no changes to the server configuration necessary.

In addition to aggregating interfaces, bonding can also refer to creating redundant links in an active/passive manner. Both links are cabled to a switch or pair of switches, but only one interface is active at any given time. Both types of bond can be created within the operating system when the appropriate kernel module is installed. Bonding can also be configured within Open vSwitch if desired, but doing so is outside the scope of OpenStack and this book.

Bonding interfaces can be an inexpensive way to provide hardware-level network redundancy to the cloud infrastructure. If you are interested in configuring NIC bonding on your hosts, refer to the Ubuntu bonding page at https://help.ubuntu.com/community/UbuntuBonding.
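
As a rough example of the Ubuntu ifupdown syntax, an LACP (802.3ad) bond might be declared along the following lines; the interface names, options, and address are illustrative, and the page referenced above remains the authoritative guide:

    # /etc/network/interfaces -- requires the ifenslave package and the
    # bonding kernel module
    auto eth2
    iface eth2 inet manual
        bond-master bond0

    auto eth3
    iface eth3 inet manual
        bond-master bond0

    auto bond0
    iface bond0 inet static
        address 192.168.100.60
        netmask 255.255.255.0
        bond-mode 802.3ad       # LACP; use active-backup for active/passive
        bond-miimon 100
        bond-lacp-rate fast
        bond-slaves eth2 eth3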

Note

Bonding configurations vary greatly between Linux distributions. Refer to the respective documentation of your Linux distribution for assistance in configuring bonding.
