[Nutanix] Networking in AHV for vSphere administrators


Acropolis Hypervisor (AHV) is growing in both features and adoption. What is most impressive is the way Nutanix leverages KVM (open source) while making it very easy to use, and networking in AHV is a brilliant example of that. In this post I would like to share some thoughts on host networking and VM networking.

Open vSwitch (OVS) is the core of networking in AHV. OVS functions as a Layer-2 switch that learns and maintains a MAC address table. Each AHV host runs an OVS instance, and these instances combine to form one logical switch. To state it simply: if you have a six-node Nutanix cluster, each node runs one OVS instance, and the six instances combine to form a single logical switch. So by default it is a distributed switch. New learning, and therefore new terminology.


A bond groups physical uplinks into one logical interface; in other words, it is NIC teaming if you are coming from the Windows world. By default, br0-up is created during the initial imaging of the hypervisor and is later renamed bond0 if you follow the admin guide. Traffic across the bond needs load balancing, and the available load balancing algorithms are covered in a later section.


A table is the best way to list the various port types.

Port Type: Description
Tap: The interface created when a VM is powered on; it is deleted when the VM is powered off.
Internal: For every bridge you create, a single internal port is set up with the same name as the bridge. This port provides access to the AHV host and carries the external-facing IP address of AHV.
VXLAN: Created per bridge; used for IP Address Management (IPAM).
Bond: A bond port is deployed for each bond you create.
Vnet0: A special port on br0 where the CVM's internal interface terminates.

In the screenshot below I have captured all four port types (in red).

Four port types (in red)

Load Balancing Modes

Now that we are speaking of bonds, the natural question is how load is balanced between the NICs. In the table below I explain each algorithm along with the recommendation from Nutanix.

Load Balancing Mode: Description
Active-Backup: The default mode and the one recommended by Nutanix. This method is similar to vSphere's Route Based on Originating Virtual Port ID. Only one interface is active at a time, which is a downside if you choose to see it that way. It is the simplest method, as it needs no changes on the physical switch side.
Balance-SLB: Both network cards are active, so an AHV host with two 10 Gbps adapters gets 20 Gbps of aggregate throughput, though an individual VM cannot go beyond 10 Gbps. It is recommended that the physical switches be stacked or interconnected, as this reduces the impact of MAC address movement; in ordinary cases both switches connect to the same core, in which case interconnection is not required. Traffic placement is decided by source MAC (similar to VMware's Route Based on Source MAC), but it is a little more intelligent: every 10 seconds OVS checks the load per source-MAC hash and moves hashes between NICs based on load, an approach similar in nature to the vSphere Distributed Switch's Route Based on Physical NIC Load. Nutanix recommends changing this interval to 60 seconds.
Balance-tcp: Here you need two switches aggregated using LACP, which allows up to 20 Gbps per VM on an AHV node with two 10 Gbps adapters. Nutanix does not recommend this mode unless the switches are already configured for LACP.


How to change the load balancing mode
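The mode is set on the OVS bond port. A minimal sketch, assuming the bond is named br0-up (substitute bond0 if you renamed it per the admin guide):

```shell
# On the AHV host as root (or from the CVM via: ssh root@192.168.5.1 "<command>").
# Assumption: the bond port is named br0-up.
ovs-vsctl set port br0-up bond_mode=balance-slb

# Valid modes include active-backup, balance-slb, and balance-tcp.
```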


How to change the default rebalancing interval
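OVS expresses the rebalance interval in milliseconds, so the 60-second value Nutanix recommends is 60000. A sketch, again assuming the bond is named br0-up:

```shell
# Assumption: bond port named br0-up; 60000 ms = 60 seconds.
ovs-vsctl set port br0-up other_config:bond-rebalance-interval=60000
```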


NB: Please note the above command must be used only when balance-slb is deployed.

How to check the current bond configuration
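A sketch, assuming the bond is named br0-up; ovs-appctl prints the bond mode, the member interfaces, and which one is currently active:

```shell
# Show bond mode, members, and which interface is currently active.
ovs-appctl bond/show br0-up

# A broader view of bridges, ports, and interfaces on the host.
ovs-vsctl show
```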


How to configure fallback mode

Please note the command below should be combined with LACP and must be run BEFORE changing the mode to LACP.
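A sketch, assuming the bond is named br0-up. The fallback setting tells OVS to drop back to active-backup if LACP negotiation with the switch fails, which is why it must be in place before the bond is switched to LACP:

```shell
# Set the fallback first (assumption: bond port named br0-up).
ovs-vsctl set port br0-up other_config:lacp-fallback-ab=true

# Only then enable LACP together with balance-tcp.
ovs-vsctl set port br0-up lacp=active bond_mode=balance-tcp
```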



To learn AHV networking quickly, let me put across a scenario. The best way is to use examples of tasks we normally do in vCenter: we either create an additional switch, or add port groups to vSwitch0. I am covering both cases. Case:01 Create a standard switch, and Case:02 Create port groups in a standard switch.

We will create the three port groups listed below on the AHV node and attach VMs to them.

 Item Portgroup Name VLAN
1 Production 130
2 Development 140
3 Testing 150


The diagram below depicts the physical layout of the networking we aim to achieve.

Networking in Nutanix AHV

Exercise:01 Create a Standard Switch

In AHV it is not called a standard switch, but I use the term to keep things familiar. Since AHV is based on KVM, the switch is Open vSwitch (OVS), and OVS is the abbreviation with which most of the commands start. In OVS, when you wish to connect the virtual world to the physical world you must have a bridge. Bridges are named brX, where X is a number. In the vCenter world you deploy a virtual switch and connect physical Network Interface Cards (NICs) to reach the outside world, and a default virtual switch is created when you install ESXi. Likewise, when you deploy AHV, a default bridge br0 is created. Does that mean we never need to build a bridge? Not exactly: per AHV best practices, you should segregate the 10 GbE and 1 GbE interfaces using separate bridges.

As a first step we need to create a bridge. The command itself is the simplest part; the tricky part is where to execute it. The official guide advises doing so from the CVM using allssh, but because I am running a single-node cluster on Nutanix CE, I have little choice but to use AHV directly. Log in to AHV with root credentials and run the following command:
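A sketch of what I ran on the AHV host; on a multi-node cluster, the official guide's allssh approach from the CVM achieves the same thing on every host:

```shell
# On the AHV host as root: create a second bridge named br1.
ovs-vsctl add-br br1
```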


I have created br1 for this exercise. Since I am a beginner, it is a good idea to confirm it; typing the command repeatedly also ingrains it in your memory.
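Listing the bridges confirms the new one exists:

```shell
# br0 (default) and the new br1 should both be listed.
ovs-vsctl list-br
```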





List the Bridges created in Nutanix AHV

What next? Attach network cards to it, obviously. But how do I determine which NICs to attach to this switch? By default, all NICs are attached to br0. Before proceeding, we must explicitly create a bond (teaming, in the ordinary world).

In my case I just took a guess, but in the real world it is going to be much easier. I selected the last two NICs and decided to use these interfaces to create bond1.

Open a PuTTY session to the CVM and run the following command:
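A sketch of the bond creation, using the interface names from my lab; yours will differ (e.g. eth2,eth3 on physical hardware):

```shell
# From the CVM: attach the two chosen interfaces to br1 as bond1.
# Assumption: eno50332184/eno67109408 are the names in my virtualized CE lab.
manage_ovs --bridge_name br1 --bond_name bond1 --interfaces eno50332184,eno67109408 update_uplinks
```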


How to create a bond in Nutanix AHV

eno50332184 and eno67109408 are the interface names. Since I have virtualized Nutanix CE, these names are a reflection of that.

At this stage, we have created a Bond (bond1) which will be balancing traffic on these two interfaces.

Let me validate it. From the CVM run the following command:
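A sketch of the validation command:

```shell
# From the CVM: show the uplink (bond and member NICs) configuration of br1.
manage_ovs --bridge_name br1 show_uplinks
```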


Validate if bond created in Nutanix AHV

Below is the logical relationship between the bridge, bond, and NICs.

Schematic exhibiting Bridge, Bond, and Interface in Nutanix AHV

Exercise:02 Create Port Groups

Now let's move on to creating port groups, which is the goal of Case:02. Creating port groups is simple, but how do we ensure VMs are attached to br1 and not to br0? By default the Prism interface does not let you select where the port group will be created; you have to use acli.
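A sketch of the acli command, using the Production port group from our table (VLAN 130):

```shell
# From the CVM: create the Production network on bridge br1, tagged VLAN 130.
acli net.create Production vswitch_name=br1 vlan=130
```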


Let me explain the above command. Production is the label of the port group. vswitch_name is the attribute that decides which virtual switch (bridge) the port group is created on, which in our case is br1. There are other attributes, which I will explain in a bit.

Create Port Groups in Nutanix AHV

Similarly, I have created Development and Testing port groups.
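The other two port groups follow the same pattern; a sketch with the VLANs from our table:

```shell
# From the CVM: the remaining networks on br1.
acli net.create Development vswitch_name=br1 vlan=140
acli net.create Testing vswitch_name=br1 vlan=150
```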

Let me show you what the Prism portal displays.
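If you prefer the CLI over Prism, the networks can also be listed from the CVM:

```shell
# Lists each network with its UUID and VLAN.
acli net.list
```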


You might notice I have created the port groups with VLAN 0: since my lab has no physical switch or NSX, I cannot route between VLANs anyway.

Now let's create VMs. This is a bit important: you must attach the VMs to the right port groups (bridges), as we have removed the physical NICs from br0 and attached them to br1.

NB: We are not touching the CVM here. The CVM remains on br0, where its NICs stay teamed.

IP Address Management (IPAM)

To assign IP addresses to servers automatically, IPAM is an option available in AHV networking. If you have worked on remote sites, e.g. branch offices where deploying a DHCP server is not economical, you can use the IPAM service instead. VXLAN is used to deliver the IP address to the requestor.

Network IP Address/Prefix Length: The subnet you wish to serve via the built-in DHCP service. Ensure it is in CIDR notation.

When you specify the network address, make sure you enter the actual network address of the subnet (the address with all host bits set to zero) and not the first usable host address in the range. Non-networking folks often make this mistake.

Gateway IP Address: The default gateway address of the subnet.
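The same network creation can carry the IPAM settings; a sketch with made-up addresses (the Prod-IPAM name, the 10.10.130.0/24 subnet, and the .1 gateway are assumptions for illustration only):

```shell
# From the CVM: create an IPAM-managed network. ip_config takes the
# default gateway together with the subnet's prefix length (example values).
acli net.create Prod-IPAM vswitch_name=br1 vlan=130 ip_config=10.10.130.1/24
```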

Configure Domain Settings: In this section, you have standard DHCP options like

  1. DNS IP Address
  2. Domain Search/ DNS Suffix
  3. Domain Name
  4. TFTP Server Name
  5. Boot File Name
Enable IPAM in Nutanix AHV

The IP address is assigned immediately when a VM NIC is created on an IPAM-enabled network, and it stays with the VM NIC for its entire lifecycle; it is released only when you delete the NIC or remove the VM. The DHCP address is leased to the VM NIC for 20 years (highlighted in orange). Since the DHCP server address is not overridden, the DHCP offers come from the default server address (highlighted in yellow); by default, the last address in the range is used unless you explicitly override it.

DHCP Lease in Nutanix AHV