How to Use the VTEP Emulator
This document explains how to use ovs-vtep, a VXLAN Tunnel Endpoint (VTEP) emulator that uses Open vSwitch for forwarding. VTEPs are the entities that handle VXLAN frame encapsulation and decapsulation in a network.
Requirements

The VTEP emulator is a Python script that invokes calls to tools like vtep-ctl and ovs-vsctl. It is only useful when the Open vSwitch daemons ovsdb-server and ovs-vswitchd are installed and running. To do this, either:

- Follow the instructions in Open vSwitch on Linux, FreeBSD and NetBSD (don't start any daemons yet).
- Install the openvswitch-vtep package (on Debian-based machines). This will automatically start the daemons.
At the end of this process, you should have the following setup:
Architecture

+---------------------------------------------------+
|                  Host Machine                     |
|                                                   |
|                                                   |
|       +---------+ +---------+                     |
|       |         | |         |                     |
|       |   VM1   | |   VM2   |                     |
|       |         | |         |                     |
|       +----o----+ +----o----+                     |
|            |           |                          |
| br0 +------o-----------o--------------------o--+  |
|            p0          p1                br0      |
|                                                   |
|              +------+    +------+                 |
+--------------| eth0 |----| eth1 |-----------------+
               +------+    +------+
               10.1.1.1    10.2.2.1
    MANAGEMENT    |           |
+-----------------o-----+     |
                              |
                 DATA/TUNNEL  |
           +------------------o---+
Some important points:

- We will use Open vSwitch to create our “physical” switch labeled br0.
- Our “physical” switch br0 will have one internal port, also named br0, and two “physical” ports, namely p0 and p1.
- The host machine may have two external interfaces. We will use eth0 for management traffic and eth1 for tunnel traffic (a single interface can serve both purposes). Please take note of their IP addresses in the diagram. You do not have to use exactly the same IP addresses; just know that the addresses above are used in the steps below.
- You can optionally connect physical machines instead of virtual machines to switch br0. In that case:
  - Make sure you have two extra physical interfaces in your host machine, p0 and p1.
  - In the rest of this doc, replace p0 and p1 with the names of those physical interfaces.
- In addition to implementing p0 and p1 as physical interfaces, you can also optionally implement them as standalone TAP devices, or VM interfaces for simulation.
- Creating and attaching the VMs is outside the scope of this document; they are included in the diagram for reference purposes only.
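If you want to exercise the setup without physical NICs or VMs, one way to implement p0 and p1 for simulation is as standalone TAP devices. A minimal sketch using standard iproute2 commands (requires root; the interface names are simply the ones used throughout this document) might look like:

```shell
# Create p0 and p1 as TAP devices so they can stand in for
# the "physical" ports during simulation.
$ ip tuntap add mode tap p0
$ ip tuntap add mode tap p1

# Bring the interfaces up so OVS can forward traffic on them.
$ ip link set p0 up
$ ip link set p1 up
```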
Startup

These instructions describe how to run with a single ovsdb-server instance that handles both the OVS and VTEP schemas. You can skip steps 1-3 if you installed using the debian packages as mentioned in step 2 of the “Requirements” section.
1. Create the initial OVS and VTEP schemas:

$ ovsdb-tool create /etc/openvswitch/ovs.db vswitchd/vswitch.ovsschema
$ ovsdb-tool create /etc/openvswitch/vtep.db vtep/vtep.ovsschema
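As a quick sanity check, ovsdb-tool can report the schema version recorded in each newly created database file; this is a read-only check that does not require any daemon to be running:

```shell
# Confirm that both database files were created from valid schemas.
$ ovsdb-tool db-version /etc/openvswitch/ovs.db
$ ovsdb-tool db-version /etc/openvswitch/vtep.db
```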
2. Start ovsdb-server and have it handle both databases:

$ ovsdb-server --pidfile --detach --log-file \
    --remote=punix:/var/run/openvswitch/db.sock \
    --remote=db:hardware_vtep,Global,managers \
    /etc/openvswitch/ovs.db /etc/openvswitch/vtep.db
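Before continuing, it is worth confirming that ovsdb-server actually loaded both databases. The ovsdb-server/list-dbs appctl command prints the names of the databases being served; you should expect to see both Open_vSwitch and hardware_vtep in the output:

```shell
# List the databases served by the running ovsdb-server.
$ ovs-appctl -t ovsdb-server ovsdb-server/list-dbs
```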
3. Start ovs-vswitchd as normal:

$ ovs-vswitchd --log-file --detach --pidfile \
    unix:/var/run/openvswitch/db.sock
4. Create a “physical” switch and its ports in OVS:

$ ovs-vsctl add-br br0
$ ovs-vsctl add-port br0 p0
$ ovs-vsctl add-port br0 p1
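A quick way to confirm that the bridge and its ports were created on the OVS side:

```shell
# Display the current OVS configuration; br0 with ports p0 and p1
# should appear in the output.
$ ovs-vsctl show
```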
5. Configure the physical switch in the VTEP database:

$ vtep-ctl add-ps br0
$ vtep-ctl set Physical_Switch br0 tunnel_ips=10.2.2.1
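To confirm the VTEP-side configuration, vtep-ctl can summarize the hardware_vtep contents and dump the physical switch record, including the tunnel_ips column set above:

```shell
# Summarize the VTEP configuration and inspect the br0 record.
$ vtep-ctl show
$ vtep-ctl list Physical_Switch br0
```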
6. Start the VTEP emulator. If you installed the components by following Open vSwitch on Linux, FreeBSD and NetBSD, run the following from the vtep directory of the source tree:

$ ./ovs-vtep --log-file=/var/log/openvswitch/ovs-vtep.log \
    --pidfile=/var/run/openvswitch/ovs-vtep.pid \
    --detach br0

If the installation was done by installing the openvswitch-vtep package, ovs-vtep is already installed; run it from wherever the package placed it.
7. Configure the VTEP database's manager to point at an NVC:

$ vtep-ctl set-manager tcp:<CONTROLLER IP>:6640

Where <CONTROLLER IP> is your controller's IP address that is accessible via the Host Machine's eth0 interface.
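You can read the manager target back to verify the setting took effect:

```shell
# Print the configured manager target(s) for the VTEP database.
$ vtep-ctl get-manager
```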
Simulating an NVC
A VTEP implementation expects to be driven by a Network Virtualization Controller (NVC), such as NSX. If one does not exist, it’s possible to use vtep-ctl to simulate one:
1. Create a logical switch:

$ vtep-ctl add-ls ls0
2. Bind the logical switch to a port:

$ vtep-ctl bind-ls br0 p0 0 ls0
$ vtep-ctl set Logical_Switch ls0 tunnel_key=33
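To double-check the binding, vtep-ctl can list the VLAN-to-logical-switch bindings on a given port; VLAN 0 on p0 should map to ls0:

```shell
# List the logical switch bindings for port p0 on physical switch br0.
$ vtep-ctl list-bindings br0 p0
```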
3. Direct unknown destinations out a tunnel.
For handling L2 broadcast, multicast and unknown unicast traffic, packets can be sent to all members of a logical switch referenced by a physical switch. The “unknown-dst” address below is used to represent these packets. There are different modes to replicate the packets.

The default mode of replication is to send the traffic to a service node, which can be a hypervisor, server or appliance, and to let the service node handle replication to other transport nodes (hypervisors or other VTEP physical switches). This mode is called service node replication, and it is considered a basic requirement because it only requires sending the packet to a single transport node.

An alternate mode of replication, called source node replication, involves the source node sending to all other transport nodes. Hypervisors are always responsible for doing their own replication for locally attached VMs in both modes.

The following configuration is for service node replication mode, as only a single transport node destination is specified for the unknown-dst address:
$ vtep-ctl add-mcast-remote ls0 unknown-dst 10.2.2.2
4. Optionally, change the replication mode from the default of service_node to source_node, which can be done at the logical switch level:

$ vtep-ctl set-replication-mode ls0 source_node
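The configured mode can be read back per logical switch to confirm the change:

```shell
# Print the replication mode currently set on ls0.
$ vtep-ctl get-replication-mode ls0
```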
5. Direct unicast destinations out a different tunnel:

$ vtep-ctl add-ucast-remote ls0 00:11:22:33:44:55 10.2.2.3
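After populating the remote MAC entries, the logical switch's MAC tables can be inspected; remote (tunnel-reachable) entries, including unknown-dst, and locally learned entries are listed separately:

```shell
# Show remote and locally learned MAC entries for ls0.
$ vtep-ctl list-remote-macs ls0
$ vtep-ctl list-local-macs ls0
```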