Quantum Open vSwitch plugin and GRE tunnels

First off, let me just say that I knew nothing about Open vSwitch prior to using OpenStack. Open vSwitch is essentially an open source piece of software that gives you the ability to create switches in software, and it is OpenFlow compatible, so naturally it is a great candidate for the OpenStack project. In my last couple of blog posts, I didn't know that much about how Open vSwitch uses GRE tunnels to achieve tenant segregation. For example, I had no idea how virtual machines that existed in one VLAN could communicate with a default gateway for their subnet that WASN'T in their VLAN. I FINALLY understand how. It's actually very clever: the people who wrote the code for this plugin really leverage the functionality of Open vSwitch by using OpenFlow rules to manipulate VLAN tags on frames as they come into a virtual bridge. Let me show you what I mean.

Essentially, on each compute node and on the network node you have two switches that exist in software and are created by Open vSwitch: the integration bridge, or br-int, and the tunnel bridge, or br-tun. The GRE tunnels terminate on the tunnel bridge, and the VMs of course connect to the br-int bridge. The br-tun and br-int switches are connected together with what's called a patch port. (Figure 1.1)


Figure 1.1
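If you want to see this layout for yourself, the standard Open vSwitch command line tools will show it (the exact port names you see will depend on your environment):

```shell
# List the bridges OVS has created on this node (expect br-int and br-tun)
ovs-vsctl list-br

# Show the full layout: both bridges, the patch ports that connect them,
# and the GRE ports that terminate the tunnels on br-tun
ovs-vsctl show
```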

So why is Open vSwitch so neat? I guess an example would probably be best to describe it.

As soon as a virtual machine is spun up and attached to a network, it gets plugged into the br-int bridge on the compute node, and it is also put into a VLAN on that br-int. HOWEVER, the VLAN tag is only used to separate traffic locally on the compute node. For example, say a network is created in one tenant and the SAME network is created in another tenant, with a virtual machine on each. If those two virtual machines exist on the same compute node, they WILL be in separate VLANs. Ya, it's a little crazy.
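You can see that locally significant tag on the VM's port on br-int. The tap device name below is a made-up placeholder for illustration; substitute the actual tap interface of one of your VMs:

```shell
# Show the VLAN tag OVS assigned to this VM's port on br-int.
# "tap3a7d0e11" is a hypothetical name; use your VM's real tap device.
ovs-vsctl get Port tap3a7d0e11 tag
```

Run it for two VMs on "identical" networks in different tenants on the same compute node and you should see two different tags.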

The thing that I couldn’t figure out in my last post is how a virtual machine gets to its default gateway when the gateway and the virtual machine aren’t in the same VLAN. The answer is a combination of GRE tunnels AND OpenFlow rules on the br-tun bridge. Here’s a really high level view of how it works.

The virtual machine needs to go to its gateway.

Assume the virtual machine already has an ARP entry for the gateway.

So the virtual machine starts sending out traffic tagged with the VLAN that it is in, out through br-int on the compute node.

Refer to the flow of traffic below. This flow is from the virtual machine to the default gateway for the virtual network that was created previously.


NOW, when the traffic gets to br-tun, this is where the magic/SDN part happens. There is a flow rule on br-tun that basically says: for anything from a particular VLAN, slap a GRE key on the traffic as it leaves the br-tun bridge. Figure 1.2 shows the packet capture with the GRE key appended, and Figure 1.3 shows the actual rule in place on br-tun that makes it happen.


Figure 1.2


Figure 1.3 – Flow rules on br-tun

If you want to see the above flows, go to a compute node and issue the following command: ovs-ofctl dump-flows br-tun

The bottom two rules are what I am talking about. It’s very clever. And very powerful.
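To give you an idea of what a rule like that looks like, here is an illustrative sketch of a dump-flows entry from a lab setup. The VLAN ID, tunnel key, and port number are examples only; yours will differ:

```shell
# On the compute node's br-tun: frames arriving from br-int over the
# patch port (in_port=1 here) tagged with local VLAN 1 get GRE key 0x2
# stamped on them before heading out the tunnel.
ovs-ofctl dump-flows br-tun
#  e.g.  priority=4,in_port=1,dl_vlan=1 actions=set_tunnel:0x2,NORMAL
```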

Now the traffic ends up getting to br-tun on the network node, where there is ANOTHER flow rule that does the opposite. It says: for anything coming into the br-tun bridge with a GRE tag/key of X, CHANGE the VLAN tag to what is locally configured on the br-int. For example, if the gateway exists on VLAN 5 but the traffic from the VM was tagged with VLAN 2 when it left the VM, a flow rule will change that tag when the traffic reaches the br-tun bridge on the network node. Then it will be forwarded like a normal switch out the patch port towards br-int with the new VLAN tag.
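Again purely for illustration, and using made-up IDs consistent with the example above, the reverse rule on the network node's br-tun looks something like this:

```shell
# On the network node's br-tun: anything arriving with GRE key 0x2 gets
# re-tagged with the locally significant VLAN (5 here) and then forwarded
# normally out the patch port towards br-int.
ovs-ofctl dump-flows br-tun
#  e.g.  priority=3,tun_id=0x2 actions=mod_vlan_vid:5,NORMAL
```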

So what I learned is that OpenFlow rules have the ability to change VLAN tags, which is sort of an interesting concept. In my next post I’m going to talk about how OpenStack leverages network namespaces to make overlapping IP addresses between tenants possible.

