OpenStack Folsom Quantum OVS agent GRE Tunnels

I followed this guide: http://docs.openstack.org/folsom/basic-install/content/basic-install_intro.html

Here are a couple of problems I encountered.

I was able to spin up VMs, but they weren’t getting IP addresses from the DHCP agent. I realized the issue was that the OVS agent wasn’t running on the compute node where the virtual machines were being run. I posted the issue here: https://answers.launchpad.net/quantum/+question/215244
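If you run into the same thing, check the agent before anything else. Assuming the Ubuntu packaging from the guide, where the agent ships as quantum-plugin-openvswitch-agent, that looks like:

root@ComputeNodee1:~# service quantum-plugin-openvswitch-agent status
root@ComputeNodee1:~# service quantum-plugin-openvswitch-agent restart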

Once the OVS agent was running, I got rid of the OVS configuration I had in place and instead followed this page: http://docs.openstack.org/folsom/basic-install/content/basic-install_compute.html#basic-install_compute-quantum. After restarting the OVS agent, I had a configuration that looked like this:

root@ComputeNodee1:~# ovs-vsctl show
331fd29b-b70f-4f54-983a-d6749f8cf1ed
    Bridge br-int
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "qvo235c2e1a-dd"
            tag: 1
            Interface "qvo235c2e1a-dd"
        Port "qvo002cc4c6-47"
            tag: 1
            Interface "qvo002cc4c6-47"
        Port "qvo984554c8-98"
            tag: 3
            Interface "qvo984554c8-98"
        Port br-int
            Interface br-int
                type: internal
    Bridge br-tun
        Port "gre-1"
            Interface "gre-1"
                type: gre
                options: {in_key=flow, out_key=flow, remote_ip="10.1.1.1"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
    ovs_version: "1.4.0+build0"
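For completeness, the tunnel-related settings that drive this layout live in /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini on the compute node. This is a sketch based on the guide; 10.1.1.2 is my compute node’s data-network address, so adjust local_ip per node:

[OVS]
tenant_network_type = gre
tunnel_id_ranges = 1:1000
enable_tunneling = True
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = 10.1.1.2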

I then checked the OVS configuration on the controller. For whatever reason the GRE tunnel to the compute node did not exist, so I had to manually add the GRE config to the OVS instance where the L3 agent was running. I used this little guide for setting up a GRE tunnel by hand in OVS: http://networkstatic.net/open-vswitch-gre-tunnel-configuration/
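For reference, adding the GRE port by hand is a one-liner. This sketch uses my names and addresses: gre-1 is the port, 10.1.1.2 is the compute node’s data-network IP, and while the OVS agent would normally put tunnel ports on br-tun, you can see in the output below that mine ended up on br-int:

root@controller:~# ovs-vsctl add-port br-int gre-1 -- set interface gre-1 type=gre options:remote_ip=10.1.1.2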

My final configuration on the node running the quantum L3 agent, DHCP agent, and OVS agent was the following:

root@controller:~# ovs-vsctl show
195ff79e-e01a-4e93-84bc-868d8782f284
    Bridge br-ex
        Port "qg-2b9a29bb-4b"
            Interface "qg-2b9a29bb-4b"
                type: internal
        Port "eth2"
            Interface "eth2"
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-tun
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    Bridge br-int
        Port "gre-1"
            Interface "gre-1"
                type: gre
                options: {remote_ip="10.1.1.2"}
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "tap882a8e75-ee"
            tag: 1
            Interface "tap882a8e75-ee"
                type: internal
        Port "tap4265fa27-08"
            tag: 4095
            Interface "tap4265fa27-08"
                type: internal
        Port br-int
            Interface br-int
                type: internal
        Port "qr-1549a07f-3a"
            tag: 4095
            Interface "qr-1549a07f-3a"
                type: internal
        Port "tap3680cb06-ab"
            tag: 4
            Interface "tap3680cb06-ab"
                type: internal
    ovs_version: "1.4.0+build0"
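When a tunnel misbehaves, two quick sanity checks are pinging the far endpoint over the data network and dumping the tunnel port’s settings (addresses and names here are from my setup):

root@controller:~# ping -c 3 10.1.1.2
root@controller:~# ovs-vsctl list interface gre-1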

Some tips for troubleshooting this stuff:

Ping from the VM to its gateway on the L3 agent node. Run tcpdump on the compute node where the VM is running: watch the VM’s virtual interface first to make sure the ARP is actually leaving that interface, then watch the physical interface to check whether the traffic is going over the GRE tunnel.
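The commands look roughly like this; the qvo interface name and the data NIC (eth1 here) are from my setup, and GRE is IP protocol 47:

root@ComputeNodee1:~# tcpdump -n -e -i qvo235c2e1a-dd arp
root@ComputeNodee1:~# tcpdump -n -i eth1 ip proto 47

On the physical interface, the GRE-encapsulated traffic looks like this: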

22:52:03.741329 IP 10.1.1.2 > 10.1.1.1: GREv0, key=0x0, length 110: IP 192.168.4.3 > 192.168.4.2: ICMP echo request, id 8706, seq 12, length 64
22:52:03.741710 IP 10.1.1.1 > 10.1.1.2: GREv0, length 106: IP 192.168.4.2 > 192.168.4.3: ICMP echo reply, id 8706, seq 12, length 64
22:52:04.741475 IP 10.1.1.2 > 10.1.1.1: GREv0, key=0x0, length 110: IP 192.168.4.3 > 192.168.4.2: ICMP echo request, id 8706, seq 13, length 64
22:52:04.741987 IP 10.1.1.1 > 10.1.1.2: GREv0, length 106: IP 192.168.4.2 > 192.168.4.3: ICMP echo reply, id 8706, seq 13, length 64
22:52:05.741594 IP 10.1.1.2 > 10.1.1.1: GREv0, key=0x0, length 110: IP 192.168.4.3 > 192.168.4.2: ICMP echo request, id 8706, seq 14, length 64
22:52:05.741963 IP 10.1.1.1 > 10.1.1.2: GREv0, length 106: IP 192.168.4.2 > 192.168.4.3: ICMP echo reply, id 8706, seq 14, length 64
22:52:06.741664 IP 10.1.1.2 > 10.1.1.1: GREv0, key=0x0, length 110: IP 192.168.4.3 > 192.168.4.2: ICMP echo request, id 8706, seq 15, length 64
22:52:06.742150 IP 10.1.1.1 > 10.1.1.2: GREv0, length 106: IP 192.168.4.2 > 192.168.4.3: ICMP echo reply, id 8706, seq 15, length 64
22:52:07.741785 IP 10.1.1.2 > 10.1.1.1: GREv0, key=0x0, length 110: IP 192.168.4.3 > 192.168.4.2: ICMP echo request, id 8706, seq 16, length 64
22:52:07.742183 IP 10.1.1.1 > 10.1.1.2: GREv0, length 106: IP 192.168.4.2 > 192.168.4.3: ICMP echo reply, id 8706, seq 16, length 64
22:52:08.741895 IP 10.1.1.2 > 10.1.1.1: GREv0, key=0x0, length 110: IP 192.168.4.3 > 192.168.4.2: ICMP echo request, id 8706, seq 17, length 64
22:52:08.742271 IP 10.1.1.1 > 10.1.1.2: GREv0, length 106: IP 192.168.4.2 > 192.168.4.3: ICMP echo reply, id 8706, seq 17, length 64

What makes this stuff so badass is what you can put in those GRE tunnels, and the “stuff” is the traffic of entire virtual networks. Imagine how simple you can make the physical network that connects all your compute nodes together…


3 thoughts on “OpenStack Folsom Quantum OVS agent GRE Tunnels”

  1. Did you make it work in the end?
    I’m struggling with a similar setup (three nodes with GRE tunnel) and my instances do not get IP addresses from the dnsmasq running on the network node.
    I see the traffic traversing the GRE tunnel but the tap interface where dnsmasq is attached is down.

    Would you mind posting a follow-up to this post, preferably with the solution or some working configs?

    Thanks in advance.
