https://git.opendaylight.org/gerrit/#/q/topic:directly_connected_pnf_discovery
This feature enables discovering and directing traffic to Physical Network Functions (PNFs) in Flat/VLAN provider and tenant networks, by leveraging the Subnet-Route feature.
A PNF is a device that has not been created by OpenStack, but is connected to the hypervisors' L2 broadcast domain and configured with an IP address from one of the Neutron subnets.
Ideally, L2/L3 communication between VM instances and PNFs on flat/VLAN networks would be routed similarly to inter-VM communication. However, there are two main issues preventing direct communication with PNFs.
We want to leverage the Subnet-Route and Aliveness-Monitor features in order to address the above issues.
Today, the Subnet-Route feature enables ODL to route traffic to a destination IP address, even for IP addresses that have not been statically configured by OpenStack, in the FIB table. To achieve that, the FIB table contains a flow that matches all IP packets in a given subnet range. How does this work?
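The mechanism can be sketched in the pipeline notation used throughout this spec (a conceptual illustration only; priorities and the exact punt path are simplified):

```
FIB table (21): match: vpn-id=router-id, dst-ip=subnet-prefix   (low-priority subnet flow)
=>Subnet Route table: punt packet to controller
=>controller resolves the destination MAC by sending an ARP request
=>controller installs in table 21: match: dst-ip=exact-ip set dst-mac=learned-mac   (higher priority)
```

Subsequent packets to the same destination then hit the exact-IP flow and bypass the controller entirely.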
Current limitations of Subnet-Route feature:
After ODL learns a MAC that is associated with an IP address, ODL schedules an ARP monitoring task, with the purpose of verifying that the device is still alive and responding. This is done by periodically sending ARP requests to the device.
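The monitoring loop described above can be sketched as follows. This is a simplified model, not the actual Aliveness-Monitor code; the probe callback and the `max_missed` threshold are assumptions for illustration:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ArpMonitor:
    """Tracks liveness of a discovered device by counting missed ARP replies."""
    send_arp_request: Callable[[str], bool]  # hypothetical probe: True if a reply arrived
    max_missed: int = 3                      # misses tolerated before declaring the device dead
    missed: int = field(default=0)
    alive: bool = field(default=True)

    def poll(self, ip: str) -> bool:
        """One monitoring cycle: probe the device and update liveness state."""
        if self.send_arp_request(ip):
            self.missed = 0
            self.alive = True
        else:
            self.missed += 1
            if self.missed >= self.max_missed:
                # At this point the controller would withdraw the learned route.
                self.alive = False
        return self.alive
```

Once `alive` turns false, the learned exact-IP flow would be removed from the FIB table.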
Current limitation: the Aliveness-Monitor feature was not designed for monitoring devices behind Flat/VLAN provider network ports.
In this scenario a VM in a private tenant network wants to communicate with a PNF in the (external) provider network
In this scenario a VM and a PNF, in different private networks of the same tenant, want to communicate. For each subnet prefix, a designated switch will be chosen to communicate directly with the PNFs in that subnet prefix, i.e. to send ARP requests to the PNFs and to receive their traffic.
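One simple way to realize a "designated switch per subnet prefix" is a deterministic election over the connected switches, e.g. by hashing the prefix. This is an illustrative sketch only; the actual NetVirt election logic may differ:

```python
import hashlib

def elect_designated_switch(subnet_prefix: str, candidate_dpns: list) -> str:
    """Pick one switch (DPN) per subnet prefix, deterministically.

    Because the candidate list is sorted before hashing selects an index,
    every node computing this election agrees on the result, so exactly one
    switch ARPs for the PNFs of a given prefix and receives their traffic.
    """
    if not candidate_dpns:
        raise ValueError("no candidate switches connected")
    ordered = sorted(candidate_dpns)
    digest = hashlib.sha256(subnet_prefix.encode()).digest()
    index = int.from_bytes(digest[:4], "big") % len(ordered)
    return ordered[index]
```

If the designated switch disconnects, the election is simply re-run over the remaining candidates.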
Note: IP traffic from VM instances will retain the src MAC of the VM instance, instead of replacing it with the router-interface-mac, in order to prevent MAC movements in the underlay switches. This is a limitation until NetVirt supports a MAC-per-hypervisor implementation.
ARP messages in the Flat/VLAN provider and tenant networks will be punted from a designated switch only, to avoid the performance issue of the controller having to handle broadcast packets that may be received on multiple provider ports. In external networks this switch is the NAPT switch.
The first use-case depends on the hairpinning spec [2]; the flows presented here reflect that dependency.
Packets in the FIB table, after translation to the FIP, will match on the subnet flow and will be punted to the controller from the Subnet Route table. Then, an ARP request will be generated and sent to the PNF. No flow changes are required in this part.
l3vpn service: set vpn-id=router-id
=>match: vpn-id=router-id,dst-mac=router-interface-mac
=>match: vpn-id=router-id
=>match: vpn-id=router-id,src-ip=vm-ip
set vpn-id=ext-subnet-id,src-ip=fip
=>match: vpn-id=ext-subnet-id,src-ip=fip set src-mac=fip-mac
=>match: vpn-id=ext-subnet-id, dst-ip=ext-subnet-ip
After receiving the ARP response from the PNF, a new exact-IP flow will be installed in table 21. No other flow changes are required.
l3vpn service: set vpn-id=router-id
=>match: vpn-id=router-id,dst-mac=router-interface-mac
=>match: vpn-id=router-id
=>match: vpn-id=router-id,src-ip=vm-ip
set vpn-id=ext-subnet-id,src-ip=fip
=>match: vpn-id=ext-subnet-id,src-ip=fip set src-mac=fip-mac
=>match: vpn-id=ext-subnet-id, dst-ip=pnf-ip,
set dst-mac=pnf-mac, reg6=provider-lport-tag
Ingress-DPN is not the NAPT switch: no changes are required. Traffic will be directed to the NAPT switch, reaching the outbound NAPT table straight from the internal tunnel table.
l3vpn service: set vpn-id=router-id
=>match: vpn-id=router-id,dst-mac=router-interface-mac
=>match: vpn-id=router-id
=>match: vpn-id=router-id
=>output to tunnel port of NAPT switch
Ingress-DPN is the NAPT switch. Packets in the FIB table, after NAPT translation, will match on the subnet flow and will be punted to the controller from the Subnet Route table. Then, an ARP request will be generated and sent to the PNF. No flow changes are required.
l3vpn service: set vpn-id=router-id
=>match: vpn-id=router-id,dst-mac=router-interface-mac
=>match: vpn-id=router-id
=>match: vpn-id=router-id
=>match: src-ip=vm-ip,port=int-port
set src-ip=router-gw-ip,vpn-id=router-gw-subnet-id,port=ext-port
=>match: vpn-id=router-gw-subnet-id
=>match: vpn-id=ext-subnet-id, dst-ip=ext-subnet-ip
After receiving the ARP response from the PNF, a new exact-IP flow will be installed in table 21. No other changes are required.
l3vpn service: set vpn-id=router-id
=>match: vpn-id=router-id,dst-mac=router-interface-mac
=>match: vpn-id=router-id
=>match: vpn-id=router-id
=>match: vpn-id=router-id TBD set vpn-id=external-net-id
=>match: vpn-id=external-net-id
=>match: vpn-id=ext-network-id, dst-ip=pnf-ip
set dst-mac=pnf-mac, reg6=provider-lport-tag
A packet from a VM is punted to the controller; no flow changes are required.
l3vpn service: set vpn-id=router-id
=>match: vpn-id=router-id,dst-mac=router-interface-mac
=>match: vpn-id=router-id dst-ip=subnet-ip
After receiving the ARP response from the PNF, a new exact-IP flow will be installed in table 21.
l3vpn service: set vpn-id=router-id
=>match: vpn-id=router-id,dst-mac=router-interface-mac
=>match: vpn-id=router-id dst-ip=pnf-ip
set dst-mac=pnf-mac, reg6=provider-lport-tag
A new flow in table 19 distinguishes this new use-case, in which we want to decrement the TTL of the packet.
l3vpn service: set vpn-id=router-id
=>match: lport-tag=provider-port, vpn-id=router-id, dst-mac=router-interface-mac,
set split-horizon-bit = 0, decrease-ttl
=>match: vpn-id=router-id dst-ip=vm-ip
set dst-mac=vm-mac reg6=provider-lport-tag
In the odl-l3vpn module, the adjacency-list grouping will be enhanced with the following field:
grouping adjacency-list {
    list adjacency {
        key "ip_address";
        ...
        leaf phys-network-func {
            type boolean;
            default false;
            description "Value of True indicates this is an adjacency of a device in a provider network";
        }
    }
}
An adjacency that is added as a result of a PNF discovery is a primary adjacency with an empty next-hop-ip list. That alone is not enough to distinguish PNFs at all times; the new field identifies this use-case in a more robust way.
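The difference between the old heuristic and the new explicit flag can be sketched as follows. The `Adjacency` shape here mirrors the YANG grouping above; the Python field names are illustrative:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Adjacency:
    ip_address: str
    next_hop_ips: List[str] = field(default_factory=list)
    primary: bool = True
    phys_network_func: bool = False   # the new leaf proposed in this spec

def is_pnf_adjacency(adj: Adjacency) -> bool:
    """Robust check: rely on the explicit flag set at PNF discovery time."""
    return adj.phys_network_func

def is_pnf_adjacency_legacy(adj: Adjacency) -> bool:
    """Old heuristic: a primary adjacency with no next-hop IPs."""
    return adj.primary and not adj.next_hop_ips
```

The legacy heuristic is ambiguous because other adjacencies can transiently have an empty next-hop list; the flag removes that ambiguity.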
A configuration mode will be available to turn this feature ON/OFF.
PNFs in each subnet prefix send their traffic through a designated switch.
Carbon
None
neutron net-create public-net -- --router:external --is-default \
    --provider:network_type=flat --provider:physical_network=physnet1
neutron subnet-create --ip_version 4 --gateway 10.64.0.1 --name public-subnet1 \
    <public-net-uuid> 10.64.0.0/16 -- --enable_dhcp=False
neutron net-create private-net1
neutron subnet-create --ip_version 4 --gateway 10.0.123.1 --name private-subnet1 \
    <private-net1-uuid> 10.0.123.0/24
neutron net-create private-net2
neutron subnet-create --ip_version 4 --gateway 10.0.124.1 --name private-subnet2 \
    <private-net2-uuid> 10.0.124.0/24
This will allow communication with PNFs in the provider network:
neutron router-create router1
neutron router-interface-add <router1-uuid> <private-subnet1-uuid>
neutron router-gateway-set --fixed-ip subnet_id=<public-subnet1-uuid> <router1-uuid> <public-net-uuid>
This will allow East/West communication between VMs and PNFs:
neutron router-create router1
neutron router-interface-add <router1-uuid> <private-subnet1-uuid>
neutron router-interface-add <router1-uuid> <private-subnet2-uuid>
odl-netvirt-openstack
This feature depends on the hairpinning feature [2].
Unit tests will be added for the new functionality.