This documentation provides critical information needed to help you write ODL Applications/Projects that can co-exist with other ODL Projects.
Contents:
This document captures the current OpenFlow pipeline as used by Genius and by projects using Genius for app-coexistence.
+---------+
| In Port |
+----+----+
|
|
+---------v---------+
| (0) Classifier |
| Table |
+-------------------+
| VM Port +------+
+-------------------+ +----------+
| Provider Network +------+ |
+-------------------+ |
+-------------------+ Internal Tunnel | |
| +-------------------+ |
| +------+ External Tunnel | |
| | +-------------------+ +---------v---------+
| | | (17) Dispatcher |
| | | Table |
| +----------v--------+ +-------------------+
| | (18,20,38) | +-------------+Ing.ACL Service (1)|
| | Services External | | +-------------------+
| | Pipeline | | +-----------+IPv6 Service (2)|
| +-------------------+ | | +-------------------+
| | | |L3 Service (3)+-+
| | | +-------------------+ |
| | | +-+L2 Service (4)| |
| | | | +-------------------+ |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| +------------------+ | | |
| | | | |
| +--------v--------+ | | |
| | (40 to 42) | | | |
| | Ingress ACL | | | |
| | Pipeline | | | |
| +-------+---------+ | | |
| | | | |
| +--v-+ +------------v------+ | |
| |(17)| | (45) | | |
| +----+ | | | |
| | IPv6 Pipeline | | |
+----------+ +--+-------+--------+ | |
| | | | |
+----------v--------+ +--v--+ +--v-+ +-----v-----------+ |
| (36) | | ODL | |(17)| | (50 to 55) | |
| Internal | +-----+ +----+ | | |
| Tunnel | | L2 Pipeline | |
+-------+-----------+ +------+----------+ |
| | |
| | +------------v----+
| | | (19 to 47) |
+---------------------------------+ | +----+ |
| | | | | L3 Pipeline |
| | | | +----+-------+----+
| | | | | |
|(itm-direct-tunnels enabled) | | | +--v--+ +--v-+
| | | | | ODL | |(17)|
| | | | +-----+ +----+
| | | |
| +---v----v----v-----+
+-------v-----------+ | (220) Egress |
| Tunnel Group | | Dispatcher Table | +------------------+
+-------+-----------+ +-------------------+ | |
| | VM Port, +----------> (251 to 253) |
| | Provider Network <----------+ Pipeline |
| +-------------------+ | Egress ACL |
| | External Tunnel | | |
| +-------------------+ +------------------+
| | Internal Tunnel |
| +---------+---------+
| |
+------------------------------------+ |
| |
+--v--v----+
| Out Port |
+----------+
+-----------------+
| (17) |
+------------+ Dispatcher <---------------------------+
| | Table | |
| +-----------------+ |
| |
+--------v--------+ |
| (40) | |
| Ingress ACL | +-----------------+ |
| Table | | (41) | |
+-----------------+ | Ingress ACL 2 | +-----------------+ |
| Match Allowed +----> Table | | (42) | |
+-----------------+ +-----------------+ | Ingress ACL 2 +---+
| Match Allowed +----> Table |
+-----------------+ +-----------------+
Owner Project: Netvirt
TBD.
+-----------------+ +--------v--------+
| (17) | | (45) |
| Dispatcher +----> IPv6 |
| Table | | Table |
+--------^--------+ +-----------------+ +---+
| | IPv6 ND for +---->ODL|
| | Router Interface| +---+
| +-----------------+
+-------------+ Other Packets |
+-----------------+
Owner Project: Netvirt
TBD.
+-----------------+
| (17) |
| Dispatcher |
| Table |
+--------+--------+
|
|
+--------v--------+
| (50) |
| L2 SMAC Learning|
| Table |
+-----------------+ +--------v--------+
| Known SMAC +----> (51) |
+-----------------+ | L2 DMAC Filter |
| Unknown SMAC +----> Table |
+-------+---------+ +-----------------+
| | Known DMAC +--------------------+
| +-----------------+ |
+-v-+ | Unknown DMAC | |
|ODL| | | |
+---+ +--------+--------+ |
| |
| |
+--------v--------+ |
| (52) | |
| Unknown DMACs | |
| Table | |
+-----------------+ |
+----+ Tunnel In Port | |
| +-----------------+ |
| | VM In Port | |
| +------+----------+ |
| | |
| +------v-----+ |
| | Group | |
| | Full BCast +------+ |
| +-----+------+ | |
| | | |
| +-----v------+ | +---v-------------+
+----> Group +--+ | | (220) |
| Local BCast| | | |Egress Dispatcher|
+------------+ | | +--->+ Table |
| | | +-----------------+
| | |
| | |
+-------v---v-----+ |
| (55) | |
| Filter Equal | |
| Table | |
+-----------------+ |
| L Register +---+
| and Egress |
+-----------------+
| ? Match Drop |
+-----------------+
Owner Project: Netvirt
TBD.
+-----------------+
| Coming |
| Soon! |
+-----------------+
Owner Project: Netvirt
TBD.
+-----------------+
| (220) Egress |
+------------+ Dispatcher <---------------------------+
| | Table | |
| +-----------------+ |
| |
+--------v--------+ |
| (251) | |
| Egress ACL | +-----------------+ |
| Table | | (252) | |
+-----------------+ | Egress ACL 2 | +-----------------+ |
| Match Allowed +----> Table | | (253) | |
+-----------------+ +-----------------+ | Egress ACL 2 +---+
| Match Allowed +----> Table |
+-----------------+ +-----------------+
Owner Project: Netvirt
TBD.
Table of Contents
Genius CSIT requires a very minimal testbed topology, and it is very easy to run it on your laptop with the below steps. This will help you run tests yourself on the code changes you are making in genius locally, without the need to wait long in the Jenkins job queue.
Test setup consists of ODL with odl-genius-rest feature installed and two switches (DPNs) connected to ODL over OVSDB and OpenflowPlugin channels.
This setup uses the default Genius test topology.
+--------+ +--------+
| BR1 | data | BR2 |
| <-------> |
+---^----+ +----^---+
| mgmt |
+---v-----------------v---+
| |
| ODL |
| |
| odl-genius |
| |
+-------------------------+
We can run ODL on the laptop and OVS on two VMs. RobotFramework can be installed on one of the two OVS VMs. This documentation is based on Ubuntu Desktop VMs started using VirtualBox.
Most of the genius developers already know this; just for completeness, OVS has to be installed on both VMs.
Please refer to the script below for the latest supported requirement versions; the script also has more information on how to set up the robot environment.
Below are the requirements for running genius CSIT.
Genius project provides generic infrastructure services and utilities for integration and co-existence of multiple networking services/applications. The following image presents a top-level view of the Genius framework -
Genius modules are developed as karaf features which can be independently installed. However, there is some dependency among these modules. The diagram below provides a dependency relationship of these modules.
All these modules expose Yang based API which can be used to configure/interact with these modules and fetch services provided by these modules. Thus all these modules can be used/configured by other ODL modules and can also be accessed via REST interface.
Following picture presents an example of a packet pipeline based on the Genius framework. It also presents the functions of different genius components -
Following sections provide details about each of these components.
The Interface Manager (IFM) uses MD-SAL based architecture, where different software components operate on, and interact via a set of data-models. Interface manager defines configuration data-stores where other OpenDaylight modules can write interface configurations and register for services. These configuration data-stores can also be accessed by external entities through REST interface. IFM listens to changes in these config data-stores and accordingly programs the data-plane. Data in Configuration data-stores remains persistent across controller restarts.
Operational data like network state and other service specific operational data are stored in operational data-stores. Changes in network state are updated in the southbound interface (OFplugin, OVSDB) data-stores. Interface Manager uses the ODL Inventory and Topology datastores to retrieve southbound configurations and events. IFM listens to these updates and accordingly updates its own operational data-stores. Operational data stores are cleaned up after a controller restart.
Additionally, a set of RPCs is provided to access IFM data-stores and other useful information. The following figure presents the different IFM data-stores and their interaction with other modules.
Following diagram provides a top-level architecture of Interface Manager.
Interface Manager uses other Genius modules for its operations. It mainly interacts with following other genius modules-
Following picture shows interface manager dependencies
Interface manager code is organized in following folders -
interfacemanager-api
└───main
├───java
│ └───org
│ └───opendaylight
│ └───genius
│ └───interfacemanager
│ ├───exceptions
│ ├───globals
│ └───interfaces
└───yang
interfacemanager-impl
├───commons <--- contains common utility functions
├───listeners <--- Contains interfacemanager DCN listeners for different MD-SAL datastores
├───renderer <--- Contains different southbound renderers' implementation
│ ├───hwvtep <--- HWVTEP specific renderer
│ │ ├───confighelpers
│ │ ├───statehelpers
│ │ └───utilities
│ └───ovs <--- OVS specific SBI renderer
│ ├───confighelpers
│ ├───statehelpers
│ └───utilities
├───servicebindings <--- contains interface service binding DCN listener and corresponding implementation
│ └───flowbased
│ ├───confighelpers
│ ├───listeners
│ ├───statehelpers
│ └───utilities
├───rpcservice <--- Contains interfacemanager RPCs' implementation
├───pmcounters <--- Contains PM counters gathering
└───statusanddiag <--- contains status and diagnostics implementations
interfacemanager-shell
Following picture shows the different MD-SAL datastores used by interface manager. These datastores are created based on YANG datamodels defined in interfacemanager-api.
InterfaceManager mainly uses following two datastores to accept configurations.
In addition to these datamodels, it also implements several RPCs for accessing interface operational data. Details of these datamodels and RPCs are described in following sections.
Interface config datamodel is defined in odl-interface.yang. It is based on the ‘ietf-interfaces’ datamodel (imported in odl-interface.yang) with additional augmentations to it. Common interface configurations are –
Additional configuration parameters are defined for specific interface type. Please see the table below.
Vlan-xparent | Vlan-trunk | Vlan-trunk-member | vxlan | gre |
---|---|---|---|---|
Name =uuid | Name =uuid | Name =uuid | Name =uuid | Name =uuid |
description | description | description | description | description |
Type =l2vlan | Type =l2vlan | Type =l2vlan | Type =tunnel | Type =tunnel |
enabled | enabled | enabled | enabled | enabled |
Parent-if = port-name | Parent-if = port-name | Parent-if = vlan-trunkIf | Vlan-id | Vlan-id |
vlan-mode = transparent | vlan-mode = trunk | vlan-mode = trunk-member | tunnel-type = vxlan | tunnel-type = gre |
 | vlan-list = [trunk-member-list] | Vlan-Id = trunk-vlanId | dpn-id | dpn-id |
 | | Parent-if = vlan-trunkIf | Vlan-id | Vlan-id |
 | | | local-ip | local-ip |
 | | | remote-ip | remote-ip |
 | | | gateway-ip | gateway-ip |
Yang Data Model odl-interface-service-bindings.yang contains the service binding configuration datamodel.
An application can bind services to a particular interface by configuring the MD-SAL data node at path /config/interface-service-binding. Binding services on an interface allows a particular service to pull traffic arriving on that interface, depending upon the service priority. It is possible to bind services at the ingress interface (when a packet enters the packet-pipeline from a particular interface) as well as on the egress interface (before the packet is sent out on a particular interface). Service modules can specify openflow rules to be applied on packets belonging to the interface. Usually these rules include sending the packet to a specific service table/pipeline. Service modules/applications are responsible for sending the packet back (if not consumed) to the service dispatcher table, for the next service to process the packet.
Following are the service binding parameters –
When a service is bound to an interface, Interface Manager programs the service dispatcher table with a rule to match on the interface data-plane-id and the service-index (based on priority) and the instruction-set provided by the service/application. Every time the packet leaves the dispatcher table, the service-index (in metadata) is incremented to match the next service rule when the packet is resubmitted back to the dispatcher table. The following table gives an example of the service dispatcher flows, where one interface is bound to 2 services.
Service Dispatcher Table

Match | Actions |
---|---|
miss | Drop |
Interface Manager programs openflow rules in the service dispatcher table.
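For illustration, the sketch below shows how a metadata value combining the service index and the interface lport-tag could be assembled and incremented between services; the bit layout and class names are assumptions for illustration, not the actual Genius utility API.

import java.math.BigInteger;

// Illustrative sketch only: assumed metadata layout with the service index in the
// top nibble (4 bits) and the lport-tag in the bits below it. The real Genius
// MetaDataUtil may use a different layout and API.
public final class DispatcherMetadataSketch {

    private static final int SERVICE_INDEX_SHIFT = 60;
    private static final int LPORT_TAG_SHIFT = 40;

    /** Builds the metadata value a dispatcher flow would match on. */
    public static BigInteger metadataFor(short serviceIndex, int lportTag) {
        return BigInteger.valueOf(serviceIndex).shiftLeft(SERVICE_INDEX_SHIFT)
                .or(BigInteger.valueOf(lportTag).shiftLeft(LPORT_TAG_SHIFT));
    }

    /** When the packet is resubmitted to the dispatcher, the next bound service
     *  is selected simply by incrementing the service index. */
    public static BigInteger nextService(BigInteger metadata) {
        return metadata.add(BigInteger.ONE.shiftLeft(SERVICE_INDEX_SHIFT));
    }

    private DispatcherMetadataSketch() {
    }
}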
There are services that need packet processing on the egress, before sending the packet out to particular port/interface. To accommodate this, interface manager also supports egress service binding. This is achieved by introducing a new “egress dispatcher table” at the egress of packet pipeline before the interface egress groups.
On different application requests, Interface Manager returns the egress actions for interfaces. Service modules use these actions to send the packet to a particular interface. Generally, these egress actions include sending the packet out to a port or to the appropriate interface egress group. With the inclusion of the egress dispatcher table, the egress actions for the services would be to:
IFM shall add a default entry in the Egress Dispatcher Table for each interface with:
On Egress Service binding, IFM shall add rules to the Egress Dispatcher table with the following parameters:
Egress Services will be responsible for sending the packet back to the Egress Dispatcher table if the packet is not consumed (dropped/sent out). In this case the packet will hit the lowest priority default entry and the packet will be sent out.
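As a sketch of the egress action shape this implies (the register and table number follow the pipeline described in this document; the class and method names are invented for illustration):

import java.util.Arrays;
import java.util.List;

// Illustrative sketch only: instead of a plain "output:<port>", callers are handed
// actions that load the interface lport-tag into REG6 and resubmit to the egress
// dispatcher table, so bound egress services get to process the packet first.
public final class EgressActionsSketch {

    static final short EGRESS_DISPATCHER_TABLE = 220;

    public static List<String> egressActionsFor(int lportTag) {
        return Arrays.asList(
                "load:0x" + Integer.toHexString(lportTag) + "->NXM_NX_REG6[]",
                "resubmit(," + EGRESS_DISPATCHER_TABLE + ")");
    }

    private EgressActionsSketch() {
    }
}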
Interface Manager uses the ODL Inventory and Topology datastores to retrieve southbound configurations and events.
Interface manager is designed in a modular fashion to provide a flexible way to support multiple southbound protocols. The north-bound interface/data-model is decoupled from the south-bound plugins. NBI data change listeners select and interact with the appropriate SBI renderers. The modular design also allows addition of new renderers to support new southbound interfaces/protocol plugins. Following figure shows interface manager modules –
InterfaceManager uses the datastore-job-coordinator module for all its operations.
Datastore job coordinator solves the following problems observed in the previous Li-based interface manager:
IFM listeners listen to data change events for different MD-SAL data-stores. On the NBI side it implements data change listeners for interface config data-store and the service-binding data store. On the SBI side IFM implements listeners for Topology and Inventory data-stores in opendaylight.
Interface config change listener listens to ietf-interface/interfaces data node.
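A minimal sketch, with invented names, of how such a config listener could hand work to the appropriate southbound renderer described above:

// Illustrative sketch only (names invented): an NBI config listener selecting the
// southbound renderer for an interface, in line with the modular renderer design.
interface SouthboundRenderer {
    void addInterface(String interfaceName);
}

final class RendererSelectorSketch {
    private final SouthboundRenderer ovsRenderer;
    private final SouthboundRenderer hwvtepRenderer;

    RendererSelectorSketch(SouthboundRenderer ovsRenderer, SouthboundRenderer hwvtepRenderer) {
        this.ovsRenderer = ovsRenderer;
        this.hwvtepRenderer = hwvtepRenderer;
    }

    /** Called from the interface-config data change listener. */
    void onInterfaceAdded(String interfaceName, boolean isHwvtepBound) {
        SouthboundRenderer renderer = isHwvtepBound ? hwvtepRenderer : ovsRenderer;
        renderer.addInterface(interfaceName);
    }
}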
+++ this page is under construction +++
tpAugmentationBuilder.setName(portName);
tpAugmentationBuilder.setInterfaceType(type);
options.put("key", "flow");
options.put("local_ip", localIp.getIpv4Address().getValue());
options.put("remote_ip", remoteIp.getIpv4Address().getValue());
tpAugmentationBuilder.setOptions(options);
OVSDB plugin acts upon this data change and configures the tunnel end
points on the switch with the supplied information.
Following gallery contains sequence diagrams for different IFM operations -
Internal Transport Manager creates and maintains mesh of tunnels of type VXLAN or GRE between Openflow switches forming an overlay transport network. ITM also builds external tunnels towards DC Gateway. ITM does not provide redundant tunnel support.
The diagram below gives a pictorial representation of the different modules and data stores and their interactions.
ITM mainly interacts with following other genius modules-
Following picture shows ITM dependencies
As shown in the diagram, ITM has a common placeholder for various datastore listeners, RPC implementations, and config helpers. Config helpers are responsible for the creation/deletion of internal and external tunnels.
ITM uses the following data model to create and manage tunnel interfaces. Tunnel interfaces are created by writing to Interface Manager’s Config DS.
The following datamodel is defined in itm.yang. This DS stores the transport zone information populated through REST or Karaf CLI.
This DS stores the tunnel end point information populated through REST or Karaf CLI. The internal and external tunnel interfaces are also stored here.
ITM uses the datastore job coordinator module for all its operations.
When tunnel end points are configured in ITM datastores by CLI or REST, corresponding DTCNs are fired. The ITM TransportZoneListener listens to these changes. Based on the add/remove end point operation, the transport zone listener queues the appropriate job (ItmInternalTunnelAddWorker or ItmInternalTunnelDeleteWorker) to the DataStoreJob Coordinator. Jobs within a transport zone are queued to be executed serially, and jobs across transport zones are executed in parallel.
ITM will iterate over all the tunnel end points in each of the transport zones and build the tunnels between every pair of tunnel end points in the given transport zone. The type of the tunnel (GRE/VXLAN) will be indicated in the YANG model as part of the transport zone.
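A minimal sketch, assuming an invented job-coordinator interface, of the pair-wise wiring and per-transport-zone job queuing described above:

import java.math.BigInteger;
import java.util.List;

// Illustrative sketch only (invented names): builds the full tunnel mesh for one
// transport zone. The job is queued under the transport-zone name so that jobs for
// the same zone run serially while different zones run in parallel.
final class TunnelMeshSketch {

    interface JobCoordinator {
        void enqueueJob(String jobKey, Runnable job);
    }

    static final class Tep {
        final BigInteger dpnId;
        final String ip;

        Tep(BigInteger dpnId, String ip) {
            this.dpnId = dpnId;
            this.ip = ip;
        }
    }

    private final JobCoordinator coordinator;

    TunnelMeshSketch(JobCoordinator coordinator) {
        this.coordinator = coordinator;
    }

    void buildMesh(String transportZone, List<Tep> teps) {
        coordinator.enqueueJob(transportZone, () -> {
            for (Tep src : teps) {
                for (Tep dst : teps) {
                    if (!src.dpnId.equals(dst.dpnId)) {
                        createTunnel(src, dst);   // one tunnel per ordered TEP pair
                    }
                }
            }
        });
    }

    private void createTunnel(Tep src, Tep dst) {
        // In the real implementation this writes a tunnel interface into the
        // Interface Manager config DS; here we just log the pair.
        System.out.printf("tunnel %s -> %s%n", src.ip, dst.ip);
    }
}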
ITM builds the tunnel infrastructure and maintains them. ITM builds two types of tunnels namely, internal tunnels between openflow switches and external tunnels between openflow switches and an external device such as datacenter gateway. These tunnels can be Vxlan or GRE. The tunnel endpoints are configured using either individual endpoint configuration or scheme based auto configuration method or REST. ITM will iterate over all the tunnel end points in each of the transport zones and build the tunnels between every pair of tunnel end points in the given transport zone.
ITM creates tunnel interfaces in Interface manager Config DS.
Stores the tunnel mesh information in tunnel end point format in ITM config DS
ITM stores the internal and external trunk interface names in itm-state yang
Creates external tunnels to DC Gateway when VPN manager calls the RPCs for creating tunnels towards DC gateway.
ITM depends on interface manager for the following functionality.
Provides interface to create tunnel interfaces
Provides configuration option to enable monitoring on tunnel interfaces.
Registers tunnel interfaces with monitoring enabled with alivenessmonitor.
ITM depends on Aliveness monitor for the following functionality.
Tunnel states for trunk interfaces are updated by alivenessmonitor, which sets the OperState for tunnel interfaces.
The following are the RPCs supported by ITM
Starting from Carbon, Genius uses RST format Design Specification documents for all new features. These specifications are a good way to understand various Genius features.
Contents:
Table of Contents
[link to gerrit patch]
Brief introduction of the feature.
Detailed description of the problem being solved by this feature
Details of the proposed change.
Any changes to pipeline must be captured explicitly in this section.
This should detail any changes to yang models.
Any configuration parameters being added/deprecated for this feature? What will be defaults for these? How will it impact existing deployments?
Note that outright deletion/modification of existing configuration is not allowed due to backward compatibility. They can only be deprecated and deleted in later release(s).
This should capture how clustering will be supported. This can include but not limited to use of CDTCL, EOS, Cluster Singleton etc.
This should capture impact from/to different infra components like MDSAL Datastore, karaf, AAA etc.
Document any security related issues impacted by this feature.
What are the potential scale and performance impacts of this change? Does it help improve scale and performance or make it worse?
What release is this feature targeted for?
Alternatives considered and why they were not selected.
How will end user use this feature? Primary focus here is how this feature will be used in an actual deployment.
For most Genius features users will be other projects but this should still capture any user visible CLI/API etc. e.g. ITM configuration.
This section will be primary input for Test and Documentation teams. Along with above this should also capture REST API and CLI.
odl-genius-ui
Identify existing karaf feature to which this change applies and/or new karaf features being introduced. These can be user facing features which are added to integration/distribution or internal features to be used by other projects.
Who is implementing this feature? In case of multiple authors, designate a primary assignee and other contributors.
Break up work into individual items. This should be a checklist on Trello card for this feature. Give link to trello card or duplicate it.
Any dependencies being added/removed? Dependencies here refers to internal [other ODL projects] as well as external [OVS, karaf, JDK etc.] This should also capture specific versions if any of these dependencies. e.g. OVS version, Linux kernel version, JDK etc.
This should also capture impacts on existing project that depend on Genius. Following projects currently depend on Genius: * Netvirt * SFC
What is impact on documentation for this change? If documentation change is needed call out one of the <contributors> who will work with Project Documentation Lead to get the changes done.
Don’t repeat details already discussed but do reference and call them out.
Add any useful references. Some examples:
[1] OpenDaylight Documentation Guide
[2] https://specs.openstack.org/openstack/nova-specs/specs/kilo/template.html
Note
This template was derived from [2], and has been modified to support our project.
This work is licensed under a Creative Commons Attribution 3.0 Unported License. http://creativecommons.org/licenses/by/3.0/legalcode
Table of Contents
https://git.opendaylight.org/gerrit/#/q/topic:ITM-Scale-Improvements
ITM creates the tunnel mesh among switches with the help of interfacemanager. This spec describes a re-design of ITM to create the tunnel mesh independently, without interface manager. This is expected to improve ITM performance and therefore support a larger tunnel mesh.
ITM creates tunnels among the switches. When ITM receives the configuration from NBI, it creates interfaces in the ietf-interface config DS, which the interface manager listens to in order to create the tunnel constructs on the switches. This involves an additional hop from ITM to interface manager, which constitutes many DCNs and DS reads and writes. This induces a lot of load on the system, especially in a scale setup. Also, tunnel interfaces are categorized as generic ietf-interfaces along with tap and vlan interfaces, and interface manager deals with all these interfaces. Applications listening for interface state get updates on tunnel interfaces both from interface manager and ITM. This degrades the performance, and hence the internal tunnel mesh creation does not scale up very well beyond 80 switches.
This feature will support the following use cases.
In order to improve the scale numbers, handling of tunnel interfaces is separated from other interfaces. Hence, the ITM module is being re-architected to bypass interface manager and create/delete the tunnels between the switches directly. ITM will also provide the tunnel status without the support of interface manager.

Bypassing interface manager provides the following advantages:

* removes the creation of ietf interfaces in the config DS
* reduces the number of DCNs being generated
* reduces the number of datastore reads and writes
* applications get tunnel updates only from ITM

All this should improve the performance and thereby the scale numbers.

Further improvements that can be done are to:
This feature will not be used along with per-tunnel specific service binding use case as both use cases together are not supported. Multiple Vxlan Tunnel feature will not work with this feature as it needs service binding on tunnels.
Most of the code for this proposed changes will be in separate package for code maintainability. There will be minimal changes in some common code in ITM and interface manager to switch between the old and the new way of tunnel creation
If the itm-direct-tunnels flag is ON, then:

* The itm:transport-zones listener will trigger the new code upon receiving transport zone configuration.
* Interface manager will ignore events pertaining to OVSDB tunnel port and tunnel interface related inventory changes.
* When ITM gets the NBI tep configuration:

  * ITM wires the tunnels by forming the tunnel interface name and stores the Tep information in the dpn. ITM does not create the tunnel interfaces in the ietf-interface config DS; it stores the tunnel name in the dpn-teps-state in itm-state.yang.
  * A group id is maintained in all other CSSs in order to reach this tep.
  * An if-index is allocated for each tep interface. This will be stored in if-indexes-interface-map in odl-itm-meta.yang, irrespective of the switch being connected.
  * Bridge information is taken from the odl-itm-meta.yang.
  * ITM listens to OvsdbBridgeAugmentation. When the switch gets connected, ports are added to the bridge (in the pre-configured case).
  * ITM listens to FlowCapableNodeConnector in order to:

    * push the table 0 flow entries,
    * populate the tunnels_state in itm-state.yang from the tunnel state that comes in the OF Port status,
    * update the group with watch-port for handling traffic switchover in the dataplane.

* If this feature is not enabled, then ITM will take the usual route of configuring ietf-interfaces.
If alarm-generation-enabled is enabled, then ITM registers for changes in tunnels_state to generate the alarms.

ITM will support individual tunnels to be monitored:

* If the global monitoring flag is enabled, then all tunnels will be monitored.
* If the global flag is turned OFF, then the individual per-tunnel monitoring flag will take effect.
* ITM will support dynamic enable/disable of the bfd global flag / individual flag.

BFD dampening logic for bfd states is as follows (a sketch follows the list):

* On tunnel creation, ITM will consider the initial tunnel status to be UP and LIVE and mark it as in ‘dampening’ state.
* If it receives an UP and LIVE event, the tunnel will come out of dampening state; no change/event will be triggered.
* If it does not receive UP and LIVE for a configured duration, it will set the tunnel state to DOWN.
* There will be a configuration parameter for the above - bfd-dampening-timeout.
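A rough sketch of the dampening logic above, assuming a single-threaded timer; the class names and timer wiring are illustrative only.

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// Illustrative sketch only: on creation the tunnel is assumed UP/LIVE and kept in
// "dampening"; an UP+LIVE event clears dampening silently, otherwise the tunnel is
// marked DOWN after bfd-dampening-timeout seconds.
final class BfdDampeningSketch {

    enum State { DAMPENING, UP, DOWN }

    private final ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
    private volatile State state = State.DAMPENING;
    private ScheduledFuture<?> timeoutTask;

    void onTunnelCreated(long dampeningTimeoutSeconds) {
        state = State.DAMPENING;
        timeoutTask = timer.schedule(() -> {
            if (state == State.DAMPENING) {
                state = State.DOWN;          // no UP+LIVE seen within the timeout
            }
        }, dampeningTimeoutSeconds, TimeUnit.SECONDS);
    }

    void onBfdEvent(boolean up, boolean live) {
        if (up && live && state == State.DAMPENING) {
            state = State.UP;                // leave dampening without raising an event
            if (timeoutTask != null) {
                timeoutTask.cancel(false);
            }
        }
    }

    State state() {
        return state;
    }
}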
External Tunnel (HWVTEP and DC Gateway) handling will take the same existing path, that is, through interfacemanager.
OF Tunnel (flow based tunnelling) implementation will also be done directly by ITM following the same approach.
The pipeline will change as the egress action will point to a group instead of an output on a port.
A new container dpn-teps-state will be added. This will be a config DS:
list dpns-teps {
key "source-dpn-id";
leaf source-dpn-id {
type uint64;
}
leaf tunnel-type {
type identityref {
base odlif:tunnel-type-base;
}
}
leaf group-id {
type uint32;
}
/* Remote DPNs to which this DPN-Tep has a tunnel */
list remote-dpns {
key "destination-dpn-id";
leaf destination-dpn-id {
type uint64;
}
leaf tunnel-name {
type string;
}
leaf monitor-enabled { // Will be enhanced to support monitor id.
type boolean;
default true;
}
}
}
}
A new Yang file ``odl-itm-meta.yang`` will be created to store OVS bridge related information.
container bridge-tunnel-info {
description "Contains the list of dpns along with the tunnel interfaces configured on them.";
list ovs-bridge-entry {
key dpid;
leaf dpid {
type uint64;
}
leaf ovs-bridge-reference {
type southbound:ovsdb-bridge-ref;
description "This is the reference to an ovs bridge";
}
list ovs-bridge-tunnel-entry {
key tunnel-name;
leaf tunnel-name {
type string;
}
}
}
}
container ovs-bridge-ref-info {
config false;
description "The container that maps dpid with ovs bridge ref in the operational DS.";
list ovs-bridge-ref-entry {
key dpid;
leaf dpid {
type uint64;
}
leaf ovs-bridge-reference {
type southbound:ovsdb-bridge-ref;
description "This is the reference to an ovs bridge";
}
}
}
container if-indexes-tunnel-map {
config false;
list if-index-tunnel {
key if-index;
leaf if-index {
type int32;
}
leaf interface-name {
type string;
}
}
}
New config parameters to be added to ``interfacemanager-config``
New config parameters to be added to ``itm-config``
The RPC call itm-rpc:get-egress-action
will return the group Id which will point to tunnel port (when the tunnel
port is created on the switch) between the source and destination dpn id.
rpc get-egress-action {
input {
leaf source-dpid {
type uint64;
}
leaf destination-dpid {
type uint64;
}
leaf tunnel-type {
type identityref {
base odlif:tunnel-type-base;
}
}
}
output {
leaf group-id {
type uint32;
}
}
}
ITM will also support another RPC ``get-tunnel-type``
rpc get-tunnel-type {
description "to get the type of the tunnel interface(vxlan, vxlan-gpe, gre, etc.)";
input {
leaf intf-name {
type string;
}
}
output {
leaf tunnel-type {
type identityref {
base odlif:tunnel-type-base;
}
}
}
}
For the two above RPCs, when this feature is enabled, ITM will service them for internal tunnels; for external tunnels, ITM will forward them to interfacemanager. When this feature is disabled, ITM will forward the RPCs for both internal and external tunnels to interfacemanager. Applications should now start using these two RPCs from ITM and not from interfacemanager.
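A sketch of the call pattern for this RPC from a consumer application; the interface below stands in for the yang-generated RPC service, whose actual class and method names may differ.

import java.math.BigInteger;
import java.util.concurrent.Future;

// Illustrative sketch only: calling the proposed itm-rpc:get-egress-action RPC.
// Input: source/destination dpid and tunnel type; output: the group id pointing
// at the tunnel port between the two DPNs.
final class EgressActionRpcSketch {

    interface ItmRpcLike {
        Future<Long> getEgressAction(BigInteger sourceDpid, BigInteger destinationDpid, String tunnelType);
    }

    static long egressGroupId(ItmRpcLike itmRpc, BigInteger srcDpn, BigInteger dstDpn) throws Exception {
        return itmRpc.getEgressAction(srcDpn, dstDpn, "odl-interface:tunnel-type-vxlan").get();
    }
}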
ITM will enhance the existing RPCs create-terminating-service-actions and remove-terminating-service-actions.

A new RPC will be supported by ITM to enable monitoring of individual tunnels - internal or external.
rpc set-bfd-enable-on-tunnel {
description "used for turning ON/OFF to monitor individual tunnels";
input {
leaf source-node {
type string;
}
leaf destination-node {
type string;
}
leaf monitoring-params {
type itmcfg:tunnel-monitor-params;
}
}
}
Following are the configuration changes and impact in OpenDaylight.

genius-interfacemanager-config.xml:

* itm-direct-tunnels: this is a boolean type parameter which enables or disables the new ITM realization of the tunnel mesh. Default value is false.

genius-itm-config.xml:

* alarm-generation-enabled: this is a boolean type parameter which enables or disables the generation of alarms by ITM. Default value is true.
* bfd-dampening-timeout: timeout in seconds; config parameter which the dampening logic will use.

<interfacemanager-config xmlns="urn:opendaylight:genius:itm:config">
    <itm-direct-tunnels>false</itm-direct-tunnels>
</interfacemanager-config>

<itm-config xmlns="urn:opendaylight:genius:itm:config">
    <alarm-generation-enabled>true</alarm-generation-enabled>
    <bfd-dampening-timeout>30</bfd-dampening-timeout> <!-- value in seconds; what is the ideal default value? -->
</itm-config>
Runtime changes to the parameters of this config file will not be taken into consideration.
The solution is supported on a 3-node cluster.
Upgrading ODL versions from the previous ITM tunnel mesh creation logic to this new tunnel mesh creation logic will be supported. When the itm-direct-tunnels flag changes from disabled in the previous version to enabled in this version, ITM will automatically mesh tunnels in the new way and clean up any data that was persisted by the previous tunnel creation method.
This solution will improve scale numbers by reducing the number of interfaces created in ietf-interfaces, which will cut down on the additional processing done by interface manager.

This feature will provide fine granularity in bfd monitoring per tunnel. This should considerably reduce the number of bfd events generated, since only those tunnels that require it are monitored.

Overall this should improve the ITM performance and scale numbers.
Oxygen
N.A
This feature doesn’t add any new karaf feature. Installing any of the below features can enable the service:
odl-genius-rest odl-genius
Before starting the controller, enable this feature in genius-interfacemanager-config.xml, by editing it as follows:-
<interfacemanager-config xmlns="urn:opendaylight:genius:interface:config">
<itm-direct-tunnels>true</itm-direct-tunnels>
</interfacemanager-config>
Post the ITM transport zone configuration from the REST.
URL: restconf/config/itm:transport-zones/
Sample JSON data
{
"transport-zone": [
{
"zone-name": "TZA",
"subnets": [
{
"prefix": "192.168.56.0/24",
"vlan-id": 0,
"vteps": [
{
"dpn-id": "1",
"portname": "eth2",
"ip-address": "192.168.56.101",
},
{
"dpn-id": "2",
"portname": "eth2",
"ip-address": "192.168.56.102",
}
],
"gateway-ip": "0.0.0.0"
}
],
"tunnel-type": "odl-interface:tunnel-type-vxlan"
}
]
}
URL: restconf/operations/itm-rpc:get-egress-action
This feature will not add any new CLI for configuration. Some debug CLIs to dump the cache information may be added for debugging purpose.
Trello card:
* itm-direct-tunnels
* OvsdbBridgeAugmentation
* FlowCapableNodeConnector
* bridge-interface-info, bridge-ref-info from odl-itm-meta.yang
* alarm-generation-enabled
* dpn-teps-state in itm-state.yang

The following work items will be taken up later.

This requires a minimum of OVS 2.8, where the BFD state can be received in of-port events.
The dependent applications in netvirt and SFC will have to use the ITM RPC to get the egress actions. ITM will respond with egress actions for internal tunnels; for external tunnels, ITM will forward the RPC to interface manager, fetch the output and forward it to the applications.
Appropriate UTs will be added for the new code coming in for this feature. This includes but is not limited to:

* alarm-generation-enabled - check if the alarms were generated / suppressed based on the flag.

The following test cases will be added to genius CSIT:

* alarm-generation-enabled - check if the alarms were generated / suppressed based on the flag.

This will require changes to User Guide and Developer Guide.

User Guide will need to add information for the below details: for the scale setup, this feature needs to be enabled so as to support a tunnel mesh among a scaled number of DPNs, by setting the itm-direct-tunnels flag to true.

Developer Guide will need to capture how to use the ITM RPC -
https://git.opendaylight.org/gerrit/#/q/topic:itm-auto-config
Internal Transport Manager (ITM) Tunnel Auto configuration feature proposes a solution to migrate from REST/CLI based Tunnel End Point (TEP) configuration to automatic learning of Openvswitch (OVS) TEPs from the switches, thereby triggering automatic configuration of tunnels.
The user has to use ITM REST APIs for addition/deletion of TEPs into/from a transport zone. But OVS and other TOR switches that support OVSDB can be configured for TEP without requiring TEP configuration through the REST API, which leads to redundancy and makes the process cumbersome and error-prone.
This feature will support the following use cases:

* TEP configuration on bridge br-int from SBI.
* Creation of the default transport zone controlled by the def-tz-enabled parameter in the config file.
* Tunnel-type configuration for the default transport zone controlled by the def-tz-tunnel-type parameter in the config file.
* Changing the def-tz-enabled configurable parameter from OFF to ON during OpenDaylight controller restart.
* Changing the def-tz-enabled configurable parameter from ON to OFF during OpenDaylight controller restart.
* The default value of def-tz-enabled is OFF and, if it is not changed by the user, it will be OFF after OpenDaylight controller restart as well.
* Dynamic update of the local_ip tep configuration via a change in the Openvswitch table’s other_config parameter local_ip.

Following use cases will not be supported:

* Adding TEPs to the default-transport-zone from REST; such a scenario will be taken as incorrect configuration.
* Dynamic update of the of-tunnel tep configuration via a change in the Openvswitch table’s external_ids parameter of-tunnel is not supported.
* Dynamic change of def-tz-enabled and def-tz-tunnel-type is not supported.

ITM will create a default transport zone on OpenDaylight start-up if the configurable parameter def-tz-enabled is true in the genius-itm-config.xml file (by default, this flag is false). When the flag is true, the default transport zone is created and configured with:

* name as default-transport-zone.
* its tunnel-type, which cannot be changed dynamically; it will take the value of the def-tz-tunnel-type parameter from the config file genius-itm-config.xml on startup.

If the def-tz-tunnel-type parameter is changed and def-tz-enabled remains true during OpenDaylight restart, then the default-transport-zone with the previous value of tunnel-type would be first removed and then the default-transport-zone would be created with the newer value of tunnel-type.

If def-tz-enabled is configured as false, then ITM will delete the default-transport-zone if it is present already.
When a transport-zone is added from northbound, i.e. the REST interface, a few of the transport-zone parameters are mandatory and the rest are optional.
Status | Transport zone parameters |
---|---|
Mandatory | transport-zone name, tunnel-type |
Optional | TEP IP-Address, Subnet prefix, Dpn-id, Gateway-ip, Vlan-id, Portname |
When a new transport zone is created, check for any TEPs if present in
tepsInNotHostedTransportZone
container in Oper DS for that transport zone.
If present, remove from tepsInNotHostedTransportZone
and then add them
under the transport zone and include the TEP in the tunnel mesh.
ITM will register listeners to the Node of network topology Operational DS
to receive Data Tree Change Notification (DTCN) for add/update/delete
notification in the OVSDB node so that such DTCN can be parsed and changes
in the other_config
and external_ids
columns of openvswitch table for
TEP parameters can be determined to perform TEP add/update/delete operations.
URL: restconf/operational/network-topology:network-topology/topology/ovsdb:1
Sample JSON output
{
"topology": [
{
"topology-id": "ovsdb:1",
"node": [
{
"node-id": "ovsdb://uuid/83192e6c-488a-4f34-9197-d5a88676f04f",
"ovsdb:db-version": "7.12.1",
"ovsdb:ovs-version": "2.5.0",
"ovsdb:openvswitch-external-ids": [
{
"external-id-key": "system-id",
"external-id-value": "e93a266a-9399-4881-83ff-27094a648e2b"
},
{
"external-id-key": "transport-zone",
"external-id-value": "TZA"
},
{
"external-id-key": "of-tunnel",
"external-id-value": "true"
}
],
"ovsdb:openvswitch-other-configs": [
{
"other-config-key": "provider_mappings",
"other-config-value": "physnet1:br-physnet1"
},
{
"other-config-key": "local_ip",
"other-config-value": "20.0.0.1"
}
],
"ovsdb:datapath-type-entry": [
{
"datapath-type": "ovsdb:datapath-type-system"
},
{
"datapath-type": "ovsdb:datapath-type-netdev"
}
],
"ovsdb:connection-info": {
"remote-port": 45230,
"local-ip": "10.111.222.10",
"local-port": 6640,
"remote-ip": "10.111.222.20"
}
...
...
}
]
}
]
}
Below table covers how ITM TEP parameter are mapped with OVSDB and which fields of OVSDB would provide ITM TEP parameter values.
ITM TEP parameter | OVSDB field |
---|---|
DPN-ID | ovsdb:datapath-id from bridge whose name is pre-configured with openvswitch:external_ids:br-name:value |
IP-Address | openvswitch:other_config:local_ip:value |
Transport Zone Name | openvswitch:external_ids:transport-zone:value |
of-tunnel | openvswitch:external_ids:of-tunnel:value |
NOTE: If openvswitch:external_ids:br-name
is not configured, then by default
br-int
will be considered to fetch DPN-ID which in turn would be used for
tunnel creation. Also, openvswitch:external_ids:of-tunnel
is not required to be
configured, and will default to false, as described below in Yang changes section.
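For illustration, the mapping and defaults above could be captured as follows; the plain map representation is an assumption (the real code reads the yang-modeled OVSDB node).

import java.util.Map;
import java.util.Optional;

// Illustrative sketch only: deriving ITM TEP parameters from the openvswitch
// table's other_config / external_ids columns, using the defaults described above
// (br-int when br-name is absent, of-tunnel false when absent).
final class TepParamsSketch {

    final String transportZone;   // external_ids:transport-zone (may be null)
    final String localIp;         // other_config:local_ip
    final String bridgeName;      // external_ids:br-name, defaults to br-int
    final boolean ofTunnel;       // external_ids:of-tunnel, defaults to false

    TepParamsSketch(Map<String, String> otherConfig, Map<String, String> externalIds) {
        this.localIp = otherConfig.get("local_ip");
        this.transportZone = externalIds.get("transport-zone");
        this.bridgeName = Optional.ofNullable(externalIds.get("br-name")).orElse("br-int");
        this.ofTunnel = Boolean.parseBoolean(externalIds.getOrDefault("of-tunnel", "false"));
    }
}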
getDpnId() method is added into MDSALUtil.java.
/**
* This method will be utility method to convert bridge datapath ID from
* string format to BigInteger format.
*
* @param datapathId datapath ID of bridge in string format
*
* @return the datapathId datapath ID of bridge in BigInteger format
*/
public static BigInteger getDpnId(String datapathId);
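For illustration, a possible conversion and usage is sketched below; the actual MDSALUtil implementation may differ, and the example datapath-id is arbitrary.

import java.math.BigInteger;

// Illustrative sketch only: OVS reports the bridge datapath-id as a hex string such
// as "00:00:0a:0b:0c:0d:0e:0f"; stripping the colons and parsing base-16 yields the
// numeric DPN-ID.
final class DatapathIdSketch {

    static BigInteger toDpnId(String datapathId) {
        return new BigInteger(datapathId.replace(":", ""), 16);
    }

    public static void main(String[] args) {
        // Prints the numeric DPN-ID for the example datapath-id.
        System.out.println(toDpnId("00:00:0a:0b:0c:0d:0e:0f"));
    }
}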
N.A.
Changes are needed in itm.yang
and itm-config.yang
which are described in
below sub-sections.
Following changes are done in the itm.yang file:

* tepsInNotHostedTransportZone under Oper DS will be added, for storing details of TEPs received from southbound having a transport zone which is not yet hosted from northbound.
* transport-zone would be modified for leaf zone-name and tunnel-type to make them mandatory parameters.

list transport-zone {
ordered-by user;
key zone-name;
leaf zone-name {
type string;
mandatory true;
}
leaf tunnel-type {
type identityref {
base odlif:tunnel-type-base;
}
mandatory true;
}
}
container not-hosted-transport-zones {
config false;
list tepsInNotHostedTransportZone {
key zone-name;
leaf zone-name {
type string;
}
list unknown-vteps {
key "dpn-id";
leaf dpn-id {
type uint64;
}
leaf ip-address {
type inet:ip-address;
}
leaf of-tunnel {
description "Use flow based tunnels for remote-ip";
type boolean;
default false;
}
}
}
}
The itm-config.yang file is modified to add a new container for the following parameters, which can be configured in genius-itm-config.xml on OpenDaylight controller startup:

* def-tz-enabled: this is a boolean type parameter which would create or delete default-transport-zone if it is configured true or false respectively. By default, the value is false.
* def-tz-tunnel-type: this is a string type parameter which would allow the user to configure the tunnel-type for default-transport-zone. By default, the value is vxlan.

container itm-config {
config true;
leaf def-tz-enabled {
type boolean;
default false;
}
leaf def-tz-tunnel-type {
type string;
default "vxlan";
}
}
When TEP IP other_config:local_ip
and external_ids:transport-zone
are configured
at OVS side using ovs-vsctl
commands to add TEP, then TEP parameters details are
passed to the OVSDB plugin via OVSDB connection which in turn, is updated into Network
Topology Operational DS. ITM listens for change in Network Topology Node.
When TEP parameters (like local_ip
, transport-zone
, br-name
, of-tunnel
)
are received in add notification of OVSDB Node, then TEP is added.
For TEP addition, TEP-IP and DPN-ID are mandatory. TEP-IP is obtained from the local_ip TEP parameter, and DPN-ID is fetched from the OVSDB node based on the br-name TEP parameter (by default, the DPN-ID of the br-int bridge is fetched).

TEP-IP and the fetched DPN-ID would be needed to add the TEP in the transport-zone.
Once TEP is added in config datastore, transport-zone listener of ITM would
internally take care of creating tunnels on the bridge whose DPN-ID is
passed for TEP addition. It is noted that the TEP parameter of-tunnel would be checked; if it is true, then the of-tunnel flag would be set for the vtep to be added under transport-zone or tepsInNotHostedTransportZone.
TEP would be added under a transport zone with the following conditions:

* TEPs without external_ids:transport-zone (i.e. without a transport zone) will be placed under the default-transport-zone if the def-tz-enabled parameter is configured to true in genius-itm-config.xml. This will fire a DTCN to the transport zone yang listener and ITM tunnels get built.
* TEPs with external_ids:transport-zone (i.e. with a transport zone), when the specified transport zone exists in the ITM Config DS, will be placed under the specified transport zone. This will fire a DTCN to the transport zone yang listener and the ITM tunnels get built.
* TEPs with external_ids:transport-zone (i.e. with a transport zone), when the specified transport zone does not exist in the ITM Config DS, will be placed under the tepsInNotHostedTransportZone container under ITM Oper DS.

When a transport zone which was not configured earlier is created through REST, it is checked whether any “orphan” TEPs already exist in tepsInNotHostedTransportZone for the newly created transport zone. If present, such TEPs are removed from the tepsInNotHostedTransportZone container in Oper DS, added under the newly created transport zone in ITM config DS, and then added to the tunnel mesh of that transport zone.
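The placement rules above reduce to a small decision function; the sketch below captures them with invented names (the behaviour when def-tz-enabled is false is an assumption here).

// Illustrative sketch only: where a southbound-learned TEP lands, per the
// conditions above. Names are invented for illustration.
final class TepPlacementSketch {

    enum Placement { DEFAULT_TRANSPORT_ZONE, CONFIGURED_TRANSPORT_ZONE, NOT_HOSTED, IGNORED }

    static Placement placeTep(String transportZoneFromExternalIds,
                              boolean transportZoneExistsInConfigDs,
                              boolean defTzEnabled) {
        if (transportZoneFromExternalIds == null) {
            // No external_ids:transport-zone -> default TZ, but only if enabled.
            // (Behaviour when def-tz-enabled is false is assumed, not stated above.)
            return defTzEnabled ? Placement.DEFAULT_TRANSPORT_ZONE : Placement.IGNORED;
        }
        return transportZoneExistsInConfigDs
                ? Placement.CONFIGURED_TRANSPORT_ZONE   // tunnels built via DTCN
                : Placement.NOT_HOSTED;                 // parked in tepsInNotHostedTransportZone
    }
}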
When other_config:local_ip is updated at the OVS side, such a change will be notified to the OVSDB plugin via the OVSDB protocol, which in turn is reflected in the Network topology Operational DS. ITM gets a DTCN for the Node update. Parsing the Node update notification for the other_config:local_ip parameter in the old and new node can determine the change in local_ip for the TEP. If it is updated, then the TEP with the old local_ip is deleted from the transport zone and the TEP with the new local_ip is added into the transport zone. This will fire a DTCN to the transport zone yang listener and the ITM tunnels get updated.

When external_ids:transport-zone is updated at the OVS side, such a change will be notified to the OVSDB plugin via the OVSDB protocol, which in turn is reflected in the Network topology Operational DS. ITM gets a DTCN for the Node update. Parsing the Node update notification for the external_ids:transport-zone parameter in the old and new node can determine the change in the transport zone for the TEP. If it is updated, then the TEP is deleted from the old transport zone and added into the new transport zone. This will fire a DTCN to the transport zone yang listener and the ITM tunnels get updated.

When an openvswitch:other_config:local_ip parameter gets deleted through the ovs-vsctl command, the network topology Operational DS gets updated via an OVSDB update notification. ITM, which has registered for the network-topology DTCNs, gets notified and deletes the TEP from the transport zone or tepsInNotHostedTransportZone stored in the ITM config/Oper DS, based on the external_ids:transport-zone parameter configured for the TEP:

* If external_ids:transport-zone is configured and the corresponding transport zone exists in the Configuration DS, then the TEP is removed from the transport zone. This will fire a DTCN to the transport zone yang listener and the ITM tunnels of that TEP get deleted.
* If external_ids:transport-zone is configured and the corresponding transport zone does not exist in the Configuration DS, then check if the TEP exists in the tepsInNotHostedTransportZone container in Oper DS; if present, remove the TEP from tepsInNotHostedTransportZone.
* If external_ids:transport-zone is not configured, then check if the TEP exists in the default transport zone in the Configuration DS, if and only if the def-tz-enabled parameter is configured to true in genius-itm-config.xml. In case the TEP is present, remove it from the default-transport-zone. This will fire a DTCN to the transport zone yang listener and the ITM tunnels of that TEP get deleted.
zone yang listener and ITM tunnels of that TEP get deleted.Following are the configuation changes and impact in the OpenDaylight.
genius-itm-config.xml
configuation file is introduced newly into ITM
in which following parameters are added:def-tz-enabled
: this is boolean type parameter which would create or delete
default-transport-zone
if it is configured true or false respectively. Default
value is false
.def-tz-tunnel-type
: this is string type parameter which would allow user to
configure tunnel-type for default-transport-zone
. Default value is vxlan
. <itm-config xmlns="urn:opendaylight:genius:itm:config">
<def-tz-enabled>false</def-tz-enabled>
<def-tz-tunnel-type>vxlan</def-tz-tunnel-type>
</itm-config>
Runtime changes to the parameters of this config file would not be taken into consideration.
Any clustering requirements are already addressed in ITM, no new requirements added as part of this feature.
N.A.
N.A.
This feature would not introduce any significant scale and performance issues in the OpenDaylight.
OpenDaylight Carbon
The subnet prefix 255.255.255.255/32 under transport-zone is used to store the TEPs listened from southbound.

N.A.
This feature doesn’t add any new karaf feature. This feature would be available in the already existing odl-genius karaf feature.
As per this feature, the TEP addition is based on the southbound configuration, and the respective transport zone should be created on the controller to form the tunnel for the same. The REST API to create the transport zone with mandatory parameters is shown below.
URL: restconf/config/itm:transport-zones/
Sample JSON data
{
"transport-zone": [
{
"zone-name": "TZA",
"tunnel-type": "odl-interface:tunnel-type-vxlan"
}
]
}
To retrieve the TEP configurations from all the transport zones:
URL: restconf/config/itm:transport-zones/
Sample JSON output
{
"transport-zones": {
"transport-zone": [
{
"zone-name": "default-transport-zone",
"tunnel-type": "odl-interface:tunnel-type-vxlan"
},
{
"zone-name": "TZA",
"tunnel-type": "odl-interface:tunnel-type-vxlan",
"subnets": [
{
"prefix": "255.255.255.255/32",
"vteps": [
{
"dpn-id": 1,
"portname": "",
"ip-address": "10.0.0.1"
},
{
"dpn-id": 2,
"portname": "",
"ip-address": "10.0.0.2"
}
],
"gateway-ip": "0.0.0.0",
"vlan-id": 0
}
]
}
]
}
}
No CLI is added into OpenDaylight for this feature.
ITM TEP parameters can be added/removed to/from the OVS switch using
the ovs-vsctl
command:
DESCRIPTION
ovs-vsctl
Command for querying and configuring ovs-vswitchd by providing a
high-level interface to its configuration database.
Here, this command usage is shown to store TEP parameters into
``openvswitch`` table of OVS database.
SYNTAX
ovs-vsctl set O . [column]:[key]=[value]
* To set TEP params on OVS table:
ovs-vsctl set O . other_config:local_ip=192.168.56.102
ovs-vsctl set O . external_ids:transport-zone=TZA
ovs-vsctl set O . external_ids:br-name=br0
ovs-vsctl set O . external_ids:of-tunnel=true
* To clear TEP params in one go by clearing external_ids and other_config
column from OVS table:
ovs-vsctl clear O . external_ids
ovs-vsctl clear O . other_config
* To clear a specific TEP parameter from the external_ids or other_config column
in OVS table:
ovs-vsctl remove O . other_config local_ip
ovs-vsctl remove O . external_ids transport-zone
* To check TEP params are set or cleared on OVS table:
ovsdb-client dump -f list Open_vSwitch
Primary assignee:
Other contributors:
* default-transport-zone creation during bootup and configuring tunnel-type for the default transport zone.
* Handling of the def-tz-enabled configurable parameter during OpenDaylight restart.
* Handling of the def-tz-tunnel-type configurable parameter during OpenDaylight restart.
* tepsInNotHostedTransportZone list in the operational datastore to store TEPs received with transport-zone not configured from northbound.
* Movement of TEPs from the tepsInNotHostedTransportZone list to the transport-zone configured from REST.
* Movement of TEPs from the tepsInNotHostedTransportZone list to the transport-zone.

This feature should be used when the configuration flag use-transport-zone in netvirt-neutronvpn-config.xml for automatic tunnel configuration in transport-zone is disabled in Netvirt’s NeutronVpn; otherwise the netvirt feature of dynamic tunnel creation may duplicate tunnels for TEPs in the tunnel mesh.
Appropriate UTs will be added for the new code coming in, once UT framework is in place.
Integration tests will be added, once IT framework for ITM is ready.
Following test cases will need to be added/expanded in Genius CSIT:

* Verify that default-transport-zone is not created when the def-tz-enabled flag is false.
* Verify TEP addition to default-transport-zone on the switch when the def-tz-enabled flag is true.
* Verify that default-transport-zone is deleted when the def-tz-enabled flag is changed from true to false during OpenDaylight controller restart.
* Verify TEP movement from tepsInNotHostedTransportZone to transport-zone when the transport-zone is configured from northbound.
* Verify that local_ip dynamic update is possible and the corresponding tunnels are also updated.

This will require changes to User Guide and Developer Guide.
User Guide will need to add information for the below details:

* Usage of the def-tz-enabled flag and def-tz-tunnel-type to create/delete default-transport-zone and set its tunnel-type respectively.
* Configuring def-tz-enabled as true if TEPs need to be added into default-transport-zone from northbound.

Developer Guide will need to capture how to use changes in ITM to create tunnels automatically for TEPs configured from southbound.
Table of Contents
https://git.opendaylight.org/gerrit/#/q/topic:vxlan-tunnel-aggregation
The purpose of this feature is to enable resiliency and load balancing of VxLAN encapsulated traffic between pair of OVS nodes.
Additionally, the feature will provide infrastructure to support more complex use cases such as policy-based path selection. The exact implementation of policy-based path selection is out of the scope of this document and will be described in a different spec [2].
The current ITM implementation enables creation of a single VxLAN tunnel between each pair of hypervisors.
If the hypervisor is connected to the network using multiple links with different capacity or connected to different L2 networks in different subnets, it is not possible to utilize all the available network resources to increase the throughput of traffic to remote hypervisors.
In addition, link failure of the network card forwarding the VxLAN traffic will result in complete traffic loss to/from the remote hypervisor if the network card is not part of a bonded interface.
The ITM will continue to create tunnels based on transport-zone configuration similarly to the current implementation -
TEP IP per DPN per transport zone.
When ITM creates TEP interfaces, in addition to creating the actual tunnels, it will create logical tunnel interface for
each pair of DPNs in the ietf-interface
config data-store representing the tunnel aggregation group between the DPNs.
The logical tunnel interface will be created only when the first tunnel interface on each OVS is created. In addition,
this feature will be guarded by a global configuration option in the ITM and will be turned off by default.
Only when the feature is enabled, the logical tunnel interfaces will be created.
Creation of transport-zones with multiple IPs per DPN is out of the scope of this document and will be described in [2]. However, the limitation of configuring no more than one TEP ip per transport zone will remain.
The logical tunnel will reference all member tunnel interfaces in the group using interface-child-info
model.
In addition, it would be possible to add weight to each member of the group to support unequal load-sharing of traffic.
The proposed feature depends on egress tunnel service binding functionality detailed in [3].
When the logical tunnel interface is created, a default egress service would be bound to it. The egress service will
create an OF select group based on the actual list of tunnel members in the logical group.
Each tunnel member can be assigned a weight field that will be applied on its corresponding bucket in the OF select group. If a weight was not defined, the bucket weight will be configured with a default value of 1, resulting in uniform distribution when no bucket has a configured weight.
Each bucket in the select group will route the egress traffic to one of the tunnel members in the group by
loading the lport-tag of the tunnel member interface to NXM register6
.
Logical tunnel egress service pipeline example:
cookie=0x6900000, duration=0.802s, table=220, n_packets=0, n_bytes=0, priority=6,reg6=0x500
actions=load:0xe000500->NXM_NX_REG6[],write_metadata:0xe000500000000000/0xfffffffffffffffe,group:80000
cookie=0x8000007, duration=0.546s, table=220, n_packets=0, n_bytes=0, priority=7,reg6=0x600 actions=output:3
cookie=0x8000007, duration=0.546s, table=220, n_packets=0, n_bytes=0, priority=7,reg6=0x700 actions=output:4
cookie=0x8000007, duration=0.546s, table=220, n_packets=0, n_bytes=0, priority=7,reg6=0x800 actions=output:5
group_id=800000,type=select,
bucket=weight:50,watch_port=3,actions=load:0x600->NXM_NX_REG6[],resubmit(,220),
bucket=weight:25,watch_port=4,actions=load:0x700->NXM_NX_REG6[],resubmit(,220),
bucket=weight:25,watch_port=5,actions=load:0x800->NXM_NX_REG6[],resubmit(,220)
Each bucket of the LB group will set the watch_port
property to be the tunnel member OF port number.
This will allow the OVS to monitor the bucket liveness and route egress traffic only to live buckets.
BFD monitoring is required to probe the tunnel state and update the OF select group accordingly. Using OF tunnels [4] or turning off BFD monitoring will not allow the logical group service to respond to tunnel state changes.
OF select group for logical tunnel can contain a mix of IPv4 and IPv6 tunnels, depending on the transport-zone configuration.
A new pool will be allocated to generate OF group ids of the default select group and the policy groups described in [2].
The pool name VXLAN_GROUP_POOL
will allocate ids from the id-manager in the range 300,000-310,000.
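A minimal sketch of the id-manager interaction implied above; the interface is a stand-in for the yang-generated id-manager RPC service, and only the pool name and range come from this spec.

// Illustrative sketch only: allocating the default select-group id for a logical
// tunnel from the proposed VXLAN_GROUP_POOL.
final class VxlanGroupPoolSketch {

    static final String POOL_NAME = "VXLAN_GROUP_POOL";
    static final long LOW = 300_000L;
    static final long HIGH = 310_000L;

    interface IdManagerLike {
        void createIdPool(String poolName, long low, long high);
        long allocateId(String poolName, String idKey);
    }

    static long groupIdForLogicalTunnel(IdManagerLike idManager, String logicalTunnelName) {
        idManager.createIdPool(POOL_NAME, LOW, HIGH);          // assumed idempotent
        return idManager.allocateId(POOL_NAME, logicalTunnelName);
    }
}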
ITM RPC calls to get the internal tunnel interface between source and destination DPNs will return the logical tunnel
interface group name if one exists; otherwise the lower layer tunnel will be returned.
The logical tunnel group is an ietf-interface
thus it has an allocated lport-tag.
RPC call to getEgressActionsForInterface
for the logical tunnel will load register6
with its corresponding
lport-tag and resubmit the traffic to the egress dispatcher table.
The state of the logical tunnel group is affected by the states of the group members. If at least one of the
tunnels is in oper-status
UP, the logical group is considered UP.
If the logical tunnel was set as admin-status
DOWN, all the tunnel members will be set accordingly.
Ingress traffic from VxLAN tunnels would not be bounded to any logical group service as part of this feature and it will continue to use the same workflow while traversing the ingress services pipeline.
Other applications would be able to utilize this infrastructure to introduce new services over logical tunnel group interface e.g. policy-based path selection. These services will take precedence over the default egress service for logical tunnel.
L3 models map each combination of VRF id and destination prefix to a list of nexthop ip addresses.
When calling the getInternalOrExternalInterfaceName RPC from the FIB manager, if the DPN id of the remote nexthop
is known it will be sent along with the nexthop ip. If a logical tunnel exists between the source and destination DPNs,
it will be set as the lport-tag of register6 in the remote nexthop actions.
For the flows below it is assumed that a logical tunnel group was configured for both ingress and egress DPNs.
The logical tunnel group is composed of { tunnel1, tunnel2 } and bound to the default logical tunnel egress service.
No pipeline changes required
Remote next hop group in the FIB table references the logical tunnel group.
The default logical group service uses OF select group to load balance traffic between the tunnels.
l3vpn service: set vpn-id=router-id
=>match: vpn-id=router-id,dst-mac=router-interface-mac
=>match: vpn-id=router-id,dst-ip=vm2-ip set dst-mac=vm2-mac tun-id=vm2-label reg6=logical-tun-lport-tag
=>match: reg6=logical-tun-lport-tag
=>set reg6=tun1-lport-tag
=>match: reg6=tun1-lport-tag
output to tunnel1
No pipeline changes required
match:tun-id=vm2-label
=>set dst-mac=vm2-mac,reg6=vm2-lport-tag
=>match: reg6=vm2-lport-tag
output to VM 2
NAPT group references the logical tunnel group.
l3vpn service: set vpn-id=router-id
=>match: vpn-id=router-id,dst-mac=router-interface-mac
=>match: vpn-id=router-id
=>match: vpn-id=router-id
=>set tun-id=router-id reg6=logical-tun-lport-tag
=>match: reg6=logical-tun-lport-tag
=>set reg6=tun1-lport-tag
=>match: reg6=tun1-lport-tag
output to tunnel1
No explicit pipeline changes required
match:tun-id=router-id
=>set vpn-id=router-id, punt-to-controller
ELAN DMAC table references the logical tunnel group
l3vpn service: set vpn-id=router-id
=>l2vpn service: set elan-tag=vxlan-net-tag
=>match: elan-tag=vxlan-net-tag,src-mac=vm1-mac
=>match: elan-tag=vxlan-net-tag,dst-mac=vm2-mac set tun-id=vm2-lport-tag reg6=logical-tun-lport-tag
=>match: reg6=logical-tun-lport-tag
=>set reg6=tun2-lport-tag
=>match: reg6=tun2-lport-tag
output to tunnel2
No explicit pipeline changes required
match:tun-id=vm2-lport-tag set reg6=vm2-lport-tag
=>match: reg6=vm2-lport-tag
output to VM 2
ELAN broadcast group references the logical tunnel group.
l3vpn service: set vpn-id=router-id
=>l2vpn service: set elan-tag=vxlan-net-tag
=>match: elan-tag=vxlan-net-tag,src-mac=vm1-mac
=>match: elan-tag=vxlan-net-tag
=>goto_group=elan-local-group, set tun-id=vxlan-net-tag reg6=logical-tun-lport-tag
=>match: reg6=logical-tun-lport-tag
=>set reg6=tun1-lport-tag
=>match: reg6=tun1-lport-tag
output to tunnel1
No explicit pipeline changes required
match:tun-id=vxlan-net-tag
=>set tun-id=vm2-lport-tag
=>match: tun-id=vm2-lport-tag set reg6=vm2-lport-tag
=>match: reg6=vm2-lport-tag
output to VM 2
The following changes would be required to support configuration of logical tunnel group:
Add a new tunnel type to represent the logical group in odl-interface.yang.
identity tunnel-type-logical-group {
description "Aggregation of multiple tunnel endpoints between two DPNs";
base tunnel-type-base;
}
Each tunnel member in the logical group can have an assigned weight as part of tunnel-optional-params
in odl-interface:if-tunnel
augment to support unequal load sharing.
grouping tunnel-optional-params {
leaf tunnel-source-ip-flow {
type boolean;
default false;
}
leaf tunnel-remote-ip-flow {
type boolean;
default false;
}
leaf weight {
type uint16;
}
...
}
Each tunnel endpoint in itm:transport-zones/transport-zone can be configured with an optional weight parameter.
Weight configuration will be propagated to tunnel-optional-params.
list vteps {
key "dpn-id portname";
leaf dpn-id {
type uint64;
}
leaf portname {
type string;
}
leaf ip-address {
type inet:ip-address;
}
leaf weight {
type uint16;
default 1;
}
leaf option-of-tunnel {
type boolean;
default false;
}
}
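For illustration, a vteps entry carrying the optional weight could look like the fragment below (the values reuse the sample transport-zone data later in this section; a weight of 3 would give this member three times the default share of egress traffic):
{
    "vteps": [
        {
            "dpn-id": 273348439543366,
            "portname": "tunnel_port",
            "ip-address": "20.2.1.2",
            "weight": 3,
            "option-of-tunnel": false
        }
    ]
}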
The internal tunnel will be enhanced to contain multiple tunnel interfaces:
container tunnel-list {
list internal-tunnel {
key "source-DPN destination-DPN transport-type";
leaf source-DPN {
type uint64;
}
leaf destination-DPN {
type uint64;
}
leaf transport-type {
type identityref {
base odlif:tunnel-type-base;
}
}
leaf-list tunnel-interface-name {
type string;
}
}
}
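As a sketch, an internal-tunnel entry aggregating several tunnel interfaces between a pair of DPNs would then carry multiple names in tunnel-interface-name; the interface names below are illustrative:
{
    "internal-tunnel": [
        {
            "source-DPN": 273348439543366,
            "destination-DPN": 110400932149974,
            "transport-type": "odl-interface:tunnel-type-vxlan",
            "tunnel-interface-name": ["tun-aaa", "tun-bbb", "tun-ccc"]
        }
    ]
}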
The RPC call itm-rpc:get-internal-or-external-interface-name will be enhanced to contain the destination dp-id as an optional input parameter:
rpc get-internal-or-external-interface-name {
input {
leaf source-dpid {
type uint64;
}
leaf destination-dpid {
type uint64;
}
leaf destination-ip {
type inet:ip-address;
}
leaf tunnel-type {
type identityref {
base odlif:tunnel-type-base;
}
}
}
output {
leaf interface-name {
type string;
}
}
}
Creation of the logical tunnel group will be guarded by configuration in itm-config per tunnel-type:
container itm-config {
config true;
leaf def-tz-enabled {
type boolean;
default false;
}
leaf def-tz-tunnel-type {
type string;
default "vxlan";
}
list tunnel-aggregation {
key "tunnel-type";
leaf tunnel-type {
type string;
}
leaf enabled {
type boolean;
default false;
}
}
}
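A minimal sketch of enabling aggregation for VxLAN tunnels, assuming the container is exposed over RESTCONF as itm-config:itm-config:
URL: restconf/config/itm-config:itm-config
{
    "itm-config": {
        "tunnel-aggregation": [
            {
                "tunnel-type": "vxlan",
                "enabled": true
            }
        ]
    }
}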
This feature is expected to increase the datapath throughput by utilizing all available network resources.
Carbon
There are certain use cases where it would be possible to add the network cards to a separate bridge with LACP enabled and patch it to br-int but this alternative was rejected since it imposes limitations on the type of links and the overall capacity.
This feature doesn’t add any new karaf feature.
URL: restconf/config/itm:transport-zones/
Sample JSON data
The following REST will create 3 bi-directional tunnels between two OVS nodes.
{
"transport-zone": [
{
"zone-name": "underlay-net1",
"subnets": [
{
"prefix": "0.0.0.0/0",
"vteps": [
{
"dpn-id": 273348439543366,
"portname": "tunnel_port",
"ip-address": "20.2.1.2",
"option-of-tunnel": false
},
{
"dpn-id": 110400932149974,
"portname": "tunnel_port",
"ip-address": "20.2.1.3",
"option-of-tunnel": false
}
],
"gateway-ip": "0.0.0.0",
"vlan-id": 0
}
],
"tunnel-type": "odl-interface:tunnel-type-vxlan"
},
{
"zone-name": "underlay-net2",
"subnets": [
{
"prefix": "0.0.0.0/0",
"vteps": [
{
"dpn-id": 273348439543366,
"portname": "tunnel_port",
"ip-address": "30.3.1.2",
"option-of-tunnel": false
},
{
"dpn-id": 110400932149974,
"portname": "tunnel_port",
"ip-address": "30.3.1.3",
"option-of-tunnel": false
}
],
"gateway-ip": "0.0.0.0",
"vlan-id": 0
}
],
"tunnel-type": "odl-interface:tunnel-type-vxlan"
},
{
"zone-name": "underlay-net3",
"subnets": [
{
"prefix": "0.0.0.0/0",
"vteps": [
{
"dpn-id": 273348439543366,
"portname": "tunnel_port",
"ip-address": "40.4.1.2",
"option-of-tunnel": false
},
{
"dpn-id": 110400932149974,
"portname": "tunnel_port",
"ip-address": "40.4.1.3",
"option-of-tunnel": false
}
],
"gateway-ip": "0.0.0.0",
"vlan-id": 0
}
],
"tunnel-type": "odl-interface:tunnel-type-vxlan"
}
]
}
URL: restconf/operations/itm-rpc:get-tunnel-interface-name
{
"input": {
"source-dpid": "40146672641571",
"destination-dpid": "102093507130250",
"tunnel-type": "odl-interface:tunnel-type-vxlan"
}
}
URL: restconf/operations/itm-rpc:get-internal-or-external-interface-name
{
"input": {
"source-dpid": "40146672641571",
"destination-dpid": "102093507130250",
"tunnel-type": "odl-interface:tunnel-type-vxlan"
}
}
Trello card: https://trello.com/c/Q7LgiHH7/92-multiple-vxlan-endpoints-for-compute
- Create the logical tunnel group ietf-interface if more than one tunnel exists between two DPNs.
- Update the interface-child-info model with the list of individual tunnel members.
- Update getTunnelInterfaceName and getInternalOrExternalInterfaceName to prefer the logical tunnel group over the tunnel members.
None
Table of Contents
https://git.opendaylight.org/gerrit/#/q/topic:of-tunnels
OF Tunnels feature adds support for flow based tunnels to allow scalable overlay tunnels.
Today when tunnel interfaces are created, InterFaceManager [IFM] creates one OVS port for each tunnel interface i.e. source-destination pair. For N devices in a TransportZone this translates to N*(N-1) tunnel ports created across all devices and N-1 ports in each device. This has obvious scale limitations.
This feature will support following use cases:
Following use cases will not be supported:
OVS requires remote_ip, local_ip, type and key to be unique for a tunnel interface. Currently we don't support multiple local_ip values and key is always set to flow, so remote_ip and type are the only unique identifiers. remote_ip=flow is a superset of remote_ip=<fixed-ip>, and we can't have two interfaces with all other fields the same except this one.
OVS 2.0.0 onwards allows configuration of flow based tunnels through the interface option:remote_ip=flow. Currently this field is set to the IP address of the destination endpoint. remote_ip=flow means the tunnel destination IP will be set by an OpenFlow action. This allows us to add different actions for different destinations using a single OVS/OF port.
This change will add optional parameters to ITM and IFM YANG files to allow OF Tunnels. Based on this option, ITM will configure IFM which in turn will create tunnel ports in OVSDB.
OVSDB Plugin provides following field in Interface to configure options:
list options {
description "Port/Interface related optional input values";
key "option";
leaf option {
description "Option name";
type string;
}
leaf value {
description "Option value";
type string;
}
}
For flow based tunnels we will set the option name remote_ip to the value flow.
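Expressed against the OVSDB Plugin options list shown above, the resulting entry for a flow based tunnel would look roughly like this (a sketch only; the surrounding interface structure is omitted):
{
    "options": [
        {
            "option": "remote_ip",
            "value": "flow"
        }
    ]
}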
Following new actions will be added to mdsalutil/ActionType.java
Following new matches will be added to mdsalutil/NxMatchFieldType.java
This change adds a new match in Table0. Today we match on in_port to determine which tunnel interface a packet came in on; since currently each tunnel maps to a source-destination pair, it tells us the source device. For interfaces configured to use flow based tunnels this will add an additional match on tun_src_ip. So, in_port+tunnel_src_ip will tell us which tunnel interface a packet belongs to.
When services call getEgressActions(), they will get one additional action, set_tunnel_dest_ip, before the output:ofport action.
Changes will be needed in itm.yang
and odl-interface.yang
to allow
configuring a tunnel as flow based or not.
A new parameter option-of-tunnel
will be added to list-vteps
list vteps {
key "dpn-id portname";
leaf dpn-id {
type uint64;
}
leaf portname {
type string;
}
leaf ip-address {
type inet:ip-address;
}
leaf option-of-tunnel {
type boolean;
default false;
}
}
The same parameter will also be added to tunnel-end-points in itm-state.yang. This will help eliminate the need to retrieve information from TransportZones when configuring tunnel interfaces.
list tunnel-end-points {
ordered-by user;
key "portname VLAN-ID ip-address tunnel-type";
/* Multiple tunnels on the same physical port but on different VLAN can be supported */
leaf portname {
type string;
}
...
...
leaf option-of-tunnel {
type boolean;
default false;
}
}
This will allow OF Tunnels to be set on a per-VTEP basis, so in a transport-zone we can have some VTEPs (devices) that use OF Tunnels and others that don't. The default of false means it will not impact existing behavior and will need to be explicitly configured. Going forward we can choose to change the default to true.
We'll add a new tunnel-optional-params grouping and add it to the if-tunnel augment:
grouping tunnel-optional-params {
leaf tunnel-source-ip-flow {
type boolean;
default false;
}
leaf tunnel-remote-ip-flow {
type boolean;
default false;
}
list tunnel-options {
key "tunnel-option";
leaf tunnel-option {
description "Tunnel Option name";
type string;
}
leaf value {
description "Option value";
type string;
}
}
}
The list tunnel-options is a list of key-value pairs of strings, similar to the options in the OVSDB Plugin. These are not needed for OF Tunnels but are being added to allow the user to configure any other interface options that OVS supports. The aim is to enable developers and users to try out newer options supported by OVS without needing to add explicit support for them. Note that there is no counterpart for this option in itm.yang. Any options that we want to explicitly support will be added as separate options. This will allow us to do better validations for options that are needed for our specific use cases.
augment "/if:interfaces/if:interface" {
ext:augment-identifier "if-tunnel";
when "if:type = 'ianaift:tunnel'";
...
...
uses tunnel-optional-params;
uses monitor-params;
}
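For example, an arbitrary OVS interface option could be passed through tunnel-options as a fragment of the if-tunnel configuration; the option shown below (tos) is only an illustration and is handed to OVS as-is:
"odl-interface:tunnel-options": [
    {
        "tunnel-option": "tos",
        "value": "inherit"
    }
]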
- Add option-of-tunnel:true for the TEP being added.
- If option-of-tunnel:true, set tunnel-remote-ip:true for the tunnel interface.
- If option-of-tunnel:true and this is the first tunnel on this device, set option:remote_ip=flow when creating the tunnel interface in OVSDB. Else, set option:remote_ip=<destination-ip>.
- If tunnel-remote-ip:true and this is the last tunnel on this device, delete the tunnel port in OVSDB. Else, do nothing.
- If tunnel-remote-ip:false, follow existing logic.
This change doesn't add or modify any configuration parameters.
Any clustering requirements are already addressed in ITM and IFM, no new requirements added as part of this feature.
This solution will help improve scale numbers by reducing the number of interfaces created on devices as well as the number of interfaces and ports present in inventory and network-topology.
Carbon. Boron-SR3.
BFD monitoring will not work when OF Tunnels are used. Today BFD monitoring in
OVS relies on destination_ip configured in remote_ip when creating tunnel port
to determine target IP for BFD packets. If we use flow
it won’t know where
to send BFD packets. Unless OVS allows adding destination IP for BFD monitoring
on such tunnels, monitoring cannot be enabled.
LLDP/ARP based monitoring was considered for OF tunnels to overcome lack of BFD monitoring but was rejected because LLDP/ARP based monitoring doesn’t scale well. Since driving requirement for this feature is scale setups, it didn’t make sense to use an unscalable solution for monitoring.
XML/CFG file based global knob to enable OF tunnels for all tunnel interfaces was rejected due to inflexible nature of such a solution. Current solution allows a more fine grained and device based configuration at runtime. Also, wanted to avoid adding yet another global configuration knob.
This feature doesn’t add any new karaf feature.
For most users, TEP addition is the only configuration needed to create tunnels using Genius. The REST API to add TEPs with OF Tunnels is the same as earlier, with one small addition.
URL: restconf/config/itm:transport-zones/
Sample JSON data
{
"transport-zone": [
{
"zone-name": "TZA",
"subnets": [
{
"prefix": "192.168.56.0/24",
"vlan-id": 0,
"vteps": [
{
"dpn-id": "1",
"portname": "eth2",
"ip-address": "192.168.56.101",
"option-of-tunnel":"true"
}
],
"gateway-ip": "0.0.0.0"
}
],
"tunnel-type": "odl-interface:tunnel-type-vxlan"
}
]
}
This use case is mainly for those who want to write applications using Genius and/or want to create individual tunnel interfaces. Note that this is a simpler, easier way to create tunnels without needing to delve into how the OVSDB Plugin creates tunnels.
Refer to the Genius User Guide for more details on this.
URL: restconf/config/ietf-interfaces:interfaces
Sample JSON data
{
"interfaces": {
"interface": [
{
"name": "vxlan_tunnel",
"type": "iana-if-type:tunnel",
"odl-interface:tunnel-interface-type": "odl-interface:tunnel-type-vxlan",
"odl-interface:datapath-node-identifier": "1",
"odl-interface:tunnel-source": "192.168.56.101",
"odl-interface:tunnel-destination": "192.168.56.102",
"odl-interface:tunnel-remote-ip-flow": "true",
"odl-interface:monitor-enabled": false,
"odl-interface:monitor-interval": 10000,
"enabled": true
}
]
}
}
A new boolean option, remoteIpFlow
will be added to tep:add
command.
DESCRIPTION
tep:add
adding a tunnel end point
SYNTAX
tep:add [dpnId] [portNo] [vlanId] [ipAddress] [subnetMask] [gatewayIp] [transportZone]
[remoteIpFlow]
ARGUMENTS
dpnId
DPN-ID
portNo
port-name
vlanId
vlan-id
ipAddress
ip-address
subnetMask
subnet-Mask
gatewayIp
gateway-ip
transportZone
transport_zone
remoteIpFlow
Use flow for remote ip
- Add the set_tunnel_dest_ip action to the actions returned in getEgressActions() for OF Tunnels.
- Add the match on tun_src_ip in Table0 for OF Tunnels.
This doesn't add any new dependencies. It requires a minimum of OVS 2.0.0, which is already lower than required by some other features.
This change is backwards compatible, so no impact on dependent projects. Projects can choose to start using this when they want. However, there is a known limitation with monitoring, refer Limitations section for details.
Following projects currently depend on Genius:
Appropriate UTs will be added for the new code coming in once framework is in place.
Integration tests will be added once IT framework for ITM and IFM is ready.
CSIT already has test cases for tunnels which test with non OF Tunnels. Similar test cases will be added for OF Tunnels. Alternatively, some of the existing test cases that use multiple teps can be tweaked to use OF Tunnels for one of them.
Following test cases will need to be added/expanded in Genius CSIT:
This will require changes to User Guide and Developer Guide.
User Guide will need to add information on how to add TEPs with flow based tunnels.
Developer Guide will need to capture how to use changes in IFM to create individual tunnel interfaces.
Table of Contents
QoS patches: https://git.opendaylight.org/gerrit/#/q/topic:qos-shaping
The current Boron implementation provides support for ingress rate limiting configuration of OVS. The Carbon release will add egress traffic shaping to the QoS feature set. (Note: the direction of traffic flow (ingress, egress) is from the perspective of the switch.)
OVS supports traffic shaping for traffic that egresses from a switch. To utilize this functionality, the Genius implementation should be able to create a 'set queue' output action upon connection of a new OpenFlow node.
Unimgr or Neutron VPN creates an ietf vlan interface for each port connected to a particular service. Ovsdb provides the ability to create a QoS entry and a mapped Queue with egress rate limits for the lower-level port. Such a queue should be created on the parent physical interface of the vlan or trunk member port if the service has limits defined. The ovsdb southbound provides an interface for creation of OVS QoS and Queue entries. This functionality may be utilized by the netvirt qos service. Below is the dump from ovsdb with queues created for one of the ports.
Port table
_uuid : a6cf4ca9-b15c-4090-aefe-23af2d5ce4f2
name : "ens5"
qos : 9779ce41-4347-4383-b308-75f46d6a258c
QoS table
_uuid : 9779ce41-4347-4383-b308-75f46d6a258c
other_config : {max-rate="50000"}
queues : {1=3cc34bb7-7df8-4538-9fd7-4a6c6c467c69}
type : linux-htb
Queue table
_uuid : 3cc34bb7-7df8-4538-9fd7-4a6c6c467c69
dscp : []
other_config : {max-rate="50000", min-rate="5000"}
The creation of queues is out of scope of this document. The definition of the vlan or trunk member port will be augmented with the relevant queue reference and number if the queue was created successfully. That will allow creation of the OpenFlow 'set_queue' output action during service binding.
A new 'set_queue' action will be supported in the Egress Dispatcher table:
Table | Match | Action |
---|---|---|
Egress Dispatcher [220] | no changes | Set queue id (optional) and output to port |
A new augment “ovs-qos” is added to if:interface in odl-interface.yang
/* vlan port to qos queue */
augment "/if:interfaces/if:interface" {
ext:augment-identifier "ovs-qos";
when "if:type = 'ianaift:l2vlan'";
leaf ovs-qos-ref {
type instance-identifier;
description
"represents whether service port has associated qos. A reference to a ovsdb QoS entry";
}
leaf service-queue-number {
type uint32;
description
"specific queue number within the list of queues in the qos entry";
}
}
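As a sketch of how this augment could be used, an l2vlan service port might carry the QoS reference and queue number as below. The odl-interface prefix and the instance-identifier string are assumptions for illustration only; the reference would point at the OVSDB QoS entry shown in the tables above:
{
    "interface": [
        {
            "name": "svc-port-vlan",
            "type": "iana-if-type:l2vlan",
            "odl-interface:ovs-qos-ref": "/network-topology:network-topology/...",
            "odl-interface:service-queue-number": 1,
            "enabled": true
        }
    ]
}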
An additional OpenFlow action will be performed on a subset of the packets. Egress packets will be processed via linux-htb if the service is configured accordingly.
Carbon
A unified REST API for ovsdb port adjustment could be created in a future release. The QoS egress queues and ingress rate limiting should be a part of this API.
Usage
The user will configure the unimgr service with egress rate limits. That will trigger the process described above.
Minimum OVS version 1.8.0 is required.
[1] OpenDaylight Documentation Guide <http://docs.opendaylight.org/en/latest/documentation.html>
[2] https://specs.openstack.org/openstack/nova-specs/specs/kilo/template.html
Table of Contents
https://git.opendaylight.org/gerrit/#/q/topic:service-binding-on-tunnels
Service Binding On Tunnels Feature enables applications to bind multiple services on an ingress/egress tunnel.
Currently GENIUS does not provide a generic mechanism to support binding services on all interfaces. The ingress service binding pipeline is different for l2vlan interfaces and tunnel interfaces. Similarly, egress service binding is only supported for l2vlan interfaces.
Today when ingress services are bound on a tunnel, the highest priority service gets
bound in INTERFACE INGRESS TABLE(0)
itself, and remaining service entries get
populated in LPORT DISPATCHER TABLE(17)
, which is not in alignment with the service
binding logic for VM ports. As part of this feature, we enable ingress/egress service
binding support for tunnels in the same way as for VM interfaces. This feature also enables
service-binding based on a tunnel-type which is basically meant for optimizing the number
of flow entries in dispatcher tables.
This feature will support following use cases:
LPORT DISPATCHER TABLE(17).
Following use cases will not be supported:
The proposed change extends the current l2vlan service binding functionality to tunnel
interfaces. With this feature, multiple applications can bind their services on the same
tunnel interface, and traffic will be processed on an application priority basis.
Applications are given the flexibility to provide service specific actions while they
bind their services. Normally service binding actions include
go-to-service-pipeline-entry-table. Packets will enter a particular service based
on the service priority, and if the packet is not consumed by the service,
it is the application’s responsibility to resubmit the packet back to the egress/ingress
dispatcher table
for further processing by next priority service. Egress Dispatcher
Table will have a default service priority entry per tunnel interface to egress the
packet on the tunnel port.So, if there are no egress services bound on a tunnel interface,
this default entry will take care of taking the packet out of the switch.
The feature also enables service binding based on tunnel type. This way the number of entries in the Dispatcher Tables can be optimized if all the packets entering on a tunnel of a particular type need to be handled in the same way.
There is a pipeline change introduced as part of this feature for tunnel egress as well as ingress, and is captured in genius pipeline document patch [2].
With this feature, all traffic from INTERFACE_INGRESS_TABLE(0) will be dispatched to LPORT_DISPATCHER_TABLE(17), from where the packets will be dispatched to the respective applications on a priority basis.
Register6 will be used to set the ingress tunnel-type in Table0, and this can be used to match in Table17 to identify the respective applications bound on the tunnel-type. Remaining logic of ingress service binding will remain as is, and service-priority and interface-tag will be set in metadata as usual. The bits from 25-28 of Register6 will be used to indicate tunnel-type.
After the ingress service processing, packets which are identified to be egressed on tunnel interfaces currently go directly to the tunnel port. With this feature, these packets will go to the Egress Dispatcher Table [Table 220] first, where the packet will be processed by the egress services bound on the tunnel interface one by one, and will finally egress the switch.
Register6 will be used to indicate service priority as well as interface tag for the egress tunnel interface, in Egress Dispatcher Table, and when there are N services bound on a tunnel interface, there will be N+1 entries in Egress Dispatcher Table, the additional one for the default tunnel entry. The first 4 bits of Register6 will be used to indicate the service priority and the next 20 bits for interface Tag, and this will be the match criteria for packet redirection to service pipeline in Egress Dispatcher Table. Before sending the packet to the service, Egress Dispatcher Table will set the service index to the next service's priority. As with ingress, Register6 will be used for egress tunnel-type matching, if there are services bound on tunnel-type.
TABLE | MATCH | ACTION |
---|---|---|
INTERFACE_INGRESS_TABLE | in_port | SI=0,reg6=interface_type, metadata=lport tag, goto table 17 |
LPORT_DISPATCHER_TABLE | metadata=service priority && lport-tag(priority=10) | increment SI, apply service specific actions, goto ingress service |
reg6=tunnel-type priority=5 | increment SI, apply service specific actions, goto ingress service | |
EGRESS_DISPATCHER_TABLE | Reg6==service Priority && lport-tag(priority=10) | increment SI, apply service specific actions, goto egress service |
reg6=tunnel-type priority=5 | increment SI, apply service specific actions, goto egress service |
The GetEgressActionsForInterface RPC in interface-manager currently returns the output:port action for tunnel interfaces. This will be changed to return set_field_reg6(default-service-index + interface-tag) and resubmit(egress_dispatcher_table).
No yang changes are needed, as binding on tunnel-type is enabled by having reserved keywords for interface-names.
- To bind a service on a tunnel interface, applications provide the service-priority, service-mode and instructions for the service being bound on the tunnel interface.
- A dispatcher table entry matching the service-priority and interface-tag value, with actions pointing to the service specific actions supplied by the application, is then created.
- To unbind, applications provide the service-priority and service-mode for the service being unbound on the tunnel interface.
- To bind a service on a tunnel type, applications provide the service-mode and instructions for the service being bound. The reserved keywords will be ALL_VXLAN_INTERNAL, ALL_VXLAN_EXTERNAL, and ALL_MPLS_OVER_GRE.
- A flow entry matching the service-priority and tunnel-type value, with actions pointing to the service specific actions supplied by the application, will be created on each DPN.
- To unbind on a tunnel type, applications provide the service-mode for the service being unbound on all connected DPNs.
This change doesn't add or modify any configuration parameters.
The solution is supported on a 3-node cluster.
Carbon.
N/A
This feature doesn't add any new karaf feature. Installing any of the below features can enable the service:
odl-genius-ui odl-genius-rest odl-genius
This use case is mainly for those who want to write applications using Genius and/or want to create individual tunnel interfaces. Note that this is a simpler, easier way to create tunnels without needing to delve into how the OVSDB Plugin creates tunnels.
Refer to the Genius User Guide [4] for more details on this.
URL: restconf/config/ietf-interfaces:interfaces
Sample JSON data
{
"interfaces": {
"interface": [
{
"name": "vxlan_tunnel",
"type": "iana-if-type:tunnel",
"odl-interface:tunnel-interface-type": "odl-interface:tunnel-type-vxlan",
"odl-interface:datapath-node-identifier": "1",
"odl-interface:tunnel-source": "192.168.56.101",
"odl-interface:tunnel-destination": "192.168.56.102",
"odl-interface:monitor-enabled": false,
"odl-interface:monitor-interval": 10000,
"enabled": true
}
]
}
}
URL: http://localhost:8181/restconf/config/interface-service-bindings:service-bindings/services-info/{tunnel-interface-name}/interface-service-bindings:service-mode-egress
Sample JSON data
{
"bound-services": [
{
"service-name": "service1",
"flow-priority": "5",
"service-type": "service-type-flow-based",
"instruction": [
{
"order": 1,
"go-to-table": {
"table_id": 88
}
}],
"service-priority": "2",
"flow-cookie": "1"
}
]
}
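Binding on a tunnel type reuses the same bound-services payload as above, with one of the reserved keywords in place of the tunnel interface name. A hypothetical ingress binding on all internal VxLAN tunnels might look like this; the service name and target table id are illustrative only:
URL: http://localhost:8181/restconf/config/interface-service-bindings:service-bindings/services-info/ALL_VXLAN_INTERNAL/interface-service-bindings:service-mode-ingress
{
    "bound-services": [
        {
            "service-name": "my-vxlan-service",
            "flow-priority": "5",
            "service-type": "service-type-flow-based",
            "instruction": [
                {
                    "order": 1,
                    "go-to-table": {
                        "table_id": 36
                    }
                }
            ],
            "service-priority": "3",
            "flow-cookie": "2"
        }
    ]
}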
LPORT_DISPATCHER_TABLE
Add the set_field_reg_6 and resubmit(220) actions to the actions returned in getEgressActionsForInterface() for Tunnels.
Genius, Netvirt
There will be several impacts on netvirt pipeline with this change. A brief overview is given in the table below:
Capture details of testing that will need to be added.
New junits will be added to InterfaceManagerConfigurationTest to cover the following :
The following TCs should be added to CSIT to cover this feature:
This will require changes to User Guide and Developer Guide.
There is a pipeline change for tunnel datapath introduced due to this change. This should go in User Guide.
Developer Guide should capture how to configure egress service binding on tunnels.
[1] | Genius Carbon Release Plan https://wiki.opendaylight.org/view/Genius:Carbon_Release_Plan |
[2] | Netvirt Pipeline Diagram http://docs.opendaylight.org/en/latest/submodules/genius/docs/pipeline.html |
[3] | Genius Trello Card https://trello.com/c/S8lNGd9S/6-service-binding-on-tunnel-interfaces |
[4] | Genius User Guide http://docs.opendaylight.org/en/latest/user-guide/genius-user-guide.html#creating-overlay-tunnel-interfaces |
Note
This template was derived from [2], and has been modified to support our project.
This work is licensed under a Creative Commons Attribution 3.0 Unported License. http://creativecommons.org/licenses/by/3.0/legalcode
Table of Contents
https://git.opendaylight.org/gerrit/#/q/topic:service-recovery
Service Recovery Framework is a feature that enables recovery of services. This recovery can be triggered by the user or, eventually, be used as a self-healing mechanism.
Status and Diagnostic adds support for reporting current status of different services. However, there is no means to recover individual service or service instances that have failed. Only recovery that can be done today is to restart the controller node(s) or manually restart the bundle or reinstall the karaf feature itself.
Restarting the controller can be overkill and needlessly disruptive. Manually restarting a bundle or feature requires the user to be aware of, and have access to, these CLIs. There may not be a one-to-one mapping from a service to the corresponding bundle or feature. Also, a truly secure system would provide role based access to users: only someone with administrative rights has access to the Karaf CLI to restart/reinstall, while a less privileged user should be able to trigger recovery without requiring higher level access.
Note that role based access is out of scope of this document.
This feature will support following use cases:
A new module, Service Recovery Manager (SRM), will be added to Genius. SRM will provide a single and common point of interaction with all individual services. Recovery options will vary from a highest level service restart to restarting individual service instances.
SRM will introduce the concept of service entities and operations.
- Entities will be services or their objects, e.g. L3VPN, ITM, VPNInstance etc.
- Entity types are service and instance, e.g. L3VPN is an entity of type service and VPNInstance is an entity of type instance.
- Every entity of type instance will have a unique entity-id as an identifier, e.g. the uuid of VPNInstance is the entity-id identifying an individual VPN instance from amongst many present in the L3VPN service.
- Reinstall is similar to a karaf bundle restart, but may result in restart of more than one bundle as per the service. This operation will only be applicable to entity-type service.
- Recover applies to a service or instance. For entity-type service the entity-name will be the service name. For entity-type instance the entity-name will be the instance name and entity-id will be a required field.
This table gives some examples of different entities and operations for them:
OPERATION | EntityType | EntityName | EntityId | Remarks |
---|---|---|---|---|
reinstall | service | ITM | N.A. | Restart ITM |
recover | service | ITM | ITM | Recover ITM Service |
recover | instance | TEP | dpn-1 | Recover TEP |
recover | instance | TransportZone | TZA | Recover Transport Zone |
N.A.
We’ll be adding three new yang files
This file will contain different types used by service recovery framework. Any service that wants to use ServiceRecovery will have to define its supported names and types in this file.
module srm-types {
namespace "urn:opendaylight:genius:srm:types";
prefix "srmtypes";
revision "2017-05-31" {
description "ODL Services Recovery Manager Types Module";
}
/* Entity TYPEs */
identity entity-type-base {
description "Base identity for all srm entity types";
}
identity entity-type-service {
description "SRM Entity type service";
base entity-type-base;
}
identity entity-type-instance {
description "SRM Entity type instance";
base entity-type-base;
}
/* Entity NAMEs */
/* Entity Type SERVICE names */
identity entity-name-base {
description "Base identity for all srm entity names";
}
identity genius-ifm {
description "SRM Entity name for IFM service";
base entity-name-base;
}
identity genius-itm {
description "SRM Entity name for ITM service";
base entity-name-base;
}
identity netvirt-vpn {
description "SRM Entity name for VPN service";
base entity-name-base;
}
identity netvirt-elan {
description "SRM Entity name for elan service";
base entity-name-base;
}
identity ofplugin {
description "SRM Entity name for openflowplugin service";
base entity-name-base;
}
/* Entity Type INSTANCE Names */
/* Entity types supported by GENIUS */
identity genius-itm-tep {
description "SRM Entity name for ITM's tep instance";
base entity-name-base;
}
identity genius-itm-tz {
description "SRM Entity name for ITM's transportzone instance";
base entity-name-base;
}
identity genius-ifm-interface {
description "SRM Entity name for IFM's interface instance";
base entity-name-base;
}
/* Entity types supported by NETVIRT */
identity netvirt-vpninstance {
description "SRM Entity name for VPN instance";
base entity-name-base;
}
identity netvirt-elaninstance {
description "SRM Entity name for ELAN instance";
base entity-name-base;
}
/* Service operations */
identity service-op-base {
description "Base identity for all srm operations";
}
identity service-op-reinstall {
description "Reinstall or restart a service";
base service-op-base;
}
identity service-op-recover {
description "Recover a service or instance";
base service-op-base;
}
}
This file will contain the different operations that individual services must support on the entities exposed by them in srm-types.yang. These are not user facing operations, but are used by SRM to translate user RPC calls into operations on the individual services.
module srm-ops {
namespace "urn:opendaylight:genius:srm:ops";
prefix "srmops";
import srm-types {
prefix srmtype;
}
revision "2017-05-31" {
description "ODL Services Recovery Manager Operations Model";
}
/* Operations */
container service-ops {
config false;
list services {
key "service-name";
leaf service-name {
type identityref {
base srmtype:entity-name-base;
}
}
list operations {
key entity-name;
leaf entity-name {
type identityref {
base srmtype:entity-name-base;
}
}
leaf entity-type {
type identityref {
base srmtype:entity-type-base;
}
mandatory true;
}
leaf entity-id {
description "Optional when entity-type is service. Actual
id depends on entity-type and entity-name";
type string;
}
leaf trigger-operation {
type identityref {
base srmtype:service-op-base;
}
mandatory true;
}
}
}
}
}
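For illustration only, once SRM translates a user request into an operation, an entry under service-ops might look roughly like the following; in RESTCONF JSON the identity values would be qualified by their defining module, assumed here to be srm-types:
{
    "service-ops": {
        "services": [
            {
                "service-name": "srm-types:genius-itm",
                "operations": [
                    {
                        "entity-name": "srm-types:genius-itm-tep",
                        "entity-type": "srm-types:entity-type-instance",
                        "entity-id": "dpn-1",
                        "trigger-operation": "srm-types:service-op-recover"
                    }
                ]
            }
        ]
    }
}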
This file will contain different RPCs supported by SRM. These RPCs are user facing and SRM will translate these into ServiceRecovery Operations as defined in srm-ops.yang.
module srm-rpcs {
namespace "urn:opendaylight:genius:srm:rpcs";
prefix "srmrpcs";
import srm-types {
prefix srmtype;
}
revision "2017-05-31" {
description "ODL Services Recovery Manager Rpcs Module";
}
/* RPCs */
rpc reinstall {
description "Reinstall a given service";
input {
leaf entity-name {
type identityref {
base srmtype:entity-name-base;
}
mandatory true;
}
leaf entity-type {
description "Currently supported entity-types:
service";
type identityref {
base srmtype:entity-type-base;
}
mandatory false;
}
}
output {
leaf successful {
type boolean;
}
leaf message {
type string;
}
}
}
rpc recover {
description "Recover a given service or instance";
input {
leaf entity-name {
type identityref {
base srmtype:entity-name-base;
}
mandatory true;
}
leaf entity-type {
description "Currently supported entity-types:
service, instance";
type identityref {
base srmtype:entity-type-base;
}
mandatory true;
}
leaf entity-id {
description "Optional when entity-type is service. Actual
id depends on entity-type and entity-name";
type string;
mandatory false;
}
}
output {
leaf response {
type identityref {
base rpc-result-base;
}
mandatory true;
}
leaf message {
type string;
mandatory false;
}
}
}
/* RPC RESULTs */
identity rpc-result-base {
description "Base identity for all SRM RPC Results";
}
identity rpc-success {
description "RPC result successful";
base rpc-result-base;
}
identity rpc-fail-op-not-supported {
description "RPC failed:
operation not supported for given parameters";
base rpc-result-base;
}
identity rpc-fail-entity-type {
description "RPC failed:
invalid entity type";
base rpc-result-base;
}
identity rpc-fail-entity-name {
description "RPC failed:
invalid entity name";
base rpc-result-base;
}
identity rpc-fail-entity-id {
description "RPC failed:
invalid entity id";
base rpc-result-base;
}
identity rpc-fail-unknown {
description "RPC failed:
reason not known, check message string for details";
base rpc-result-base;
}
}
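As a sketch, the recover RPC defined above could be invoked over RESTCONF as follows; the URL assumes the module is published as srm-rpcs, and the entity values match the TEP recovery example from the table earlier in this section:
URL: restconf/operations/srm-rpcs:recover
{
    "input": {
        "entity-type": "srm-types:entity-type-instance",
        "entity-name": "srm-types:genius-itm-tep",
        "entity-id": "dpn-1"
    }
}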
SRM will provide RPCs, which will only be handled on one of the nodes. In turn, it will
write to srm-ops.yang
and each individual service will have Clustered
Listeners to track operations being triggered. Individual services will decide, based
on service and instance on which recovery is triggered, if it needs to run on all nodes
on cluster or individual nodes.
Status and Diagnostics (SnD) may need to be updated to use service names similar to the ones used in SRM.
Providing RPCs to trigger service restarts will eliminate the need to give administrative access to non-admin users just so they can trigger recovery through bundle restarts from the karaf CLI. The expectation is that access to these RPCs will be role based, but role based access and its implementation are out of scope of this feature.
This feature allows recovery at a much finer grained level than a full controller or node restart. Such restarts impact and trigger recovery of services that didn't need to be recovered. Every restart of the controller cluster or individual nodes has a significant overhead that impacts scale and performance. This feature aims to eliminate these overheads by allowing targeted recovery.
Nitrogen.
Using existing karaf CLI for feature and bundle restart was considered but rejected due to reasons already captured in earlier sections.
TBD.
odl-genius
All arguments are case insensitive unless specified otherwise.
DESCRIPTION
srm:reinstall
reinstall a given service
SYNTAX
srm:reinstall <service-name>
ARGUMENTS
service-name
Name of the service to re-install, e.g. itm/ITM, ifm/IFM etc.
EXAMPLE
srm:reinstall ifm
DESCRIPTION
srm:recover
recover a service or service instance
SYNTAX
srm:recover <entity-type> <entity-name> [<entity-id>]
ARGUMENTS
entity-type
Type of entity as defined in srm-types.
e.g. service, instance etc.
entity-name
Entity name as defined in srm-types.
e.g. itm, itm-tep etc.
entity-id
Entity Id for instances, required for entity-type instance.
e.g. 'TZA', 'tunxyz' etc.
EXAMPLES
srm:recover service itm
srm:recover instance itm-tep TZA
srm:recover instance vpn-instance e5e2e1ee-31a3-4d0c-a8d8-b86d08cd14b1
This will require changes to User Guide based on information provided in Usage section.
[1] Genius Nitrogen Release Plan https://wiki.opendaylight.org/view/Genius:Nitrogen_Release_Plan
[2] https://specs.openstack.org/openstack/nova-specs/specs/kilo/template.html
Note
This template was derived from [2], and has been modified to support our project.
This work is licensed under a Creative Commons Attribution 3.0 Unported License. http://creativecommons.org/licenses/by/3.0/legalcode
Table of Contents
https://git.opendaylight.org/gerrit/#/c/75248/
The Genius arputil component provides a notification service for ARP packets forwarded from switches via OpenFlow packet-in events. This change adds the switch’s datapath ID to the notifications.
This change resolves the fact that the switch datapath ID is not copied from the OpenFlow packet-in event to the ARP notification sent by Genius arputil.
This change is primarily introduced to correctly support assigning a FIP to an Octavia VIP:
https://jira.opendaylight.org/browse/NETVIRT-1402
An Octavia VIP is a Neutron port that is not bound to any VM and is therefore not added to br-int. The VM containing the active HaProxy sends gratuitous ARPs for the VIP's IP; ODL intercepts those and programs flows to forward traffic for the VIP to the VM's port.
The ODL code responsible for configuring the FIP association flows on OVS currently relies on a southbound openflow port that corresponds to the neutron FIP port. The only real reason this is required is so that ODL can decide which switch should get the flows. In the case of the VIP port, there is no corresponding southbound port so the flows never get configured.
To resolve this, ODL can know which switch to program the flows on from the gratuitous ARP packet-in event which will come from the right switch (we already listen for those.) So, basically we just respond to the gratuitous ARP by correlating it with the Neutron port, checking that the port is an Octavia VIP (the owner field), and programming the flows.
In arputil-api, add dpn-id fields to the arp-request-received and arp-response-received yang notifications.
Nitrogen and preferably backported to Oxygen
N/A
Consumers of the ARP notifications may call getDpnId() to retrieve the datapath ID of the switch that forwarded the ARP packet to the controller.
odl-genius
Josh Hershberg, jhershbe@redhat.com
Simple change, see the gerrit patch above.
Although ARP notifications are currently consumed by netvirt vpnmanager, this feature is backwards compatible. A new notification listener that consumes the datapath ID will be added to natmanager to resolve the issue with Octavia mentioned above.
N/A
Starting from Oxygen, Genius uses RST format Test Plan document for all new Test Suites.
Contents:
Table of Contents
Test Suite for testing basic Interface Manager functions.
Test setup consists of ODL with odl-genius installed and two switches (DPNs) connected to ODL over OVSDB and OpenflowPlugin.
This suite uses the default Genius topology.
+--------+ +--------+
| BR1 | data | BR2 |
| <-------> |
+---^----+ +----^---+
| mgmt |
+---v-----------------v---+
| |
| ODL |
| |
| odl-genius |
| |
+-------------------------+
OVS 2.6+ Mininet ???
Following steps are followed at beginning of test suite:
Following steps are followed at end of test suite:
Following DataStore models are captured at end of each test case:
This creates a transparent l2vlan interface between two dpns
N.A.
This testcase deletes the l2vlan transparent interface created in previous test case.
N.A.
This testcase creates a l2vlan trunk interface between 2 DPNs.
N.A.
This testcase creates a l2vlan Trunk member interface for the l2vlan trunk interface created in previous testcase.
N.A.
This testcase binds service to the L2vlan Trunk Interface earlier.
N.A.
This testcase Unbinds the services which were bound in previous testcase.
N.A.
Delete l2vlan trunk interface created and used in earlier test cases
N.A.
N.A.
Table of Contents
This document serves as the test plan for ITM Scalability – OF Based Tunnels. It comprises test cases pertaining to all the use cases covered by the Functional Spec.
Note
Name of suite and test cases should map exactly to as they appear in Robot reports.
Brief description of test setup.
Topology device software and inter node communication details -
+--------+ +--------+
| BR1 | data | BR2 |
| <-------> |
+---^----+ +----^---+
| mgmt |
+---v-----------------v---+
| |
| ODL |
| |
| odl-genius |
| |
+-------------------------+
In test suite bringup, build the topology as described in the Test Topology and bring all the tunnels UP.
Final steps after all tests in suite are done. This should include any cleanup, sanity checks, configuration etc. that needs to be done once all test cases in suite are done.
Change the config parameter to enable IFM Bypass of ITM provisioning and Verify Tunnel Creation is successful.
Change the config parameter to enable without IFM Bypass of ITM provisioning and Verify Tunnel Creation is successful.
Clean up existing ITM config, change ITM provisioning parameter to provide IFM Bypass, Verify ITM creation succeeds.
Clean up existing ITM config, change ITM provisioning parameter to disable IFM Bypass, Verify ITM creation succeeds.
Configure ITM tunnel Mesh, Bring DOWN the datapath and Verify Tunnel status is updated in ODL.
Configure ITM tunnel Mesh, Bring UP the datapath and Verify Tunnel status is updated in ODL.
Change ITM config parameters to enable IFM Bypass and Verify BFD monitoring can be enabled for ITM tunnels.
Change ITM config parameters to enable IFM Bypass and Verify BFD monitoring can be disabled for ITM tunnels
Enable BFD and check for the data path alarm as well as the control path alarms.
Enable Tunnel status alarm and Bring down the Tunnel port, and verify Tunnel down alarm is reported.
Disconnect DPN from ODL and verify Tunnel status is shown as UNKNOWN for the Disconnected DPN.
Enable Tunnel status alarm and Bring up the Tunnel port which is down, and verify Tunnel down alarm is cleared.
Create ITM with provisioning config parameter set to true, Perform ODL reboot and Verify dataplane is intact.
Create ITM with provisioning config parameter set to true for IFM Bypass, bring down control plane connection(between ODL–OVS), modify ODL config, Verify Re-sync is successful once connection is up.
Add new TEP’s and verify Creation is successful
Delete a few TEPs and verify deletion is successful and no stale state (flows, config) is left.
Re-add deleted TEP’s and Verify ITM creation is successful
Delete all TEPs and verify deletion is successful and no stale state (flows, config) is left.
N.A.
Who is contributing test cases? In case of multiple authors, designate a primary assignee and other contributors. Primary assignee is also expected to be maintainer once test code is in.
N.A.
N.A.
Table of Contents
Test plan for testing service recovery manager functionalities.
Test setup consists of ODL with odl-genius-rest installed and two switches (DPNs) connected to ODL over OVSDB and OpenflowPlugin.
This suite uses the default Genius topology.
+--------+ +--------+
| BR1 | data | BR2 |
| <-------> |
+---^----+ +----^---+
| mgmt |
+---v-----------------v---+
| |
| ODL |
| |
| odl-genius |
| |
+-------------------------+
OVS 2.6+
Following steps are followed at beginning of test suite:
Following steps are followed at the end of test suite:
Verify SRM by recovering TEP instance by using transportzone name and TEP’s ip address.
Verify SRM by recovering TZ instance using transportzone name.
Verify SRM by recovering service ITM.
Verify SRM instance recovery using interface port name.
Verify SRM by recovering service IFM.
N.A.
Who is contributing test cases? In case of multiple authors, designate a primary assignee and other contributors. Primary assignee is also expected to be maintainer once test code is in.
N.A.