Genius Documentation

This documentation provides critical information needed to help you write ODL Applications/Projects that can co-exist with other ODL Projects.

Contents:

Genius Pipeline

This document captures the current OpenFlow pipeline as used by Genius and by projects using Genius for app-coexistence.

High Level Pipeline

                             +---------+
                             | In Port |
                             +----+----+
                                  |
                                  |
                        +---------v---------+
                        | (0) Classifier    |
                        |     Table         |
                        +-------------------+
                        | VM Port           +------+
                        +-------------------+      +----------+
                        | Provider Network  +------+          |
                        +-------------------+                 |
    +-------------------+ Internal Tunnel   |                 |
    |                   +-------------------+                 |
    |            +------+ External Tunnel   |                 |
    |            |      +-------------------+       +---------v---------+
    |            |                                  | (17) Dispatcher   |
    |            |                                  |      Table        |
    |            |                                  +-------------------+
    |            |                                  |Ing Tap Service (1)+------------+
    | +----------v--------+                         +-------------------+            |
    | |     (18,20,38)    |           +-------------+Ing.ACL Service (2)|            |
    | | Services External |           |             +-------------------+            |
    | |      Pipeline     |           | +-----------+IPv6 Service    (3)|            |
    | +-------------------+           | |           +-------------------+            |
    |                                 | |           |L3 Service      (4)+-+          |
    |                                 | |           +-------------------+ |          |
    |                                 | |         +-+L2 Service      (5)| |          |
    |                                 | |         | +-------------------+ |          |
    |                                 | |         |                       |          |
    |                                 | |         |                       |          |
    |                                 | |         |                       |          |
    |                                 | |         |                       |          |
    |                                 | |         |                       |          |
    |              +------------------+ |         |                       |          |
    |              |                    |         |                       |          |
    |     +--------v--------+           |         |                       |          |
    |     |    (40 to 42)   |           |         |                       |          |
    |     |  Ingress ACL    |           |         |                       |          |
    |     |    Pipeline     |           |         |                       |          |
    |     +-------+---------+           |         |                       |          |
    |             |                     |         |                       |          |
    |          +--v-+      +------------v------+  |                       |          |
    |          |(17)|      |      (45)         |  |                       |          |
    |          +----+      |                   |  |                       |          |
    |                      |   IPv6 Pipeline   |  |                       |          |
    +----------+           +--+-------+--------+  |                       |          |
               |              |       |           |                       |          |
    +----------v--------+  +--v--+ +--v-+   +-----v-----------+           |          |
    |       (36)        |  | ODL | |(17)|   |    (50 to 55)   |           |          |
    |      Internal     |  +-----+ +----+   |                 |           |          |
    |      Tunnel       |                   |   L2 Pipeline   |           |          |
    +-------+-----------+                   +------+----------+           |          |
            |                                      |                      |          |
            |                                      |         +------------v----+  +--V----------+
            |                                      |         |    (19 to 47)   |  |    (170)    |
            +---------------------------------+    |    +----+                 |  | TaaS Ingress|
            |                                 |    |    |    |   L3 Pipeline   |  |   Pipeline  |
            |                                 |    |    |    +----+-------+----+  +--+----------+
            |                                 |    |    |         |       |          |
            |(itm-direct-tunnels enabled)     |    |    |      +--v--+ +--v-+        |
            |                                 |    |    |      | ODL | |(17)|        |
            |                                 |    |    |      +-----+ +----+        |
            |                             +---v----v----v-----+                      |           +--------------+
            |                             |                   +----------------------+           |      (171)   |
    +-------v-----------+                 | (220) Egress      +----------------------------------+  TaaS Egress |
    |   (95) Egress     |                 | Dispatcher Table  |          +------------------+    |   Pipeline   |
    |   Tunnel Table    |                 |                   |          |                  |    |              |
    +-------+-----------+                 +-------------------+          |                  |    +--------------+
            |                             | VM Port,          +---------->   (251 to 253)   |
            |                             | Provider Network  <----------+     Pipeline     |
            |                             +-------------------+          |    Egress ACL    |
            |                             | External Tunnel   |          |                  |
            |                             +-------------------+          +------------------+
            |                             | Internal Tunnel   |
            |                             +---------+---------+
            |                                       |
            +------------------------------------+  |
                                                 |  |
                                              +--v--v----+
                                              | Out Port |
                                              +----------+

Services Pipelines

Ingress ACL Pipeline

                      +-----------------+
                      |      (17)       |
         +------------+   Dispatcher    <---------------------------+
         |            |      Table      |                           |
         |            +-----------------+                           |
         |                                                          |
+--------v--------+                                                 |
|      (40)       |                                                 |
|   Ingress ACL   |    +-----------------+                          |
|      Table      |    |      (41)       |                          |
+-----------------+    |  Ingress ACL 2  |    +-----------------+   |
|  Match Allowed  +---->      Table      |    |      (42)       |   |
+-----------------+    +-----------------+    |  Ingress ACL 2  +---+
                       |  Match Allowed  +---->      Table      |
                       +-----------------+    +-----------------+

Owner Project: Netvirt

TBD.

IPv6 Pipeline


+-----------------+    +--------v--------+
|      (17)       |    |      (45)       |
|   Dispatcher    +---->      IPv6       |
|      Table      |    |      Table      |
+--------^--------+    +-----------------+    +---+
         |             | IPv6 ND for     +---->ODL|
         |             | Router Interface|    +---+
         |             +-----------------+
         +-------------+  Other Packets  |
                       +-----------------+

Owner Project: Netvirt

TBD.

L2 Pipeline


+-----------------+
|      (17)       |
|   Dispatcher    |
|      Table      |
+--------+--------+
         |
         |
+--------v--------+
|      (50)       |
| L2 SMAC Learning|
|      Table      |
+-----------------+    +--------v--------+
|  Known SMAC     +---->      (51)       |
+-----------------+    | L2 DMAC Filter  |
|  Unknown SMAC   +---->      Table      |
+-------+---------+    +-----------------+
        |              |  Known DMAC     +--------------------+
        |              +-----------------+                    |
      +-v-+            |  Unknown DMAC   |                    |
      |ODL|            |                 |                    |
      +---+            +--------+--------+                    |
                                |                             |
                                |                             |
                       +--------v--------+                    |
                       |      (52)       |                    |
                       | Unknown DMACs   |                    |
                       |      Table      |                    |
                       +-----------------+                    |
                  +----+  Tunnel In Port |                    |
                  |    +-----------------+                    |
                  |    |  VM In Port     |                    |
                  |    +------+----------+                    |
                  |           |                               |
                  |    +------v-----+                         |
                  |    |   Group    |                         |
                  |    | Full BCast +------+                  |
                  |    +-----+------+      |                  |
                  |          |             |                  |
                  |    +-----v------+      |              +---v-------------+
                  +---->   Group    +--+   |              |     (220)       |
                       | Local BCast|  |   |              |Egress Dispatcher|
                       +------------+  |   |         +--->+      Table      |
                                       |   |         |    +-----------------+
                                       |   |         |
                                       |   |         |
                               +-------v---v-----+   |
                               |     (55)        |   |
                               |  Filter Equal   |   |
                               |      Table      |   |
                               +-----------------+   |
                               |  L Register     +---+
                               |  and Egress     |
                               +-----------------+
                               | ? Match   Drop  |
                               +-----------------+

Owner Project: Netvirt

TBD.

L3 Pipeline


+-----------------+
|   Coming        |
|      Soon!      |
+-----------------+

Owner Project: Netvirt

TBD.

Egress ACL Pipeline


                      +-----------------+
                      | (220)  Egress   |
         +------------+      Dispatcher <---------------------------+
         |            |        Table    |                           |
         |            +-----------------+                           |
         |                                                          |
+--------v--------+                                                 |
|     (251)       |                                                 |
|    Egress ACL   |    +-----------------+                          |
|      Table      |    |     (252)       |                          |
+-----------------+    |   Egress ACL 2  |    +-----------------+   |
|  Match Allowed  +---->      Table      |    |     (253)       |   |
+-----------------+    +-----------------+    |   Egress ACL 2  +---+
                       |  Match Allowed  +---->      Table      |
                       +-----------------+    +-----------------+

Owner Project: Netvirt

TBD.

Ingress TaaS Pipeline

                      +-----------------+
                      |      (17)       |
         +------------>   Dispatcher    |
         |            |      Table      |
         |            +--------+--------+
         |                     |
         |                     |
         |                     |
         |            +--------v----------+
         |            |      (170)        |
         |            |    OUTBOUND_TAP_  |
         |            | CLASSIFIER Table |
         |            +-------------------+          +-----------------+
         +-----<------+  Original Packet  |          |      (220)      |
                      +-------------------+          |Egress Dispatcher|
                      |  Copied Packet    +---------->    Table       |
                      +-------------------+          +-----------------+

Owner Project: Netvirt

Egress TaaS Pipeline


                      +-----------------+
                      | (220)  Egress   |
         +------------>      Dispatcher <-----------------+
         |            |        Table    |                 |
         |            +-----------------+                 |
         |                      |                         |
         |                      |                         |
         ^                      |                         ^
 To Tap  |                      |                         |To Tap
 Service |              +-------V---------+               |Flow Port
 Port    |              |     (171)       |               |
         |              |   INBOUND_TAP_  |               |
         |              | CLASSIFIER Table|               |
         |              +-----------------+               |
         |              | Original Packet +----------->---+
         |              +-----------------+
         +------<-------+  Copied Packet  +
                        +-----------------+

Owner Project: Netvirt

Running Genius CSIT in Dev Environment

Genius CSIT requires a very minimal testbed topology and is easy to run on your laptop with the steps below. This lets you run the tests yourself on the code changes you are making in genius locally, without waiting long in the Jenkins job queue.

Test Setup

Test setup consists of ODL with odl-genius-rest feature installed and two switches (DPNs) connected to ODL over OVSDB and OpenflowPlugin channels.
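
If you want to sanity-check this setup before running the suites, the connections can be inspected through RESTCONF. The following is a small optional Python sketch (not part of the CSIT suite); the /restconf paths, port 8181 and admin/admin credentials are the usual ODL defaults and are assumptions about your environment.

import requests

ODL = "http://ODL_IP:8181"          # replace ODL_IP with your controller address
AUTH = ("admin", "admin")           # default ODL credentials (assumed)

# OVSDB connections appear as nodes in the network-topology operational datastore.
ovsdb = requests.get(
    ODL + "/restconf/operational/network-topology:network-topology/topology/ovsdb:1",
    auth=AUTH).json()

# OpenFlow connections appear as nodes in the opendaylight-inventory operational datastore.
inventory = requests.get(
    ODL + "/restconf/operational/opendaylight-inventory:nodes",
    auth=AUTH).json()

print("OVSDB nodes   :", [n.get("node-id") for n in ovsdb["topology"][0].get("node", [])])
print("OpenFlow nodes:", [n.get("id") for n in inventory["nodes"].get("node", [])])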

Testbed Topologies

This setup uses the default Genius test topology.

Default Topology
+--------+       +--------+
|  BR1   | data  |  BR2   |
|        <------->        |
+---^----+       +----^---+
    |       mgmt      |
+---v-----------------v---+
|                         |
|           ODL           |
|                         |
|        odl-genius       |
|                         |
+-------------------------+

Software Requirements

  • ODL running on a VM or laptop [usual specifications for running ODL, min 2 CPU + 4GB RAM]

  • Two VMs with OVS [2.4 or higher] installed. [1CPU+1GB RAM]

  • Robot Framework, which can be installed on any of the above VMs (provided it has connectivity to all the above entities).

Steps To Bring Up the CSIT Environment

We can run ODL on the laptop and OVS on the two VMs. Robot Framework can be installed on one of the two OVS VMs. This documentation is based on Ubuntu Desktop VMs started using VirtualBox.

ODL Installation

  • Pick up any ODL stable distribution or build it yourself

  • cd genius/karaf/target/assembly/bin

  • ./karaf

  • At the karaf prompt, install the genius feature: feature:install odl-genius-rest

OVS Installation

Most genius developers already know this; just for completeness' sake, OVS has to be installed on both VMs.

  • sudo apt-get install openvswitch-switch

  • service openvswitch-switch start

  • You can run the “ovs-vsctl show” command to check whether OVS is running as expected.

  • Make sure that the output of the above command shows a different, unique node UUID for each OVS. If not, genius CSIT will have trouble creating ITM tunnels. [This is likely to happen if you clone the first VM to run the second OVS.]

RobotFramework Installation

Please refer to the script below for the latest up-to-date requirement versions supported; the script also has more information on how to set up the Robot environment.

https://github.com/opendaylight/releng-builder/blob/master/jjb/integration/integration-install-robotframework.sh

Below are the requirements for running genius CSIT.

  • Install Python 2.7

  • Install pip

  • Install the RIDE tool and the required libraries:

    sudo pip install robotframework-ride
    sudo pip install robotframework-requests
    sudo pip install robotframework-sshlibrary
    sudo pip install --upgrade robotframework-httplibrary
    sudo pip install jmespath

  • To start RIDE: ride.py <test suite name>. To open the genius test suite, the opendaylight integration/test repo needs to be cloned:

    git clone https://<your_username>@git.opendaylight.org/gerrit/p/integration/test.git
    ride.py test/csit/suites/genius

    [The same can be imported after RIDE opens up, if you don’t want to specify the path in the prompt]

  • In the RIDE window that opens, the Genius test suite will now be imported

  • Click on the Run panel, and click Start, passing the below arguments:

    -v ODL_SYSTEM_IP:<ODL_IP> -v TOOLS_SYSTEM_IP:<OVS1_IP> -v TOOLS_SYSTEM_2_IP:<OVS2_IP>
    -v USER_HOME:<HOME_FOLDER> -v TOOLS_SYSTEM_USER:<USER NAME> -v DEFAULT_USER:<USER NAME>
    -v DEFAULT_LINUX_PROMPT:<LINUX PROMPT> -v ODL_SYSTEM_USER:<ODL USER NAME>
    -v ODL_SYSTEM_PROMPT:<ODL PROMPT> -v ODL_STREAM:<ODL STREAM> -v ODL_SYSTEM_1_IP:<ODL1_IP>
    -v KARAF_HOME:<KARAF-HOME-FOLDER>

    Any argument defined in Variables.py can be overridden by passing the argument value as above. For example, there was a recent change in the karaf prompt; in that case we could run genius CSIT by passing the argument “-v KARAF_PROMPT:karaf@root”.
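
If you prefer running the same suites headless instead of through RIDE, Robot Framework's Python entry point can be used. The sketch below assumes a Robot Framework version that exposes the robot.run API and that the integration/test repo is cloned in the current directory; all variable values are example placeholders to be replaced with your environment's values.

from robot import run

run(
    "test/csit/suites/genius",
    variable=[
        "ODL_SYSTEM_IP:192.168.56.101",      # <ODL_IP>
        "ODL_SYSTEM_1_IP:192.168.56.101",
        "TOOLS_SYSTEM_IP:192.168.56.102",    # <OVS1_IP>
        "TOOLS_SYSTEM_2_IP:192.168.56.103",  # <OVS2_IP>
        "USER_HOME:/home/ubuntu",
        "TOOLS_SYSTEM_USER:ubuntu",
        "DEFAULT_USER:ubuntu",
        "ODL_SYSTEM_USER:ubuntu",
        "ODL_STREAM:carbon",
        "KARAF_HOME:/home/ubuntu/genius/karaf/target/assembly",
    ],
)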

Genius Design Overview

Genius project provides generic infrastructure services and utilities for integration and co-existence of multiple networking services/applications. The following image presents a top level view of the Genius framework -

https://wiki.opendaylight.org/images/3/3e/Genius_overview.png

Genius Module Dependencies

Genius modules are developed as karaf features which can be independently installed. However, there is some dependency among these modules. The diagram below provides a dependency relationship of these modules.

All these modules expose YANG based APIs which can be used to configure/interact with them and to fetch the services they provide. Thus all these modules can be used/configured by other ODL modules and can also be accessed via the REST interface.
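
As an illustration of the REST access pattern (a sketch only, assuming the default RESTCONF endpoint on port 8181 and admin/admin credentials): configuration intent is written and read under /restconf/config, while the corresponding runtime state is read back from /restconf/operational.

import requests

BASE = "http://ODL_IP:8181/restconf"     # replace ODL_IP with your controller address
AUTH = ("admin", "admin")

# Configuration side: interfaces that users/applications have asked for.
config_ifaces = requests.get(BASE + "/config/ietf-interfaces:interfaces", auth=AUTH)

# Operational side: interface state the controller has actually learned/programmed.
oper_ifaces = requests.get(BASE + "/operational/ietf-interfaces:interfaces-state", auth=AUTH)

print(config_ifaces.status_code, oper_ifaces.status_code)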

Genius based packet pipeline

The following picture presents an example of a packet pipeline based on the Genius framework. It also presents the functions of the different genius components -

https://wiki.opendaylight.org/images/5/56/App_co_exist_new.png

Following sections provide details about each of these components.

Interface Manager Design

The Interface Manager (IFM) uses MD-SAL based architecture, where different software components operate on, and interact via a set of data-models. Interface manager defines configuration data-stores where other OpenDaylight modules can write interface configurations and register for services. These configuration data-stores can also be accessed by external entities through REST interface. IFM listens to changes in these config data-stores and accordingly programs the data-plane. Data in Configuration data-stores remains persistent across controller restarts.

Operational data like network state and other service specific operational data are stored in operational data-stores. Change in network state is updated in southbound interfaces (OFplugin, OVSDB) data-stores. Interface Manager uses ODL Inventory and Topology datastores to retrieve southbound configurations and events. IFM listens to these updates and accordingly updates its own operational data-stores. Operational data stores are cleaned up after a controller restart.

Additionally, a set of RPCs is provided to access IFM data-stores and other useful information. The following figure presents the different IFM data-stores and their interactions with other modules.

The following diagram provides a top-level architecture of Interface Manager.

https://wiki.opendaylight.org/images/2/25/Ifmsbirenderers.png
InterfaceManager Dependencies

Interface Manager uses other Genius modules for its operations. It mainly interacts with the following other genius modules -

  1. Id Manager – For allocating dataplane interface-id (if-index)

  2. Aliveness Monitor - For registering the interfaces for monitoring

  3. MdSalUtil – For interactions with MD-SAL and other openflow operations

Following picture shows interface manager dependencies

digraph structs {
	subgraph {
"interfacemanager-impl" -> "interfacemanager-api";
"interfacemanager-api" -> "iana-if-type-2014-05-08";
"interfacemanager-impl" -> "idmanager-api";
"interfacemanager-impl" -> "utils.southbound-utils";
"interfacemanager-api" -> "mdsalutil-api";
"interfacemanager-impl" -> "model-flow-base";
"interfacemanager" -> "interfacemanager-api";
"interfacemanager-api" -> "yang-binding";
"interfacemanager-impl" -> "hwvtepsouthbound-api";
"interfacemanager-impl" -> "javax.inject";
"interfacemanager" -> "interfacemanager-impl";
"interfacemanager-shell" -> "interfacemanager-impl";
"interfacemanager-impl" -> "mdsalutil-api";
"interfacemanager-impl" -> "southbound-api";
"interfacemanager-api" -> "southbound-api";
"interfacemanager-shell" -> "org.apache.karaf.shell.console";
"interfacemanager-impl" -> "guava";
"interfacemanager-impl" -> "model-flow-service";
"interfacemanager-impl" -> "alivenessmonitor-api";
"interfacemanager" -> "interfacemanager-shell";
"interfacemanager-impl" -> "idmanager-impl";
"interfacemanager-api" -> "ietf-inet-types-2013-07-15";
"interfacemanager-impl" -> "ietf-interfaces";
"interfacemanager-impl" -> "openflowplugin-extension-nicira";
"interfacemanager-api" -> "ietf-yang-types-20130715";
"interfacemanager-api" -> "ietf-interfaces";
"interfacemanager-shell" -> "interfacemanager-api";
"interfacemanager-api" -> "yang-ext";
"interfacemanager-impl" -> "testutils";
"interfacemanager-api" -> "model-inventory";
"interfacemanager-impl" -> "lockmanager-impl";
"interfacemanager-api" -> "openflowplugin-extension-nicira";
}
rankdir=LR;
}
Code structure

Interface manager code is organized in following folders -

  1. interfacemanager-api contains the interface yang data models and corresponding interface implementation.

  2. interfacemanager-impl contains the interfacemanager implementation

  3. interfacemanager-shell contains the Karaf CLI implementation for interfacemanager

interfacemanager-api

└───main
    ├───java
    │   └───org
    │       └───opendaylight
    │           └───genius
    │               └───interfacemanager
    │                   ├───exceptions
    │                   ├───globals
    │                   └───interfaces
    └───yang

interfacemanager-impl

├───commons           <--- contains common utility functions
├───listeners         <--- contains interfacemanager DCN listeners for different MD-SAL datastores
├───renderer          <--- contains different southbound renderers' implementation
│   ├───hwvtep        <--- HWVTEP specific renderer
│   │   ├───confighelpers
│   │   ├───statehelpers
│   │   └───utilities
│   └───ovs           <--- OVS specific SBI renderer
│       ├───confighelpers
│       ├───statehelpers
│       └───utilities
├───servicebindings   <--- contains interface service binding DCN listener and corresponding implementation
│   └───flowbased
│       ├───confighelpers
│       ├───listeners
│       ├───statehelpers
│       └───utilities
├───rpcservice        <--- contains interfacemanager RPCs' implementation
├───pmcounters        <--- contains PM counters gathering
└───statusanddiag     <--- contains status and diagnostics implementations

interfacemanager-shell

Interfacemanager Data-model

The following picture shows the different MD-SAL datastores used by interface manager. These datastores are created based on the YANG datamodels defined in interfacemanager-api.

https://wiki.opendaylight.org/images/4/46/Ifmarch.png
Config Datastores

InterfaceManager mainly uses the following two datastores to accept configurations.

  1. odl-interface datamodel, where various types of interfaces can be configured.

  2. service-binding datamodel, where different applications can bind services to interfaces.

In addition to these datamodels, it also implements several RPCs for accessing interface operational data. Details of these datamodels and RPCs are described in the following sections.

Interface Config DS

The interface config datamodel is defined in odl-interface.yang. It is based on the ‘ietf-interfaces’ datamodel (imported in odl-interface.yang) with additional augmentations to it. Common interface configurations are –

  • name (string) : this is the unique interface name/identifier.

  • type (identityref:iana-if-type) : this configuration sets the interface type. Interface types are defined in the iana-if-type data model. The odl-interface.yang data model adds augmentations to iana-if-type to define new interface types. Currently supported interface types are -

    • l2vlan (trunk, vlan classified sub-ports/trunk-member)

    • tunnel (OVS based VxLAN, GRE, MPLSoverGRE/MPLSoverUDP)

  • enabled (Boolean) : this configuration sets the administrative state of the interface.

  • parent-refs : this configuration specifies the parent of the interface, which feeds data/hosts this interface. It can be a physical switch port or a virtual switch port.

    • Parent-interface (string) : is the name of the parent port/interface, i.e. the network port in the dataplane as it appears on the southbound interface, e.g. a neutron port. This can also be another interface, thus supporting a hierarchy of linked interfaces.

    • Node-identifier (topology_id, node_id) : is used for configuring the parent node for HW nodes/VTEPs

Additional configuration parameters are defined for specific interface type. Please see the table below.

+-------------------------+---------------------------------+--------------------------+---------------------+---------------------+
| Vlan-xparent            | Vlan-trunk                      | Vlan-trunk-member        | vxlan               | gre                 |
+-------------------------+---------------------------------+--------------------------+---------------------+---------------------+
| Name = uuid             | Name = uuid                     | Name = uuid              | Name = uuid         | Name = uuid         |
| description             | description                     | description              | description         | description         |
| Type = l2vlan           | Type = l2vlan                   | Type = l2vlan            | Type = tunnel       | Type = tunnel       |
| enabled                 | enabled                         | enabled                  | enabled             | enabled             |
| Parent-if = port-name   | Parent-if = port-name           | Parent-if = vlan-trunkIf | Vlan-id             | Vlan-id             |
| vlan-mode = transparent | vlan-mode = trunk               | vlan-mode = trunk-member | tunnel-type = vxlan | tunnel-type = gre   |
|                         | vlan-list = [trunk-member-list] | Vlan-Id = trunk-vlanId   | dpn-id              | dpn-id              |
|                         |                                 | Parent-if = vlan-trunkIf | Vlan-id             | Vlan-id             |
|                         |                                 |                          | local-ip            | local-ip            |
|                         |                                 |                          | remote-ip           | remote-ip           |
|                         |                                 |                          | gateway-ip          | gateway-ip          |
+-------------------------+---------------------------------+--------------------------+---------------------+---------------------+
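
For illustration, the following Python sketch configures an l2vlan trunk interface of the kind described in the table above over RESTCONF. The URL layout and the odl-interface augmentation leaf names used here are assumptions reconstructed from this description; verify them against the odl-interface.yang revision in your distribution before relying on them.

import requests

BASE = "http://ODL_IP:8181/restconf"        # assumed default RESTCONF endpoint
AUTH = ("admin", "admin")

if_name = "tap1234abcd-56"                  # hypothetical interface name (uuid based)
payload = {
    "interface": [{
        "name": if_name,                                      # name = uuid
        "type": "iana-if-type:l2vlan",                        # type column of the table
        "enabled": True,                                      # administrative state
        "odl-interface:l2vlan-mode": "trunk",                 # vlan-mode column (assumed leaf name)
        "odl-interface:parent-interface": "tap1234abcd-56",   # parent-refs (assumed leaf name)
    }]
}

resp = requests.put(BASE + "/config/ietf-interfaces:interfaces/interface/" + if_name,
                    json=payload, auth=AUTH)
print(resp.status_code)

A tunnel interface is configured analogously, with type iana-if-type:tunnel and the tunnel columns of the table (tunnel-type, dpn-id, local-ip, remote-ip, gateway-ip) supplied through the corresponding odl-interface tunnel augmentation leaves.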

Interface service binding config

Yang Data Model odl-interface-service-bindings.yang contains the service binding configuration datamodel.

An application can bind services to a particular interface by configuring the MD-SAL data node at path /config/interface-service-binding. Binding a service on an interface allows that service to pull traffic arriving on the interface, depending upon the service priority. It is possible to bind services at the ingress interface (when a packet enters the packet-pipeline from a particular interface) as well as on the egress interface (before the packet is sent out on a particular interface). Service modules can specify openflow rules to be applied to packets belonging to the interface. Usually these rules include sending the packet to a specific service table/pipeline. Service modules/applications are responsible for sending the packet back (if not consumed) to the service dispatcher table, for the next service to process the packet.

https://wiki.opendaylight.org/images/5/56/App_co_exist_new.png

Following are the service binding parameters –

  • interface-name is the name of the interface to which the service binding is being configured

  • Service-Priority parameter is used to define the order in which the packet will be delivered to the different services bound to the particular interface.

  • Service-Name

  • Service-Info parameter is used to configure the flow rule to be applied to the packets as needed by services/applications.

    • (for service-type openflow-based)

    • Flow-priority

    • Instruction-list
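
As an illustration, the sketch below binds one ingress service on an interface over RESTCONF using the parameters above. The document only states the binding path as /config/interface-service-binding; the full URL, the service-mode key and the leaf names below are assumptions derived from odl-interface-service-bindings.yang and should be checked against the model in your release.

import requests

BASE = "http://ODL_IP:8181/restconf"
AUTH = ("admin", "admin")

if_name = "tap1234abcd-56"                  # interface-name the service is bound to
payload = {
    "bound-services": [{
        "service-priority": 2,              # Service-Priority
        "service-name": "example-service",  # Service-Name (hypothetical)
        "flow-priority": 5,                 # Service-Info: flow priority (assumed leaf name)
        "instruction": []                   # Service-Info: instruction list, e.g. goto the service's table
    }]
}

resp = requests.put(
    BASE + "/config/interface-service-bindings:service-bindings/services-info/"
         + if_name + "/interface-service-bindings:service-mode-ingress",
    json=payload, auth=AUTH)
print(resp.status_code)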

When a service is bound to an interface, Interface Manager programs the service dispatcher table with a rule matching the interface data-plane-id and the service-index (based on priority) and carrying the instruction-set provided by the service/application. Every time the packet leaves the dispatcher table, the service-index (in metadata) is incremented to match the next service rule when the packet is resubmitted back to the dispatcher table. The following table gives an example of the service dispatcher flows, where one interface is bound to 2 services.

Service Dispatcher Table

+--------------------------------+----------------------------------------------------+
| Match                          | Actions                                            |
+--------------------------------+----------------------------------------------------+
| if-index = I, ServiceIndex = 1 | Set SI=2 in metadata, service specific actions     |
|                                | <e.g., Goto prio 1 Service table>                  |
+--------------------------------+----------------------------------------------------+
| if-index = I, ServiceIndex = 2 | Set SI=3 in metadata, service specific actions     |
|                                | <e.g., Goto prio 2 Service table>                  |
+--------------------------------+----------------------------------------------------+
| miss                           | Drop                                               |
+--------------------------------+----------------------------------------------------+

Interface Manager programs openflow rules in the service dispatcher table.
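
The sketch below (plain Python, not the actual Interface Manager code) shows how the dispatcher entries in the table above can be derived from the list of bound services: one rule per service matching (if-index, current service-index), which sets the next service-index in metadata before applying the service's own instructions, plus a lowest-priority miss entry that drops.

def dispatcher_flows(if_index, bound_services):
    """bound_services: list of (service_name, service_instructions), highest priority first."""
    flows = []
    for service_index, (name, instructions) in enumerate(bound_services, start=1):
        flows.append({
            "match": {"if-index": if_index, "service-index": service_index},
            "actions": ["set SI=%d in metadata" % (service_index + 1)] + list(instructions),
            "service": name,
        })
    flows.append({"match": "miss", "actions": ["drop"]})
    return flows

# Example: one interface bound to two services, as in the table above.
for flow in dispatcher_flows(if_index=1,
                             bound_services=[("service-A", ["goto prio 1 service table"]),
                                             ("service-B", ["goto prio 2 service table"])]):
    print(flow)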

Egress Service Binding

There are services that need packet processing on the egress side, before sending the packet out on a particular port/interface. To accommodate this, interface manager also supports egress service binding. This is achieved by introducing a new “egress dispatcher table” at the egress of the packet pipeline, before the interface egress groups.

On application request, Interface Manager returns the egress actions for interfaces. Service modules use these actions to send the packet to a particular interface. Generally, these egress actions include sending the packet out to a port or to the appropriate interface egress group. With the inclusion of the egress dispatcher table the egress actions for the services would be to

  • Update REG6 - Set service_index =0 and egress if_index

  • send the packet to Egress Dispatcher table

IFM shall add a default entry in the Egress Dispatcher Table for each interface with –

  • Match on if_index with REG6

  • Send packet to corresponding output port or Egress group.

On egress service binding, IFM shall add rules to the Egress Dispatcher table with the following parameters –

  • Match on

    • ServiceIndex=egress Service priority

    • if_index in REG6 = if_index for egress interface

  • Actions

    • Increment service_index

    • Actions provided by egress service binding.

Egress services will be responsible for sending the packet back to the Egress Dispatcher table if the packet is not consumed (dropped/sent out). In this case the packet will hit the lowest priority default entry and will be sent out.
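
The following sketch (plain Python, illustrative only) mirrors the same idea for the Egress Dispatcher Table: a default per-interface entry that outputs the packet, plus one entry per bound egress service that bumps the service index and applies the service's actions. REG6 is modelled here simply as an (if_index, service_index) pair; the actual register encoding is an Interface Manager implementation detail.

def egress_dispatcher_flows(if_index, egress_services, output_action):
    """egress_services: list of (service_name, service_actions), highest priority first.
    Packets arrive with service_index = 0 in REG6, set by the egress actions described above."""
    flows = []
    for service_index, (name, actions) in enumerate(egress_services):
        flows.append({
            "match": {"reg6.if_index": if_index, "reg6.service_index": service_index},
            "actions": ["increment service_index in REG6"] + list(actions),
            "service": name,
        })
    # Lowest priority default entry: a packet not consumed by any service is sent out.
    flows.append({"match": {"reg6.if_index": if_index}, "actions": [output_action]})
    return flows

for flow in egress_dispatcher_flows(2, [("egress-service-A", ["goto egress service table"])],
                                    "output to port / egress group"):
    print(flow)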

Operational Datastores

Interface Manager uses the ODL Inventory and Topology datastores to retrieve southbound configurations and events.

Interface Manager modules

Interface manager is designed in a modular fashion to provide a flexible way to support multiple southbound protocols. The north-bound interface/data-model is decoupled from the south bound plugins. NBI data change listeners select and interact with the appropriate SBI renderers. The modular design also allows addition of new renderers to support new southbound interfaces/protocol plugins. The following figure shows the interface manager modules –

https://wiki.opendaylight.org/images/2/25/Ifmsbirenderers.png

InterfaceManager uses the datastore-job-coordinator module for all its operations.

The datastore job coordinator solves the following problems which were observed in the previous Lithium-based interface manager:

  1. The Business Logic for the Interface configuration/state handling is performed in the Actor Thread itself.

  2. This will cause the Actor’s mailbox to get filled up and may start causing unnecessary back-pressure.

  3. Actions that can be executed independently will get unnecessarily serialized.

  4. Can cause other unrelated applications to starve for a chance to execute.

  5. Available CPU power may not be utilized fully. (for instance, if 1000 interfaces are created on different ports, all 1000 interfaces creation will happen one after the other.)

  6. May depend on external applications to distribute the load across the actors.

IFM Listeners

IFM listeners listen to data change events for different MD-SAL data-stores. On the NBI side it implements data change listeners for interface config data-store and the service-binding data store. On the SBI side IFM implements listeners for Topology and Inventory data-stores in opendaylight.

Interface Config change listener

Interface config change listener listens to ietf-interface/interfaces data node.

service-binding change listener

The service-binding change listener listens to the interface service-bindings data node.

Topology state change listener

The topology state change listener listens to the ODL network-topology data store.

inventory state change listener

+++ this page is under construction +++

Dynamic Behavior
when an l2vlan interface is configured
  1. Interface ConfigDS is populated

  2. Interface DCN in InterfaceManager does the following :

    • Add interface-state entry for the new interface along with if-index generated

    • Add ingress flow entry

    • If it is a trunk VLAN, need to add the interface-state for all child interfaces, and add ingress flows for all child interfaces

when a tunnel interface is configured
  1. Interface ConfigDS is populated

  2. Interface DCN in InterfaceManager does the following :

    • Creates bridge interface entry in odl-interface-meta Config DS

    • Add port to Bridge using OVSDB
      • retrieves the bridge UUID corresponding to the interface and

      • populates the OVSDB Termination Point Datastore with the following information


tpAugmentationBuilder.setName(portName);
tpAugmentationBuilder.setInterfaceType(type);
options.put("key", "flow");
options.put(local_ip,localIp.getIpv4Address().getValue());
options.put(remote_ip,remoteIp.getIpv4Address().getValue());
tpAugmentationBuilder.setOptions(options);

OVSDB plugin acts upon this data change and configures the tunnel end points on the switch with the supplied information.

NodeConnector comes up on vSwitch
Inventory DCN Listener in InterfaceManager does the following:
  1. Updates interface-state DS.

  2. Generate if-index for the interface

  3. Update if-index to interface reverse lookup map

  4. If interface maps to a vlan trunk entity, operational states of all vlan trunk members are updated

  5. If interface maps to tunnel entity, add ingress tunnel flow

Bridge is created on vSwitch
Topology DCN Listener in InterfaceManager does the following:
  1. Update odl-interface-meta OperDS to have the dpid to bridge reference

  2. Retrieve all pre provisioned bridge Interface Entries for this dpn, and add ports to bridge using ovsdb

ELAN/VPNManager does a bind service
  1. Interface service-bindings config DS is populated with service name, priority and lport dispatcher flow instruction details

  2. Based on the service priority, the higher priority service flow will go in dispatcher table with match as if-index

  3. Lower priority service will go in the same lport dispatcher table with match as if-index and service priority

Interface Manager Sequence Diagrams

Following gallery contains sequence diagrams for different IFM operations -

Removal of Tunnel Interface When OF Switch is Connected
https://wiki.opendaylight.org/images/8/81/Removal_of_Tunnel_Interface_When_OF_Switch_is_Connected.png
Removal of Tunnel Interfaces in Pre provisioning Mode
https://wiki.opendaylight.org/images/7/7e/Removal_of_Tunnel_Interfaces_in_Pre_provisioning_Mode.png
Updating of Tunnel Interfaces in Pre provisioning Mode
https://wiki.opendaylight.org/images/5/57/Updating_of_Tunnel_Interfaces_in_Pre_provisioning_Mode.png
creation of tunnel-interface when OF switch is connected and PortName already in OperDS
https://wiki.opendaylight.org/images/5/5d/Creation_of_tunnel-interface_when_OF_switch_is_connected_and_PortName_already_in_OperDS.png
creation of vlan interface in pre provisioning mode
https://wiki.opendaylight.org/images/d/d7/Creation_of_vlan_interface_in_pre_provisioning_mode.png
creation of vlan interface when switch is connected
https://wiki.opendaylight.org/images/b/b4/Creation_of_vlan_interface_when_switch_is_connected.png
deletion of vlan interface in pre provisioning mode
https://wiki.opendaylight.org/images/9/96/Deletion_of_vlan_interface_in_pre_provisioning_mode.png
deletion of vlan interface when switch is connected
https://wiki.opendaylight.org/images/9/96/Deletion_of_vlan_interface_when_switch_is_connected.png
Node connector added updated DCN handling
https://wiki.opendaylight.org/images/c/ce/Node_connector_added_updated_DCN_handling.png
Node connector removed DCN handling
https://wiki.opendaylight.org/images/3/36/Node_connector_removed_DCN_handling.png
updation of vlan interface in pre provisioning mode
https://wiki.opendaylight.org/view/File:Updation_of_vlan_interface_in_pre_provisioning_mode.png
updation of vlan interface when switch is connected
https://wiki.opendaylight.org/images/e/e5/Updation_of_vlan_interface_when_switch_is_connected.png

Internal Transport Manager (ITM)

Internal Transport Manager creates and maintains a mesh of tunnels of type VXLAN or GRE between OpenFlow switches forming an overlay transport network. ITM also builds external tunnels towards the DC Gateway. ITM does not provide redundant tunnel support.

The diagram below gives a pictorial representation of the different modules and data stores and their interactions.

https://wiki.opendaylight.org/images/1/12/ITM_top_lvl.png
ITM Dependencies

ITM mainly interacts with the following other genius modules -

  1. Interface Manager – For creating tunnel interfaces

  2. Aliveness Monitor - For monitoring the tunnel interfaces

  3. MdSalUtil – For openflow operations

The following picture shows ITM dependencies

digraph structs {
	subgraph {
"genius" -> "resourcemanager";
"interfacemanager-impl" -> "alivenessmonitor-api";
"genius" -> "arputil";
"itm" -> "itm-impl";
"arputil" -> "arputil-api";
"interfacemanager-impl" -> "idmanager-api";
"alivenessmonitor-impl-protocols" -> "arputil-api";
"interfacemanager-impl" -> "interfacemanager-api";
"interfacemanager" -> "interfacemanager-api";
"arputil" -> "arputil-impl";
"interfacemanager-api" -> "mdsalutil-api";
"lockmanager" -> "lockmanager-api";
"idmanager" -> "idmanager-api";
"idmanager-impl" -> "mdsalutil-api";
"mdsalutil-testutils" -> "mdsalutil-api";
"interfacemanager" -> "interfacemanager-shell";
"lockmanager" -> "lockmanager-impl";
"arputil-impl" -> "arputil-api";
"itm-impl" -> "mdsalutil-api";
"alivenessmonitor" -> "alivenessmonitor-impl";
"idmanager-shell" -> "mdsalutil-api";
"idmanager" -> "idmanager-shell";
"interfacemanager-shell" -> "interfacemanager-impl";
"genius" -> "lockmanager";
"resourcemanager-api" -> "idmanager-api";
"interfacemanager" -> "interfacemanager-impl";
"itm-impl" -> "idmanager-impl";
"itm-impl" -> "lockmanager-impl";
"idmanager-shell" -> "idmanager-impl";
"alivenessmonitor-impl-protocols" -> "interfacemanager-api";
"itm" -> "itm-api";
"resourcemanager-impl" -> "mdsalutil-api";
"arputil-impl" -> "interfacemanager-api";
"itm-impl" -> "idmanager-api";
"genius" -> "alivenessmonitor";
"mdsalutil" -> "mdsalutil-impl";
"interfacemanager-impl" -> "idmanager-impl";
"resourcemanager" -> "resourcemanager-impl";
"genius" -> "idmanager";
"alivenessmonitor" -> "alivenessmonitor-impl-protocols";
"alivenessmonitor-impl" -> "idmanager-api";
"resourcemanager-impl" -> "idmanager-api";
"genius" -> "interfacemanager";
"interfacemanager-impl" -> "lockmanager-impl";
"mdsalutil-impl" -> "mdsalutil-api";
"idmanager-impl" -> "lockmanager-api";
"mdsalutil" -> "mdsalutil-testutils";
"mdsalutil" -> "mdsalutil-api";
"resourcemanager" -> "resourcemanager-api";
"arputil-impl" -> "mdsalutil-api";
"alivenessmonitor-impl-protocols" -> "alivenessmonitor-impl";
"genius" -> "mdsalutil";
"interfacemanager-shell" -> "interfacemanager-api";
"alivenessmonitor-impl" -> "alivenessmonitor-api";
"itm-impl" -> "itm-api";
"idmanager" -> "idmanager-impl";
"alivenessmonitor-impl" -> "mdsalutil-api";
"itm-api" -> "interfacemanager-api";
"resourcemanager-impl" -> "resourcemanager-api";
"idmanager-impl" -> "idmanager-api";
"alivenessmonitor" -> "alivenessmonitor-api";
"lockmanager-impl" -> "lockmanager-api";
"genius" -> "itm";
"interfacemanager-impl" -> "mdsalutil-api";
"itm-impl" -> "interfacemanager-api";
}
rankdir=LR;
}
Code Structure

As shown in the diagram, ITM has a common placeholder for the various datastore listeners, the RPC implementation and the config helpers. Config helpers are responsible for creating/deleting internal and external tunnels.

https://wiki.opendaylight.org/images/c/c3/Itmcodestructure.png
ITM Data Model

ITM uses the following data models to create and manage tunnel interfaces. Tunnel interfaces are created by writing to Interface Manager’s Config DS.

itm.yang

The following datamodel is defined in itm.yang. This DS stores the transport zone information populated through REST or the Karaf CLI.

|image33|
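
For illustration, the sketch below populates a transport zone through RESTCONF, mirroring the kind of data this DS holds (zone name, tunnel type, subnets and the per-DPN vteps). The URL and the field names are assumptions based on itm.yang and may differ between releases; the IPs, DPN ids and port names are placeholders.

import requests

BASE = "http://ODL_IP:8181/restconf"
AUTH = ("admin", "admin")

transport_zone = {
    "transport-zone": [{
        "zone-name": "TZA",
        "tunnel-type": "odl-interface:tunnel-type-vxlan",   # assumed identity name
        "subnets": [{
            "prefix": "192.168.56.0/24",
            "gateway-ip": "0.0.0.0",
            "vlan-id": 0,
            "vteps": [
                {"dpn-id": 1, "portname": "eth1", "ip-address": "192.168.56.101"},
                {"dpn-id": 2, "portname": "eth1", "ip-address": "192.168.56.102"},
            ],
        }],
    }]
}

resp = requests.post(BASE + "/config/itm:transport-zones/",
                     json=transport_zone, auth=AUTH)
print(resp.status_code)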

Itm-state.yang

This DS stores the tunnel end point information populated through REST or Karaf CLI. The internal and external tunnel interfaces are also stored here.

|image34|

Itm-rpc.yang

This Yang defines all the RPCs provided by ITM.

|image35|

Itm-config.yang

|image36|

ITM Design

ITM uses the datastore job coordinator module for all its operations.

When tunnel end points are configured in the ITM datastores by CLI or REST, corresponding DTCNs are fired. The ITM TransportZoneListener listens to these changes. Based on the add/remove end point operation, the transport zone listener queues the appropriate job (ItmInternalTunnelAddWorker or ItmInternalTunnelDeleteWorker) to the DataStore Job Coordinator. Jobs within a transport zone are queued to be executed serially, and jobs across transport zones are executed in parallel.

Tunnel Building Logic

ITM will iterate over all the tunnel end points in each of the transport zones and build the tunnels between every pair of tunnel end points in the given transport zone. The type of the tunnel (GRE/VXLAN) will be indicated in the YANG model as part of the transport zone.
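
A small sketch of this rule in plain Python, for illustration only: for each transport zone, a tunnel is built in each direction between every pair of tunnel end points (TEPs), using the tunnel type configured on the zone.

from itertools import combinations

def build_tunnel_mesh(transport_zones):
    """transport_zones: {zone_name: {"tunnel_type": str, "teps": [tep_id, ...]}}"""
    tunnels = []
    for zone, data in transport_zones.items():
        for src, dst in combinations(data["teps"], 2):
            # one tunnel interface per direction of each TEP pair
            tunnels.append((zone, data["tunnel_type"], src, dst))
            tunnels.append((zone, data["tunnel_type"], dst, src))
    return tunnels

print(build_tunnel_mesh({"TZA": {"tunnel_type": "vxlan", "teps": ["dpn-1", "dpn-2", "dpn-3"]}}))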

ITM Operations

ITM builds the tunnel infrastructure and maintains it. ITM builds two types of tunnels, namely internal tunnels between OpenFlow switches and external tunnels between OpenFlow switches and an external device such as a datacenter gateway. These tunnels can be VXLAN or GRE. The tunnel endpoints are configured using either individual endpoint configuration or a scheme-based auto-configuration method, or via REST. ITM will iterate over all the tunnel end points in each of the transport zones and build the tunnels between every pair of tunnel end points in the given transport zone.

  • ITM creates tunnel interfaces in Interface manager Config DS.

  • Stores the tunnel mesh information in tunnel end point format in ITM config DS

  • ITM stores the internal and external trunk interface names in itm-state.yang

  • Creates external tunnels to DC Gateway when VPN manager calls the RPCs for creating tunnels towards DC gateway.

    ITM depends on interface manager for the following functionality.

  • Provides interface to create tunnel interfaces

  • Provides configuration option to enable monitoring on tunnel interfaces.

  • Registers tunnel interfaces with monitoring enabled with alivenessmonitor.

    ITM depends on Aliveness monitor for the following functionality.

  • Tunnel states for trunk interfaces are updated by alivenessmonitor, which sets the OperState for tunnel interfaces.

RPCs

The following are the RPCs supported by ITM

Get-tunnel-interface-id RPC

|image37|

Get-internal-or-external-interface-name

|image38|

Get-external-tunnel-interface-name

|image39|

Build-external-tunnel-from-dpns

|image40|

Add-external-tunnel-endpoint

|image41|

Remove-external-tunnel-from-dpns

|image42|

Remove-external-tunnel-endpoint

|image43|

Create-terminating-service-actions

|image44|

Remove-terminating-service-actions

|image45|
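
These RPCs can also be invoked over RESTCONF. The sketch below calls get-internal-or-external-interface-name; the input leaf names are assumptions and must be checked against itm-rpc.yang.

import requests

BASE = "http://ODL_IP:8181/restconf"
AUTH = ("admin", "admin")

payload = {
    "input": {
        "source-dpid": 1,                                   # assumed input leaf name
        "destination-ip": "192.168.56.102",                 # assumed input leaf name
        "tunnel-type": "odl-interface:tunnel-type-vxlan"    # assumed identity name
    }
}

resp = requests.post(BASE + "/operations/itm-rpc:get-internal-or-external-interface-name",
                     json=payload, auth=AUTH)
print(resp.status_code, resp.text)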

  1. Aliveness Monitor

  2. ID-Manager

  3. MDSAL Utils

  4. Resource Manager

  5. FCAPS manager

Genius Design Specifications

Starting from Carbon, Genius uses RST-format design specification documents for all new features. These specifications are a perfect way to understand the various Genius features.

Contents:

Title of the feature

[link to gerrit patch]

Brief introduction of the feature.

Problem description

Detailed description of the problem being solved by this feature

Use Cases

Use cases addressed by this feature.

Proposed change

Details of the proposed change.

Pipeline changes

Any changes to pipeline must be captured explicitly in this section.

Yang changes

This should detail any changes to yang models.

Configuration impact

Any configuration parameters being added/deprecated for this feature? What will be defaults for these? How will it impact existing deployments?

Note that outright deletion/modification of existing configuration is not allowed due to backward compatibility. They can only be deprecated and deleted in later release(s).

Clustering considerations

This should capture how clustering will be supported. This can include, but is not limited to, the use of CDTCL, EOS, Cluster Singleton, etc.

Other Infra considerations

This should capture impact from/to different infra components like MDSAL Datastore, karaf, AAA etc.

Security considerations

Document any security related issues impacted by this feature.

Scale and Performance Impact

What are the potential scale and performance impacts of this change? Does it help improve scale and performance or make it worse?

Targeted Release

What release is this feature targeted for?

Alternatives

Alternatives considered and why they were not selected.

Usage

How will end user use this feature? Primary focus here is how this feature will be used in an actual deployment.

For most Genius features users will be other projects but this should still capture any user visible CLI/API etc. e.g. ITM configuration.

This section will be primary input for Test and Documentation teams. Along with above this should also capture REST API and CLI.

Features to Install

odl-genius-ui

Identify existing karaf feature to which this change applies and/or new karaf features being introduced. These can be user facing features which are added to integration/distribution or internal features to be used by other projects.

REST API

Sample JSONS/URIs. These will be an offshoot of yang changes. Capture these for User Guide, CSIT, etc.

CLI

Any CLI if being added.

Implementation

Assignee(s)

Who is implementing this feature? In case of multiple authors, designate a primary assignee and other contributors.

Primary assignee:

<developer-a>

Other contributors:

<developer-b> <developer-c>

Work Items

Break up work into individual items. This should be a checklist on Trello card for this feature. Give link to trello card or duplicate it.

Dependencies

Any dependencies being added/removed? Dependencies here refer to internal [other ODL projects] as well as external [OVS, karaf, JDK etc.] dependencies. This should also capture specific versions of any of these dependencies, e.g. OVS version, Linux kernel version, JDK etc.

This should also capture impacts on existing projects that depend on Genius. The following projects currently depend on Genius:

  • Netvirt

  • SFC

Testing

Capture details of testing that will need to be added.

Documentation Impact

What is impact on documentation for this change? If documentation change is needed call out one of the <contributors> who will work with Project Documentation Lead to get the changes done.

Don’t repeat details already discussed but do reference and call them out.

References

Add any useful references. Some examples:

  • Links to Summit presentation, discussion etc.

  • Links to mail list discussions

  • Links to patches in other projects

  • Links to external documentation

[1] OpenDaylight Documentation Guide

[2] https://specs.openstack.org/openstack/nova-specs/specs/kilo/template.html

Note

This template was derived from [2], and has been modified to support our project.

This work is licensed under a Creative Commons Attribution 3.0 Unported License. http://creativecommons.org/licenses/by/3.0/legalcode

Genius Event Logging Using Log4j2

Genius Event Logging Reviews

Genius event logger is the feature which is used to log some important events of Genius into a separate file using log4j2.

To achieve this, the user should configure a log4j2 appender for the genius event logger in the etc/org.ops4j.pax.logging.cfg file.

Problem Description

When many log events are present in the karaf.log file, it is difficult for the user to quickly find the main events related to the genius southbound connection. Also, as there is a huge amount of karaf logs, log events may get rolled out of the karaf.log files, due to which we may miss some of the events related to genius.

The genius event logger feature is intended to overcome this problem by logging important genius events into a separate file using a log4j2 appender, so that users can quickly refer to these event logs to identify important genius events related to connection, disconnection, reconciliation, port events, errors, failures, etc.

Use Cases
  1. By default the genius event logging feature is not enabled; it requires configuration changes in the logging configuration file.

  2. The user can configure a log4j2 appender for the genius event logger (as mentioned in the configuration section) to log the important genius events in a separate file at the path mentioned in the configuration file.

Proposed Change

  1. A log4j2 logger with the name GeniusEventLogger will be created and used to log events at the time of connection, disconnection, reconciliation, etc.

  2. By default the event logger logging level is fixed to DEBUG level. Unless an appender configuration is present in the logging configuration file, the events will not be enqueued for logging.

  3. The genius event logs will have a pattern consisting of the time stamp of the event and a description of the event, followed by the datapathId of the switch to which the event relates.

  4. The event logs will be moved to a separate file (data/events/genius/genius.log, as per the configuration mentioned in the configuration section), and this path can be changed as needed.

  5. The rollover strategy rolls events into another file when the current file reaches its maximum size (10MB as per the configuration), and the event logs will be overwritten once 10 such files (as per the configuration) are complete.

Other Changes

Configuration impact

Below log4j2 configuration changes should be added in etc/org.ops4j.pax.logging.cfg file for logging genius events into a separate file.

org.ops4j.pax.logging.cfg
log4j2.logger.genius.name = GeniusEventLogger
log4j2.logger.genius.level = INFO
log4j2.logger.genius.additivity = false
log4j2.logger.genius.appenderRef.GeniusEventRollingFile.ref = GeniusEventRollingFile

log4j2.appender.genius.type = RollingRandomAccessFile
log4j2.appender.genius.name = GeniusEventRollingFile
log4j2.appender.genius.fileName = ${karaf.data}/events/genius/genius.log
log4j2.appender.genius.filePattern = ${karaf.data}/events/genius/genius.log.%i
log4j2.appender.genius.append = true
log4j2.appender.genius.layout.type = PatternLayout
log4j2.appender.genius.layout.pattern =  %d{ISO8601} | %m%n
log4j2.appender.genius.policies.type = Policies
log4j2.appender.genius.policies.size.type = SizeBasedTriggeringPolicy
log4j2.appender.genius.policies.size.size = 10MB
log4j2.appender.genius.strategy.type = DefaultRolloverStrategy
log4j2.appender.genius.strategy.max = 10
Clustering considerations

The genius event logger is configured per controller and logs only the events of that controller. This does not affect a cluster environment in any way.

Usage

Features to Install

Included with the common genius features.

REST API

None

CLI

None

Dependencies

This doesn’t add any new dependencies.

Testing

  1. Verifying the event logs in karaf.log file, when there is no appender configuration added in logger configuration file.

  2. Making appender configuration in logger configuration file and verifying the important events of genius in the log file specified in configuration.

Unit Tests

None added newly.

CSIT

None

ITM Scale Enhancements

https://git.opendaylight.org/gerrit/#/q/topic:ITM-Scale-Improvements

ITM creates the tunnel mesh among switches with the help of interfacemanager. This spec describes re-designing ITM to create the tunnel mesh independently, without interface manager. This is expected to improve ITM performance and therefore support a larger tunnel mesh.

Problem description

ITM creates tunnels among the switches. When ITM receives the configuration from the NBI, it creates interfaces in the ietf-interfaces config DS, which interface manager listens to in order to create the tunnel constructs on the switches. This involves an additional hop from ITM to interface manager, which constitutes many DCNs and DS reads and writes. This induces a lot of load on the system, especially in a scale setup. Also, tunnel interfaces are categorized as generic ietf-interfaces along with tap and vlan interfaces, and interface manager deals with all these interfaces. Applications listening for interface state get updates on tunnel interfaces both from interface manager and from ITM. This degrades the performance, and hence internal tunnel mesh creation does not scale up very well beyond 80 switches.

Use Cases

This feature will support the following use cases.

  • Use case 1: ITM will create a full tunnel mesh when it receives the configuration from NBI. Tunnel ports or OF based tunnels will be created on the switches directly by ITM, by-passing interface manager.

  • Use case 2: ITM will support a config parameter which can be used to select the old or the new way of tunnel mesh creation. Changing this flag dynamically after configuration will not be supported. If one needs to switch the implementation, then the controller needs to be restarted.

  • Use case 3: ITM will detect the tunnel state changes and publish it to the applications.

  • Use case 4: ITM will provide the tunnel egress actions irrespective of the presence of tunnel in the dataplane.

  • Use case 5: ITM will support a config parameter to enable/disable monitoring on a per tunnel basis

  • Use case 6: ITM will support BFD event dampening during initial BFD config.

  • Use Case 7: ITM will support a config parameter to turn ON/OFF the alarm generation based on tunnel status.

  • Use case 8: ITM will support traffic switch over in dataplane based on the tunnel state.

  • Use case 9: ITM will cache appropriate MDSAL data to further improve the performance.

Proposed change

In order to improve the scale numbers, handling of tunnel interfaces is separated from other interfaces. Hence, the ITM module is being re-architected to by-pass interface manager and create/delete the tunnels between the switches directly. ITM will also provide the tunnel status without the support of interface manager.

By-passing interface manager provides the following advantages:

  • removes the creation of ietf interfaces in the config DS

  • reduces the number of DCNs being generated

  • reduces the number of datastore reads and writes

  • applications get tunnel updates only from ITM

All this should improve the performance and thereby the scale numbers.

A further improvement that can be done is to:

  • Decouple the DPN Id requirement from tunnel port creation. Node id information suffices for tunnel port creation, as the DPN Id is not required. This will make tunnel creation simpler and will remove any timing issues, as the mapping between DPN Id and Node id for tunnel creation is eliminated. This also decouples the OF channel establishment and tunnel port creation. Further, ITM’s auto tunnel creation, which learns the network topology details when OVS connects, can be leveraged for tunnel creation.

Assumption

This feature will not be used along with the per-tunnel service binding use case, as both use cases together are not supported. The Multiple VxLAN Tunnel feature will not work with this feature, as it needs service binding on tunnels.

Implementation

Most of the code for these proposed changes will be in a separate package for maintainability. There will be minimal changes to some common code in ITM and interface manager to switch between the old and the new way of tunnel creation.

  • If the itm-direct-tunnels flag is ON, then:

    – The itm:transport-zones listener will trigger the new code upon receiving the transport zone configuration.

    – Interface manager will ignore events pertaining to OVSDB tunnel ports and tunnel interface related inventory changes.

    – When ITM gets the NBI tep configuration:

      o ITM wires the tunnels by forming the tunnel interface names and storing the tunnel name and tep information in dpn-teps-state in itm-state.yang. ITM does not create the tunnel interfaces in the ietf-interface config DS.

      o ITM generates a unique number from ID Manager for each tep; this number will be programmed as the group id in all other CSSs in order to reach this tep.

      o This unique number will also serve as the if-index for each tep interface. This will be stored in if-indexes-interface-map in odl-itm-meta.yang.

      o Install the group on the switch. ITM will write the group in the openflow plugin inventory config DS, irrespective of whether the switch is connected.

      o Add ports to the bridge through OVSDB, if the switch is connected. ITM will use the bridge related information from odl-itm-meta.yang.

    – Implement listeners on the Topology Operational DS for OvsdbBridgeAugmentation. When the switch gets connected, add ports to the bridge (in the pre-configured case).

    – Implement listeners on the Inventory Operational DS for FlowCapableNodeConnector.

    – On OfPort addition:

      o push the table 0 flow entries

      o populate tunnels_state in itm-state.yang with the tunnel state that comes in the OF Port status

      o update the group with watch-port for handling traffic switchover in the dataplane

  • If this feature is not enabled, then ITM will take the usual route of configuring ietf-interfaces.

  • If alarm-generation-enabled is true, then register for changes in tunnels_state to generate the alarms.

  • ITM will support individual tunnels to be monitored.

  • If Global monitoring flag is enabled, then all tunnels will be monitored.

  • If Global flag is turned OFF, then individual per tunnel monitoring flag will take effect.

  • ITM will support dynamic enable/disable of bfd global flag / individual flag.

  • The BFD dampening logic for BFD states is as follows (a sketch of this logic follows this list):

    – On tunnel creation, ITM will consider the initial tunnel status to be UP and LIVE and mark the tunnel as being in the ‘dampening’ state.

    – If it receives an UP and LIVE event, the tunnel comes out of the dampening state; no change/event is triggered.

    – If it does not receive UP and LIVE within a configured duration, it will set the tunnel state to DOWN.

    – There will be a configuration parameter for the above duration - bfd-dampening-timeout.

  • External tunnel (HWVTEP and DC Gateway) handling will take the same existing path, that is, through interfacemanager.

  • OF Tunnel (flow based tunnelling) implementation will also be done directly by ITM following the same approach.
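
The dampening behaviour described in the list above can be captured as a small per-tunnel timer. The sketch below is illustrative only and uses hypothetical names (BfdDampeningSketch, markTunnelDown and so on); it merely shows how the configured bfd-dampening-timeout could be applied to each newly created tunnel.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.ScheduledFuture;
    import java.util.concurrent.TimeUnit;

    /**
     * Illustrative sketch of the BFD dampening logic described above.
     * All names are hypothetical; the real implementation would hook into
     * the ITM tunnel state listeners and the tunnels_state datastore.
     */
    public class BfdDampeningSketch {

        private final long bfdDampeningTimeoutSeconds;      // from itm-config (default 30)
        private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        private final Map<String, ScheduledFuture<?>> dampenedTunnels = new ConcurrentHashMap<>();

        public BfdDampeningSketch(long bfdDampeningTimeoutSeconds) {
            this.bfdDampeningTimeoutSeconds = bfdDampeningTimeoutSeconds;
        }

        /** On tunnel creation: assume UP and LIVE and start the dampening timer. */
        public void onTunnelCreated(String tunnelName) {
            ScheduledFuture<?> timer = scheduler.schedule(
                    () -> markTunnelDown(tunnelName),        // no UP+LIVE seen in time -> DOWN
                    bfdDampeningTimeoutSeconds, TimeUnit.SECONDS);
            dampenedTunnels.put(tunnelName, timer);
        }

        /** On BFD UP and LIVE: leave the dampening state quietly, no event published. */
        public void onBfdUpAndLive(String tunnelName) {
            ScheduledFuture<?> timer = dampenedTunnels.remove(tunnelName);
            if (timer != null) {
                timer.cancel(false);                         // tunnel stays UP, nothing emitted
            }
        }

        private void markTunnelDown(String tunnelName) {
            dampenedTunnels.remove(tunnelName);
            // The real code would update tunnels_state in itm-state.yang to DOWN here
            // and (optionally) raise an alarm if alarm-generation-enabled is true.
        }
    }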

Pipeline changes

The pipeline will change, as the egress action will point to a group instead of an output to a port.

  • ITM will install the Tunnel Ingress Table (table 0): match on in_port and goto INTERNAL_TUNNEL_TABLE or L3_LFIB_TABLE. Metadata will contain the LPort tag.

    cookie=0x8000001, duration=6627.550s, table=0, n_packets=1115992, n_bytes=72424591, priority=5,in_port=6 actions=write_metadata:0x199f0000000001/0x1fffff0000000001,goto_table:36
    cookie=0x8000001, duration=6627.545s, table=0, n_packets=280701, n_bytes=19148626, priority=5,in_port=7 actions=write_metadata:0x19e90000000001/0x1fffff0000000001,goto_table:20

  • ITM will create a group-id for each (destination) DPN and install the group on all other DPNs to reach this destination DPN (a sketch of the group-id allocation follows this list).

  • ITM will update the group with watch-port set to the tunnel openflow port.

    group_id=800000,type=ff, bucket=weight:100,watch_port=5,actions=output:5

  • ITM will program Table 220 with match on Lport Tag and output: [group id]

  • ITM will provide the RPC get-Egress-Action-For-Interface with the following actions:

    – set Tunnel Id

    – load Reg6 (with IfIndex)

    – resubmit to Table 220
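
For the group-id step above, a minimal sketch of a deterministic allocation is shown below. IdAllocator, the pool name and the id-key format are assumptions made for illustration only; in Genius this role is played by the ID Manager allocate-id RPC, which returns the same id for the same pool-name/id-key pair.

    import java.math.BigInteger;

    /**
     * Illustrative only: deterministic per-destination group id allocation.
     * IdAllocator is a hypothetical stand-in for the Genius ID Manager
     * allocate-id RPC.
     */
    public class TunnelGroupIdSketch {

        /** Hypothetical wrapper around the id-manager allocate-id call. */
        interface IdAllocator {
            long allocateId(String poolName, String idKey);
        }

        // Pool name is an assumption used only for this illustration.
        private static final String ITM_GROUP_POOL = "itm.tunnel.group.pool";

        private final IdAllocator idAllocator;

        public TunnelGroupIdSketch(IdAllocator idAllocator) {
            this.idAllocator = idAllocator;
        }

        /**
         * Returns the group id used on every other DPN to reach destinationDpn.
         * Using the destination DPN id as the id-key makes the allocation
         * idempotent: all callers get the same group id for the same destination.
         */
        public long groupIdFor(BigInteger destinationDpn) {
            return idAllocator.allocateId(ITM_GROUP_POOL, "dest.dpn." + destinationDpn);
        }
    }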

Yang changes

A new container dpn-teps-state will be added to itm-state.yang. This will reside in the config DS.

itm-state.yang

 container dpn-teps-state {
     list dpns-teps {
         key "source-dpn-id";

         leaf source-dpn-id {
             type uint64;
         }
         leaf tunnel-type {
             type identityref {
                 base odlif:tunnel-type-base;
             }
         }
         leaf group-id {
             type uint32;
         }

         /* Remote DPNs to which this DPN-Tep has a tunnel */
         list remote-dpns {
             key "destination-dpn-id";

             leaf destination-dpn-id {
                 type uint64;
             }
             leaf tunnel-name {
                 type string;
             }
             leaf monitor-enabled { // Will be enhanced to support monitor id.
                 type boolean;
                 default true;
             }
         }
     }
 }

A new Yang file ``odl-itm-meta.yang`` will be created to store OVS bridge related information.

odl-itm-meta.yang
 container bridge-tunnel-info {
         description "Contains the list of dpns along with the tunnel interfaces configured on them.";

         list ovs-bridge-entry {
             key dpid;
             leaf dpid {
                 type uint64;
             }

             leaf ovs-bridge-reference {
                 type southbound:ovsdb-bridge-ref;
                 description "This is the reference to an ovs bridge";
             }
             list ovs-bridge-tunnel-entry {
                 key tunnel-name;
                 leaf tunnel-name {
                     type string;
                 }
             }
         }
     }

     container ovs-bridge-ref-info {
         config false;
         description "The container that maps dpid with ovs bridge ref in the operational DS.";

         list ovs-bridge-ref-entry {
             key dpid;
             leaf dpid {
                 type uint64;
             }

             leaf ovs-bridge-reference {
                 type southbound:ovsdb-bridge-ref;
                 description "This is the reference to an ovs bridge";
             }
         }
     }

     container if-indexes-tunnel-map {
            config false;
            list if-index-tunnel {
                key if-index;
                leaf if-index {
                    type int32;
                }
                leaf interface-name {
                    type string;
                }
            }
    }

A new config parameter will be added to ``interfacemanager-config``:
interfacemanager-config.yang

 leaf itm-direct-tunnels {
     description "Enable ITM to handle tunnels directly by-passing interface manager
                  to scale up ITM tunnel mesh.";
     type boolean;
     default false;
 }
New config parameters will be added to ``itm-config``:
itm-config.yang

 leaf alarm-generation-enabled {
     description "Enable the ITM to generate alarms based on tunnel state.";
     type boolean;
     default true;
 }
 leaf bfd-dampening-timeout {
     description "CSC will wait for this timeout period to receive the BFD UP and LIVE
                  event from the switch. If not received within this time period, CSC
                  will mark the tunnel as DOWN. This value is in seconds.";
     type uint16;
     default 30;
 }

The RPC call itm-rpc:get-egress-action will return the group id, which will point to the tunnel port (once the tunnel port is created on the switch) between the source and destination DPN ids.

itm-rpc.yang
     rpc get-egress-action {
         input {
              leaf source-dpid {
                   type uint64;
              }

              leaf destination-dpid {
                   type uint64;
              }

              leaf tunnel-type {
                  type identityref {
                       base odlif:tunnel-type-base;
                  }
              }
        }

        output {
             leaf group-id {
                  type uint32;
             }
        }
     }

ITM will also support another RPC ``get-tunnel-type``
itm-rpc.yang
 rpc get-tunnel-type {
          description "to get the type of the tunnel interface(vxlan, vxlan-gpe, gre, etc.)";
              input {
                  leaf intf-name {
                      type string;
                  }
              }
              output {
                  leaf tunnel-type {
                      type identityref {
                          base odlif:tunnel-type-base;
                      }
              }
          }
 }

For the two RPCs above, when this feature is enabled, ITM will service them itself for internal tunnels and forward them to interfacemanager for external tunnels. When this feature is disabled, ITM will forward the RPCs to interfacemanager for both internal and external tunnels. Applications should now start using these two RPCs from ITM and not from interfacemanager.
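
A minimal sketch of that dispatch decision follows. All types here (TunnelLookup, InterfaceManagerRpcClient, EgressAction) are hypothetical stand-ins; the real code would consult dpn-teps-state and delegate to the interface manager RPC service for the external case.

    import java.math.BigInteger;

    /**
     * Illustrative only: how ITM could service get-egress-action for internal
     * tunnels and delegate external tunnels to interface manager.
     */
    public class ItmRpcDispatchSketch {

        private final boolean itmDirectTunnels;               // from interfacemanager-config
        private final TunnelLookup tunnelLookup;               // hypothetical: reads dpn-teps-state
        private final InterfaceManagerRpcClient ifmRpc;        // hypothetical: wraps IFM RPCs

        public ItmRpcDispatchSketch(boolean itmDirectTunnels, TunnelLookup tunnelLookup,
                                    InterfaceManagerRpcClient ifmRpc) {
            this.itmDirectTunnels = itmDirectTunnels;
            this.tunnelLookup = tunnelLookup;
            this.ifmRpc = ifmRpc;
        }

        public EgressAction getEgressAction(BigInteger srcDpn, BigInteger dstDpn) {
            if (itmDirectTunnels && tunnelLookup.isInternalTunnel(srcDpn, dstDpn)) {
                // Internal tunnel: ITM answers directly with the group id from dpn-teps-state.
                long groupId = tunnelLookup.getGroupId(srcDpn, dstDpn);
                return EgressAction.toGroup(groupId);
            }
            // External tunnel, or feature disabled: forward to interface manager as before.
            return ifmRpc.getEgressActionsForTunnel(srcDpn, dstDpn);
        }

        /* Hypothetical collaborators, defined only to keep the sketch self-contained. */
        interface TunnelLookup {
            boolean isInternalTunnel(BigInteger srcDpn, BigInteger dstDpn);
            long getGroupId(BigInteger srcDpn, BigInteger dstDpn);
        }

        interface InterfaceManagerRpcClient {
            EgressAction getEgressActionsForTunnel(BigInteger srcDpn, BigInteger dstDpn);
        }

        static final class EgressAction {
            final Long groupId;
            private EgressAction(Long groupId) { this.groupId = groupId; }
            static EgressAction toGroup(long groupId) { return new EgressAction(groupId); }
        }
    }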

ITM will enhance the existing RPCs create-terminating-service-actions and remove-terminating-service-actions.

A new RPC will be supported by ITM to enable/disable monitoring of individual tunnels - internal or external.

itm-rpc.yang
 rpc set-bfd-enable-on-tunnel {
        description "used for turning ON/OFF to monitor individual tunnels";
        input {
            leaf source-node {
               type string;
            }
            leaf destination-node {
               type string;
            }
            leaf monitoring-params {
               type itmcfg:tunnel-monitor-params;
            }
        }
 }
Configuration impact

Following are the configuration changes and impact in the OpenDaylight.

  • The following parameter is added to genius-interfacemanager-config.xml:

    • itm-direct-tunnels: a boolean parameter which enables or disables the new ITM realization of the tunnel mesh. Default value is false.

  • The following parameters are added to genius-itm-config.xml:

    • alarm-generation-enabled: a boolean parameter which enables or disables the generation of alarms by ITM based on tunnel state. Default value is true.

    • bfd-dampening-timeout: the timeout, in seconds, used by the BFD dampening logic. Default value is 30.

genius-interfacemanager-config.xml
           <interfacemanager-config xmlns="urn:opendaylight:genius:itm:config">
               <itm-direct-tunnels>false</itm-direct-tunnels>
           </interfacemanager-config>
genius-itm-config.xml
     <itm-config xmlns="urn:opendaylight:genius:itm:config">
         <alarm-generation-enabled>true</alarm-generation-enabled>
         <bfd-dampening-timeout>30</bfd-dampening-timeout> <!-- value in seconds -->
     </itm-config>

 Runtime changes to the parameters of this config file will not be taken into consideration.
Clustering considerations

The solution is supported on a 3-node cluster.

Upgrade Support

Upgrading ODL from the previous ITM tunnel mesh creation logic to this new tunnel mesh creation logic will be supported. When the itm-direct-tunnels flag changes from disabled in the previous version to enabled in this version, ITM will automatically mesh the tunnels in the new way and clean up any data that was persisted by the previous tunnel creation method.

Scale and Performance Impact

This solution will improve scale numbers by reducing the number of interfaces created in ietf-interfaces, which cuts down on the additional processing done by interface manager. This feature will also provide fine-grained, per-tunnel BFD monitoring. This should considerably reduce the number of BFD events generated, since only the required tunnels are monitored rather than all of them. Overall this should improve ITM performance and scale numbers.

Usage

Features to Install

This feature doesn’t add any new karaf feature. Installing any of the below features can enable the service:

odl-genius-rest
odl-genius

REST API
Enable this feature

Before starting the controller, enable this feature in genius-interfacemanager-config.xml by editing it as follows:

genius-interfacemanager-config.xml
     <interfacemanager-config xmlns="urn:opendaylight:genius:interface:config">
         <itm-direct-tunnels>true</itm-direct-tunnels>
     </interfacemanager-config>
Creation of transport zone

Post the ITM transport zone configuration from the REST.

URL: restconf/config/itm:transport-zones/

Sample JSON data

{
 "transport-zone": [
     {
         "zone-name": "TZA",
         "subnets": [
             {
                 "prefix": "192.168.56.0/24",
                 "vlan-id": 0,
                 "vteps": [
                     {
                         "dpn-id": "1",
                         "portname": "eth2",
                         "ip-address": "192.168.56.101",
                     },
                     {
                         "dpn-id": "2",
                         "portname": "eth2",
                         "ip-address": "192.168.56.102",
                     }
                 ],
                 "gateway-ip": "0.0.0.0"
             }
         ],
         "tunnel-type": "odl-interface:tunnel-type-vxlan"
     }
 ]
}
ITM RPCs

URL: restconf/operations/itm-rpc:get-egress-action

 {
     "input": {
         "source-dpid": "40146672641571",
         "destination-dpid": "102093507130250",
         "tunnel-type": "odl-interface:tunnel-type-vxlan"
     }
 }

CLI

This feature will not add any new CLI for configuration. Some debug CLIs to dump the cache information may be added for debugging purposes.

Assignee(s)
Primary assignee:

<Hema Gopalakrishnan>

Work Items

Trello card:

  • Add support for the configuration parameter itm-direct-tunnels.

  • Implement listeners for Topology Operational DS for OvsdbBridgeAugmentation.

  • Implement listeners to Inventory Operational DS for FlowCapableNodeConnector.

  • Implement support for creation / deletion of tunnel ports

  • Implement support for installing / removal of Ingress flows

  • Implement API/caches to access bridge-interface-info, bridge-ref-info from odl-itm-meta.yang.

  • Add support for the config parameter alarm-generation-enabled.

  • Implement the dampening logic for bfd states.

  • Add support to populate the new data store dpn-teps-state in itm-state.yang.

  • Add support for getting group id from ID Manager for each DPN and install it on all other switches.

  • Add support to update the group with the tunnel port when OfPort add DCN is received.

  • Add support for RPC - getEgressAction, getTunnelType, set-bfd-enable-on-tunnel.

  • Enhance the existing createTerminatingServiceActions and removeTerminatingServiceActions RPCs.

  • Add caches wherever required - this includes adding data to the cache, cleaning it up, and CLIs to dump the cache.

  • Add support for upgrade from previous tunnel creation way to this new way of tunnel creation.

The following work items will be taken up later

  • Add support for OF Tunnel based implementation.

  • Removal of dependency on DPN Id for Tunnel mesh creation.

Dependencies

This requires minimum of OVS 2.8 where the BFD state can be received in of-port events.

The dependent applications in netvirt and SFC will have to use the ITM RPC to get the egress actions. ITM will respond with egress actions for internal tunnels; for external tunnels, ITM will forward the RPC to interface manager, fetch the output and forward it to the applications.

Testing

Unit Tests

Appropriate UTs will be added for the new code coming in for this feature. This includes, but is not limited to:

  1. Add ITM configuration enabling this new feature, configure two TEPs and check if the tunnels are created. Check ietf interfaces to verify that interface manager is bypassed, check if groups are created on the switch.

  2. Delete the TEPs and verify if the tunnels are deleted appropriately.

  3. Toggle the alarm-generation-enabled and check if the alarms were generated / suppressed based on the flag.

  4. Enable monitoring on a specific tunnel and make the tunnel down on the dataplane and verify if the tunnel status is reflected correctly on the controller.

Integration Tests
  1. Configure ITM to build a larger tunnel mesh and check:

     • the tunnels are created correctly

     • the tunnels are UP

     • the time taken to create the tunnel mesh

     • the tunnels come back up correctly after controller restart

  2. Increase the number of configured DPNs and find out the maximum configurable DPNs for which the tunnel mesh works properly.

CSIT

The following test cases will be added to genius CSIT.

  1. Add ITM configuration enabling this new feature, configure two TEPs and check if the tunnels are created. Check ietf interfaces to verify that interface manager is bypassed, and check if groups are created on the switch.

  2. Delete the TEPs and verify if the tunnels are deleted appropriately.

  3. Toggle the alarm-generation-enabled and check if the alarms were generated / suppressed based on the flag.

  4. Enable monitoring on a specific tunnel, bring the tunnel down on the dataplane, and verify that the tunnel status is reflected correctly on the controller.

Documentation Impact

This will require changes to User Guide and Developer Guide.

User Guide will need to add information for the below details. For a scale setup, this feature needs to be enabled in order to support a tunnel mesh among a large number of DPNs.

  • Usage details of genius-interfacemanager-config.xml config file for ITM to enable this feature by configuring itm-direct-tunnels flag to true.

Developer Guide will need to capture how to use the following ITM RPCs:

  • get-egress-action

  • get-tunnel-type

  • create-terminating-service-actions

  • remove-terminating-service-actions

  • set-bfd-enable-on-tunnel

References

[1] Genius Oxygen Release Plan

[2] Genius Trello Card

[3] OpenDaylight Documentation Guide

ITM Tunnel Auto-Configuration

https://git.opendaylight.org/gerrit/#/q/topic:itm-auto-config

Internal Transport Manager (ITM) Tunnel Auto configuration feature proposes a solution to migrate from REST/CLI based Tunnel End Point (TEP) configuration to automatic learning of Openvswitch (OVS) TEPs from the switches, thereby triggering automatic configuration of tunnels.

Problem description

User has to use ITM REST APIs for addition/deletion of TEPs into/from a transport zone. However, OVS and other TOR switches that support OVSDB can be configured with TEP information without requiring TEP configuration through the REST API; configuring the same TEPs again through REST leads to redundancy and makes the process cumbersome and error-prone.

Use Cases

This feature will support following use cases:

  • Use case 1: Add tep to existing transport-zone from southbound interface(SBI).

  • Use case 2: Delete tep from SBI.

  • Use case 3: Move the tep from one transport zone to another from SBI.

  • Use case 4: User can specify the Datapath Node (DPN) bridge for tep other than br-int from SBI.

  • Use case 5: Allow user to configure a tep from SBI if they want to use flow based tunnels.

  • Use case 6: TEP-IP, Port, vlan, subnet, gateway IP are optional parameters for creating a transport zone from REST.

  • Use case 7: User must configure Transport zone name and tunnel type parameters while creating a transport zone from REST, as both are mandatory parameters.

  • Use case 8: Store TEPs received on OVS connect for a transport-zone which is not yet created, and also allow such TEPs to be moved into the transport-zone when it gets created from northbound.

  • Use case 9: Allow user to control creation of default transport zone through start-up configurable parameter def-tz-enabled in config file.

  • Use case 10: Tunnel-type for default transport zone should be configurable through configurable parameter def-tz-tunnel-type in config file.

  • Use case 11: Allow user to change def-tz-enabled configurable parameter from OFF to ON during OpenDaylight controller restart.

  • Use case 12: Allow user to change def-tz-enabled configurable parameter from ON to OFF during OpenDaylight controller restart.

  • Use case 13: Default value for configurable parameter def-tz-enabled is OFF and if it is not changed by user, then it will be OFF after OpenDaylight controller restart as well.

  • Use case 14: Allow dynamic change for local_ip tep configuration via change in Openvswitch table’s other_config parameter local_ip.

Following use cases will not be supported:

  • If a switch gets disconnected, the corresponding TEP entries will not get cleared from the ITM config datastore (DS); the operator must explicitly clean them up.

  • Operator is not supposed to delete default-transport-zone from REST, such scenario will be taken as incorrect configuration.

  • Dynamic change for of-tunnel tep configuration via change in Openvswitch table’s external_ids parameter of-tunnel is not supported.

  • Dynamic change for configurable parameters def-tz-enabled and def-tz-tunnel-type is not supported.

Proposed change

ITM will create a default transport zone on OpenDaylight start-up if the configurable parameter def-tz-enabled is true in the genius-itm-config.xml file (by default, this flag is false). When the flag is true, the default transport zone is created and configured as follows:

  • Default transport zone will be created with name default-transport-zone.

  • Tunnel type: This would be configurable parameter via config file. ITM will take tunnel type value from config file for default-transport-zone. Tunnel-type value cannot be changed dynamically. It will take value of def-tz-tunnel-type parameter from config file genius-itm-config.xml on startup.

    • If def-tz-tunnel-type parameter is changed and def-tz-enabled remains true during OpenDaylight restart, then default-transport-zone with previous value of tunnel-type would be first removed and then default-transport-zone would be created with newer value of tunnel-type.

If def-tz-enabled is configured as false, then ITM will delete default-transport-zone if it is present already.
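
A minimal sketch of this start-up handling is shown below, assuming hypothetical ItmConfigView and TransportZoneWriter helpers in place of the generated itm-config binding and the ITM datastore utilities.

    /**
     * Illustrative sketch of the start-up handling of default-transport-zone.
     * All names are hypothetical stand-ins for the real ITM code.
     */
    public class DefaultTransportZoneBootstrapSketch {

        private static final String DEFAULT_TZ = "default-transport-zone";

        interface ItmConfigView {                    // hypothetical view of genius-itm-config.xml
            boolean isDefTzEnabled();
            String getDefTzTunnelType();             // e.g. "vxlan"
        }

        interface TransportZoneWriter {              // hypothetical config-DS helper
            String getTunnelType(String zoneName);   // null if the zone does not exist
            void createZone(String zoneName, String tunnelType);
            void deleteZone(String zoneName);
        }

        public void onStartup(ItmConfigView config, TransportZoneWriter writer) {
            String existingType = writer.getTunnelType(DEFAULT_TZ);
            if (!config.isDefTzEnabled()) {
                // Flag is false: remove the default zone if a previous run created it.
                if (existingType != null) {
                    writer.deleteZone(DEFAULT_TZ);
                }
                return;
            }
            String wantedType = config.getDefTzTunnelType();
            if (existingType != null && !existingType.equals(wantedType)) {
                // Tunnel type changed across restart: remove and recreate with the new type.
                writer.deleteZone(DEFAULT_TZ);
            }
            if (existingType == null || !existingType.equals(wantedType)) {
                writer.createZone(DEFAULT_TZ, wantedType);
            }
        }
    }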

When a transport-zone is added from northbound, i.e. the REST interface, only some of the transport-zone parameters are mandatory; the rest are now optional, as shown below.

  • Mandatory parameters: transport-zone name, tunnel-type

  • Optional parameters: TEP IP-Address, Subnet prefix, Dpn-id, Gateway-ip, Vlan-id, Portname

When a new transport zone is created, check whether any TEPs for that transport zone are present in the tepsInNotHostedTransportZone container in the Oper DS. If present, remove them from tepsInNotHostedTransportZone, add them under the transport zone, and include them in the tunnel mesh.

ITM will register listeners on the Node of the network topology Operational DS to receive Data Tree Change Notifications (DTCNs) for add/update/delete of OVSDB nodes. Such DTCNs are parsed to detect changes to the TEP parameters in the other_config and external_ids columns of the openvswitch table and to perform the corresponding TEP add/update/delete operations.

URL: restconf/operational/network-topology:network-topology/topology/ovsdb:1

Sample JSON output

 {
   "topology": [
     {
       "topology-id": "ovsdb:1",
       "node": [
       {
       "node-id": "ovsdb://uuid/83192e6c-488a-4f34-9197-d5a88676f04f",
       "ovsdb:db-version": "7.12.1",
       "ovsdb:ovs-version": "2.5.0",
       "ovsdb:openvswitch-external-ids": [
         {
           "external-id-key": "system-id",
           "external-id-value": "e93a266a-9399-4881-83ff-27094a648e2b"
         },
         {
           "external-id-key": "transport-zone",
           "external-id-value": "TZA"
         },
         {
           "external-id-key": "of-tunnel",
           "external-id-value": "true"
         }
       ],
       "ovsdb:openvswitch-other-configs": [
         {
           "other-config-key": "provider_mappings",
           "other-config-value": "physnet1:br-physnet1"
         },
         {
           "other-config-key": "local_ip",
           "other-config-value": "20.0.0.1"
         }
       ],
       "ovsdb:datapath-type-entry": [
         {
           "datapath-type": "ovsdb:datapath-type-system"
         },
         {
           "datapath-type": "ovsdb:datapath-type-netdev"
         }
       ],
       "ovsdb:connection-info": {
         "remote-port": 45230,
         "local-ip": "10.111.222.10",
         "local-port": 6640,
         "remote-ip": "10.111.222.20"
       }

       ...
       ...

      }
     ]
    }
   ]
 }
OVSDB changes

The below table covers how ITM TEP parameters are mapped to OVSDB, i.e. which OVSDB fields provide the ITM TEP parameter values.

  • DPN-ID: taken from ovsdb:datapath-id of the bridge whose name is pre-configured with openvswitch:external_ids:br-name:value

  • IP-Address: openvswitch:other_config:local_ip:value

  • Transport Zone Name: openvswitch:external_ids:transport-zone:value

  • of-tunnel: openvswitch:external_ids:of-tunnel:value

NOTE: If openvswitch:external_ids:br-name is not configured, then by default br-int will be considered to fetch DPN-ID which in turn would be used for tunnel creation. Also, openvswitch:external_ids:of-tunnel is not required to be configured, and will default to false, as described below in Yang changes section.

MDSALUtil changes

A getDpnId() method is added to MDSALUtil.java.

 /**
  * This method will be utility method to convert bridge datapath ID from
  * string format to BigInteger format.
  *
  * @param datapathId datapath ID of bridge in string format
  *
  * @return the datapathId datapath ID of bridge in BigInteger format
  */
 public static BigInteger getDpnId(String datapathId);
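
For illustration, one plausible implementation of this utility, assuming the datapath ID arrives as the colon-separated hex string reported by the OVSDB southbound (e.g. 00:00:1a:2b:3c:4d:5e:6f), is sketched below.

    import java.math.BigInteger;

    public class MdsalUtilSketch {

        /**
         * Illustrative only: convert a bridge datapath ID such as
         * "00:00:1a:2b:3c:4d:5e:6f" into its BigInteger (DPN-ID) form by
         * stripping the colons and parsing the remaining hex digits.
         */
        public static BigInteger getDpnId(String datapathId) {
            return new BigInteger(datapathId.replaceAll(":", ""), 16);
        }

        public static void main(String[] args) {
            // Prints the decimal DPN-ID for the sample datapath ID below.
            System.out.println(getDpnId("00:00:1a:2b:3c:4d:5e:6f"));
        }
    }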
Pipeline changes

N.A.

Yang changes

Changes are needed in itm.yang and itm-config.yang which are described in below sub-sections.

itm.yang changes

Following changes are done in itm.yang file.

  1. A new container tepsInNotHostedTransportZone under Oper DS will be added for storing details of TEP received from southbound having transport zone which is not yet hosted from northbound.

  2. Existing list transport-zone would be modified for leaf zone-name and tunnel-type to make them mandatory parameters.

itm.yang
 list transport-zone {
     ordered-by user;
     key zone-name;
     leaf zone-name {
         type string;
         mandatory true;
     }
     leaf tunnel-type {
         type identityref {
             base odlif:tunnel-type-base;
         }
         mandatory true;
     }
 }

 container not-hosted-transport-zones {
     config false;
     list tepsInNotHostedTransportZone {
         key zone-name;
         leaf zone-name {
             type string;
         }
         list unknown-vteps {
             key "dpn-id";
             leaf dpn-id {
                 type uint64;
             }
             leaf ip-address {
                 type inet:ip-address;
             }
             leaf of-tunnel {
                 description "Use flow based tunnels for remote-ip";
                 type boolean;
                 default false;
             }
         }
     }
 }
itm-config.yang changes

itm-config.yang file is modified to add new container to contain following parameters which can be configured in genius-itm-config.xml on OpenDaylight controller startup.

  • def-tz-enabled: a boolean parameter which causes default-transport-zone to be created or deleted when it is configured as true or false respectively. By default, the value is false.

  • def-tz-tunnel-type: a string parameter which allows the user to configure the tunnel-type for default-transport-zone. By default, the value is vxlan.

itm-config.yang
 container itm-config {
    config true;
    leaf def-tz-enabled {
       type boolean;
       default false;
    }
    leaf def-tz-tunnel-type {
       type string;
       default "vxlan";
    }
 }
Workflow
TEP Addition

When the TEP IP other_config:local_ip and external_ids:transport-zone are configured on the OVS side using ovs-vsctl commands to add a TEP, the TEP parameter details are passed to the OVSDB plugin via the OVSDB connection, which in turn updates them in the Network Topology Operational DS. ITM listens for changes in the Network Topology Node.

When TEP parameters (like local_ip, transport-zone, br-name, of-tunnel) are received in add notification of OVSDB Node, then TEP is added.

For TEP addition, TEP-IP and DPN-ID are mandatory. TEP-IP is obtained from local_ip TEP parameter and DPN-ID is fetched from OVSDB node based on br-name TEP parameter:

  • if bridge name is specified, then datapath ID of the specified bridge is fetched.

  • if bridge name is not specified, then datapath ID of the br-int bridge is fetched.

The TEP-IP and the fetched DPN-ID are needed to add the TEP to the transport-zone. Once the TEP is added to the config datastore, the transport-zone listener of ITM internally takes care of creating tunnels on the bridge whose DPN-ID was passed for TEP addition. Note that the TEP parameter of-tunnel is also checked; if it is true, the of-tunnel flag is set on the vtep that gets added under the transport-zone or tepsInNotHostedTransportZone.

A TEP would be added under a transport zone according to the following conditions (a sketch of this placement decision follows the list):

  • TEPs not configured with external_ids:transport-zone, i.e. without a transport zone, will be placed under the default-transport-zone if the def-tz-enabled parameter is configured to true in genius-itm-config.xml. This will fire a DTCN to the transport zone yang listener and the ITM tunnels get built.

  • TEPs configured with external_ids:transport-zone, i.e. with a transport zone, whose specified transport zone exists in the ITM Config DS will be placed under the specified transport zone. This will fire a DTCN to the transport zone yang listener and the ITM tunnels get built.

  • TEPs configured with external_ids:transport-zone, i.e. with a transport zone, whose specified transport zone does not exist in the ITM Config DS will be placed under the tepsInNotHostedTransportZone container in the ITM Oper DS.
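
A sketch of this placement decision is shown below, using hypothetical names (TepPlacementSketch, TransportZoneView); the behaviour when no transport zone is configured and def-tz-enabled is false is an assumption made only to keep the sketch complete.

    /**
     * Illustrative sketch of where a southbound-learnt TEP is placed.
     * The real logic lives in the ITM OVSDB node listener.
     */
    public class TepPlacementSketch {

        enum Placement { DEFAULT_TRANSPORT_ZONE, CONFIGURED_TRANSPORT_ZONE, NOT_HOSTED, IGNORED }

        interface TransportZoneView {                 // hypothetical read view of the ITM Config DS
            boolean exists(String zoneName);
        }

        public Placement placeTep(String externalIdsTransportZone, boolean defTzEnabled,
                                  TransportZoneView zones) {
            if (externalIdsTransportZone == null || externalIdsTransportZone.isEmpty()) {
                // No transport zone configured on OVS: only usable if the default TZ is enabled.
                // (IGNORED for the disabled case is an assumption of this sketch.)
                return defTzEnabled ? Placement.DEFAULT_TRANSPORT_ZONE : Placement.IGNORED;
            }
            if (zones.exists(externalIdsTransportZone)) {
                // The named zone is already configured from northbound.
                return Placement.CONFIGURED_TRANSPORT_ZONE;
            }
            // Zone not yet created: park the TEP in tepsInNotHostedTransportZone (Oper DS).
            return Placement.NOT_HOSTED;
        }
    }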

TEP Movement

When a transport zone which was not configured earlier is created through REST, ITM checks whether any “orphan” TEPs already exist in tepsInNotHostedTransportZone for the newly created transport zone. If present, such TEPs are removed from the tepsInNotHostedTransportZone container in the Oper DS, added under the newly created transport zone in the ITM config DS, and then added to the tunnel mesh of that transport zone.

TEP Updation
  • TEP updation for IP address can be done dynamically. When other_config:local_ip is updated at OVS side, then such change will be notified to OVSDB plugin via OVSDB protocol, which in turn is reflected in Network topology Operational DS. ITM gets DTCN for Node update. Parsing Node update notification for other_config:local_ip parameter in old and new node can determine change in local_ip for TEP. If it is updated, then TEP with old local_ip is deleted from transport zone and TEP with new local_ip is added into transport zone. This will fire a DTCN to transport zone yang listener and the ITM tunnels get updated.

  • TEP updation for transport zone can be done dynamically. When external_ids:transport-zone is updated at OVS side, then such change will be notified to OVSDB plugin via OVSDB protocol, which in turn is reflected in Network topology Operational DS. ITM gets DTCN for Node update. Parsing Node update notification for external_ids:transport-zone parameter in old and new node can determine change in transport zone for TEP. If it is updated, then TEP is deleted from old transport zone and added into new transport zone. This will fire a DTCN to transport zone yang listener and the ITM tunnels get updated.

TEP Deletion

When the openvswitch:other_config:local_ip parameter gets deleted through an ovs-vsctl command, the network topology Operational DS gets updated via an OVSDB update notification. ITM, which has registered for the network-topology DTCNs, gets notified and deletes the TEP from the transport zone or from tepsInNotHostedTransportZone in the ITM Config/Oper DS, based on the external_ids:transport-zone parameter configured for the TEP.

  • If external_ids:transport-zone is configured and corresponding transport zone exists in Configuration DS, then remove TEP from transport zone. This will fire a DTCN to transport zone yang listener and the ITM tunnels of that TEP get deleted.

  • If external_ids:transport-zone is configured and corresponding transport zone does not exist in Configuration DS, then check if TEP exists in tepsInNotHostedTransportZone container in Oper DS, if present, then remove TEP from tepsInNotHostedTransportZone.

  • If external_ids:transport-zone is not configured, then check if the TEP exists in the default transport zone in the Configuration DS, if and only if the def-tz-enabled parameter is configured to true in genius-itm-config.xml. If the TEP is present, remove it from default-transport-zone. This will fire a DTCN to the transport zone yang listener and the ITM tunnels of that TEP get deleted.

Configuration impact

Following are the configuration changes and impact in OpenDaylight.

  • The genius-itm-config.xml configuration file is newly introduced into ITM, in which the following parameters are added:

    • def-tz-enabled: a boolean parameter which causes default-transport-zone to be created or deleted when it is configured as true or false respectively. Default value is false.

    • def-tz-tunnel-type: a string parameter which allows the user to configure the tunnel-type for default-transport-zone. Default value is vxlan.

genius-itm-config.xml
 <itm-config xmlns="urn:opendaylight:genius:itm:config">
     <def-tz-enabled>false</def-tz-enabled>
     <def-tz-tunnel-type>vxlan</def-tz-tunnel-type>
 </itm-config>

Runtime changes to the parameters of this config file would not be taken into consideration.

Clustering considerations

Any clustering requirements are already addressed in ITM, no new requirements added as part of this feature.

Other Infra considerations

N.A.

Security considerations

N.A.

Scale and Performance Impact

This feature would not introduce any significant scale and performance issues in the OpenDaylight.

Targeted Release

OpenDaylight Carbon

Known Limitations
  • A dummy subnet prefix 255.255.255.255/32 under the transport-zone is used to store the TEPs learned from southbound.

Alternatives

N.A.

Usage

Features to Install

This feature doesn’t add any new karaf feature. This feature is available in the already existing odl-genius karaf feature.

REST API
Creating transport zone

With this feature, TEP addition is based on the southbound configuration, and the respective transport zone should be created on the controller so that tunnels can be formed for it. The REST API below creates a transport zone with only the mandatory parameters.

URL: restconf/config/itm:transport-zones/

Sample JSON data

{
    "transport-zone": [
        {
            "zone-name": "TZA",
             "tunnel-type": "odl-interface:tunnel-type-vxlan"
        }
    ]
}
Retrieving transport zone

To retrieve the TEP configurations from all the transport zones:

URL: restconf/config/itm:transport-zones/

Sample JSON output

{
    "transport-zones": {
       "transport-zone": [
          {
            "zone-name": "default-transport-zone",
            "tunnel-type": "odl-interface:tunnel-type-vxlan"
          },
          {
            "zone-name": "TZA",
            "tunnel-type": "odl-interface:tunnel-type-vxlan",
            "subnets": [
              {
                "prefix": "255.255.255.255/32",
                "vteps": [
                  {
                    "dpn-id": 1,
                    "portname": "",
                    "ip-address": "10.0.0.1"
                  },
                  {
                    "dpn-id": 2,
                    "portname": "",
                    "ip-address": "10.0.0.2"
                  }
                ],
                "gateway-ip": "0.0.0.0",
                "vlan-id": 0
              }
            ]
          }
        ]
    }
}
CLI

No CLI is added into OpenDaylight for this feature.

OVS CLI

ITM TEP parameters can be added/removed to/from the OVS switch using the ovs-vsctl command:

DESCRIPTION
  ovs-vsctl
  Command for querying and configuring ovs-vswitchd by providing a
  high-level interface to its configuration database.
  Here, this command usage is shown to store TEP parameters into
  ``openvswitch`` table of OVS database.

SYNTAX
  ovs-vsctl  set O . [column]:[key]=[value]

* To set TEP params on OVS table:

ovs-vsctl    set O . other_config:local_ip=192.168.56.102
ovs-vsctl    set O . external_ids:transport-zone=TZA
ovs-vsctl    set O . external_ids:br-name=br0
ovs-vsctl    set O . external_ids:of-tunnel=true

* To clear TEP params in one go by clearing external_ids and other_config
  column from OVS table:

ovs-vsctl clear O . external_ids
ovs-vsctl clear O . other_config

* To clear a specific TEP parameter from external_ids or other_config column
  in OVS table:

ovs-vsctl remove O . other_config local_ip
ovs-vsctl remove O . external_ids transport-zone

* To check TEP params are set or cleared on OVS table:

ovsdb-client dump -f list  Open_vSwitch

Implementation

Assignee(s)

Primary assignee:

  • Tarun Thakur

Other contributors:

  • Sathish Kumar B T

  • Nishchya Gupta

  • Jogeswar Reddy

Work Items
  1. YANG changes

  2. Add code to create xml config file for ITM to configure flag which would control creation of default-transport-zone during bootup and configure tunnel-type for default transport zone.

  3. Add code to handle changes in the def-tz-enabled configurable parameter during OpenDaylight restart.

  4. Add code to handle changes in the def-tz-tunnel-type configurable parameter during OpenDaylight restart.

  5. Add code to create listener for OVSDB to receive TEP-specific parameters configured at OVS.

  6. Add code to update the configuration datastore to add/delete TEP received from southbound into transport-zone.

  7. Check tunnel mesh for transport-zone is updated correctly for TEP add/delete into transport-zone.

  8. Add code to update the configuration datastore for handling update in TEP-IP.

  9. Add code to update the configuration datastore for handling update in TEP’s transport-zone.

  10. Check tunnel mesh is updated correctly against TEP update.

  11. Add code to create tepsInNotHostedTransportZone list in operational datastore to store TEP received with transport-zone not-configured from northbound.

  12. Add code to move TEP from tepsInNotHostedTransportZone list to transport-zone configured from REST.

  13. Check tunnel mesh is formed for TEPs after their movement from tepsInNotHostedTransportZone list to transport-zone.

  14. Add UTs.

  15. Add ITs.

  16. Add CSIT.

  17. Add Documentation.

Dependencies

This feature should be used when the configuration flag use-transport-zone in netvirt-neutronvpn-config.xml, which drives automatic tunnel configuration in a transport-zone, is disabled in Netvirt’s NeutronVpn; otherwise Netvirt’s dynamic tunnel creation may create duplicate tunnels for TEPs in the tunnel mesh.

Testing

Unit Tests

Appropriate UTs will be added for the new code coming in, once UT framework is in place.

Integration Tests

Integration tests will be added, once IT framework for ITM is ready.

CSIT

Following test cases will need to be added/expanded in Genius CSIT:

  1. Verify default-transport-zone is not created when def-tz-enabled flag is false.

  2. Verify tunnel-type change is considered while creation of default-transport-zone.

  3. Verify ITM tunnel creation on default-transport-zone when TEPs are configured without transport zone or with default-transport-zone on switch when def-tz-enabled flag is true.

  4. Verify default-transport-zone is deleted when def-tz-enabled flag is changed from true to false during OpenDaylight controller restart.

  5. Verify ITM tunnel creation by TEPs configured with transport zone on switch and respective transport zone should be pre-configured on OpenDaylight controller.

  6. Verify auto-mapping of TEPs to corresponding transport zone group.

  7. Verify ITM tunnel deletion by deleting TEP from switch.

  8. Verify TEP transport zone change from OVS will move the TEP to corresponding transport-zone in OpenDaylight controller.

  9. Verify TEPs movement from tepsInNotHostedTransportZone to transport-zone when transport-zone is configured from northbound.

  10. Verify local_ip dynamic update is possible and corresponding tunnels are also updated.

  11. Verify ITM tunnel details persist after OpenDaylight controller restart, switch restart.

Documentation Impact

This will require changes to User Guide and Developer Guide.

User Guide will need to add information for below details:

  • TEPs parameters to be configured from OVS side to use this feature.

  • TEPs added from southbound can be viewed from REST APIs.

  • TEPs added from southbound will be added under dummy subnet (255.255.255.255/32) in transport-zone.

  • Usage details of genius-itm-config.xml config file for ITM to configure def-tz-enabled flag and def-tz-tunnel-type to create/delete default-transport-zone and its tunnel-type respectively.

  • User is explicitly required to configure def-tz-enabled as true if TEPs need to be added into default-transport-zone from northbound.

Developer Guide will need to capture how to use changes in ITM to create tunnel automatically for TEPs configured from southbound.

Load balancing and high availability of multiple VxLAN tunnels

https://git.opendaylight.org/gerrit/#/q/topic:vxlan-tunnel-aggregation

The purpose of this feature is to enable resiliency and load balancing of VxLAN encapsulated traffic between pair of OVS nodes.

Additionally, the feature will provide infrastructure to support more complex use cases such as policy-based path selection. The exact implementation of policy-based path selection is out of the scope of this document and will be described in a different spec [2].

Problem description

The current ITM implementation enables creation of a single VxLAN tunnel between each pair of hypervisors.

If the hypervisor is connected to the network using multiple links with different capacity or connected to different L2 networks in different subnets, it is not possible to utilize all the available network resources to increase the throughput of traffic to remote hypervisors.

In addition, link failure of the network card forwarding the VxLAN traffic will result in complete traffic loss to/from the remote hypervisor if the network card is not part of a bonded interface.

Use Cases
  • Forwarding of VxLAN traffic between hypervisors with multiple network cards connected to L2 switches in different networks.

  • Forwarding of VxLAN traffic between hypervisors with multiple network cards connected to the same L2 switch.

Proposed change

ITM Changes

The ITM will continue to create tunnels based on transport-zone configuration similarly to the current implementation - TEP IP per DPN per transport zone. When ITM creates TEP interfaces, in addition to creating the actual tunnels, it will create a logical tunnel interface for each pair of DPNs in the ietf-interface config data-store, representing the tunnel aggregation group between the DPNs. The logical tunnel interface will be created only when the first tunnel interface on each OVS is created. In addition, this feature will be guarded by a global configuration option in ITM and will be turned off by default. Only when the feature is enabled will the logical tunnel interfaces be created.

Creation of a transport-zone with multiple IPs per DPN is out of the scope of this document and will be described in [2]. However, the limitation of configuring no more than one TEP IP per transport zone will remain.

The logical tunnel will reference all member tunnel interfaces in the group using interface-child-info model. In addition, it would be possible to add weight to each member of the group to support unequal load-sharing of traffic.

The proposed feature depends on egress tunnel service binding functionality detailed in [3].

When the logical tunnel interface is created, a default egress service would be bound to it. The egress service will create an OF select group based on the actual list of tunnel members in the logical group. Each tunnel member can be assigned a weight field that will be applied to its corresponding bucket in the OF select group. If no weight is defined, the bucket weight defaults to 1, resulting in uniform distribution when weight is not configured for any of the buckets. Each bucket in the select group will route the egress traffic to one of the tunnel members in the group by loading the lport-tag of the tunnel member interface into NXM register6.

Logical tunnel egress service pipeline example:

cookie=0x6900000, duration=0.802s, table=220, n_packets=0, n_bytes=0, priority=6,reg6=0x500
actions=load:0xe000500->NXM_NX_REG6[],write_metadata:0xe000500000000000/0xfffffffffffffffe,group:80000
cookie=0x8000007, duration=0.546s, table=220, n_packets=0, n_bytes=0, priority=7,reg6=0x600 actions=output:3
cookie=0x8000007, duration=0.546s, table=220, n_packets=0, n_bytes=0, priority=7,reg6=0x700 actions=output:4
cookie=0x8000007, duration=0.546s, table=220, n_packets=0, n_bytes=0, priority=7,reg6=0x800 actions=output:5
group_id=800000,type=select,
bucket=weight:50,watch_port=3,actions=load:0x600->NXM_NX_REG6[],resubmit(,220),
bucket=weight:25,watch_port=4,actions=load:0x700->NXM_NX_REG6[],resubmit(,220),
bucket=weight:25,watch_port=5,actions=load:0x800->NXM_NX_REG6[],resubmit(,220)

Each bucket of the LB group will set the watch_port property to be the tunnel member OF port number. This will allow the OVS to monitor the bucket liveness and route egress traffic only to live buckets.

BFD monitoring is required to probe the tunnel state and update the OF select group accordingly. Using OF tunnels [4] or turning off BFD monitoring will not allow the logical group service to respond to tunnel state changes.

OF select group for logical tunnel can contain a mix of IPv4 and IPv6 tunnels, depending on the transport-zone configuration.

A new pool will be allocated to generate OF group ids of the default select group and the policy groups described in [2]. The pool name VXLAN_GROUP_POOL will allocate ids from the id-manager in the range 300,000-310,000. ITM RPC calls to get the internal tunnel interface between source and destination DPNs will return the logical tunnel interface group name if one exists; otherwise the lower-layer tunnel will be returned.
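
A rough sketch of that interface-name selection is shown below, with a hypothetical InternalTunnelView standing in for a read view of the tunnel-list model.

    import java.util.List;

    /**
     * Illustrative only: return the logical tunnel group name for a DPN pair when
     * one exists, otherwise fall back to the lower-layer tunnel interface.
     */
    public class TunnelNameSelectionSketch {

        interface InternalTunnelView {
            /** Interface names for the pair; may include a logical group name. */
            List<String> getTunnelInterfaceNames(long srcDpn, long dstDpn);
            boolean isLogicalGroup(String interfaceName);
        }

        public String selectInterface(long srcDpn, long dstDpn, InternalTunnelView view) {
            List<String> names = view.getTunnelInterfaceNames(srcDpn, dstDpn);
            if (names == null || names.isEmpty()) {
                return null;                               // no tunnel between this pair
            }
            for (String name : names) {
                if (view.isLogicalGroup(name)) {
                    return name;                           // prefer the logical tunnel group
                }
            }
            return names.get(0);                           // otherwise the plain tunnel interface
        }
    }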

IFM Changes

The logical tunnel group is an ietf-interface thus it has an allocated lport-tag. RPC call to getEgressActionsForInterface for the logical tunnel will load register6 with its corresponding lport-tag and resubmit the traffic to the egress dispatcher table.

The state of the logical tunnel group is affected by the states of the group members. If at least one of the tunnels is in oper-status UP, the logical group is considered UP.

If the logical tunnel was set as admin-status DOWN, all the tunnel members will be set accordingly.
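
A minimal sketch of these two state rules, with hypothetical enums and method names:

    import java.util.Collection;

    /**
     * Illustrative sketch of the logical tunnel group state rules above.
     * The real state lives in the ietf-interfaces operational data of the
     * logical tunnel interface.
     */
    public class LogicalTunnelStateSketch {

        enum AdminStatus { UP, DOWN }
        enum OperStatus { UP, DOWN }

        /**
         * The group is operationally UP if at least one member tunnel is UP.
         * If the group was administratively set DOWN, the members (and hence
         * the group) are considered DOWN as well.
         */
        public OperStatus deriveGroupState(AdminStatus groupAdminStatus,
                                           Collection<OperStatus> memberOperStates) {
            if (groupAdminStatus == AdminStatus.DOWN) {
                return OperStatus.DOWN;
            }
            return memberOperStates.contains(OperStatus.UP) ? OperStatus.UP : OperStatus.DOWN;
        }
    }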

Ingress traffic from VxLAN tunnels would not be bound to any logical group service as part of this feature and will continue to use the same workflow while traversing the ingress services pipeline.

Other applications would be able to utilize this infrastructure to introduce new services over logical tunnel group interface e.g. policy-based path selection. These services will take precedence over the default egress service for logical tunnel.

Netvirt Changes

L3 models map each combination of VRF id and destination prefix to a list of nexthop ip addresses. When calling getInternalOrExternalInterfaceName RPC from the FIB manager, if the DPN id of the remote nexthop is known it will be sent along with the nexthop ip. If logical tunnel exists between the source and destination DPNs it will be set as the lport-tag of register6 in the remote nexthop actions.

Pipeline changes

For the flows below it is assumed that a logical tunnel group was configured for both ingress and egress DPNs. The logical tunnel group is composed of { tunnel1, tunnel2 } and bound to the default logical tunnel egress service.

Traffic between VMs on the same DPN

No pipeline changes required

L3 traffic between VMs on different DPNs
VM originating the traffic (Ingress DPN):
  • Remote next hop group in the FIB table references the logical tunnel group.

  • The default logical group service uses OF select group to load balance traffic between the tunnels.

    Classifier table (0) =>
    Dispatcher table (17) l3vpn service: set vpn-id=router-id =>
    GW Mac table (19) match: vpn-id=router-id,dst-mac=router-interface-mac =>
    FIB table (21) match: vpn-id=router-id,dst-ip=vm2-ip set dst-mac=vm2-mac tun-id=vm2-label reg6=logical-tun-lport-tag =>
    Egress table (220) match: reg6=logical-tun-lport-tag =>
    Logical tunnel LB select group set reg6=tun1-lport-tag =>
    Egress table (220) match: reg6=tun1-lport-tag output to tunnel1
VM receiving the traffic (Egress DPN):
  • No pipeline changes required

    Classifier table (0) =>
    Internal tunnel Table (36) match:tun-id=vm2-label =>
    Local Next-Hop group: set dst-mac=vm2-mac,reg6=vm2-lport-tag =>
    Egress table (220) match: reg6=vm2-lport-tag output to VM 2
SNAT traffic from non-NAPT switch
VM originating the traffic is on the non-NAPT switch:
  • NAPT group references the logical tunnel group.

    Classifier table (0) =>
    Dispatcher table (17) l3vpn service: set vpn-id=router-id =>
    GW Mac table (19) match: vpn-id=router-id,dst-mac=router-interface-mac =>
    FIB table (21) match: vpn-id=router-id =>
    Pre SNAT table (26) match: vpn-id=router-id =>
    NAPT Group set tun-id=router-id reg6=logical-tun-lport-tag =>
    Egress table (220) match: reg6=logical-tun-lport-tag =>
    Logical tunnel LB select group set reg6=tun1-lport-tag =>
    Egress table (220) match: reg6=tun1-lport-tag output to tunnel1
Traffic from NAPT switch punted to controller:
  • No explicit pipeline changes required

    Classifier table (0) =>
    Internal tunnel Table (36) match:tun-id=router-id =>
    Outbound NAPT table (46) set vpn-id=router-id, punt-to-controller
L2 unicast traffic between VMs in different DPNs
VM originating the traffic (Ingress DPN):
  • ELAN DMAC table references the logical tunnel group

    Classifier table (0) =>
    Dispatcher table (17) l3vpn service: set vpn-id=router-id =>
    GW Mac table (19) =>
    Dispatcher table (17) l2vpn service: set elan-tag=vxlan-net-tag =>
    ELAN base table (48) =>
    ELAN SMAC table (50) match: elan-tag=vxlan-net-tag,src-mac=vm1-mac =>
    ELAN DMAC table (51) match: elan-tag=vxlan-net-tag,dst-mac=vm2-mac set tun-id=vm2-lport-tag reg6=logical-tun-lport-tag =>
    Egress table (220) match: reg6=logical-tun-lport-tag =>
    Logical tunnel LB select group set reg6=tun2-lport-tag =>
    Egress table (220) match: reg6=tun2-lport-tag output to tunnel2
VM receiving the traffic (Egress DPN):
  • No explicit pipeline changes required

    Classifier table (0) =>
    Internal tunnel Table (36) match:tun-id=vm2-lport-tag set reg6=vm2-lport-tag =>
    Egress table (220) match: reg6=vm2-lport-tag output to VM 2
L2 multicast traffic between VMs in different DPNs
VM originating the traffic (Ingress DPN):
  • ELAN broadcast group references the logical tunnel group.

    Classifier table (0) =>
    Dispatcher table (17) l3vpn service: set vpn-id=router-id =>
    GW Mac table (19) =>
    Dispatcher table (17) l2vpn service: set elan-tag=vxlan-net-tag =>
    ELAN base table (48) =>
    ELAN SMAC table (50) match: elan-tag=vxlan-net-tag,src-mac=vm1-mac =>
    ELAN DMAC table (51) =>
    ELAN DMAC table (52) match: elan-tag=vxlan-net-tag =>
    ELAN BC group goto_group=elan-local-group, set tun-id=vxlan-net-tag reg6=logical-tun-lport-tag =>
    Egress table (220) match: reg6=logical-tun-lport-tag =>
    Logical tunnel LB select group set reg6=tun1-lport-tag =>
    Egress table (220) match: reg6=tun1-lport-tag output to tunnel1
VM receiving the traffic (Egress DPN):
  • No explicit pipeline changes required

    Classifier table (0) =>
    Internal tunnel Table (36) match:tun-id=vxlan-net-tag =>
    ELAN local BC group set tun-id=vm2-lport-tag =>
    ELAN filter equal table (55) match: tun-id=vm2-lport-tag set reg6=vm2-lport-tag =>
    Egress table (220) match: reg6=vm2-lport-tag output to VM 2
Yang changes

The following changes would be required to support configuration of logical tunnel group:

IFM Yang Changes

Add a new tunnel type to represent the logical group in odl-interface.yang.

identity tunnel-type-logical-group {
    description "Aggregation of multiple tunnel endpoints between two DPNs";
    base tunnel-type-base;
}

Each tunnel member in the logical group can have an assigned weight as part of tunnel-optional-params in odl-interface:if-tunnel augment to support unequal load sharing.

 grouping tunnel-optional-params {
     leaf tunnel-source-ip-flow {
         type boolean;
         default false;
     }

     leaf tunnel-remote-ip-flow {
         type boolean;
         default false;
     }

     leaf weight {
        type uint16;
     }

     ...
 }
ITM Yang Changes

Each tunnel endpoint in itm:transport-zones/transport-zone can be configured with optional weight parameter. Weight configuration will be propagated to tunnel-optional-params.

 list vteps {
      key "dpn-id portname";
      leaf dpn-id {
          type uint64;
      }

      leaf portname {
           type string;
      }

      leaf ip-address {
           type inet:ip-address;
      }

      leaf weight {
            type uint16;
           default 1;
      }

      leaf option-of-tunnel {
           type boolean;
           default false;
      }
 }

The internal tunnel will be enhanced to contain multiple tunnel interfaces

container tunnel-list {
    list internal-tunnel {
        key "source-DPN destination-DPN transport-type";
        leaf source-DPN {
            type uint64;
        }

        leaf destination-DPN {
            type uint64;
        }

        leaf transport-type {
            type identityref {
                base odlif:tunnel-type-base;
            }
        }

        leaf-list tunnel-interface-name {
             type string;
        }
    }
}

The RPC call itm-rpc:get-internal-or-external-interface-name will be enhanced to contain the destination dp-id as an optional input parameter

 rpc get-internal-or-external-interface-name {
     input {
          leaf source-dpid {
               type uint64;
          }

          leaf destination-dpid {
               type uint64;
          }

          leaf destination-ip {
               type inet:ip-address;
          }

          leaf tunnel-type {
              type identityref {
                   base odlif:tunnel-type-base;
              }
          }
    }

    output {
         leaf interface-name {
              type string;
         }
    }
 }
Configuration impact

Creation of logical tunnel group will be guarded by configuration in itm-config per tunnel-type

 container itm-config {
    config true;
    leaf def-tz-enabled {
       type boolean;
       default false;
    }

    leaf def-tz-tunnel-type {
       type string;
       default "vxlan";
    }

    list tunnel-aggregation {
       key "tunnel-type";
       leaf tunnel-type {
           type string;
       }

       leaf enabled {
           type boolean;
           default false;
       }
    }
 }
Scale and Performance Impact

This feature is expected to increase the datapath throughput by utilizing all available network resources.

Alternatives

There are certain use cases where it would be possible to add the network cards to a separate bridge with LACP enabled and patch it to br-int but this alternative was rejected since it imposes limitations on the type of links and the overall capacity.

Usage

Features to Install

This feature doesn’t add any new karaf feature.

REST API
ITM RPCs

URL: restconf/operations/itm-rpc:get-tunnel-interface-name

{
   "input": {
       "source-dpid": "40146672641571",
       "destination-dpid": "102093507130250",
       "tunnel-type": "odl-interface:tunnel-type-vxlan"
   }
}

URL: restconf/operations/itm-rpc:get-internal-or-external-interface-name

{
   "input": {
       "source-dpid": "40146672641571",
       "destination-dpid": "102093507130250",
       "tunnel-type": "odl-interface:tunnel-type-vxlan"
   }
}
CLI

tep:show-state will be enhanced to show the state of the logical tunnel interface in addition to the actual TEP state.

Implementation

Assignee(s)
Primary assignee:

Olga Schukin <olga.schukin@hpe.com>

Other contributors:

Tali Ben-Meir <tali@hpe.com>

Work Items

Trello card: https://trello.com/c/Q7LgiHH7/92-multiple-vxlan-endpoints-for-compute

  • Add support to ITM for creation of multiple tunnels between a pair of DPNs

  • Create a logical tunnel group in ietf-interface if more than one tunnel exists between two DPNs. Update the interface-child-info model with the list of individual tunnel members

  • Bind a default service for the logical tunnel interface to create OF select group based on the tunnel members

  • Change the ITM RPC calls getTunnelInterfaceName and getInternalOrExternalInterfaceName to prefer the logical tunnel group over the tunnel members

  • Support OF weighted select group

Testing

Unit Tests
  • ITM unit tests will be enhanced with test cases for multiple tunnels

  • IFM unit tests will be enhanced to handle CRUD operations on the logical tunnel group

CSIT
Transport zone creation with multiple tunnels
  • Verify tunnel endpoint creation

  • Verify logical tunnel group creation

  • Verify logical tunnel service binding flows/group

Transport zone removal with multiple tunnels
  • Verify tunnel endpoint removal

  • Verify logical tunnel group removal

  • Verify logical tunnel service binding flows/group removal

Transport zone updates to single/multiple tunnels
  • Verify tunnel endpoint creation/removal

  • Verify logical tunnel group creation/removal

  • Verify logical tunnel service binding flows/group creation/removal

Transport zone creation with multiple OF tunnels
  • Verify tunnel endpoint creation

  • Verify logical tunnel group creation

  • Verify logical tunnel service binding flows/group

OF Tunnels

https://git.opendaylight.org/gerrit/#/q/topic:of-tunnels

OF Tunnels feature adds support for flow based tunnels to allow scalable overlay tunnels.

Problem description

Today when tunnel interfaces are created, InterFaceManager [IFM] creates one OVS port for each tunnel interface i.e. source-destination pair. For N devices in a TransportZone this translates to N*(N-1) tunnel ports created across all devices and N-1 ports in each device. This has obvious scale limitations.

Use Cases

This feature will support following use cases:

  • Use case 1: Allow user to specify if they want to use flow based tunnels at the time of configuration.

  • Use case 2: Create single OVS Tunnel Interface if flow based tunnels are configured and this is the first tunnel on this device/tep.

  • Use case 3: Flow based and non flow based tunnels should be able to exist in a given transport zone.

  • Use case 4: On tep delete, if this is the last tunnel interface on this tep/device and it is flow based tunnel, delete the OVS Tunnel Interface.

Following use cases will not be supported:

  • Configuration of flow based and non-flow based tunnels of the same type on the same device. OVS requires at least one of remote_ip, local_ip, type and key to be unique. Currently we don’t support multiple local_ip values and key is always set to flow. So remote_ip and type are the only unique identifiers. remote_ip=flow is a super set of remote_ip=<fixed-ip>, and we can’t have two interfaces with all other fields the same except this one.

  • Changing tunnel from one flow based to non-flow based at runtime. Such a change will require deletion and addition of tep. This is inline with existing model where tunnel-type cannot be changed at runtime.

  • Configuration of Source IP for tunnel through flow. It will still be fixed. Though we’re adding an option in the IFM YANG for this, the implementation won’t be done until we get use case(s) for it.

Proposed change

OVS 2.0.0 onwards allows configuration of flow based tunnels through the interface option remote_ip=flow. Currently this field is set to the IP address of the destination endpoint.

remote_ip=flow means the tunnel destination IP will be set by an OpenFlow action. This allows us to add different actions for different destinations using a single OVS/OF port.

This change will add optional parameters to ITM and IFM YANG files to allow OF Tunnels. Based on this option, ITM will configure IFM which in turn will create tunnel ports in OVSDB.

Using OVSDB Plugin

OVSDB Plugin provides following field in Interface to configure options:

ovsdb.yang
 list options {
     description "Port/Interface related optional input values";
     key "option";
     leaf option {
         description "Option name";
         type string;
     }
     leaf value {
         description "Option value";
         type string;
     }
 }

For flow based tunnels we will set the option remote_ip to the value flow.
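
For illustration, the resulting entry in the options list above would look like this in JSON form (only the relevant fragment is shown):

"options": [
    {
        "option": "remote_ip",
        "value": "flow"
    }
]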

MDSALUtil changes

Following new actions will be added to mdsalutil/ActionType.java

  • set_tunnel_src_ip

  • set_tunnel_dest_ip

Following new matches will be added to mdsalutil/NxMatchFieldType.java

  • tun_src_ip

  • tun_dest_ip

Pipeline changes

This change adds a new match in Table0. Today we match on in_port to determine which tunnel interface a packet came in on. Since each tunnel currently maps to a source-destination pair, this also tells us the source device. For interfaces configured to use flow based tunnels, an additional match on tun_src_ip will be added. So in_port + tun_src_ip together identify which tunnel interface the packet belongs to.

When services call getEgressActions(), they will get one additional action, set_tunnel_dest_ip, before the output:ofport action.
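
To make this concrete, the sketch below shows, in ovs-ofctl style notation, roughly what the new match and action could look like; the port number, IP address and surrounding actions are placeholders, not the actual flows programmed by Genius:

 # Table 0 classifier entry for a flow based tunnel: match the shared OF tunnel
 # port plus the tunnel source IP of the remote TEP.
 table=0, priority=5, in_port=5, tun_src=192.168.56.102, actions=<existing classifier actions>,goto_table:17

 # Egress actions returned by getEgressActions() for the same tunnel.
 set_field:192.168.56.102->tun_dst, output:5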

YANG changes

Changes will be needed in itm.yang and odl-interface.yang to allow configuring a tunnel as flow based or not.

ITM YANG changes

A new parameter option-of-tunnel will be added to list-vteps

itm.yang
 list vteps {
     key "dpn-id portname";
     leaf dpn-id {
         type uint64;
     }
     leaf portname {
         type string;
     }
     leaf ip-address {
         type inet:ip-address;
     }
     leaf option-of-tunnel {
         type boolean;
         default false;
     }
 }

The same parameter will also be added to tunnel-end-points in itm-state.yang. This will help eliminate the need to retrieve information from TransportZones when configuring tunnel interfaces.

itm-state.yang
 list tunnel-end-points {
     ordered-by user;
     key "portname VLAN-ID ip-address tunnel-type";
     /* Multiple tunnels on the same physical port but on different VLAN can be supported */

     leaf portname {
         type string;
     }
     ...
     ...
     leaf option-of-tunnel {
         type boolean;
         default false;
     }
 }

This will allow setting OF Tunnels on a per-VTEP basis. So in a transport-zone we can have some VTEPs (devices) that use OF Tunnels and others that don’t. The default of false means it will not impact existing behavior; OF Tunnels will need to be explicitly configured. Going forward we can choose to change the default to true.

IFM YANG changes

We’ll add a new tunnel-optional-params grouping and use it in the if-tunnel augment

odl-interface.yang
 grouping tunnel-optional-params {
     leaf tunnel-source-ip-flow {
         type boolean;
         default false;
     }

     leaf tunnel-remote-ip-flow {
         type boolean;
         default false;
     }

     list tunnel-options {
         key "tunnel-option";
         leaf tunnel-option {
             description "Tunnel Option name";
             type string;
         }
         leaf value {
             description "Option value";
             type string;
         }
     }
 }

The list tunnel-options is a list of key-value pairs of strings, similar to options in the OVSDB Plugin. These are not needed for OF Tunnels but are being added to allow users to configure any other interface options that OVS supports. The aim is to enable developers and users to try out newer options supported by OVS without needing to add explicit support for them. Note that there is no counterpart for this option in itm.yang. Any options that we want to explicitly support will be added as separate options. This will allow us to do better validation for options that are needed for our specific use cases.

 augment "/if:interfaces/if:interface" {
     ext:augment-identifier "if-tunnel";
     when "if:type = 'ianaift:tunnel'";
     ...
     ...
     uses tunnel-optional-params;
     uses monitor-params;
 }
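
As an illustration of the tunnel-options list described above, a developer could pass an arbitrary OVS interface option when creating a tunnel interface. The JSON fragment below assumes the standard RESTCONF rendering of the augment; the option name and value are examples only:

"odl-interface:tunnel-options": [
    {
        "tunnel-option": "df_default",
        "value": "false"
    }
]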
Workflow
Adding tep
  1. User: While adding tep user gives option-of-tunnel:true for tep being added.

  2. ITM: When creating tunnel interfaces for this tep, if option-of-tunnel:true, set tunnel-remote-ip:true for the tunnel interface.

  3. IFM: If option-of-tunnel:true and this is the first tunnel on this device, set option:remote_ip=flow when creating the tunnel interface in OVSDB. Else, set option:remote_ip=<destination-ip>.

Deleting tep
  1. If tunnel-remote-ip:true and this is last tunnel on this device, delete tunnel port in OVSDB. Else, do nothing.

  2. If tunnel-remote-ip:false, follow existing logic.

Configuration impact

This change doesn’t add or modify any configuration parameters.

Clustering considerations

Any clustering requirements are already addressed in ITM and IFM, no new requirements added as part of this feature.

Scale and Performance Impact

This solution will help improve scale numbers by reducing the number of interfaces created on devices as well as the number of interfaces and ports present in inventory and network-topology.

Targeted Release(s)

Carbon. Boron-SR3.

Known Limitations

BFD monitoring will not work when OF Tunnels are used. Today BFD monitoring in OVS relies on the destination IP configured in remote_ip when creating the tunnel port to determine the target IP for BFD packets. If we use flow, OVS won’t know where to send BFD packets. Unless OVS allows adding a destination IP for BFD monitoring on such tunnels, monitoring cannot be enabled.

Alternatives

LLDP/ARP based monitoring was considered for OF tunnels to overcome the lack of BFD monitoring, but was rejected because LLDP/ARP based monitoring doesn’t scale well. Since the driving requirement for this feature is scale setups, it didn’t make sense to use an unscalable solution for monitoring.

An XML/CFG file based global knob to enable OF tunnels for all tunnel interfaces was rejected due to the inflexible nature of such a solution. The current solution allows a more fine grained, device based configuration at runtime. Also, we wanted to avoid adding yet another global configuration knob.

Usage

Features to Install

This feature doesn’t add any new karaf feature.

REST API
Adding TEPs to transport zone

For most users, TEP addition is the only configuration they need to do to create tunnels using Genius. The REST API to add TEPs with OF Tunnels is the same as earlier, with one small addition.

URL: restconf/config/itm:transport-zones/

Sample JSON data

{
 "transport-zone": [
     {
         "zone-name": "TZA",
         "subnets": [
             {
                 "prefix": "192.168.56.0/24",
                 "vlan-id": 0,
                 "vteps": [
                     {
                         "dpn-id": "1",
                         "portname": "eth2",
                         "ip-address": "192.168.56.101",
                         "option-of-tunnel":"true"
                     }
                 ],
                 "gateway-ip": "0.0.0.0"
             }
         ],
         "tunnel-type": "odl-interface:tunnel-type-vxlan"
     }
 ]
}
Creating tunnel-interface directly in IFM

This use case is mainly for those who want to write applications using Genius and/or want to create individual tunnel interfaces. Note that this is a simpler, easier way to create tunnels without needing to delve into how the OVSDB Plugin creates tunnels.

Refer to the Genius User Guide for more details on this.

URL: restconf/config/ietf-interfaces:interfaces

Sample JSON data

{
 "interfaces": {
 "interface": [
     {
         "name": "vxlan_tunnel",
         "type": "iana-if-type:tunnel",
         "odl-interface:tunnel-interface-type": "odl-interface:tunnel-type-vxlan",
         "odl-interface:datapath-node-identifier": "1",
         "odl-interface:tunnel-source": "192.168.56.101",
         "odl-interface:tunnel-destination": "192.168.56.102",
         "odl-interface:tunnel-remote-ip-flow": "true",
         "odl-interface:monitor-enabled": false,
         "odl-interface:monitor-interval": 10000,
         "enabled": true
     }
  ]
 }
}
CLI

A new boolean option, remoteIpFlow, will be added to the tep:add command.

DESCRIPTION
  tep:add
  adding a tunnel end point

SYNTAX
  tep:add [dpnId] [portNo] [vlanId] [ipAddress] [subnetMask] [gatewayIp] [transportZone]
  [remoteIpFlow]

ARGUMENTS
  dpnId
          DPN-ID
  portNo
          port-name
  vlanId
          vlan-id
  ipAddress
          ip-address
  subnetMask
          subnet-Mask
  gatewayIp
          gateway-ip
  transportZone
          transport_zone
  remoteIpFlow
          Use flow for remote ip
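
A hypothetical invocation following the syntax above (all argument values are examples only):

tep:add 1 eth2 0 192.168.56.101 255.255.255.0 0.0.0.0 TZA true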

Implementation

Assignee(s)
Primary assignee:

<Vishal Thapar>

Other contributors:

<Vacancies available>

Work Items
  1. YANG changes

  2. Add relevant match and actions to MDSALUtil

  3. Add set_tunnel_dest_ip action to actions returned in getEgressActions() for OF Tunnels.

  4. Add match on tun_src_ip in Table0 for OF Tunnels.

  5. Add CLI.

  6. Add UTs.

  7. Add ITs.

  8. Add CSIT.

  9. Add Documentation

Dependencies

This doesn’t add any new dependencies. It requires a minimum of OVS 2.0.0, which is already lower than required by some other features.

This change is backwards compatible, so no impact on dependent projects. Projects can choose to start using this when they want. However, there is a known limitation with monitoring, refer Limitations section for details.

Following projects currently depend on Genius:

  • Netvirt

  • SFC

Testing

Unit Tests

Appropriate UTs will be added for the new code coming in once framework is in place.

Integration Tests

Integration tests will be added once IT framework for ITM and IFM is ready.

CSIT

CSIT already has test cases for tunnels which test with non OF Tunnels. Similar test cases will be added for OF Tunnels. Alternatively, some of the existing test cases that use multiple teps can be tweaked to use OF Tunnels for one of them.

Following test cases will need to be added/expanded in Genius CSIT:

  1. Create a TZ with more than one TEP set to use OF Tunnels and test the datapath.

  2. Create a TZ with a mix of OF and non OF Tunnels and test the datapath.

  3. Delete a TEP using OF Tunnels and add it again with non OF Tunnels and test the datapath.

  4. Delete a TEP using non OF Tunnels and add it again with OF Tunnels and test the datapath.

Documentation Impact

This will require changes to User Guide and Developer Guide.

User Guide will need to add information on how to add TEPs with flow based tunnels.

Developer Guide will need to capture how to use changes in IFM to create individual tunnel interfaces.

OF Tunnels Support For ITM Direct Tunnels

https://git.opendaylight.org/gerrit/#/q/topic:of-tunnels

Genius already supports creation of OF-tunnels through interface-manager. With itm-direct-tunnels, interface-manager is bypassed for all internal tunnel operations, and in such scenarios OF tunnels are not currently supported. This feature adds support for flow based tunnels on top of itm-direct-tunnels for better scalability. It additionally adds support for BFD monitoring of OF-Tunnels enabled end-points on an on-demand basis. It also scopes in creation of ITM groups, as that will enhance the performance of OF-tunnels.

Problem description

Today when tunnel interfaces are created with itm-direct-tunnels enabled, ITM creates one OVS port for each tunnel interface i.e. source-destination pair. For N devices in a TransportZone this translates to N*(N-1) tunnel ports created across all devices and N-1 ports in each device. This has obvious scale limitations.

Use Cases

This feature will support following use cases:

  • Use case 1: User should be able to create OF-tunnels with itm-direct-tunnels flag enabled.

  • Use case 2: Allow user to specify if they want to use flow based tunnels at the time of configuration, at the vtep level.

  • Use case 3: Create single OVS Tunnel Interface if flow based tunnels are configured for a VTEP.

  • Use case 4: Create an ITM group per destination on all DPNs. For simplicity, the same group-id can be used across all DPNs to reach a destination.

  • Use case 5: Flow based and non flow based tunnels should be able to exist in a given transport zone.

  • Use case 6: If BFD monitoring is required between two end-points, point-to-point tunnels should be created between them, in addition to the default of-tunnel. This point-to-point tunnel will be used only for BFD, and the actual traffic will still take of-tunnels.

  • Use case 7: ITM should maintain a reference count for the number of applications who have requested for monitoring, and the p2p tunnel should be deleted only when the reference count becomes zero on a disable monitoring RPC request.

  • Use case 8: On tep delete of a flow-based vtep, delete the OVS Tunnel Interface.

  • Use case 9: On tep delete of a flow-based vtep, where BFD monitoring is enabled, ITM has to delete the p2p tunnel created for BFD monitoring.

  • Use case 10: On tep add of an already deleted flow-based vtep where monitoring was previously enabled, ITM should re-create the p2p tunnel for monitoring.

  • Use case 11: On tep delete, update all groups on all DPNs with drop action.

  • Use case 12: The ITM group will get deleted only when a scale-in happens with an explicit trigger to remove the DPN.

  • Use case 13: Applications will get the ITM group-id to program their respective remote flows irrespective of whether the tunnel port has been created, and hence applications can get rid of their current tunnel-state listeners for better performance. (Please note that the openflowplugin feature enabling ordered processing of flows pointing to groups is a prerequisite for the smooth functioning of ITM groups.)

  • Use case 14: Add of-tunnels through CLI

  • Use case 15: Delete of-tunnels through CLI

  • Use case 16: Display of-tunnels through CLI

Following use cases will not be supported:

  • Configuration of flow based and non-flow based tunnels of the same type on the same device. OVS requires at least one of remote_ip, local_ip, type and key to be unique. Currently we don’t support multiple local_ip values and key is always set to flow. So remote_ip and type are the only unique identifiers. remote_ip=flow is a super set of remote_ip=<fixed-ip>, and we can’t have two interfaces with all other fields the same except this one.

  • Changing tunnel from one flow based to non-flow based at runtime. Such a change will require deletion and addition of tep. This is inline with existing model where tunnel-type cannot be changed at runtime.

  • Configuration of Source IP for tunnel through flow.

  • Since data traffic will not be flowing over the point to point tunnel used for BFD monitoring, the existing forwarding_if_rx configuration, which helps avoid tunnel flapping by making use of data traffic, will not be supported.

  • Hitless upgrade cannot be supported from a point to point tunnel deployment to an of-tunnel deployment. The default configuration will remain point to point tunnels, and the user has to explicitly switch to of-tunnels after upgrade.

  • Monitoring enable/disable is requested by services, which results in creating/deleting the corresponding p2p tunnel for monitoring. There are scenarios where a service may fail to disable monitoring when it is no longer needed, for example when a VM migrates from a BFD monitored source DPN to another one while the controller cluster is down. In such scenarios the current solution will not be able to delete the unwanted p2p tunnel created on the previous source.

Proposed change

The current OF Tunnels implementation in Genius has already taken care of the major YANG model changes in the ITM and IFM YANG files to allow flow based tunnels. This change will additionally enable any other missing pieces for enabling OF-Tunnels through itm-direct-tunnels, as well as supporting monitoring between DPNs when OF-Tunnels is enabled.

Pipeline changes

Major pipeline changes for OF-Tunnels are already covered as part of the existing OF-Tunnels implementation. However, the same will not work with itm-direct-tunnels as the code path is different.

ITM will program a group per source-destination DPN pair. This group will have the action set_tunnel_dest_ip before the output:ofport action.

When services call getEgressActionsForTunnel(), they will get a goto_group action pointing to the group-id programmed above.
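
For illustration only, such a per-destination group might look like the ovs-ofctl style sketch below; the group id, destination IP and OF port number are placeholders, not values programmed by ITM:

 # One ITM group per destination DPN: set the tunnel destination IP, then send
 # the packet out on the shared OF tunnel port.
 group_id=810000,type=all,bucket=actions=set_field:192.168.56.102->tun_dst,output:5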

OVSDB configuration changes

Whenever a point to point tunnel is configured for BFD monitoring on a flow-based source VTEP, an additional parameter, dst_port, needs to be configured for the tunnel port on the switch, so that OVS can distinguish the actual traffic coming over the OF Tunnel from the BFD packets coming over the point to point tunnel.

YANG changes

The YANG changes needed in itm.yang and itm-state.yang to allow configuring a tunnel as flow based or not are already covered by the previous OF-Tunnels implementation. To support the same through itm-direct-tunnels, some more YANG changes will be needed in ITM, as specified below:

ITM YANG changes

A new parameter option-of-tunnel is already added to list-vteps in itm.yang and tunnel-end-points in itm-state.yang.

A new container will be added in odl-itm-meta.yang to maintain a mapping of parent-child interfaces.

odl-itm-meta.yang
 container interface-child-info {
     description "The container of all child-interfaces for an interface.";
     list interface-parent-entry {
         key parent-interface;
         leaf parent-interface {
             type string;
         }

         list interface-child-entry {
             key child-interface;
             leaf child-interface {
                 type string;
             }
         }
     }
 }

A new container will be added to maintain the reference count for bfd monitoring requests from applications:

odl-itm-meta.yang
 container monitoring-ref-count {
     description "The container for maintaining the reference count for monitoring requests
                  between a src and dst DPN pair";
     config false;
     list monitored-tunnels {
         key "source-dpn destination-dpn";
         leaf source-dpn {
             type uint64;
         }
         leaf destination-dpn {
             type uint64;
         }
         leaf reference-count {
             type uint16;
         }
     }
 }
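
For illustration, an operational entry in this container might look like the following after two services have requested monitoring between a DPN pair (all values are hypothetical):

{
    "monitored-tunnels": [
        {
            "source-dpn": 40146672641571,
            "destination-dpn": 102093507130250,
            "reference-count": 2
        }
    ]
}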

The key for the dpn-teps-state YANG will have to be made composite to include the monitoring-enabled flag too, as this will be needed if BFD monitoring is enabled on an of-tunnel enabled DPN.

itm-state.yang
container dpn-teps-state {
    list dpns-teps {
        key "source-dpn-id";
        leaf source-dpn-id {
            type uint64;
            mandatory true;
        }

        leaf ip-address {
            type inet:ip-address;
        }
        ..........

        /* Remote DPNs to which this DPN-Tep has a tunnel */
        list remote-dpns {
             key "destination-dpn-id";
             leaf destination-dpn-id {
                 type uint64;
                 mandatory true;
             }

             leaf option-of-tunnel {
                 description "Use flow based tunnels for remote-ip";
                 type boolean;
                 default false;
             }

             leaf monitoring-enabled {
                  type boolean;
                  mandatory true;
             }

             leaf tunnel-name {
                 type string;
                 mandatory true;
             }
ITM RPC changes

A new RPC will be added to retrieve the watch-port for the BFD enabled point-to-point tunnels. By default, all traffic will use the OF Tunnels between a source and destination DPN pair. But applications like ECMP might want to use the BFD monitoring enabled point to point tunnel in their pipeline as a watch port for implementing liveness, and for such applications this RPC will be useful.

itm-rpc.yang
 rpc get-watch-port-for-tunnel {
     description "retrieve the watch port for the BFD enabled point to point tunnel";
     input {
         leaf source-node {
             type string;
         }

         leaf destination-node {
             type string;
         }

     }
     output {
         leaf port-no {
             type uint32;
         }
         leaf portname {
             type string;
         }
     }
 }
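
A sample invocation in the style of the other ITM RPC examples in this document; the URL follows the usual restconf/operations convention and the node identifiers below are placeholders:

URL: restconf/operations/itm-rpc:get-watch-port-for-tunnel

{
   "input": {
       "source-node": "openflow:40146672641571",
       "destination-node": "openflow:102093507130250"
   }
}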
Workflow
Adding TEP
  1. User: Enables itm-scalability by setting itm-direct-tunnels flag to true in genius-ifm-config.xml.

  2. User: While adding tep user gives option-of-tunnel:true for tep being added.

  3. ITM: If option-of-tunnel:true for vtep, set option:remote_ip=flow when creating tunnel interface in OVSDB. Else, set option:remote_ip=<destination-ip>.

  4. ITM: OF Tunnel will be created with a separate destination udp port, so that the BFD traffic can be distinguished from the actual data traffic.

  5. ITM: Receives notification when the of-port is added on the switch.

  6. ITM: Checks for the northbound configured tunnel interfaces on top of this flow based tunnel, and creates group for each source-destination pair reachable over this of-tunnel.

Enable BFD between two TEPs
  1. If BFD monitoring is enabled through the setBfdParamOnTunnel() RPC, additional point to point tunnels will be created on the specified source, destination DPNs.

  2. ITM increments monitored reference count for the particular source, destination pair.

  3. These tunnel end points will be added to the tunnel-state which applications listen to.

  4. The state of the point to point tunnels will still be available via the get-watch-port-for-tunnel RPC for applications that want to use them in their datapath for aliveness.

  5. There won’t be any flows that will be programmed on the OVS for these point to point tunnels, and they will serve the purpose of BFD monitoring alone.

Disable BFD between two TEPs
  1. ITM has to maintain a reference count for the number of applications who have requested for the monitoring.

  2. If BFD monitoring is disabled through the setBfdParamOnTunnel() RPC, ITM will delete the p2p tunnel for monitoring only if this is the last service interested in monitoring.

Deleting TEP
  1. If tunnel-remote-ip:true for vtep, delete tunnel port in OVSDB. Also, delete relevant datastores which were populated in ITM.

  2. If tunnel-remote-ip:false, follow existing logic.

  3. If BFD monitoring is enabled on a flow based VTEP, the point to point tunnel created for monitoring also needs to be deleted.

  4. The BFD monitoring information will be still maintained in ITM, to enable smooth and transparent TEP re-creation.

  5. All remote DPNs will be updated, to add drop action in their ITM group pointing to the deleted TEP.

  6. The BFD monitoring information and the empty group corresponding to this TEP will be deleted only in a scale-in scenario, where the DPN is explicitly removed.

Configuration impact

A configuration parameter will be added to genius-ifm-config.xml to set the value of dst_udp_port for point to point tunnel for BFD monitoring.

Clustering considerations

Any clustering requirements are already addressed in ITM and IFM, no new requirements added as part of this feature.

Upgrade Considerations

An existing tunnel deployment should not automatically change after an upgrade. If a deployment has pt-pt tunnels, then that’s what the upgrade will maintain. The user would then have to set up OF tunnels separately and remove the pt-pt tunnel mesh, so it would amount to downtime.

Scale and Performance Impact

This solution will help improve scale numbers by reducing the number of interfaces created on devices as well as the number of interfaces and ports present in inventory and network-topology. ITM will still maintain n*(n-1) tunnel-states in its datastore, so that application logic won’t be impacted.

Known Limitations
  1. Openflowplugin needs to ensure that the ITM group gets programmed on the switch first, before programming any flows that point to this group. This feature is currently not supported in ofp, but will be available as part of Fluorine.

  2. Hitless upgrade cannot be supported from a point to point tunnel deployment to an of-tunnel deployment. The default configuration will remain point to point tunnels, and the user has to explicitly switch to of-tunnels after upgrade.

  3. Since data traffic will not be flowing over the point to point tunnel used for BFD monitoring, the existing forwarding_if_rx configuration, which helps avoid tunnel flapping by making use of data traffic, will not be supported.

Alternatives

LLDP/ARP based monitoring was considered for OF tunnels to overcome the lack of BFD monitoring, but was rejected because LLDP/ARP based monitoring doesn’t scale well. Since the driving requirement for this feature is scale setups, it didn’t make sense to use an unscalable solution for monitoring.

Even BFD monitoring with point to point tunnels may not scale if all O(n**2) tunnels are monitored. Hence this whole proposal is about need based monitoring, reducing the monitored set of tunnels to a small subset of the O(n**2) tunnels. LLDP & ARP might scale well enough for that subset.

Using the point to point tunnel itself for data traffic whenever BFD monitoring gets enabled was discussed. However, since all applications currently use the destination port number in their flows, it would add the additional complexity of updating all application flows with the new port number the moment the point to point tunnel is created to override OF-tunnels. Hence this option was discarded.

Usage

Features to Install

This feature doesn’t add any new karaf feature.

User can use this feature via three options - REST, CLI or Auto-Tunnel Configuration.

REST API
Adding TEPs to transport zone

For most users, TEP addition is the only configuration they need to do to create tunnels using Genius. The REST API to add TEPs with OF Tunnels is the same as earlier.

URL: restconf/config/itm:transport-zones/

Sample JSON data

{
 "transport-zone": [
     {
         "zone-name": "TZA",
         "subnets": [
             {
                 "prefix": "192.168.56.0/24",
                 "vlan-id": 0,
                 "vteps": [
                     {
                         "dpn-id": "1",
                         "portname": "eth2",
                         "ip-address": "192.168.56.101",
                         "option-of-tunnel":"true"
                     }
                 ],
                 "gateway-ip": "0.0.0.0"
             }
         ],
         "tunnel-type": "odl-interface:tunnel-type-vxlan"
     }
 ]
}
CLI

A new boolean option, remoteIpFlow, will be added to the tep:add command.

DESCRIPTION
  tep:add
  adding a tunnel end point

SYNTAX
  tep:add [dpnId] [portNo] [vlanId] [ipAddress] [subnetMask] [gatewayIp] [transportZone]
  [remoteIpFlow]

ARGUMENTS
  dpnId
          DPN-ID
  portNo
          port-name
  vlanId
          vlan-id
  ipAddress
          ip-address
  subnetMask
          subnet-Mask
  gatewayIp
          gateway-ip
  transportZone
          transport_zone
  remoteIpFlow
          Use flow for remote ip
ITM AUTO-TUNNELS

ITM already supports automatic configuration of of-tunnels. Details on how to configure the same can be found under the references section.

Implementation

Assignee(s)
Primary assignee:

<Faseela K>

Other contributors:

<Dimple Jain> <Nidhi Adhvaryu> <N Edwin Anthony> <B Sathwik>

Work Items
  1. YANG changes

  2. Create ITM groups per destination

  3. Update ITM groups per destination with drop action on TEP delete.

  4. Delete ITM group only while scale-in.

  5. Create OF-port on OVS only for the first tunnel getting configured, if of-tunnel is true.

  6. Create point to point tunnel on OVS, when monitoring has to be enabled between two Flow Based DPNs.

  7. Add option for configuring dst_port for point to point tunnels.

  8. Add configuration option for dst_udp_port.

  9. Skip flow configuration for point to point tunnels configured on top of flow-based VTEP.

  10. Add goto_group action to actions returned in getEgressActionsForTunnel() for OF Tunnels.

  11. Add match on tun_src_ip in Table0 for OF Tunnels.

  12. Maintain reference count for applications requesting for BFD monitoring.

  13. Migrate the setBfdParamOnTunnel() RPC to a routed RPC to ensure synchronized updates of the reference count.

  14. Transparently handle monitored p2p tunnel deletion, in case of flow based tunnel deletion.

  15. Transparently handle monitored p2p tunnel addition, in case of flow based tunnel re-addition.

  16. Add CLI.

  17. Add UTs.

  18. Add scale tests and compare the performance numbers against p2p tunnels.

  19. Add CSIT.

  20. Add Documentation

Dependencies

This doesn’t add any new dependencies. It requires a minimum of OVS 2.0.0, which is already lower than required by some other features.

This change is backwards compatible, so no impact on dependent projects. Projects can choose to start using this when they want. However, there is a known limitation with monitoring, refer Limitations section for details.

Following projects currently depend on Genius:

  • Netvirt

  • SFC

Testing

Unit Tests

Appropriate UTs will be added for the new code coming in once framework is in place.

CSIT

Following test cases will need to be added/expanded in Genius CSIT:

  1. Enhance Genius CSIT to support 3 switches

  2. Create a TZ with more than one TEP set to use OF Tunnels.

  3. Delete a TZ with more than one TEP set to use OF Tunnels.

  4. Delete a TEP using OF Tunnels and add it again with non OF tunnels.

  5. Delete a TEP using non OF Tunnels and add it again with OF Tunnels.

  6. Enable BFD monitoring on an OF Tunnel enabled src, dest DPN pair.

  7. Disable BFD monitoring on an OF Tunnel enabled src, dest DPN pair.

  8. Enable auto-config and test the of-tunnels feature.

Documentation Impact

This will require changes to User Guide and Developer Guide.

User Guide will need to add information on how to add TEPs with flow based tunnels.

Developer Guide will need to capture how to use changes in ITM to create individual tunnel interfaces.

Traffic shaping with Ovsdb QoS queues

QoS patches: https://git.opendaylight.org/gerrit/#/q/topic:qos-shaping

The current Boron implementation provides support for ingress rate limiting configuration of OVS. The Carbon release will add egress traffic shaping to the QoS feature set. (Note: the direction of traffic flow (ingress, egress) is from the perspective of the switch.)

Problem description

OVS supports traffic shaping for traffic that egresses from a switch. To utilize this functionality, the Genius implementation should be able to create a ‘set_queue’ output action upon connection of a new OpenFlow node.

Use Cases

Use case 1: Allow Unimgr to shape egress traffic from UNI

Proposed change

Unimgr or Neutron VPN creates an ietf vlan interface for each port connected to a particular service. OVSDB provides the possibility to create a QoS entry and a mapped Queue with egress rate limits for the lower level port. Such a queue should be created on the parent physical interface of the vlan or trunk member port if the service has a definition of limits. The ovsdb southbound plugin provides an interface for creation of OVS QoS and Queue entries. This functionality may be utilized by the netvirt qos service. Below is a dump from ovsdb with queues created for one of the ports.

Port table
   _uuid : a6cf4ca9-b15c-4090-aefe-23af2d5ce4f2
   name                : "ens5"
   qos                 : 9779ce41-4347-4383-b308-75f46d6a258c
QoS table
   _uuid               : 9779ce41-4347-4383-b308-75f46d6a258c
   other_config        : {max-rate="50000"}
   queues              : {1=3cc34bb7-7df8-4538-9fd7-4a6c6c467c69}
   type                : linux-htb
Queue table
   _uuid               : 3cc34bb7-7df8-4538-9fd7-4a6c6c467c69
   dscp                : []
   other_config        : {max-rate="50000", min-rate="5000"}

Queue creation is out of scope of this document. The definition of the vlan or trunk member port will be augmented with the relevant queue reference and number if the queue was created successfully. That will allow creation of the OpenFlow ‘set_queue’ output action during service binding.

Pipeline changes

New ‘set_queue’ action will be supported in Egress Dispatcher table

Table                      Match        Action
Egress Dispatcher [220]    no changes   Set queue id (optional) and output to port

Yang changes

A new augment “ovs-qos” is added to if:interface in odl-interface.yang

/* vlan port to qos queue */
 augment "/if:interfaces/if:interface" {
     ext:augment-identifier "ovs-qos";
     when "if:type = 'ianaift:l2vlan'";

     leaf ovs-qos-ref {
         type instance-identifier;
         description
           "represents whether service port has associated qos. A reference to a ovsdb QoS entry";
     }
     leaf service-queue-number {
         type uint32;
         description
           "specific queue number within the list of queues in the qos entry";
     }
 }
Scale and Performance Impact

An additional OpenFlow action will be performed on part of the packets. Egress packets will be processed via linux-htb if the service is configured accordingly.

Alternatives

The unified REST API for ovsdb port adjustment could be created in a future release. The QoS egress queues and ingress rate limiting should be a part of this API.

Usage

The user will configure the unimgr service with egress rate limits. That will trigger the process described above.

Features to Install
  • odl-genius (unimgr using genius feature for flows creation)

REST API

None

CLI

None

Dependencies

Minimum OVS version 1.8.0 is required.

Testing

Unimgr test cases with configured egress rate limits will cover this functionality.

References

[1] OpenDaylight Documentation Guide <http://docs.opendaylight.org/en/latest/documentation.html>

[2] https://specs.openstack.org/openstack/nova-specs/specs/kilo/template.html

Service Binding On Tunnels

https://git.opendaylight.org/gerrit/#/q/topic:service-binding-on-tunnels

Service Binding On Tunnels Feature enables applications to bind multiple services on an ingress/egress tunnel.

Problem description

Currently Genius does not provide a generic mechanism to support binding services on all interfaces. The ingress service binding pipeline is different for l2vlan interfaces and tunnel interfaces. Similarly, egress service binding is only supported for l2vlan interfaces.

Today when ingress services are bound on a tunnel, the highest priority service gets bound in INTERFACE INGRESS TABLE(0) itself, and remaining service entries get populated in LPORT DISPATCHER TABLE(17), which is not in alignment with the service binding logic for VM ports. As part of this feature, we enable ingress/egress service binding support for tunnels in the same way as for VM interfaces. This feature also enables service-binding based on a tunnel-type which is basically meant for optimizing the number of flow entries in dispatcher tables.

Use Cases

This feature will support following use cases:

  • Use case 1: IFM should support binding services based on tunnel type.

  • Use case 2: All application traffic ingressing on a tunnel should go through the LPORT DISPATCHER TABLE(17).

  • Use case 3: IFM should support binding multiple ingress services on tunnels.

  • Use case 4: IFM should support priority based ingress service handling for tunnels.

  • Use case 5: IFM should support unbinding ingress services on tunnels.

  • Use case 6: IFM should support binding multiple egress services on tunnels.

  • Use case 7: IFM should support priority based egress service handling for tunnels.

  • Use case 8: All application traffic egressing on a tunnel should go through the egress dispatcher table(220).

  • Use case 9: Datapath should be intact even if there is no egress service bound on the tunnel.

  • Use case 10: IFM should support unbinding egress services on tunnels.

  • Use case 11: IFM should support handling of lower layer interface deletions gracefully.

  • Use case 12: IFM should support binding services based on tunnel type and lport-tag on the same tunnel interface on a priority basis.

  • Use case 13: Applications should bind on specific tunnel types on module startup

  • Use case 14: IFM should take care of programming the tunnel type based binding flows on each DPN.

Following use cases will not be supported:

  • Use case 1 : Update of service binding on tunnels. Any update should be done as delete and re-create

Proposed change

The proposed change extends the current l2vlan service binding functionality to tunnel interfaces. With this feature, multiple applications can bind their services on the same tunnel interface, and traffic will be processed on an application priority basis. Applications are given the flexibility to provide service specific actions while they bind their services. Normally service binding actions include go-to-service-pipeline-entry-table. Packets will enter a particular service based on the service priority, and if the packet is not consumed by the service, it is the application’s responsibility to resubmit the packet back to the egress/ingress dispatcher table for further processing by the next priority service. The Egress Dispatcher Table will have a default service priority entry per tunnel interface to egress the packet on the tunnel port. So, if there are no egress services bound on a tunnel interface, this default entry will take care of taking the packet out of the switch.

The feature also enables service binding based on tunnel type. This way number of entries in Dispatcher Tables can be optimized if all the packets entering on tunnel of a particular type needs to be handled in the same way.

Pipeline changes

There is a pipeline change introduced as part of this feature for tunnel egress as well as ingress, and is captured in genius pipeline document patch 2.

With this feature, all traffic from INTERFACE_INGRESS_TABLE(0) will be dispatched to LPORT_DISPATCHER_TABLE(17), from where the packets will be dispatched to the respective applications on a priority basis.

Register6 will be used to set the ingress tunnel-type in Table0, and this can be used to match in Table17 to identify the respective applications bound on the tunnel-type. Remaining logic of ingress service binding will remain as is, and service-priority and interface-tag will be set in metadata as usual. The bits from 25-28 of Register6 will be used to indicate tunnel-type.

After the ingress service processing, packets which are identified to be egressed on tunnel interfaces, currently directly go to the tunnel port. With this feature, these packets will goto Egress Dispatcher Table[Table 220] first, where the packet will be processed by Egress Services on the tunnel interface one by one, and finally will egress the switch.

Register6 will be used to indicate the service priority as well as the interface tag for the egress tunnel interface in the Egress Dispatcher Table, and when there are N services bound on a tunnel interface, there will be N+1 entries in the Egress Dispatcher Table, the additional one being the default tunnel entry. The first 4 bits of Register6 will be used to indicate the service priority and the next 20 bits for the interface tag, and this will be the match criteria for packet redirection to the service pipeline in the Egress Dispatcher Table. Before sending the packet to the service, the Egress Dispatcher Table will set the service index to the next service’s priority. Same as ingress, Register6 will be used for egress tunnel-type matching, if there are services bound on the tunnel-type.

TABLE                      MATCH                                                  ACTION
INTERFACE_INGRESS_TABLE    in_port                                                SI=0, reg6=interface_type, metadata=lport tag, goto table 17
LPORT_DISPATCHER_TABLE     metadata=service priority && lport-tag (priority=10)   increment SI, apply service specific actions, goto ingress service
LPORT_DISPATCHER_TABLE     reg6=tunnel-type (priority=5)                          increment SI, apply service specific actions, goto ingress service
EGRESS_DISPATCHER_TABLE    reg6=service priority && lport-tag (priority=10)       increment SI, apply service specific actions, goto egress service
EGRESS_DISPATCHER_TABLE    reg6=tunnel-type (priority=5)                          increment SI, apply service specific actions, goto egress service

RPC Changes

GetEgressActionsForInterface RPC in interface-manager currently returns the output:port action for tunnel interfaces. This will be changed to return set_field_reg6(default-service-index + interface-tag) and resubmit(egress_dispatcher_table).
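
For illustration, in ovs-ofctl notation the returned actions might look roughly as follows; the reg6 value is a made-up placeholder for the encoded default service index and interface tag:

 # reg6 value is a placeholder for (default service index | interface tag)
 set_field:0x9000002->reg6,resubmit(,220)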

Yang changes

No YANG changes are needed, as binding on a tunnel-type is enabled by using reserved keywords as interface names

Workflow
Create Tunnel
  1. User: User created a tunnel end point

  2. IFM: When tunnel port is created on OVS, and the respective OpenFlow port Notification comes, IFM binds a default service in Egress Dispatcher Table for the tunnel interface, which will be the least priority service, and the action will be to take the packet out on the tunnel port.

Bind Service on Tunnel Interface
  1. User: While binding service on tunnels user gives service-priority, service-mode and instructions for service being bound on the tunnel interface.

  2. IFM: When binding the service for the tunnel, if this is the first service being bound, program flow rules in Dispatcher Table(ingress/egress based on service mode) to match on service-priority and interface-tag value with actions pointing to the service specific actions supplied by the application.

  3. IFM: When binding a second service, based on the service priority one more flow will be created in Dispatcher Table with matches specific to the new service priority.

Unbind Service on Tunnel Interface
  1. User: While unbinding service on tunnels user gives service-priority and service-mode for service being unbound on the tunnel interface.

  2. IFM: When unbinding the service for the tunnel, IFM removes the entry in Dispatcher Tables for the service. IFM also rearranges the remaining flows for the same tunnel interface to adjust the missing service priority

Bind Service on Tunnel Type
  1. Application: While binding a service on a tunnel type, the user gives a reserved keyword indicating the tunnel-type, apart from service-priority, service-mode and the instructions for the service being bound. The reserved keywords will be ALL_VXLAN_INTERNAL, ALL_VXLAN_EXTERNAL, and ALL_MPLS_OVER_GRE.

  2. IFM: When binding the service for the tunnel-type, flow rules will be created on each DPN in the Dispatcher Table (ingress/egress based on service mode) to match on the service-priority and tunnel-type value, with actions pointing to the service specific actions supplied by the application.

  3. IFM: When binding a second service, one more flow, with matches specific to the new service priority, will be created in the Dispatcher Table on each DPN.

Unbind Service on Tunnel Type
  1. User: While unbinding a service on a tunnel type, the user gives a reserved keyword indicating the tunnel-type, the service-priority and the service-mode for the service being unbound on all connected DPNs.

  2. IFM: When unbinding the service for the tunnel-type, IFM removes the entry in Dispatcher Tables for the service. IFM also rearranges the remaining flows for the same tunnel type to adjust the missing service priority

Delete Tunnel
  1. User: User deleted a tunnel end point

  2. IFM: When tunnel port is deleted on OVS, and the respective OpenFlow Port Notification comes, IFM unbinds the default service in Egress Dispatcher Table for the tunnel interface.

  3. IFM: If there are any outstanding services bound on the tunnel interface, all the Dispatcher Table Entries for this Tunnel will be deleted by IFM.

Application Module Startup
  1. Applications: When Application bundle comes up, they can bind respective applications on the tunnel types they are interested in, with their respective service priorities.

Configuration impact

This change doesn’t add or modify any configuration parameters.

Clustering considerations

The solution is supported on a 3-node cluster.

Scale and Performance Impact
  • The feature adds one extra transaction during tunnel port creation, since the default Egress Dispatcher Table entry has to be programmed for each tunnel.

  • The feature provides support for service-binding on tunnel type with the primary purpose of minimizing the number of flow entries in ingress/egress dispatcher tables.

Usage

Features to Install

This feature doesn’t add any new karaf feature. Installing any of the below features can enable the service:

odl-genius-ui odl-genius-rest odl-genius

REST API
Creating tunnel-interface directly in IFM

This use case is mainly for those who want to write applications using Genius and/or want to create individual tunnel interfaces. Note that this is a simpler, easier way to create tunnels without needing to delve into how the OVSDB Plugin creates tunnels.

Refer to the Genius User Guide [4] for more details on this.

URL: restconf/config/ietf-interfaces:interfaces

Sample JSON data

{
 "interfaces": {
 "interface": [
     {
         "name": "vxlan_tunnel",
         "type": "iana-if-type:tunnel",
         "odl-interface:tunnel-interface-type": "odl-interface:tunnel-type-vxlan",
         "odl-interface:datapath-node-identifier": "1",
         "odl-interface:tunnel-source": "192.168.56.101",
         "odl-interface:tunnel-destination": "192.168.56.102",
         "odl-interface:monitor-enabled": false,
         "odl-interface:monitor-interval": 10000,
         "enabled": true
     }
  ]
 }
}
Binding Egress Service On Tunnels

URL: http://localhost:8181/restconf/config/interface-service-bindings:service-bindings/services-info/{tunnel-interface-name}/interface-service-bindings:service-mode-egress

Sample JSON data

{
   "bound-services": [
     {
       "service-name": "service1",
       "flow-priority": "5",
       "service-type": "service-type-flow-based",
       "instruction": [
        {
         "order": 1,
         "go-to-table": {
            "table_id": 88
          }
        }],
       "service-priority": "2",
       "flow-cookie": "1"
     }
   ]
}
CLI

N.A.

Implementation

Assignee(s)
Primary assignee:

Faseela K

Work Items
  1. Create Table 0 tunnel entries to set tunnel-type and lport_tag and point to LPORT_DISPATCHER_TABLE

  2. Support of reserved keyword in interface-names for tunnel type based service binding.

  3. Program tunnel-type based service binding flows on DPN connect events.

  4. Program Lport Dispatcher Flows(17) on bind service

  5. Remove Lport Dispatcher Flows(17) on unbind service

  6. Handle multiple service bind/unbind on tunnel interface

  7. Create default Egress Service for Tunnel on Tunnel Creation

  8. Add set_field_reg_6 and resubmit(220) action to actions returned in getEgressActionsForInterface() for Tunnels.

  9. Program Egress Dispatcher Table(220) Flows on bind service

  10. Remove Egress Dispatcher Table(220) Flows on unbind service

  11. Handle multiple egress service bind/unbind on tunnel interface

  12. Delete default Egress Service for Tunnel on Tunnel Deletion

  13. Add UTs.

  14. Add CSIT.

  15. Add Documentation

  16. Trello Card : https://trello.com/c/S8lNGd9S/6-service-binding-on-tunnel-interfaces

Dependencies

Genius, Netvirt

There will be several impacts on netvirt pipeline with this change. A brief overview is given in the table below:

Testing

Capture details of testing that will need to be added.

Unit Tests

New junits will be added to InterfaceManagerConfigurationTest to cover the following :

  1. Bind/Unbind single ingress service on tunnel-type

  2. Bind/Unbind single egress service on tunnel-type

  3. Bind single ingress service on tunnel-interface

  4. Unbind single ingress service on tunnel-interface

  5. Bind multiple ingress services on tunnel in priority order

  6. Unbind multiple ingress services on tunnel in priority order

  7. Bind multiple ingress services out of priority order

  8. Unbind multiple ingress services out of priority order

  9. Delete tunnel port to check if ingress dispatcher flows for bound services get deleted

  10. Add tunnel port back to check if ingress dispatcher flows for bound services get added back

  11. Bind single egress service on tunnel

  12. Unbind single egress service on tunnel

  13. Bind multiple egress services on tunnel in priority order

  14. Unbind multiple egress services on tunnel in priority order

  15. Bind multiple egress services out of priority order

  16. Unbind multiple egress services out of priority order

  17. Delete tunnel port to check if egress dispatcher flows for bound services get deleted

  18. Add tunnel port back to check if egress dispatcher flows for bound services get added back

CSIT

The following TCs should be added to CSIT to cover this feature:

  1. Bind/Unbind single ingress/egress service on tunnel-type to see the corresponding table entries are created in switch.

  2. Bind single ingress service on tunnel to see the corresponding table entries are created in switch.

  3. Unbind single ingress service on tunnel to see the corresponding table entries are deleted in switch.

  4. Bind multiple ingress services on tunnel in priority order to see if metadata changes are proper on the flow table.

  5. Unbind multiple ingress services on tunnel in priority order to see if metadata changes are proper on the flow table on each unbind.

  6. Bind multiple ingress services out of priority order to see if metadata changes are proper on the flow table.

  7. Unbind multiple ingress services out of priority order.

  8. Delete tunnel port to check if ingress dispatcher flows for bound services get deleted.

  9. Add tunnel port back to check if ingress dispatcher flows for bound services get added back.

  10. Bind single egress service on tunnel to see the corresponding table entries are created in switch.

  11. Unbind single egress service on tunnel to see the corresponding table entries are deleted in switch.

  12. Bind multiple egress services on tunnel in priority order to see if metadata changes are proper on the flow table.

  13. Unbind multiple egress services on tunnel in priority order to see if metadata changes are proper on the flow table on each unbind.

  14. Bind multiple egress services out of priority order to see if metadata changes are proper on the flow table.

  15. Unbind multiple egress services out of priority order.

  16. Delete tunnel port to check if egress dispatcher flows for bound services get deleted.

  17. Add tunnel port back to check if egress dispatcher flows for bound services get added back.

Documentation Impact

This will require changes to User Guide and Developer Guide.

There is a pipeline change for tunnel datapath introduced due to this change. This should go in User Guide.

Developer Guide should capture how to configure egress service binding on tunnels.

References

1

Genius Carbon Release Plan https://wiki.opendaylight.org/view/Genius:Carbon_Release_Plan

2

Netvirt Pipeline Diagram http://docs.opendaylight.org/en/latest/submodules/genius/docs/pipeline.html

3

Genius Trello Card https://trello.com/c/S8lNGd9S/6-service-binding-on-tunnel-interfaces

4

Genius User Guide http://docs.opendaylight.org/en/latest/user-guide/genius-user-guide.html#creating-overlay-tunnel-interfaces

Note

This template was derived from [2], and has been modified to support our project.

This work is licensed under a Creative Commons Attribution 3.0 Unported License. http://creativecommons.org/licenses/by/3.0/legalcode

Service Recovery Framework

https://git.opendaylight.org/gerrit/#/q/topic:service-recovery

Service Recovery Framework is a feature that enables recovery of services. This recovery can be triggered by the user or, eventually, be used as a self-healing mechanism.

Problem description

Status and Diagnostic adds support for reporting the current status of different services. However, there is no means to recover an individual service or service instance that has failed. The only recovery that can be done today is to restart the controller node(s), or to manually restart the bundle or reinstall the karaf feature itself.

Restarting the controller can be overkill and needlessly disruptive. Manually restarting a bundle or feature requires the user to be aware of and have access to these CLIs. There may not be a one-to-one mapping from a service to the corresponding bundle or feature. Also, a truly secure system would provide role based access to users. Only someone with administrative rights will have access to the Karaf CLI to restart/reinstall, while a less privileged user should be able to trigger recovery without requiring higher level access.

Note that role-based access is out of scope of this document.

Use Cases

This feature will support the following use cases:

  • Use Case 1: Provide RPC and CLI to trigger reinstall of a service.

  • Use Case 2: Provide RPC and CLI to trigger recovery of a service.

  • Use Case 3: Provide RPC and CLI to trigger recovery of a specific instance object managed by a service, referred to as a service instance.

Proposed change

A new module, Service Recovery Manager (SRM), will be added to Genius. SRM will provide a single, common point of interaction with all individual services. Recovery options will vary from a highest-level service restart to restarting individual service instances.

SRM Terminology

SRM will introduce the concept of service entities and operations.

SRM Entities
  • EntityName - Every object in SRM is referred to as an entity and EntityName is the unique identifier for a given entity. e.g. L3VPN, ITM, VPNInstance etc.

  • EntityType - Every entity has a corresponding type. Currently supported types are service and instance. e.g. L3VPN is an entity of type service and VPNInstance is an entity of type instance.

  • EntityId - Every entity of type instance will have a unique entity-id as an identifier. e.g. The uuid of VPNInstance is the entity-id identifying an individual VPN Instance from amongst many present in L3VPN service.

SRM Operations
  • reinstall - This command will be used to reinstall a service. This will be similar to karaf bundle restart, but may result in restart of more than one bundle as per the service. This operation will only be applicable to entity-type service.

  • recover - This command will be used to recover an individual entity, which can be a service or an instance. For entity-type: service the entity-name will be the service name. For entity-type: instance the entity-name will be the instance name and entity-id will be a required field.

Example

This table gives some examples of different entities and operations for them:

Operation   EntityType   EntityName      EntityId   Remarks
---------   ----------   ----------      --------   -----------------------
reinstall   service      ITM             N.A.       Restart ITM
recover     service      ITM             ITM        Recover ITM Service
recover     instance     TEP             dpn-1      Recover TEP
recover     instance     TransportZone   TZA        Recover Transport Zone

Out of Scope
  • SRM will not implement the actual recovery mechanisms; it will only act as an intermediary between the user and the individual services.

  • SRM will not provide the status of services. The Status and Diagnostics (SnD) framework is expected to provide service status.

Yang changes

We’ll be adding three new yang files

ServiceRecovery Types

This file will contain different types used by service recovery framework. Any service that wants to use ServiceRecovery will have to define its supported names and types in this file.

srm-types.yang
 module srm-types {
     namespace "urn:opendaylight:genius:srm:types";
     prefix "srmtypes";

     revision "2017-05-31" {
         description "ODL Services Recovery Manager Types Module";
     }

     /* Entity TYPEs */

     identity entity-type-base {
         description "Base identity for all srm entity types";
     }
     identity entity-type-service {
         description "SRM Entity type service";
         base entity-type-base;
     }
     identity entity-type-instance {
         description "SRM Entity type instance";
         base entity-type-base;
     }


     /* Entity NAMEs */

     /* Entity Type SERVICE names */
     identity entity-name-base {
         description "Base identity for all srm entity names";
     }
     identity genius-ifm {
         description "SRM Entity name for IFM service";
         base entity-name-base;
     }
     identity genius-itm {
         description "SRM Entity name for ITM service";
         base entity-name-base;
     }
     identity netvirt-vpn {
         description "SRM Entity name for VPN service";
         base entity-name-base;
     }
     identity netvirt-elan {
         description "SRM Entity name for elan service";
         base entity-name-base;
     }
     identity ofplugin {
         description "SRM Entity name for openflowplugin service";
         base entity-name-base;
     }


     /* Entity Type INSTANCE Names */

     /* Entity types supported by GENIUS */
     identity genius-itm-tep {
         description "SRM Entity name for ITM's tep instance";
         base entity-name-base;
     }
     identity genius-itm-tz {
         description "SRM Entity name for ITM's transportzone instance";
         base entity-name-base;
     }

     identity genius-ifm-interface {
         description "SRM Entity name for IFM's interface instance";
         base entity-name-base;
     }

     /* Entity types supported by NETVIRT */
     identity netvirt-vpninstance {
         description "SRM Entity name for VPN instance";
         base entity-name-base;
     }

     identity netvirt-elaninstance {
         description "SRM Entity name for ELAN instance";
         base entity-name-base;
     }


     /* Service operations */
     identity service-op-base {
         description "Base identity for all srm operations";
     }
     identity service-op-reinstall {
         description "Reinstall or restart a service";
         base service-op-base;
     }
     identity service-op-recover {
         description "Recover a service or instance";
         base service-op-base;
     }

 }
ServiceRecovery Operations

This file will contain the different operations that individual services must support on the entities exposed by them in srm-types.yang. These are not user facing operations; they are used by SRM to translate user RPC calls into ServiceRecovery Operations on individual services.

srm-ops.yang
 module srm-ops {
     namespace "urn:opendaylight:genius:srm:ops";
     prefix "srmops";

     import srm-types {
         prefix srmtype;
     }

     revision "2017-05-31" {
         description "ODL Services Recovery Manager Operations Model";
     }

     /* Operations  */

     container service-ops {
         config false;
         list services {
             key service-name;
             leaf service-name {
                 type identityref {
                     base srmtype:entity-name-base;
                 }
             }
             list operations {
                 key entity-name;
                 leaf entity-name {
                     type identityref {
                         base srmtype:entity-name-base;
                     }
                 }
                 leaf entity-type {
                     type identityref {
                         base srmtype:entity-type-base;
                     }
                     mandatory true;
                 }
                 leaf entity-id {
                     description "Optional when entity-type is service. Actual
                                  id depends on entity-type and entity-name";
                     type string;
                 }
                 leaf trigger-operation {
                     type identityref {
                         base srmtype:service-op-base;
                     }
                     mandatory true;
                 }
             }
         }
     }

 }
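For illustration, if a user requests recovery of the ITM TEP instance on dpn-1 (as in the example table above), SRM could record an operation entry along the lines of the sketch below. The RESTCONF path and identity values are assumptions derived from the models in this spec, not a confirmed API.

URL: /restconf/operational/srm-ops:service-ops

Sample JSON data (illustrative only)

{
  "service-ops": {
    "services": [
      {
        "service-name": "srm-types:genius-itm",
        "operations": [
          {
            "entity-name": "srm-types:genius-itm-tep",
            "entity-type": "srm-types:entity-type-instance",
            "entity-id": "dpn-1",
            "trigger-operation": "srm-types:service-op-recover"
          }
        ]
      }
    ]
  }
}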
ServiceRecovery RPCs

This file will contain different RPCs supported by SRM. These RPCs are user facing and SRM will translate these into ServiceRecovery Operations as defined in srm-ops.yang.

srm-rpcs.yang
 module srm-rpcs {
     namespace "urn:opendaylight:genius:srm:rpcs";
     prefix "srmrpcs";

     import srm-types {
         prefix srmtype;
     }

     revision "2017-05-31" {
         description "ODL Services Recovery Manager Rpcs Module";
     }

     /* RPCs */

     rpc reinstall {
         description "Reinstall a given service";
         input {
             leaf entity-name {
                 type identityref {
                     base srmtype:entity-name-base;
                 }
                 mandatory true;
             }
             leaf entity-type {
                 description "Currently supported entity-types:
                                 service";
                 type identityref {
                     base srmtype:entity-type-base;
                 }
                 mandatory false;
             }
         }
         output {
             leaf successful {
                 type boolean;
             }
             leaf message {
                 type string;
             }
         }
     }


     rpc recover {
         description "Recover a given service or instance";
         input {
             leaf entity-name {
                 type identityref {
                     base srmtype:entity-name-base;
                 }
                 mandatory true;
             }
             leaf entity-type {
                 description "Currently supported entity-types:
                                 service, instance";
                 type identityref {
                     base srmtype:entity-type-base;
                 }
                 mandatory true;
             }
             leaf entity-id {
                 description "Optional when entity-type is service. Actual
                              id depends on entity-type and entity-name";
                 type string;
                 mandatory false;
             }
         }
         output {
             leaf response {
                 type identityref {
                     base rpc-result-base;
                 }
                 mandatory true;
             }
             leaf message {
                 type string;
                 mandatory false;
             }
         }
     }

     /* RPC RESULTs */

     identity rpc-result-base {
         description "Base identity for all SRM RPC Results";
     }
     identity rpc-success {
         description "RPC result successful";
         base rpc-result-base;
     }
     identity rpc-fail-op-not-supported {
         description "RPC failed:
                         operation not supported for given parameters";
         base rpc-result-base;
     }
     identity rpc-fail-entity-type {
         description "RPC failed:
                         invalid entity type";
         base rpc-result-base;
     }
     identity rpc-fail-entity-name {
         description "RPC failed:
                         invalid entity name";
         base rpc-result-base;
     }
     identity rpc-fail-entity-id {
         description "RPC failed:
                         invalid entity id";
         base rpc-result-base;
     }
     identity rpc-fail-unknown {
         description "RPC failed:
                         reason not known, check message string for details";
         base rpc-result-base;
     }
 }
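As an illustration of how the recover RPC defined above might be invoked over RESTCONF (the identity prefixes and the response shown are assumptions based on the models in this spec, not verified output):

URL: /restconf/operations/srm-rpcs:recover

Sample JSON input

{
  "input": {
    "entity-type": "srm-types:entity-type-instance",
    "entity-name": "srm-types:genius-itm-tep",
    "entity-id": "dpn-1"
  }
}

Sample JSON output

{
  "output": {
    "response": "srm-rpcs:rpc-success",
    "message": "recovery triggered for genius-itm-tep dpn-1"
  }
}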
Clustering considerations

SRM will provide RPCs, which will be handled on only one of the nodes. In turn, it will write to the srm-ops datastore, and each individual service will have clustered listeners to track the operations being triggered. Based on the service and instance on which recovery is triggered, individual services will decide whether recovery needs to run on all nodes of the cluster or on individual nodes.

Other Infra considerations

Status and Diagnostics (SnD) may need to be updated to use service names similar to the ones used in SRM.

Security considerations

Providing RPCs to trigger service restarts eliminates the need to give administrative access to non-admin users just so they can trigger recovery through bundle restarts from the karaf CLI. The expectation is that access to these RPCs will be role based, but role-based access and its implementation are out of scope of this feature.

Scale and Performance Impact

This feature allows recovery at a much finer-grained level than a full controller or node restart. Such restarts impact, and trigger recovery of, services that didn't need to be recovered. Every restart of the controller cluster or of individual nodes has a significant overhead that impacts scale and performance. This feature aims to eliminate these overheads by allowing targeted recovery.

Targeted Release

Nitrogen.

Alternatives

Using existing karaf CLI for feature and bundle restart was considered but rejected due to reasons already captured in earlier sections.

Usage

TBD.

REST API

TBD.

CLI
srm:reinstall

All arguments are case insensitive unless specified otherwise.

DESCRIPTION
  srm:reinstall
  reinstall a given service

SYNTAX
  srm:reinstall <service-name>

ARGUMENTS
  service-name
          Name of service to re-install, e.g. itm/ITM, ifm/IFM etc.

EXAMPLE
  srm:reinstall ifm
srm:recover
DESCRIPTION
  srm:recover
  recover a service or service instance

SYNTAX
  srm:recover <entity-type> <entity-name> [<entity-id>]

ARGUMENTS
  entity-type
          Type of entity as defined in srm-types.
          e.g. service, instance etc.
  entity-name
          Entity name as defined in srm-types.
          e.g. itm, itm-tep etc.
  entity-id
          Entity Id for instances, required for entity-type instance.
          e.g. 'TZA', 'tunxyz' etc.

EXAMPLES
  srm:recover service itm
  srm:recover instance itm-tep TZA
  srm:recover instance vpn-instance e5e2e1ee-31a3-4d0c-a8d8-b86d08cd14b1

Implementation

Assignee(s)
Primary assignee:

Vishal Thapar

Other contributors:

Faseela K, Hema Gopalakrishnan

Work Items
  1. Add srm modules and features

  2. Add srm yang models

  3. Add code for CLI

  4. Add backend implementation for RPCs to trigger SRM Operations

  5. Optionally, for each service and supported instances, add implementation for SRM Operations

  6. Add UTs

  7. Add CSITs

Dependencies

  • Infrautils

Documentation Impact

This will require changes to User Guide based on information provided in Usage section.

References

[1] Genius Nitrogen Release Plan https://wiki.opendaylight.org/view/Genius:Nitrogen_Release_Plan

[2] https://specs.openstack.org/openstack/nova-specs/specs/kilo/template.html

[3] IFM and ITM service recovery test-plan ->

https://docs.opendaylight.org/en/latest/submodules/genius/docs/testplans/service-recovery.html

[4] ELAN service recovery test-plan ->

https://docs.opendaylight.org/projects/netvirt/en/latest/specs/service-recovery-elan.html

Note

This template was derived from [2], and has been modified to support our project.

This work is licensed under a Creative Commons Attribution 3.0 Unported License. http://creativecommons.org/licenses/by/3.0/legalcode

DPN ID In ARP Notifications

https://git.opendaylight.org/gerrit/#/c/75248/

The Genius arputil component provides a notification service for ARP packets forwarded from switches via OpenFlow packet-in events. This change adds the switch’s datapath ID to the notifications.

Problem description

This change resolves the fact that the switch datapath ID is not copied from the OpenFlow packet-in event to the ARP notification sent by Genius arputil.

Use Cases

This change is primarily introduced to correctly support assigning a FIP to an Octavia VIP:

https://jira.opendaylight.org/browse/NETVIRT-1402

An Octavia VIP is a Neutron port that is not bound to any VM and is therefore not added to br-int. The VM containing the active HAProxy sends gratuitous ARPs for the VIP's IP; ODL intercepts those and programs flows to forward the VIP traffic to that VM's port.

The ODL code responsible for configuring the FIP association flows on OVS currently relies on a southbound openflow port that corresponds to the neutron FIP port. The only real reason this is required is so that ODL can decide which switch should get the flows. In the case of the VIP port, there is no corresponding southbound port so the flows never get configured.

To resolve this, ODL can learn which switch to program the flows on from the gratuitous ARP packet-in event, which will come from the right switch (we already listen for those). So, basically, we just respond to the gratuitous ARP by correlating it with the Neutron port, checking that the port is an Octavia VIP (via the owner field), and programming the flows.

Proposed change

  • Add dpn-id fields to the arp-request-received and arp-response-received yang notifications

  • Extract the datapath ID from the PacketReceived’s ingress field and set it in the notification

Yang changes

In arputil-api, add dpn-id fields to the arp-request-received and arp-response-received yang notifications.

Targeted Release

Nitrogen and preferably backported to Oxygen

Usage

Consumers of the ARP notifications may call getDpnId() to retrieve the datapath ID of the switch that forwarded the ARP packet to the controller.

REST API

This change simply adds a field to an existing yang notification and therefore does not change any APIs.

CLI

N/A

Implementation

Work Items

Simple change, see the gerrit patch above.

Dependencies

Although ARP notifications are currently consumed by netvirt vpnmanager, this feature is backwards compatible. A new notification listener that consumes the datapath ID will be added to natmanager to resolve the issue with Octavia mentioned above.

Testing

This feature will be tested as part of the fix to the above mentioned bug.

CSIT

TBD

ITM Yang Models cleanup

This Spec review discusses the changes required as part of the clean up activity in ITM Module.

Problem description

It was discovered during code review that some code in the ITM yang models is now redundant for various reasons, i.e. changes in requirements and new features making the older code/models redundant. Hence, a cleanup of these yang models is being undertaken to enhance code readability and stability.

Use Cases

There are no changes to the use cases. This is just a cleanup activity to remove unwanted definitions from the ITM yang models, along with the corresponding code cleanup.

Proposed change

YANG changes

Changes will be needed in itm.yang and itm-config.yang.

ITM YANG changes
  1. The container vtep-config-schemas below will be removed from itm-config.yang as it is no longer required.

container vtep-config-schemas {
  list vtep-config-schema {
    key schema-name;

    leaf schema-name {
        type string;
        mandatory true;
        description "Schema name";
    }

    leaf transport-zone-name {
        type string;
        mandatory true;
        description "Transport zone";
    }

    leaf tunnel-type {
        type identityref {
        base odlif:tunnel-type-base;
        }
    }

    leaf port-name {
        type string;
        mandatory true;
        description "Port name";
    }

    leaf vlan-id {
        type uint16 {
            range "0..4094";
        }
        mandatory true;
        description "VLAN ID";
    }

    leaf gateway-ip {
        type inet:ip-address;
        description "Gateway IP address";
    }

    leaf subnet {
        type inet:ip-prefix;
        mandatory true;
        description "Subnet Mask in CIDR-notation string, e.g. 10.0.0.0/24";
    }

    leaf exclude-ip-filter {
        type string;
        description "IP Addresses which needs to be excluded from the specified subnet. IP address range or comma separated IP addresses can to be specified. e.g: 10.0.0.1-10.0.0.20,10.0.0.30,10.0.0.35";
    }

    list dpn-ids {
        key "DPN";

        leaf DPN {
            type uint64;
            description "DPN ID";
        }
    }
  }
}
  2. The list “transport-zone” in the container “transport-zones” will have the following modifications:

    1. “weight” will be removed.

    2. “option-tunnel-tos” will be a part of the list.

    3. “option-of-tunnel” will be a part of the list.

    4. “monitoring” will be part of the list.

    5. “portname” will be removed.

    6. list “subnets” will be removed along with the leaves “prefix”, “gateway-ip” and “vlan-id”.

      The earlier lists “vteps” and “device-vteps”, which were part of the list “subnets”, will now be part of the parent list “transport-zone”.

    7. key for list “vteps” will be only “dpn-id”.

   container transport-zones {
      list transport-zone {
      ordered-by user;
        key zone-name;
        leaf zone-name {
            type string;
            mandatory true;
        }

        leaf tunnel-type {
            type identityref {
                base odlif:tunnel-type-base;
            }
            mandatory true;
        }

        list vteps {
            key "dpn-id";
            leaf dpn-id {
                 type uint64;
            }
            leaf ip-address {
                 type inet:ip-address;
            }
            leaf option-tunnel-tos {
                description "Value of ToS bits to be set on the encapsulating
                packet.  The value of 'inherit' will copy the DSCP value
                from inner IPv4 or IPv6 packets.  When ToS is given as
                a numeric value, the least significant two bits will
                be ignored.";
                type string {
                    length "1..8";
                }
            }
            container monitoring {
                uses tunnel-monitoring-params;
            }
        }
        list device-vteps {
            key "ip-address";
            leaf ip-address {
                type inet:ip-address;
            }
            leaf tunnel-type {
                type identityref {
                    base odlif:tunnel-type-base;
                }
            }
        }
      }
   }

grouping tunnel-monitoring-params {
    leaf enabled {
        type boolean;
        default true;
    }

    leaf monitor-protocol {
        type identityref {
            base odlif:tunnel-monitoring-type-base;
        }
        default odlif:tunnel-monitoring-type-bfd;
    }
    leaf interval {
        type uint16 {
            range "1000..30000";
        }
    }
}
  3. container “dc-gateway-ip-list” will be removed from the list “transport-zone”

  4. The list “tunnel-end-points” in the container “dpn-endpoints” in file itm-state.yang will have the below fields removed:

    leaf portname {
        type string;
    }
    leaf VLAN-ID {
        type uint16;
    }
    leaf gw-ip-address {
        type inet:ip-address;
    }
    leaf subnet-mask {
        type inet:ip-prefix;
    }

  5. The rest of the fields from the list “tunnel-end-points” will become leaves in “dpn-endpoints”. The list will be removed.

  6. The leaf “internal” will be removed from the “dpn-teps-state” container in itm-state.yang.

Workflow

N.A.

Configuration impact

The JSON to create a transport zone will change according to the new yang model.

Clustering considerations

Any clustering requirements are already addressed in ITM; no new requirements are added as part of this feature.

Scale and Performance Impact

This cleanup removes dead/unwarranted code and thereby improves readability and code stability.

Targeted Release(s)

Neon

Alternatives

N.A.

Usage

Features to Install

This feature doesn’t add any new karaf feature.

REST API

For the changes listed in item 2 above, the REST API used to configure a transport-zone will change.
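For illustration, a transport-zone payload following the cleaned-up model might look like the sketch below. The structure follows the proposed yang above; the URL matches the existing config/itm:transport-zones path, while the zone name, dpn-ids and IP addresses are illustrative values only.

URL: /restconf/config/itm:transport-zones

Sample JSON data (illustrative only)

{
  "transport-zones": {
    "transport-zone": [
      {
        "zone-name": "TZA",
        "tunnel-type": "odl-interface:tunnel-type-vxlan",
        "vteps": [
          {
            "dpn-id": 95311090836804,
            "ip-address": "192.168.56.101"
          },
          {
            "dpn-id": 156613701272907,
            "ip-address": "192.168.56.102"
          }
        ]
      }
    ]
  }
}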

Implementation

Assignee(s)
Primary assignee:

<Chintan Apte>

Other contributors:

<Vacancies available>

Work Items
  1. YANG changes

  2. Code changes

  3. Add UTs.

  4. Add ITs.

  5. Update CSIT.

  6. Add Documentation

Testing

Unit Tests

  1. Appropriate UTs will be added for the new code once the framework is in place.

  2. UTs should cover configuring the tunnels via tep-add commands using the new JSON format (post-cleanup).

Integration Tests

Integration tests will be added once IT framework for ITM and IFM is ready.

CSIT

CSIT should be updated to configure the transport-zone using the new JSON. This will require changes to the following:

Suites:

  • Configure_ITM
  • ITM Direct Tunnels
  • BFD Monitoring
  • Service Recovery

Keywords:

  • Create Vteps
  • Set Json

CSIT/Variables/Genius:

  • Itm_creation_no_vlan.json
  • l2vlanmember.json

Documentation Impact

The change in the JSON format for configuring the transport-zone needs to be documented. The Genius user guide will be modified to reflect the same.

Troubleshooting

This section will be updated with the changes needed in ODLTools for this cleanup activity. A JIRA will be raised in ODLTools for this.

Upgrade Impact

In case of upgrade issues related to tunnel names, 'vlanid' and 'portname' are to be configured in 'genius-itm-config.xml'.

Support for compute node scale in and scale out

https://git.opendaylight.org/gerrit/#/q/topic:compute-scalein-scaleout

Add support for adding a new compute node into the existing topology and for removing or decommissioning an existing compute node from the topology.

Problem description

Support for adding a new compute node is already available. But when we scale in a compute node, we have to clean up its relevant flows from the openflow tables and clean up the VXLAN tunnel endpoints on the other compute nodes. Also, if the scaled-in compute node is the designated compute for a particular service like NAT or subnetroute, then those services have to choose a new compute node.

Use Cases
  • Scale out of compute nodes.

  • Scale in of a single compute node.

  • Scale in of multiple compute nodes.

Proposed change

The following are the steps taken by the administrator to achieve compute node scale in.

  • The Nova Compute(s) shall be set into maintenance mode (nova service-disable <hostname> nova-compute).

This is to avoid VMs being scheduled to these Compute Hosts.

  • Call a new rpc scalein-computes-start <list of scaledin compute node ids> to mark them as tombstoned.

  • VMs still residing on the Compute Host(s), shall be migrated from the Compute Host(s).

  • Disconnect the compute node from opendaylight controller node.

  • Call a new rpc scalein-computes-tep-delete <list of scaledin compute node ids> to delete their teps from the controller.

  • Call a new rpc scalein-computes-end <list of scaledin compute node ids> multiple times, till you get the output - DONE.

This is to signal the end of scale-in process.

In case VM migration or deletion from some of these compute nodes fails, the following recovery rpc will be invoked:

scalein-computes-recover <list of compute node names which were passed as an argument to scalein-computes-start but were not scaled in>

The following is a typical sequence of operations:

scalein-computes-start A,B,C
delete/migrate vms of A (success)
delete/migrate vms of B (fail)
delete/migrate vms of C (won't be triggered)
scalein-computes-tep-delete A
scalein-computes-end A
scalein-computes-recover B,C

Typically, when a single compute node is scaled in and gets disconnected from the controller, all the services that had designated this compute as their designated compute re-elect another compute node.

But when multiple compute nodes are being scaled in, some of these computes should not be elected as a designated compute during that window.

To achieve this, the scaled-in computes are marked as tombstoned, and they are avoided when electing a designated switch or programming new services.

After calling the scalein-computes-start rpc and migrating the VMs, the orchestrator calls the scalein-computes-tep-delete rpc to delete the tep ips of those computes. Once this is done, the orchestrator should call the scalein-computes-end rpc multiple times till its output changes from INPROGRESS to DONE, which indicates that the teps have been deleted successfully.

When the scalein-computes-end rpc call is received, the corresponding computes' config inventory and topology database entries can also be deleted.

When the scalein-computes-recover rpc call is received, the corresponding computes' tombstoned flag is set to false. If there are any services that do not have a designated compute node, they should start electing computes and possibly choose from these recovered computes.

Yang changes

The following rpcs will be added.

scalein-api.yang
     rpc scalein-computes-start {
         description "To trigger start of scale in the given dpns";
         input {
             leaf-list scalein-compute-names {
                 type string;
             }
         }
     }

     rpc scalein-computes-end {
         description "To end the scale in of the given dpns output DONE/INPROGRESS";
         input {
             leaf-list scalein-compute-names {
                 type string;
             }
         }
         output {
             leaf status {
                 type string;
             }
         }
     }

     rpc scalein-computes-recover {
         description "To recover the dpns which are marked for scale in";
         input {
             leaf-list recover-compute-names {
                 type string;
             }
         }
     }

     rpc scalein-computes-tep-delete {
         description "To delete the tep endpoints of the scaled in dpns";
         input {
             leaf-list scalein-compute-names {
                 type string;
             }
         }
     }

Topology node bridge-external-ids will be updated with additional key called “tombstoned”.
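For illustration, the workflow above might be driven over RESTCONF roughly as follows. The module name is not shown in the yang snippet, so <module> is a placeholder, and the compute names are purely illustrative; the status values correspond to the INPROGRESS/DONE outputs described earlier.

URL: /restconf/operations/<module>:scalein-computes-start

{
  "input": {
    "scalein-compute-names": ["compute-A", "compute-B", "compute-C"]
  }
}

URL: /restconf/operations/<module>:scalein-computes-end

{
  "input": {
    "scalein-compute-names": ["compute-A"]
  }
}

Sample output (repeat the call until DONE)

{
  "output": {
    "status": "INPROGRESS"
  }
}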

Usage

N/A.

Features to Install

odl-netvirt-openstack

REST API

N/A.

CLI

N/A.

Dependencies

No new dependencies.

Testing

  • Verify that VMs on a scaled-out compute are able to communicate with VMs on the same compute and on other computes.

  • Verify that the scaled-in compute's flows are removed and existing services continue to work.

  • Verify that the scaled-in compute nodes' config inventory and topology datastores are cleaned.

  • Identify a compute node which is designated for NAT/subnetroute functionality, scale in that compute, and verify that NAT/subnetroute functionality continues to work. Verify that its relevant flows are reprogrammed.

  • While the scale in work flow is going on for few computes, create a new NAT/subnetroute resource, make sure that one of these compute nodes are not chosen.

  • Verify the recovery procedure of scale in workflow, make sure that the recovered compute gets its relevant flows.

  • Scale in a compute which is designated and no other compute has presence of that service (vpn) to be designated, make sure that all its flows and datastores are deleted.

  • Start scale in for a compute which is designated and no other compute has presence of that service (vpn) to be designated, recover the compute and make sure that all its flows and datastores are recovered.

CSIT
  • Verify that VMs on a scaled-out compute are able to communicate with VMs on the same compute and on other computes.

  • Verify that the scaled-in compute's flows are removed and existing services continue to work.

  • Identify a compute node which is designated for NAT/subnetroute functionality, scale in that compute, and verify that NAT/subnetroute functionality continues to work. Verify that its relevant flows are reprogrammed.

  • Verify the recovery procedure of scale in workflow, make sure that the recovered compute gets its relevant flows.

Genius Test Plans

Starting from Oxygen, Genius uses RST format Test Plan document for all new Test Suites.

Contents:

Interface Manager

Test Suite for testing basic Interface Manager functions.

Test Setup

Test setup consists of ODL with odl-genius installed and two switches (DPNs) connected to ODL over OVSDB and OpenflowPlugin.

Testbed Topologies

This suite uses the default Genius topology.

Default Topology
+--------+       +--------+
|  BR1   | data  |  BR2   |
|        <------->        |
+---^----+       +----^---+
    |       mgmt      |
+---v-----------------v---+
|                         |
|           ODL           |
|                         |
|        odl-genius       |
|                         |
+-------------------------+
Software Requirements

OVS 2.6+ Mininet ???

Test Suite Requirements

Test Suite Bringup

Following steps are followed at beginning of test suite:

  • Bring up ODL with odl-genius feature installed

  • Add bridge to DPN

  • Add tap interfaces to bridge created above

  • Add OVSDB manager to DPN using ovs-vsctl set-manager

  • Connect bridge to OpenFlow using ovs-vsctl set-controller

  • Repeat above steps for other DPNs

  • Create REST session to ODL

Test Suite Cleanup

Following steps are followed at the end of test suite:

  • Delete bridge DPN

  • Delete OVSDB manager ‘ovs-vsctl del-manager’

  • Repeat above steps for other DPNs

  • Delete REST session to ODL

Debugging

Following DataStore models are captured at end of each test case:

  • config/itm-config:tunnel-monitor-enabled

  • config/itm-config:tunnel-monitor-interval

  • config/itm-state:dpn-endpoints

  • config/itm-state:external-tunnel-list

  • config/itm:transport-zones

  • config/network-topology:network-topology

  • config/opendaylight-inventory:nodes

  • operational/ietf-interfaces:interfaces

  • operational/ietf-interfaces:interfaces-state

  • operational/itm-config:tunnel-monitor-enabled

  • operational/itm-config:tunnel-monitor-interval

  • operational/itm-state:tunnels_state

  • operational/network-topology:network-topology

  • operational/odl-interface-meta:bridge-ref-info

Test Cases

Create l2vlan Transparent Interface

This creates a transparent l2vlan interface between two dpns

Test Steps and Pass Criteria
  1. Create transparent l2vlan interface through REST

    1. Interface shows up in config

    2. Interface state shows up in operational

    3. Flows are added to Table0 on the bridge

Troubleshooting

N.A.

Delete l2vlan Transparent Interface

This testcase deletes the l2vlan transparent interface created in previous test case.

Test Steps and Pass Criteria
  1. Remove all interfaces in config

    1. Interface config is empty

    2. Interface states in operational is empty

    3. Flows are deleted from Table0 on bridge

Troubleshooting

N.A.

Create l2vlan Trunk Interface

This testcase creates a l2vlan trunk interface between 2 DPNs.

Test Steps and Pass Criteria
  1. Create l2vlan trunk interface through REST

    1. Interface shows up in config

    2. Interface state shows up in operational

    3. Flows are added to Table0 on the bridge

Troubleshooting

N.A.

Create l2vlan Trunk Member Interface

This testcase creates a l2vlan Trunk member interface for the l2vlan trunk interface created in previous testcase.

Test Steps and Pass Criteria
  1. Create l2vlan trunk member interface through REST

    1. Interface shows up in config

    2. Interface state shows up in operational

    3. Flows are added to Table0 on the bridge

    4. Flows match on dl_vlan

    5. Flows have action=pop_vlan

Troubleshooting

N.A.

Bind service on Interface

This testcase binds service to the L2vlan Trunk Interface earlier.

Test Steps and Pass Criteria
  1. Add service bindings for elan and VPN services on L2Vlan Trunk Interface using REST

    1. Check bindings for VPN and elan services exist on L2Vlan Trunk interface

    2. Flows are added to Table17 on the bridge

    3. Flows have action goto_table:21

    4. Flows have action goto_table:50

Troubleshooting

N.A.

Unbind service on Interface

This testcase Unbinds the services which were bound in previous testcase.

Test Steps and Pass Criteria
  1. Delete service bindings for elan and VPN services on L2Vlan Trunk Interface using REST

    1. Check bindings for VPN and elan services on L2Vlan Trunk interface don’t exist

    2. No flows on Table0

    3. No flows with action goto_table:21

    4. No flows with action goto_table:50

Troubleshooting

N.A.

Delete L2vlan Trunk Interface

Delete l2vlan trunk interface created and used in earlier test cases

Test Steps and Pass Criteria
  1. Remove all interfaces in config

    1. Interface config is empty

    2. Interface states in operational is empty

    3. Flows are deleted from Table0 on bridge

Troubleshooting

N.A.

Implementation

Assignee(s)
Primary assignee:

<developer-a>

Other contributors:

<developer-b> <developer-c>

ITM Scalability

This document serves as the test plan for the ITM Scalability – OF Based Tunnels feature. It comprises test cases pertaining to all the use cases covered by the Functional Spec.

Note

Name of suite and test cases should map exactly to as they appear in Robot reports.

Test Setup

Brief description of test setup.

Testbed Topologies

Topology device software and inter node communication details -

  1. ODL Node – 1 or 3 Node ODL Environment should be used

  2. Switch Node - 2 or 3 Nodes with OVS 2.6

Test Topology
+--------+       +--------+
|  BR1   | data  |  BR2   |
|        <------->        |
+---^----+       +----^---+
    |       mgmt      |
+---v-----------------v---+
|                         |
|           ODL           |
|                         |
|        odl-genius       |
|                         |
+-------------------------+
Hardware Requirements
  1. 1 controller with 2 OVS for functional testing

  2. 3 controller with 2 OVS for functional testing

Test Suite Requirements

Test Suite Bringup

In test suite bringup, build the topology as described in the Test Topology and bring all the tunnels UP.

Test Suite Cleanup

Final steps after all tests in suite are done. This should include any cleanup, sanity checks, configuration etc. that needs to be done once all test cases in suite are done.

Debugging

Capture any debugging information that is captured at start of suite and end of suite.

Test Cases

Verify Tunnel Creation with enabled IFM Bypass

Change the config parameter to enable IFM Bypass of ITM provisioning and Verify Tunnel Creation is successful.

Test Steps and Pass Criteria
  1. Create the VxLAN Tunnels between the OVS to OVS.

    1. Verify that the tunnels are built properly between all the End Points with VxLan Encapsulation.

    2. Change the configuration parameter as per the new way of ITM provisioning.

    3. Verify the tunnel creation is successful.

Verify Tunnel Creation with disabled IFM Bypass

Change the config parameter to enable without IFM Bypass of ITM provisioning and Verify Tunnel Creation is successful.

Test Steps and Pass Criteria
  1. Create the VxLAN Tunnels between the OVS to OVS.

    1. Verify that the tunnels are built properly between all the End Points with VxLan Encapsulation.

    2. Change the configuration parameter as per the old way of ITM provisioning.

    3. Verify the tunnel creation is successful.

Change ITM provisioning parameter to enable IFM Bypass

Clean up existing ITM config, change ITM provisioning parameter to provide IFM Bypass, Verify ITM creation succeeds.

Test Steps and Pass Criteria
  1. Create the VxLAN Tunnels between the OVS to OVS.

    1. Check for any existing ITM configuration in the system.

    2. Do a clean up of all the existing ITM configuration.

    3. Configure the ITM as per the new way of provisioning.

    4. Verify the tunnel creation is successful.

Change ITM provisioning parameter to disable IFM Bypass

Clean up existing ITM config, change ITM provisioning parameter to disable IFM Bypass, Verify ITM creation succeeds.

Test Steps and Pass Criteria
  1. Create the VxLAN Tunnels between the OVS to OVS.

    1. Check for any existing ITM configuration in the system.

    2. Do a clean up of all the existing ITM configuration.

    3. Configure the ITM as per the old way of provisioning

    4. Verify the tunnel creation is successful

Bring DOWN the datapath

Configure ITM tunnel Mesh, Bring DOWN the datapath and Verify Tunnel status is updated in ODL.

Test Steps and Pass Criteria
  1. Create the VxLAN Tunnels between the OVS to OVS.

    1. Verify that the tunnels are built properly between all the End Points with VxLan Encapsulation.

    2. Configure the ITM tunnel mesh.

    3. Verify the tunnel creation is successful.

    4. Bring down the datapath on the system.

    5. Verify the tunnel status is updated in ODL.

Bring UP the datapath

Configure ITM tunnel Mesh, Bring UP the datapath and Verify Tunnel status is updated in ODL.

Test Steps and Pass Criteria
  1. Create the VxLAN Tunnels between the OVS to OVS.

    1. Configure the ITM tunnel mesh.

    2. Verify the tunnel creation is successful.

    3. Bring UP the datapath on the system.

    4. Verify the tunnel status is updated in ODL.

Enable BFD Monitoring for ITM Tunnels

Change ITM config parameters to enable IFM Bypass and Verify BFD monitoring can be enabled for ITM tunnels.

Test Steps and Pass Criteria
  1. Configure the tunnel monitoring to BFD.

    1. Bring up the ITM config as per the new way of provisioning.

    2. Verify the tunnel creation is successful.

    3. Verify whether the BFD monitoring is enabled.

Disable BFD Monitoring for ITM Tunnels

Change ITM config parameters to enable IFM Bypass and Verify BFD monitoring can be disabled for ITM tunnels

Test Steps and Pass Criteria
  1. Configure the tunnel monitoring to BFD.

    1. Bring up the ITM config as per the new way of provisioning.

    2. Verify the tunnel creation is successful.

    3. Disable BFD monitoring.

    4. Verify whether the BFD monitoring is disabled

Enable/Disable BFD to verify tunnel status alarm

Enable BFD and check for the data path alarm and as well as control path alarms.

Test Steps and Pass Criteria
  1. Configure the tunnel monitoring to BFD.

    1. Bring up the ITM config as per the new way of provisioning.

    2. Verify the tunnel creation is successful.

    3. Verify whether the BFD monitoring is enabled.

    4. Bring down the tunnel and check for the Alarms.

    5. Disable alarm support and verify whether alarm is not reporting.

Verify Tunnel down alarm is reported

Enable Tunnel status alarm and Bring down the Tunnel port, and verify Tunnel down alarm is reported.

Test Steps and Pass Criteria
  1. Configure the tunnel monitoring to BFD.

    1. Bring up the ITM config as per the new way of provisioning.

    2. Verify the tunnel creation is successful.

    3. Verify whether the BFD monitoring is enabled.

    4. Enable the alarms for the tunnel UP/DOWN notification.

    5. Bring down the tunnel and check for the Alarms.

Verify Tunnel status for the Disconnected DPN

Disconnect DPN from ODL and verify Tunnel status is shown as UNKNOWN for the Disconnected DPN.

Test Steps and Pass Criteria
  1. Configure the tunnel monitoring to BFD.

    1. Bring up the ITM config as per the new way of provisioning.

    2. Verify the tunnel creation is successful.

    3. Disconnect the DPN from the ODL.

    4. Verify tunnel status is shown as ‘UNKNOWN’ for the disconnected DPN.

Verify Tunnel down alarm is cleared

Enable Tunnel status alarm and Bring up the Tunnel port which is down, and verify Tunnel down alarm is cleared.

Test Steps and Pass Criteria
  1. Configure the tunnel monitoring to BFD.

    1. Bring up the ITM config as per the new way of provisioning.

    2. Verify the tunnel creation is successful.

    3. Enable the alarms for the tunnel UP/DOWN notification.

    4. Bring ‘DOWN’ the tunnel and check for the alarm notification.

Perform ODL reboot

Create ITM with provisioning config parameter set to true, Perform ODL reboot and Verify dataplane is intact.

Test Steps and Pass Criteria
  1. Create the VxLAN Tunnels between the OVS to OVS.

    1. Bring up the ITM config as per the new way of provisioning.

    2. Do a ODL Reboot.

    3. Verify the dataplane is intact.

Verify Re-sync is successful once connection is up

Create ITM with provisioning config parameter set to true for IFM Bypass, bring down control plane connection(between ODL–OVS), modify ODL config, Verify Re-sync is successful once connection is up.

Test Steps and Pass Criteria
  1. Create the VxLAN Tunnels between the OVS to OVS.

    1. Bring up the ITM config as per the new way of provisioning.

    2. Bring down the control plane connection between ODL – OVS.

    3. Modify ODL configuration.

    4. Check whether the Re-sync is successful once the connection is UP.

Verify ITM creation with 2 DPNs
Test Steps and Pass Criteria
  1. Create the VxLAN Tunnels between the OVS to OVS.

    1. Check for any existing ITM configuration in the system.

    2. Do a clean up of all the existing ITM configuration.

    3. Configure the ITM as per the old way of provisioning

    4. Verify the tunnel creation is successful

Verify TEP Creation

Add new TEP’s and verify Creation is successful

Test Steps and Pass Criteria
  1. Create the VxLAN Tunnels between the OVS to OVS.

    1. Verify that the tunnels are built properly between all the End Points with VxLan Encapsulation.

    2. Add new TEP’s to the existing configuration.

    3. Monitor the time taken for tunnel addition and flow programming.

    4. Verify the tunnel creation is successful.

Verify TEP Deletion

Delete few TEP’s and verify Deletion is successful and no stale(flows,config) is left.

Test Steps and Pass Criteria
  1. Create the VxLAN Tunnels between the OVS to OVS.

    1. Verify that the tunnels are built properly between all the End Points with VxLan Encapsulation.

    2. Delete the newly added TEP configuration.

    3. Monitor the time taken for tunnel deletion and flow re-programming.

    4. Verify the deletion is successful and no stale entries left.

Verify ITM creation by Re-adding TEPs

Re-add deleted TEP’s and Verify ITM creation is successful

Test Steps and Pass Criteria
  1. Create the VxLAN Tunnels between the OVS to OVS.

    1. Verify that the tunnels are built properly between all the End Points with VxLan Encapsulation.

    2. Re-add the deleted TEP entries

    3. Monitor the time taken for tunnel re-addition and flow programming

    4. Verify the tunnel creation is successful.

Verify Deletion of All TEPs

Delete all TEP’s and verify Deletion is successful and no stale(flows,config) is left

Test Steps and Pass Criteria
  1. Create the VxLAN Tunnels between the OVS to OVS.

    1. Verify that the tunnels are built properly between all the End Points with VxLan Encapsulation.

    2. Delete all the TEP entries.

    3. Monitor the time taken for tunnel deletion and flow re-programming

    4. Verify the deletion is successful and no stale entries left.

Implementation

N.A.

Assignee(s)


Primary assignee:

Faseela K

Other contributors:

Nidhi Adhvaryu, Sathwik Boggarapu

Service Recovery Test Plan

Test plan for testing service recovery manager functionalities.

Test Setup

Test setup consists of ODL with odl-genius-rest installed and two switches (DPNs) connected to ODL over OVSDB and OpenflowPlugin.

Testbed Topologies

This suite uses the default Genius topology.

Default Topology
+--------+       +--------+
|  BR1   | data  |  BR2   |
|        <------->        |
+---^----+       +----^---+
    |       mgmt      |
+---v-----------------v---+
|                         |
|           ODL           |
|                         |
|        odl-genius       |
|                         |
+-------------------------+

Test Suite Requirements

Test Suite Bringup

Following steps are followed at beginning of test suite:

  • Bring up ODL with odl-genius feature installed

  • Add bridge to DPN

  • Add tap interfaces to bridge created above

  • Add OVSDB manager to DPN using ovs-vsctl set-manager

  • Connect bridge to OpenFlow using ovs-vsctl set-controller

  • Repeat above steps for other DPNs

  • Create REST session to ODL

Test Suite Cleanup

Following steps are followed at the end of test suite:

  • Delete bridge DPN

  • Delete OVSDB manager ‘ovs-vsctl del-manager’

  • Repeat above steps for other DPNs

  • Delete REST session to ODL

Debugging

Capture any debugging information that is captured at start of suite and end of suite.

Test Cases

ITM TEP Recovery

Verify SRM by recovering TEP instance by using transportzone name and TEP’s ip address.

Test Steps and Pass Criteria
  1. Create the VxLAN Tunnels between the OVS to OVS.

    1. Create the setup as per the default-topology.

    2. Create ITM tunnel using REST API.

    3. Verify the ITM is created on both controller and ovs.

    4. Delete a tunnel port from any one of the OVS manually or delete any ietf Tunnel interface via REST.

    5. Check ietf Tunnel interface is deleted on both controller and ovs.

    6. Login to karaf and issue instance recovery command using transportzone name and TEP’s ip address.

    7. Above deleted ITM is recovered

    8. Verify in controller and ovs.

ITM Transportzone Recovery

Verify SRM by recovering TZ instance using transportzone name.

Test Steps and Pass Criteria
  1. Create the VxLAN Tunnels between the OVS to OVS.

    1. Create the setup as per the default-topology.

    2. Create ITM tunnel using REST API.

    3. Verify the ITM is created on both controller and ovs.

    4. Delete a tunnel port from any one of the OVS manually or delete any ietf Tunnel interface via REST.

    5. Check ietf Tunnel interface is deleted on both controller and ovs.

    6. Login to karaf and issue instance recovery command using transportzone name.

    7. Above deleted ITM is recovered

    8. Verify in controller and ovs.

ITM Service Recovery

Verify SRM by recovering service ITM.

Test Steps and Pass Criteria
  1. Create the VxLAN Tunnels between the OVS to OVS.

    1. Create the setup as per the default-topology.

    2. Create ITM tunnel using REST API.

    3. Verify the ITM is created on both controller and ovs.

    4. Delete any one of the ietf Tunnel interface in ietf-interface config datastore via REST.

    5. Check ietf Tunnel interface is deleted on both controller and ovs.

    6. Login to karaf and issue service recovery command using service name.

    7. Above deleted ITM is recovered

    8. Verify in controller and ovs.

IFM Instance Recovery

Verify SRM instance recovery using interface port name.

Test Steps and Pass Criteria
  1. Create the VxLAN Tunnels between the OVS to OVS.

    1. Create the setup as per the default-topology.

    2. Create ITM tunnel using REST API.

    3. Verify the ITM is created on both controller and ovs.

    4. Delete a tunnel port from any one of the OVS manually or delete any ietf Tunnel interface via REST.

    5. Check ietf Tunnel interface is deleted on both controller and ovs.

    6. Login to karaf and issue instance recovery command using interface port name.

    7. Above deleted ITM is recovered

    8. Verify in controller and ovs.

IFM Service Recovery

Verify SRM by recovering service IFM.

Test Steps and Pass Criteria
  1. Create the VxLAN Tunnels between the OVS to OVS.

    1. Create the setup as per the default-topology.

    2. Create ITM tunnel using REST API.

    3. Verify the ITM is created on both controller and ovs.

    4. Delete any one of the Tunnel interface in network-topology interface datastore via REST.

    5. Check Tunnel interface is deleted on both controller and ovs.

    6. Login to karaf and issue service recovery command using service name.

    7. Above deleted ITM is recovered

    8. Verify in controller and ovs.

Implementation

N.A.

Assignee(s)


Primary assignee:

Nidhi Adhvaryu

Other contributors:

N.A

Genius User Guide

Overview

The Genius project provides generic network interfaces, utilities and services. Any OpenDaylight application can use these to achieve interference-free co-existence with other applications using Genius.

Modules and Interfaces

In the first phase, delivered in the OpenDaylight Boron release, Genius provides the following modules —

  • Modules providing a common view of network interfaces for different services

    • Interface (logical port) Manager

      • Allows bindings/registration of multiple services to logical ports/interfaces

      • Ability to plug in different types of southbound protocol renderers

    • Overlay Tunnel Manager

      • Creates and maintains overlay tunnels between configured Tunnel Endpoints (TEPs)

  • Modules providing commonly used functions as shared services to avoid duplication of code and waste of resources

    • Liveness Monitor

      • Provides tunnel/nexthop liveness monitoring services

    • ID Manager

      • Generates persistent unique integer IDs

    • MD-SAL Utils

      • Provides common generic APIs for interaction with MD-SAL

Interface Manager Operations

Creating interfaces

The YANG file Data Model odl-interface.yang contains the interface configuration data-model.

You can create interfaces at the MD-SAL Data Node Path /config/if:interfaces/interface, with the following attributes —

*Common attributes*

  • name — unique interface name, can be any unique string (e.g., UUID string)

  • type — interface type, currently supported iana-if-type:l2vlan and iana-if-type:tunnel

  • enabled — admin status, possible values true or false

  • parent-refs : used to specify references to parent interface/port feeding to this interface

  • datapath-node-identifier — identifier for a fixed/physical dataplane node, can be physical switch identifier

  • parent-interface — can be a physical switch port (in conjunction of above), virtual switch port (e.g., neutron port) or another interface

  • list node-identifier — identifier of the dependent underlying configuration protocol

    • topology-id — can be ovsdb configuration protocol

    • node-id — can be hwvtep node-id

*Type specific attributes*

  • when type = l2vlan

    • vlan-id — VLAN id for trunk-member l2vlan interfaces

    • l2vlan-mode — currently supported ones are transparent, trunk or trunk-member

  • when type = stacked_vlan (Not supported yet)

    • stacked-vlan-id — VLAN-Id for additional/second VLAN tag

  • when type = tunnel

    • tunnel-interface-type — tunnel type, currently supported ones are:

      • tunnel-type-vxlan

      • tunnel-type-gre

      • tunnel-type-mpls-over-gre

    • tunnel-source — tunnel source IP address

    • tunnel-destination — tunnel destination IP address

    • tunnel-gateway — gateway IP address

    • monitor-enabled — tunnel monitoring enable control

    • monitor-interval — tunnel monitoring interval in milliseconds

  • when type = mpls (Not supported yet)

    • list labelStack — list of labels

    • num-labels — number of labels configured

Supported REST calls are GET, PUT, DELETE, POST

Creating L2 port interfaces

Interfaces on normal L2 ports (e.g. Neutron tap ports) are created with type l2vlan and l2vlan-mode as transparent. This type of interface classifies packets passing through a particular L2 (OpenFlow) port. In dataplane, packets belonging to this interface are classified by matching in-port against the of-port-id assigned to the base port as specified in parent-interface.

URL: /restconf/config/ietf-interfaces:interfaces

Sample JSON data

"interfaces": {
    "interface": [
        {
            "name": "4158408c-942b-487c-9a03-0b603c39d3dd",
            "type": "iana-if-type:l2vlan",                       <--- interface type 'l2vlan' for normal L2 port
            "odl-interface:l2vlan-mode": "transparent",          <--- 'transparent' VLAN port mode allows any (tagged, untagged) ethernet packet
            "odl-interface:parent-interface": "tap4158408c-94",  <--- port-name as it appears on southbound interface
            "enabled": true
        }
    ]
}
Creating VLAN interfaces

A VLAN interface is created as a l2vlan interface in trunk-member mode, by configuring a VLAN-Id and a particular L2 (vlan trunk) interface. Parent VLAN trunk interface is created in the same way as the transparent interface as specified above. A trunk-member interface defines a flow on a particular L2 port and having a particular VLAN tag. On ingress, after classification the VLAN tag is popped out and corresponding unique dataplane-id is associated with the packet, before delivering the packet to service processing. When a service module delivers the packet to this interface for egress, it pushes corresponding VLAN tag and sends the packet out of the parent L2 port.

URL: /restconf/config/ietf-interfaces:interfaces

Sample JSON data

"interfaces": {
    "interface": [
        {
            "name": "4158408c-942b-487c-9a03-0b603c39d3dd:100",
            "type": "iana-if-type:l2vlan",
            "odl-interface:l2vlan-mode": "trunk-member",        <--- for 'trunk-member', flow is classified with particular vlan-id on an l2 port
            "odl-interface:parent-interface": "4158408c-942b-487c-9a03-0b603c39d3dd",  <--- Parent 'trunk' iterface name
            "odl-interface:vlan-id": "100",
            "enabled": true
        }
    ]
}
Creating Overlay Tunnel Interfaces

An overlay tunnel interface is created with type tunnel and particular tunnel-interface-type. Tunnel interfaces are created on a particular data plane node (virtual switches) with a pair of (local, remote) IP addresses. Currently supported tunnel interface types are VxLAN, GRE and MPLSoverGRE.

URL: /restconf/config/ietf-interfaces:interfaces

Sample JSON data

"interfaces": {
    "interface": [
        {
            "name": "MGRE_TUNNEL:1",
            "type": "iana-if-type:tunnel",
            "odl-interface:tunnel-interface-type": "odl-interface:tunnel-type-mpls-over-gre",
            "odl-interface:datapath-node-identifier": 156613701272907,
            "odl-interface:tunnel-source": "11.0.0.43",
            "odl-interface:tunnel-destination": "11.0.0.66",
            "odl-interface:monitor-enabled": false,
            "odl-interface:monitor-interval": 10000,
            "enabled": true
        }
    ]
}

Binding services on interface

The YANG file odl-interface-service-bindings.yang contains the service binding configuration data model.

An application can bind services to a particular interface by configuring the MD-SAL data node at path /config/interface-service-binding. Binding services on an interface allows a particular service to pull traffic arriving on that interface, depending upon the service priority. Service modules can specify OpenFlow rules to be applied to packets belonging to the interface; usually these rules send the packet to a specific service table/pipeline. Service modules are responsible for sending the packet back (if not consumed) to the service dispatcher table, so that the next service can process the packet.

URL: /restconf/config/interface-service-bindings:service-bindings/

Sample JSON data

"service-bindings": {
  "services-info": [
    {
      "interface-name": "4152de47-29eb-4e95-8727-2939ac03ef84",
      "bound-services": [
        {
          "service-name": "ELAN",
          "service-type": "interface-service-bindings:service-type-flow-based"
          "service-priority": 3,
          "flow-priority": 5,
          "flow-cookie": 134479872,
          "instruction": [
            {
              "order": 2,
              "go-to-table": {
                "table_id": 50
              }
            },
            {
              "order": 1,
              "write-metadata": {
                "metadata": 83953188864,
                "metadata-mask": 1099494850560
              }
            }
          ]
        },
        {
         "service-name": "L3VPN",
         "service-type": "interface-service-bindings:service-type-flow-based"
         "service-priority": 2,
         "flow-priority": 10,
         "flow-cookie": 134217729,
         "instruction": [
            {
              "order": 2,
              "go-to-table": {
                "table_id": 21
              }
            },
            {
              "order": 1,
              "write-metadata": {
                "metadata": 100,
                "metadata-mask": 4294967295
              }
            }
          ]
        }
      ]
    }
  ]
}

Interface Manager RPCs

In addition to the configuration interfaces defined above, Interface Manager also provides several RPCs to access interface operational data and other helpful information. Interface Manager RPCs are defined in odl-interface-rpc.yang.

The following RPCs are available —

get-dpid-from-interface

This RPC is used to retrieve the dpid of the switch hosting the root port, given an interface name.

rpc get-dpid-from-interface {
    description "used to retrieve dpid from interface name";
    input {
        leaf intf-name {
            type string;
        }
    }
    output {
        leaf dpid {
            type uint64;
        }
    }
}
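
As with other RESTCONF RPCs, this can be invoked with a POST to the operations URL. The example below is an illustrative sketch only: it assumes the RPC module name matches the file name (odl-interface-rpc) and reuses the interface name and dpid from the earlier samples.

URL: /restconf/operations/odl-interface-rpc:get-dpid-from-interface

Sample JSON input

{
    "input": {
        "intf-name": "4158408c-942b-487c-9a03-0b603c39d3dd"
    }
}

Sample JSON output

{
    "output": {
        "dpid": 156613701272907
    }
}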
get-port-from-interface

This RPC is used to retrieve the southbound port attributes for a given interface name.

rpc get-port-from-interface {
    description "used to retrieve south bound port attributes from the interface name";
    input {
        leaf intf-name {
            type string;
        }
    }
    output {
        leaf dpid {
            type uint64;
        }
        leaf portno {
            type uint32;
        }
        leaf portname {
            type string;
        }
    }
}
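
Invocation follows the same pattern as get-dpid-from-interface (POST to /restconf/operations/odl-interface-rpc:get-port-from-interface with an intf-name input, assuming the odl-interface-rpc module name). An illustrative output, reusing values from the earlier samples and a placeholder port number, would look like:

Sample JSON output

{
    "output": {
        "dpid": 156613701272907,
        "portno": 2,
        "portname": "tap4158408c-94"
    }
}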
get-egress-actions-for-interface

This RPC is used to retrieve the group actions to use for a given interface name.

rpc get-egress-actions-for-interface {
    description "used to retrieve group actions to use from interface name";
    input {
        leaf intf-name {
            type string;
            mandatory true;
        }
        leaf tunnel-key {
            description "It can be VNI for VxLAN tunnel ifaces, Gre Key for GRE tunnels, etc.";
            type uint32;
            mandatory false;
        }
    }
    output {
        uses action:action-list;
    }
}
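
For tunnel interfaces, the optional tunnel-key can be supplied along with the interface name. The sketch below is illustrative only; it assumes the odl-interface-rpc module name and uses the tunnel interface from the earlier sample with a placeholder key. The output is an OpenFlow action list (uses action:action-list) whose contents depend on the interface type.

URL: /restconf/operations/odl-interface-rpc:get-egress-actions-for-interface

Sample JSON input

{
    "input": {
        "intf-name": "MGRE_TUNNEL:1",
        "tunnel-key": 100
    }
}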
get-egress-instructions-for-interface

This RPC is used to retrieve the flow instructions to use for a given interface name.

rpc get-egress-instructions-for-interface {
    description "used to retrieve flow instructions to use from interface name";
    input {
        leaf intf-name {
            type string;
            mandatory true;
        }
        leaf tunnel-key {
            description "It can be VNI for VxLAN tunnel ifaces, Gre Key for GRE tunnels, etc.";
            type uint32;
            mandatory false;
        }
    }
    output {
        uses offlow:instruction-list;
    }
}
get-endpoint-ip-for-dpn

This RPC is used to get the local IP of the tunnel/trunk interface on a particular DPN (Data Plane Node).

rpc get-endpoint-ip-for-dpn {
    description "to get the local ip of the tunnel/trunk interface";
    input {
        leaf dpid {
            type uint64;
        }
    }
    output {
        leaf-list local-ips {
            type inet:ip-address;
        }
    }
}
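
An illustrative invocation, assuming the odl-interface-rpc module name and reusing the dpid and tunnel source address from the tunnel sample above:

URL: /restconf/operations/odl-interface-rpc:get-endpoint-ip-for-dpn

Sample JSON input

{
    "input": {
        "dpid": 156613701272907
    }
}

Sample JSON output

{
    "output": {
        "local-ips": [
            "11.0.0.43"
        ]
    }
}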
get-interface-type

This RPC is used to get the type of the interface (vlan/vxlan or gre).

rpc get-interface-type {
description "to get the type of the interface (vlan/vxlan or gre)";
    input {
        leaf intf-name {
            type string;
        }
    }
    output {
        leaf interface-type {
            type identityref {
                base if:interface-type;
            }
        }
    }
}
get-tunnel-type

This RPC is used to get the type of the tunnel interface (vxlan or gre).

rpc get-tunnel-type {
description "to get the type of the tunnel interface (vxlan or gre)";
    input {
        leaf intf-name {
            type string;
        }
    }
    output {
        leaf tunnel-type {
            type identityref {
                base odlif:tunnel-type-base;
            }
        }
    }
}
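
The returned tunnel-type is an identityref derived from tunnel-type-base. An illustrative call for the MGRE tunnel configured earlier (assuming the odl-interface-rpc module name):

URL: /restconf/operations/odl-interface-rpc:get-tunnel-type

Sample JSON input

{
    "input": {
        "intf-name": "MGRE_TUNNEL:1"
    }
}

Sample JSON output

{
    "output": {
        "tunnel-type": "odl-interface:tunnel-type-mpls-over-gre"
    }
}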
get-nodeconnector-id-from-interface

This RPC is used to get the node-connector-id associated with an interface.

rpc get-nodeconnector-id-from-interface {
description "to get nodeconnector id associated with an interface";
    input {
        leaf intf-name {
            type string;
        }
    }
    output {
        leaf nodeconnector-id {
            type inv:node-connector-id;
        }
    }
}
get-interface-from-if-index

This RPC is used to get the interface associated with an if-index (dataplane interface id).

rpc get-interface-from-if-index {
    description "to get interface associated with an if-index";
    input {
        leaf if-index {
            type int32;
        }
    }
    output {
        leaf interface-name {
            type string;
        }
    }
}
create-terminating-service-actions

This RPC is used to create the tunnel termination service table entries.

rpc create-terminating-service-actions {
description "create the ingress terminating service table entries";
    input {
         leaf dpid {
             type uint64;
         }
         leaf tunnel-key {
             type uint64;
         }
         leaf interface-name {
             type string;
         }
         uses offlow:instruction-list;
    }
}
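
The instruction list in the input follows the same offlow:instruction-list form used in the service-binding sample above. The sketch below is illustrative only; the module name, tunnel-key and table id are placeholders reused from earlier samples.

URL: /restconf/operations/odl-interface-rpc:create-terminating-service-actions

Sample JSON input

{
    "input": {
        "dpid": 156613701272907,
        "tunnel-key": 100,
        "interface-name": "MGRE_TUNNEL:1",
        "instruction": [
            {
                "order": 0,
                "go-to-table": {
                    "table_id": 50
                }
            }
        ]
    }
}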
remove-terminating-service-actions

This RPC is used to remove the tunnel termination service table entries.

rpc remove-terminating-service-actions {
description "remove the ingress terminating service table entries";
    input {
         leaf dpid {
             type uint64;
         }
         leaf interface-name {
             type string;
         }
         leaf tunnel-key {
             type uint64;
         }
    }
}

ID Manager

TBD.