Table of Contents
https://git.opendaylight.org/gerrit/#/q/topic:ITM-Scale-Improvements
ITM creates the tunnel mesh among switches with the help of interface manager. This spec describes the re-design of ITM to create the tunnel mesh independently, without interface manager. This is expected to improve ITM performance and therefore support a larger tunnel mesh.
ITM creates tunnels among the switches. When ITM receives the configuration from NBI, it creates interfaces in the ietf-interface config DS, which interface manager listens to in order to create the tunnel constructs on the switches. This involves an additional hop from ITM to interface manager, which generates many DCNs and DS reads and writes. This puts a lot of load on the system, especially in a scale setup. Also, tunnel interfaces are categorized as generic ietf-interfaces along with tap and vlan interfaces, and interface manager deals with all of these interfaces. Applications listening for interface state get updates on tunnel interfaces both from interface manager and from ITM. This degrades performance, and hence internal tunnel mesh creation does not scale well beyond 80 switches.
This feature will support the following use cases.
In order to improve the scale numbers, handling of tunnel interfaces is separated from other interfaces. Hence, the ITM module is being re-architected to bypass interface manager and create/delete the tunnels between the switches directly. ITM will also provide the tunnel status without the support of interface manager.
Bypassing interface manager provides the following advantages:

- removes the creation of ietf interfaces in the config DS
- reduces the number of DCNs being generated
- reduces the number of datastore reads and writes
- applications get tunnel updates only from ITM
All this should improve the performance and thereby the scale numbers.
Further improvements remain possible beyond the scope of this feature.
This feature cannot be used along with the per-tunnel-specific service binding use case, as the two are not supported together. In particular, the Multiple VxLAN Tunnel feature will not work with this feature, since it needs service binding on tunnels.
Most of the code for the proposed changes will be in a separate package for code maintainability. There will be minimal changes in some common code in ITM and interface manager to switch between the old and the new way of tunnel creation.
If the ``itm-direct-tunnels`` flag is ON, then:

- the ``itm:transport-zones`` listener will trigger the new code upon receiving transport zone configuration.
- interface manager will ignore events pertaining to OVSDB tunnel port and tunnel interface related inventory changes.
- When ITM gets the NBI tep configuration:

  - ITM wires the tunnels by forming the tunnel interface names and stores the tep information per dpn. ITM does not create the tunnel interfaces in the ietf-interface config DS; it stores the tunnel names in ``dpn-teps-state`` in ``itm-state.yang``.
  - ITM allocates a ``group id`` in all other CSSs in order to reach this tep.
  - ITM allocates an ``if-index`` for each tep interface. This will be stored in ``if-indexes-interface-map`` in ``odl-itm-meta.yang``.
  - ITM adds the tunnel ports to the bridge, irrespective of the switch being connected, using the bridge information from ``odl-itm-meta.yang``.
  - ITM listens to ``OvsdbBridgeAugmentation``; when the switch gets connected, it adds the ports to the bridge (in the pre-configured case).
  - ITM listens to ``FlowCapableNodeConnector`` updates to:

    - push the table 0 flow entries,
    - populate ``tunnels_state`` in ``itm-state.yang`` with the tunnel state that comes in the OF port status,
    - update the group with watch-port for handling traffic switchover in the dataplane.

- If this feature is not enabled, then ITM will take the usual route of configuring ietf-interfaces.
If ``alarm-generation-enabled`` is enabled, then ITM registers for changes in ``tunnels_state`` to generate the alarms.
ITM will support individual tunnels to be monitored:

- If the global monitoring flag is enabled, then all tunnels will be monitored.
- If the global flag is turned OFF, then the individual per-tunnel monitoring flag takes effect.
- ITM will support dynamic enable/disable of the global BFD flag and the individual flags.
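The interaction between the global flag and the per-tunnel flag can be sketched as below (a hypothetical helper for illustration only, not actual ITM code):

```python
def is_tunnel_monitored(global_bfd_enabled: bool, per_tunnel_flag: bool) -> bool:
    """Effective BFD monitoring decision for one tunnel.

    When the global monitoring flag is enabled, all tunnels are
    monitored; only when it is OFF does the individual per-tunnel
    flag take effect.
    """
    return global_bfd_enabled or per_tunnel_flag
```

Dynamic enable/disable then amounts to re-evaluating this decision whenever either flag changes.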
The BFD dampening logic for BFD states is as follows:

- On tunnel creation, ITM will consider the initial tunnel status to be UP and LIVE and mark the tunnel as being in 'dampening' state.
- If ITM receives an UP and LIVE event, the tunnel comes out of the dampening state; no change/event is triggered.
- If ITM does not receive UP and LIVE for a configured duration, it will set the tunnel state to DOWN.
- There will be a configuration parameter for the above: ``bfd-dampening-timeout``.
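The dampening behaviour above can be sketched as a small state holder (illustrative only; class and method names are assumptions, not the actual implementation):

```python
import time


class BfdDampening:
    """Sketch of the BFD dampening logic described above.

    On tunnel creation the status is assumed UP and LIVE and the tunnel
    is placed in 'dampening' state. A genuine UP-and-LIVE event ends
    dampening silently; if none arrives within bfd_dampening_timeout
    seconds, the tunnel is marked DOWN.
    """

    def __init__(self, bfd_dampening_timeout: float, now=time.monotonic):
        self.timeout = bfd_dampening_timeout
        self.now = now
        self.created_at = now()
        self.dampening = True
        self.status = "UP"          # assumed UP and LIVE initially

    def on_bfd_event(self, up_and_live: bool) -> None:
        if self.dampening and up_and_live:
            self.dampening = False  # leave dampening; no event is raised

    def on_timer(self) -> None:
        if self.dampening and self.now() - self.created_at >= self.timeout:
            self.dampening = False
            self.status = "DOWN"    # no UP/LIVE seen within the window
```

The `now` parameter is injectable only to make the sketch testable; the real timeout handling would use the controller's timer infrastructure.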
External tunnel (HWVTEP and DC Gateway) handling will take the same existing path, that is, through interface manager.
OF Tunnel (flow based tunnelling) implementation will also be done directly by ITM following the same approach.
The pipeline will change, as the egress action will point to a group instead of an output on a port.
A new container ``dpn-teps-state`` will be added to the config DS:
container dpn-teps-state {
    list dpns-teps {
key "source-dpn-id";
leaf source-dpn-id {
type uint64;
}
leaf tunnel-type {
type identityref {
base odlif:tunnel-type-base;
}
}
leaf group-id {
type uint32;
}
/* Remote DPNs to which this DPN-Tep has a tunnel */
list remote-dpns {
key "destination-dpn-id";
leaf destination-dpn-id {
type uint64;
}
leaf tunnel-name {
type string;
}
leaf monitor-enabled { // Will be enhanced to support monitor id.
type boolean;
default true;
}
}
}
}
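For illustration, one possible RESTCONF (JSON) rendering of an entry in this container is shown below; the dpn ids, group id and tunnel name are made-up example values:

```
{
  "dpn-teps-state": {
    "dpns-teps": [
      {
        "source-dpn-id": 1,
        "tunnel-type": "odl-interface:tunnel-type-vxlan",
        "group-id": 150000,
        "remote-dpns": [
          {
            "destination-dpn-id": 2,
            "tunnel-name": "tun414a856a6a4",
            "monitor-enabled": true
          }
        ]
      }
    ]
  }
}
```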
A new YANG file, ``odl-itm-meta.yang``, will be created to store OVS bridge related information.
container bridge-tunnel-info {
description "Contains the list of dpns along with the tunnel interfaces configured on them.";
list ovs-bridge-entry {
key dpid;
leaf dpid {
type uint64;
}
leaf ovs-bridge-reference {
type southbound:ovsdb-bridge-ref;
description "This is the reference to an ovs bridge";
}
list ovs-bridge-tunnel-entry {
key tunnel-name;
leaf tunnel-name {
type string;
}
}
}
}
container ovs-bridge-ref-info {
config false;
description "The container that maps dpid with ovs bridge ref in the operational DS.";
list ovs-bridge-ref-entry {
key dpid;
leaf dpid {
type uint64;
}
leaf ovs-bridge-reference {
type southbound:ovsdb-bridge-ref;
description "This is the reference to an ovs bridge";
}
}
}
container if-indexes-tunnel-map {
config false;
list if-index-tunnel {
key if-index;
leaf if-index {
type int32;
}
leaf interface-name {
type string;
}
}
}
New config parameters to be added to ``interfacemanager-config``
New config parameters to be added to ``itm-config``
The RPC call ``itm-rpc:get-egress-action`` will return the group id which will point to the tunnel port (when the tunnel port is created on the switch) between the source and destination dpn ids.
rpc get-egress-action {
input {
leaf source-dpid {
type uint64;
}
leaf destination-dpid {
type uint64;
}
leaf tunnel-type {
type identityref {
base odlif:tunnel-type-base;
}
}
}
output {
leaf group-id {
type uint32;
}
}
}
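A sample invocation of this RPC could look like the following; the dpn ids and the returned group id are illustrative values only:

```
POST restconf/operations/itm-rpc:get-egress-action
{
  "input": {
    "source-dpid": 1,
    "destination-dpid": 2,
    "tunnel-type": "odl-interface:tunnel-type-vxlan"
  }
}

Sample response:
{
  "output": {
    "group-id": 150000
  }
}
```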
ITM will also support another RPC ``get-tunnel-type``
rpc get-tunnel-type {
description "to get the type of the tunnel interface(vxlan, vxlan-gpe, gre, etc.)";
input {
leaf intf-name {
type string;
}
}
output {
leaf tunnel-type {
type identityref {
base odlif:tunnel-type-base;
}
}
}
}
For the two RPCs above, when this feature is enabled, ITM will service them for internal tunnels; for external tunnels, ITM will forward them to interface manager. When this feature is disabled, ITM will forward the RPCs for both internal and external tunnels to interface manager. Applications should now use these two RPCs from ITM and not from interface manager.
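The routing rule for these two RPCs can be summarized with a small sketch (a hypothetical helper, for illustration only):

```python
def rpc_target(itm_direct_tunnels: bool, is_internal_tunnel: bool) -> str:
    """Which module ultimately services get-egress-action / get-tunnel-type.

    With the feature enabled, ITM answers for internal tunnels and
    forwards external-tunnel requests to interface manager; with the
    feature disabled, everything is forwarded to interface manager.
    """
    if itm_direct_tunnels and is_internal_tunnel:
        return "itm"
    return "interfacemanager"
```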
ITM will enhance the existing RPCs ``create-terminating-service-actions`` and ``remove-terminating-service-actions``. A new RPC will be supported by ITM to enable monitoring of individual tunnels, internal or external.
rpc set-bfd-enable-on-tunnel {
description "used for turning ON/OFF monitoring of individual tunnels";
input {
leaf source-node {
type string;
}
leaf destination-node {
type string;
}
leaf monitoring-params {
type itmcfg:tunnel-monitor-params;
}
}
}
Following are the configuration changes and their impact in OpenDaylight.
``genius-interfacemanager-config.xml``:

- ``itm-direct-tunnels``: boolean parameter which enables or disables the new ITM realization of the tunnel mesh. Default value is ``false``.

``genius-itm-config.xml``:

- ``alarm-generation-enabled``: boolean parameter which enables or disables the generation of alarms by ITM. Default value is ``true``.
- ``bfd-dampening-timeout``: timeout in seconds, which the dampening logic will use.

<interfacemanager-config xmlns="urn:opendaylight:genius:interface:config">
    <itm-direct-tunnels>false</itm-direct-tunnels>
</interfacemanager-config>

<itm-config xmlns="urn:opendaylight:genius:itm:config">
    <alarm-generation-enabled>true</alarm-generation-enabled>
    <bfd-dampening-timeout>30</bfd-dampening-timeout>
</itm-config>
Runtime changes to the parameters of this config file will not be taken into consideration.
The solution is supported on a 3-node cluster.
Upgrading ODL versions from the previous ITM tunnel mesh creation logic to this new tunnel mesh creation logic will be supported. When the ``itm-direct-tunnels`` flag changes from disabled in the previous version to enabled in this version, ITM will automatically mesh the tunnels in the new way and clean up any data persisted by the previous tunnel creation method.
This solution will improve scale numbers by reducing the number of interfaces created in ``ietf-interfaces``, and this will cut down on the additional processing done by interface manager.
This feature will provide fine granularity in BFD monitoring on a per-tunnel basis. This should considerably reduce the number of BFD events generated, by monitoring only those tunnels that require it.
Overall this should improve the ITM performance and scale numbers.
Oxygen
N.A
This feature doesn't add any new karaf feature. Installing any of the below features can enable the service:
odl-genius-rest odl-genius
Before starting the controller, enable this feature in ``genius-interfacemanager-config.xml`` by editing it as follows:
<interfacemanager-config xmlns="urn:opendaylight:genius:interface:config">
<itm-direct-tunnels>true</itm-direct-tunnels>
</interfacemanager-config>
Post the ITM transport zone configuration via REST.
URL: restconf/config/itm:transport-zones/
Sample JSON data
{
"transport-zone": [
{
"zone-name": "TZA",
"subnets": [
{
"prefix": "192.168.56.0/24",
"vlan-id": 0,
"vteps": [
{
"dpn-id": "1",
"portname": "eth2",
"ip-address": "192.168.56.101"
},
{
"dpn-id": "2",
"portname": "eth2",
"ip-address": "192.168.56.102"
}
],
"gateway-ip": "0.0.0.0"
}
],
"tunnel-type": "odl-interface:tunnel-type-vxlan"
}
]
}
URL: restconf/operations/itm-rpc:get-egress-action
This feature will not add any new CLI for configuration. Some debug CLIs to dump the cache information may be added for debugging purposes.
Trello card:
The main work items are:

- handling of the ``itm-direct-tunnels`` flag.
- listener for ``OvsdbBridgeAugmentation``.
- listener for ``FlowCapableNodeConnector``.
- ``bridge-interface-info`` and ``bridge-ref-info`` from ``odl-itm-meta.yang``.
- alarm generation controlled by ``alarm-generation-enabled``.
- ``dpn-teps-state`` in ``itm-state.yang``.

The following work items will be taken up later.
This requires a minimum of OVS 2.8, where the BFD state can be received in of-port events.
The dependent applications in netvirt and SFC will have to use the ITM RPC to get the egress actions. ITM will respond with egress actions for internal tunnels; for external tunnels, ITM will forward the RPC to interface manager, fetch the output and forward it to the applications.
Appropriate UTs will be added for the new code coming in for this feature. This includes, but is not limited to:

- toggling ``alarm-generation-enabled`` and checking whether the alarms were generated/suppressed based on the flag.

The following test cases will be added to genius CSIT:

- toggling ``alarm-generation-enabled`` and checking whether the alarms were generated/suppressed based on the flag.

This will require changes to the User Guide and Developer Guide.
User Guide will need to add information for the below details: for a scale setup, this feature needs to be enabled so as to support a tunnel mesh among a scaled number of DPNs, by setting the ``itm-direct-tunnels`` flag to ``true``. Developer Guide will need to capture how to use the ITM RPCs.