Welcome to OpenDaylight Documentation

The OpenDaylight documentation site acts as a central clearinghouse for OpenDaylight project and release documentation. If you would like to contribute to documentation, refer to the Documentation Guide.

Getting Started with OpenDaylight

OpenDaylight Downloads

Release Notes

Execution

OpenDaylight includes Karaf containers, OSGi (Open Service Gateway Initiative) bundles, and Java class files, which are portable and can run on any Java 8-compliant JVM (Java virtual machine). Any add-on project or feature of a specific project may have additional requirements.

Development

OpenDaylight is written in Java and uses Maven as its build tool. Therefore, the only requirements for developing OpenDaylight projects are a Java Development Kit (Java 8) and Maven.

If an application or tool is built on top of OpenDaylight's REST APIs, it has no special requirements beyond whatever is needed to run the application or tool and make REST calls.

In some instances, OpenDaylight uses the Xtend language. Although Maven downloads all the tools needed to build applications, additional plugins may be required for IDE support.

Projects with additional requirements for execution typically have similar or additional requirements for development. See the relevant release notes for details.

Platform Release Notes
Neon Platform Upgrade

This document describes the steps to help users upgrade to the Neon platform. Refer to the Managed Release Integrated (MRI) projects for more information.

Preparation
Version Bump

Before performing the platform upgrade, do the following to bump the odlparent versions (for example, using the bump-odl-version script):

  1. Update the odlparent version from 3.1.3 to 4.0.14. There should not be any reference to org.opendaylight.odlparent, except for 4.0.14.

bump-odl-version odlparent 3.1.3 4.0.14
  2. Update the direct yangtools version references from 2.0.10 to 2.1.14. There should not be any reference to org.opendaylight.yangtools, except for 2.1.14.
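
By analogy with the odlparent bump above, and assuming the same bump-odl-version helper is used, the corresponding command would be:

bump-odl-version yangtools 2.0.10 2.1.14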

  3. Update the MD-SAL version references (0.14.0-SNAPSHOT or 2.6.0-SNAPSHOT) to 3.0.12. There should not be any reference to org.opendaylight.mdsal, except for 3.0.12.

rpl -R 0.14.0-SNAPSHOT 3.0.12
or
rpl -R 2.6.0-SNAPSHOT 3.0.12
Install Dependent Projects

Before performing the platform upgrade, users must also install any dependent projects. To locally install a dependent project, pull and install the respective neon-mri changes for that project. At a minimum, pull and install controller, AAA and NETCONF.

Perform the following steps to save time when locally installing any dependent project:

  • For a quick install:

mvn -Pq clean install
  • If previously installed, go offline and/or use the no-snapshot-update option.

mvn -Pq -o -nsu clean install
Upgrade the ODL Parent

This section describes how to upgrade to ODL Parent version 4. Refer to the ODL Parent Release Notes for more information on upgrading the ODL Parent.

Maven

ODL Parent 4 requires Maven 3.5.0 or later. Refer to the Maven project site for more information, including the latest downloads and release notes.

Features

The following feature references must be replaced:

  • Replace references to odl-guava-23 with odl-guava.

  • Change any version range that refers to version 3 of the ODL Parent to [4,5) for ODL Parent 4. For example:

<feature name="odl-infrautils-caches">
    <feature version="[4,5)">odl-guava</feature>
</feature>

The following features are available to wrap these dependencies; they should be used whenever a feature depends on the corresponding library:

  • Apache Commons Codec: odl-apache-commons-codec

  • Apache Commons Lang 3: odl-apache-commons-lang3 (please migrate if you are still using version 2).

  • Apache Commons Net: odl-apache-commons-net

  • Apache Commons Text: odl-apache-commons-text

  • Apache SSHD: odl-apache-sshd

Note

For more information on Apache Commons, refer to Apache Commons.

  • Jackson 2.9: odl-jackson-2.9, which replaces odl-jackson-2.8. Any references to the latter must be updated.

The preceding features should be used in the same way as existing ODL Parent features. That is, do not reference them in plain JAR or OSGi bundle POMs; use them only in feature POMs. For example, to use odl-apache-commons-lang3, add the following:

<dependency>
    <groupId>org.opendaylight.odlparent</groupId>
    <artifactId>odl-apache-commons-lang3</artifactId>
    <type>xml</type>
    <classifier>features</classifier>
</dependency>

When using such a feature, ensure that the corresponding feature template exists in the src/main/feature/feature.xml file (in the same module as the feature POM), for example:

<?xml version="1.0" encoding="UTF-8"?>
<features name="YOUR-PROJECT-FEATURES" xmlns="http://karaf.apache.org/xmlns/features/v1.2.0">
    <feature name="YOUR-FEATURE" version="${project.version}">
        <feature version="[4,5)">odl-apache-commons-lang3</feature>
    </feature>
</features>
Mockito

For the Mockito framework, update your tests for the changes in version 2. Refer to What’s new in Mockito 2 and Migrating to Mockito 2.1; the latter is a practical review of the process.
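
The most common changes are that org.mockito.Matchers is replaced by org.mockito.ArgumentMatchers, anyObject() is deprecated, and any(Foo.class) no longer matches null. A minimal, hedged sketch follows; the Service and Request types are hypothetical:

import static org.mockito.ArgumentMatchers.any;  // was org.mockito.Matchers in Mockito 1
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import org.junit.Test;

public class MigrationExampleTest {
    interface Request { }
    interface Service {
        void process(Request request);
    }

    @Test
    public void illustratesMockito2Matchers() {
        Service service = mock(Service.class);
        service.process(mock(Request.class));
        // Mockito 1 style, deprecated or changed in Mockito 2:
        //   verify(service).process(anyObject());         // anyObject() is deprecated in Mockito 2
        //   verify(service).process(any(Request.class));  // matched null in Mockito 1, no longer does
        verify(service).process(any());  // Mockito 2: use any(); use isNull() when null is expected
    }
}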

PowerMock

For the PowerMock framework, revert to an older version of Mockito and Javassist, because the current versions are not compatible with PowerMock. Switch to powermock-api-mockito2 instead of powermock-api-mockito:

<dependency>
  <groupId>org.powermock</groupId>
  <artifactId>powermock-api-mockito2</artifactId>
  <version>1.7.4</version>
  <scope>test</scope>
</dependency>
<dependency>
  <groupId>org.javassist</groupId>
  <artifactId>javassist</artifactId>
  <version>3.21.0-GA</version>
  <scope>test</scope>
</dependency>
<dependency>
  <groupId>org.mockito</groupId>
  <artifactId>mockito-core</artifactId>
  <version>2.8.9</version>
  <scope>test</scope>
</dependency>

If all else fails, you can revert to Mockito 1 and PowerMock 1.6.4, as used in previous versions of the ODL platform:

<dependency>
  <groupId>org.powermock</groupId>
  <artifactId>powermock-api-mockito</artifactId>
  <version>1.6.4</version>
  <scope>test</scope>
</dependency>
<dependency>
  <groupId>org.javassist</groupId>
  <artifactId>javassist</artifactId>
  <version>3.21.0-GA</version>
  <scope>test</scope>
</dependency>
<dependency>
  <groupId>org.mockito</groupId>
  <artifactId>mockito-core</artifactId>
  <version>1.10.19</version>
  <scope>test</scope>
</dependency>
<dependency>
  <groupId>org.powermock</groupId>
  <artifactId>powermock-module-junit4</artifactId>
  <version>1.6.4</version>
  <scope>test</scope>
</dependency>
<dependency>
  <groupId>org.powermock</groupId>
  <artifactId>powermock-api-support</artifactId>
  <version>1.6.4</version>
  <scope>test</scope>
</dependency>
<dependency>
  <groupId>org.powermock</groupId>
  <artifactId>powermock-reflect</artifactId>
  <version>1.6.4</version>
  <scope>test</scope>
</dependency>
<dependency>
  <groupId>org.powermock</groupId>
  <artifactId>powermock-core</artifactId>
  <version>1.6.4</version>
  <scope>test</scope>
</dependency>
XMLUnit 2

For the XMLUnit testing tool, migrate to XMLUnit 2, which is now the default. The xmlunit-legacy artifact is available if necessary. Refer to Migrating from XMLUnit 1.x to 2.x.
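
If the legacy API is still needed, a dependency along the following lines can be declared (the org.xmlunit coordinates are assumed here; the version is typically managed by the ODL Parent):

<dependency>
    <groupId>org.xmlunit</groupId>
    <artifactId>xmlunit-legacy</artifactId>
    <scope>test</scope>
</dependency>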

Blueprint Declarations

Blueprint XML files must now be shipped in OSGI-INF/blueprint. For manually-defined XML files (find . -name "*.xml" | grep "src/main/"), move them from src/main/resources/org/opendaylight/blueprint/ to src/main/resources/OSGI-INF/blueprint. The Maven plugin configuration provided by the ODL Parent already does this for generated Blueprint XML. Use this magic incantation (from c/75180) to move handwritten sources:

find . -path '*/src/main/resources/org/opendaylight/blueprint/*.xml' -execdir sh -c "mkdir -p ../../../OSGI-INF/blueprint; git mv {} ../../../OSGI-INF/blueprint" \;

When bundles are included in features that have no dependency on the controller's ODL Blueprint extender bundle, SFT may fail with a message of "Missing dependencies: (&(objectClass=org.apache.aries.blueprint.NamespaceHandler) (osgi.service.blueprint.namespace=http://opendaylight.org/xmlns/blueprint/v1.0.0))". This can be solved either by adding an artificial dependency on a controller feature or by removing the Blueprint XML if it is not actually required. For more information, refer to patch set 77008.

If a project uses blueprint-maven-plugin, migrate from pax-cdi-api to blueprint-maven-plugin-annotation. To do so, add the dependency shown below to the POM, remove the pax-cdi-api dependency, replace @OsgiServiceProvider on bean class declarations with @Service (using its classes argument), and replace @OsgiService on injection points (such as constructor parameters) with @Reference. Any @OsgiService found on a bean class declaration was wrong and should be replaced with @Service. Check that the resulting autowire.xml is identical to the previous version.

<dependency>
  <groupId>org.apache.aries.blueprint</groupId>
  <artifactId>blueprint-maven-plugin-annotation</artifactId>
  <optional>true</optional>
</dependency>
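
As a hedged illustration of the annotation changes, a before/after sketch follows; the MyService, MyServiceImpl and MyConsumer types are hypothetical, and the annotation package org.apache.aries.blueprint.annotation.service is assumed to be the one provided by blueprint-maven-plugin-annotation:

// Before (pax-cdi-api annotations), shown as comments for comparison:
//   @Singleton
//   @OsgiServiceProvider(classes = MyService.class)
//   public class MyServiceImpl implements MyService { ... }
//
//   @Inject
//   public MyConsumer(@OsgiService MyService service) { ... }
//
// After (blueprint-maven-plugin-annotation):
import javax.inject.Inject;
import javax.inject.Singleton;
import org.apache.aries.blueprint.annotation.service.Reference;
import org.apache.aries.blueprint.annotation.service.Service;

@Singleton
@Service(classes = MyService.class)
public class MyServiceImpl implements MyService {
    // the bean is exported as an OSGi service under the MyService interface
}

class MyConsumer {
    private final MyService service;

    @Inject
    MyConsumer(@Reference MyService service) {
        // the MyService OSGi service is injected at this constructor injection point
        this.service = service;
    }
}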

The fastest way to perform the replacements is to run the following commands:

rpl -R @OsgiServiceProvider @Service .
rpl -R @OsgiService @Reference .

Run them in this order; otherwise @OsgiServiceProvider would be turned into "@ReferenceProvider". Then, in Eclipse, right-click the project and select Source > Organize Imports.

Refer to Issue 75699. For an example patch, refer to Issue 74891.

org.eclipse.persistence

If the project uses EclipseLink (org.eclipse.persistence) for JSON processing, then refer to the note ODLPARENT-166.

YANG Tools Impacts
odl-triemap and triemap.jar

This feature and its artifact were deprecated, since the code was migrated outside of OpenDaylight; refer to Triemap. The replacement feature is tech.pantheon.triemap:pt-triemap and the replacement JAR is tech.pantheon.triemap:triemap. yangtools-2.1.1 uses version 1.0.1, which is version-converged with odlparent-4.0.2.

As before, the feature is pulled in transitively by odl-yangtools-util, and the JAR is pulled in by org.opendaylight.yangtools:util.

DataTree Removes Empty Lists, Choices and Augmentations

As per YANGTOOLS-585, InMemoryDataTree, which underpins all known MD-SAL datastore implementations, subjects list, choice and augmentation nodes to the same lifecycle as non-presence containers: they disappear as soon as they become empty and reappear as soon as they are populated again.

MD-SAL Impacts
ietf-inet-types

Replace dependencies on the org.opendaylight.mdsal.model:ietf-inet-types-2013-07-15 and ietf-yang-types-20130715 artifacts in the POMs with org.opendaylight.mdsal.binding.model.ietf:rfc6991.
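
A minimal sketch of the replacement dependency (assuming the artifact version is supplied by the imported MD-SAL artifacts, so no explicit version is given):

<dependency>
    <groupId>org.opendaylight.mdsal.binding.model.ietf</groupId>
    <artifactId>rfc6991</artifactId>
</dependency>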

For more details, see the “Updating model artifact packaging” thread on the mdsal-dev mailing list from April 25-26th. For further questions, contact the mdsal-dev list, and please update this section with any new information useful to others. See also Issue 001656.

ietf-interfaces

Replace dependencies on org.opendaylight.mdsal.model:ietf-interfaces with org.opendaylight.mdsal.binding.model.ietf:rfc7223.

rfc7895.jar

This model was moved. Update any reference to point to org.opendaylight.mdsal.binding.model.ietf:rfc7895.

iana-if-type

Replace dependencies on org.opendaylight.mdsal.model:iana-if-type-2014-05-08 with org.opendaylight.mdsal.binding.model.iana:iana-if-type. In addition, replace imports in Java code from rev140508 to rev170119.
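
Illustrative only: the exact generated package prefix depends on the binding generator configuration, but a typical import change looks like the following.

// was: import org.opendaylight.yang.gen.v1.urn.ietf.params.xml.ns.yang.iana._if.type.rev140508.IanaIfType;
import org.opendaylight.yang.gen.v1.urn.ietf.params.xml.ns.yang.iana._if.type.rev170119.IanaIfType;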

Datastore Lifecycle

As noted previously, datastores now automatically remove empty lists, choices and augmentations, and recreate them when they are implied by their children.

Performing WriteTransaction.put() to write an empty list has the same effect as deleting a list. Storing a new list entry into a list no longer requires ensureParentsByMerge.
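
A minimal sketch of the second point, using the MD-SAL binding WriteTransaction API; Nodes, Node and NodeKey stand in for the hypothetical generated classes of a keyed list "node" in your model:

import org.opendaylight.mdsal.binding.api.DataBroker;
import org.opendaylight.mdsal.binding.api.WriteTransaction;
import org.opendaylight.mdsal.common.api.LogicalDatastoreType;
import org.opendaylight.yangtools.yang.binding.InstanceIdentifier;

public class ListWriteExample {
    void writeFirstEntry(DataBroker dataBroker, Node nodeEntry, NodeKey key) {
        InstanceIdentifier<Node> path = InstanceIdentifier.create(Nodes.class)
                .child(Node.class, key);

        WriteTransaction tx = dataBroker.newWriteOnlyTransaction();
        // The enclosing (empty) list is created implicitly when its first entry is written,
        // so no merge of the parent path (ensureParentsByMerge) is needed any more.
        tx.put(LogicalDatastoreType.CONFIGURATION, path, nodeEntry);
        tx.commit();
    }
}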

Project Release Notes
AAA
Major Features
odl-aaa-shiro
  • Feature URL: Shiro

  • Feature Description: ODL Shiro-based AAA implementation.

  • Top Level: Yes

  • User Facing: Yes

  • Experimental: Yes

  • CSIT Test: CSIT

odl-aaa-cert
  • Feature URL: Cert

  • Feature Description: MD-SAL based encrypted certificate management.

  • Top Level: Yes

  • User Facing: Yes

  • Experimental: Yes

  • CSIT Test: CSIT

odl-aaa-cli
  • Feature URL: CLI

  • Feature Description: Basic Karaf CLI commands for interacting with AAA.

  • Top Level: Yes

  • User Facing: Yes

  • Experimental: Yes

  • CSIT Test: CSIT

Security Considerations
  • Do you have any external interfaces other than RESTCONF?

    • No

  • Other security issues?

    • No

Quality Assurance
Migration
  • Is it possible to migrate from the previous release? If so, how?

    • Yes, no specific steps needed.

Compatibility
  • Is this release compatible with the previous release?

    • Yes

  • Any API changes?

    • No

  • Any configuration changes?

    • No

Bugs Fixed
  • List of bugs fixed since the previous release.

Known Issues
  • List key known issues with workarounds.

    • None

End-of-life
  • List of features/APIs that were EOLed, deprecated, and/or removed from this release.

    • None

Standards
  • List of standards implemented and to what extent.

    • LDAP, JDBC, ActiveDirectory

Release Mechanics
  • Describe any major shifts in release schedule from the release plan.

    • None

BGP LS PCEP
BGP Plugin

The OpenDaylight controller provides an implementation of BGP (Border Gateway Protocol), which is based on RFC 4271 as a south-bound protocol plugin. The implementation renders all basic BGP speaker capabilities, including:

  • inter-/intra-AS peering

  • route advertisement

  • route origination

  • route storage

The plugin's north-bound API (REST/Java) provides the user with:

  • fully dynamic runtime standardized BGP configuration

  • read-only access to all RIBs

  • read-write programmable RIBs

  • read-only reachability/linkstate topology view

PCEP Plugin

The OpenDaylight Path Computation Element Communication Protocol (PCEP) plugin provides all the basic service units necessary to build up a PCE-based controller. Defined in RFC 8231, the LSP management functionality for Active Stateful PCE is the cornerstone of most PCE-enabled SDN solutions. The plugin consists of the following components:

  • Protocol library

  • PCEP session handling

  • Stateful PCE LSP-DB

  • Active Stateful PCE LSP Operations

Major Features
odl-bgpcep-bgp
  • Feature URL: BGPCEP BGP

  • Feature Description: OpenDaylight Border Gateway Protocol (BGP) plugin.

  • Top Level: Yes

  • User Facing: Yes

  • Experimental: No

  • CSIT Test: CSIT

odl-bgpcep-bmp
  • Feature URL: BGPCEP BMP

  • Feature Description: OpenDaylight BGP Monitoring Protocol (BMP) plugin.

  • Top Level: Yes

  • User Facing: Yes

  • Experimental: No

  • CSIT Test: CSIT

odl-bgpcep-pcep
  • Feature URL: BGPCEP PCEP

  • Feature Description: OpenDaylight Path Computation Element Communication Protocol (PCEP) plugin.

  • Top Level: Yes

  • User Facing: Yes

  • Experimental: No

  • CSIT Test: CSIT

Security Considerations
  • None known. All protocols implement the TCP Authentication Option (TCP MD5).

Quality Assurance

The BGP extensions were tested manually against a vendor's BGP router implementation and other software implementations (exaBGP, bagpipeBGP). They are also covered by unit tests and automated system tests.

Migration
  • Is it possible to migrate from the previous release? If so, how?

    • Yes, no specific steps outside of configuration adjustments are needed.

Compatibility
  • Is this release compatible with the previous release?

    • Yes

  • Any API changes?

    • No

  • Any configuration changes?

    • Yes, configuration needs to be updated to latest features configuration as documented in the user guides.

New Features
Bug Fixes
Known Issues
Release Mechanics
Controller
Major Features
odl-mdsal-broker
  • Feature URL: Broker

  • Feature Description: Core MD-SAL implementations.

  • Top Level: Yes

  • User Facing: No

  • Experimental: No

  • CSIT Test: CSIT

Documentation
Security Considerations
  • Do you have any external interfaces other than RESTCONF?

    • Yes, Akka uses port 2550 and by default communicates with unencrypted, unauthenticated messages. Securing Akka communication is not described here; those concerned should refer to Configuring SSL/TLS for Akka Remoting.

  • Other security issues?

    • No

Quality Assurance
Migration
  • Is it possible to migrate from the previous release? If so, how?

    • Yes, no specific steps needed.

Compatibility
  • Is this release compatible with the previous release?

    • Yes

  • Any API changes?

    • Some deprecated APIs have been removed

  • Any configuration changes?

    • No

Bugs Fixed
  • List of bugs fixed since the previous release.

Known Issues
  • List key known issues with workarounds.

    • None

End-of-life
  • List of features/APIs that were EOLed, deprecated, and/or removed from this release.

    • The controller binding and DOM MD-SAL APIs, classes and interfaces in packages prefixed with org.opendaylight.controller, have been deprecated in favor of the APIs in the MDSAL project prefixed with org.opendaylight.mdsal.

    • Various other APIs and classes in the controller project that have been long since deprecated and no longer used have been removed.

Standards
  • List of standards implemented and to what extent.

    • None

Release Mechanics
Data Export/Import
Major Features
odl-daexim
  • Feature URL: Daexim

  • Feature Description: This wrapper feature includes all the sub-features provided by the Daexim project.

  • Top Level: Yes

  • User Facing: Yes

  • Experimental: No

  • CSIT Test:

Documentation
Security Considerations
  • Do you have any external interfaces other than RESTCONF?

    • No

  • Other security issues?

    • None

Quality Assurance
Migration
  • Is it possible to migrate from the previous release? If so, how?

    • Migration should work across all releases.

Compatibility
  • Is this release compatible with the previous release?

    • Yes

  • Any API changes?

    • No

  • Any configuration changes?

    • No

Bugs Fixed
  • List of bugs fixed since the previous release.

    • All known bugs have been resolved.

Known Issues
End-of-life
  • List of features/APIs that were EOLed, deprecated, and/or removed from this release.

    • None

Standards
  • List of standards implemented and to what extent.

    • None

Release Mechanics
  • Describe any major shifts in release schedule from the release plan.

    • None

Integration/Distribution
Major Features
odl-integration-all
  • Gitweb URL: Integration

  • Description: An aggregate feature grouping the user-facing ODL features of Managed projects. This feature is used to verify that all user-facing features can be installed together without Karaf becoming unusable and without port conflicts.

  • Top Level: Yes.

  • User Facing: Yes, but not intended for production use (only for testing purposes).

  • Experimental: No.

  • CSIT Test: CSIT

odl-integration-compatible-with-all
  • Gitweb URL: Compatibility

  • Description: An aggregate feature grouping the user-facing ODL features of Managed projects that are not pro-active and which (as a group) should be compatible with most other ODL features. This feature is used in the CSIT multi-project feature test (the -all- CSIT job).

  • Top Level: Yes.

  • User Facing: Yes, but not intended for production use (only for testing purposes).

  • Experimental: No.

  • CSIT Test: CSIT

Managed distribution archive
  • Gitweb URL: Managed archive

  • Description: Zip or tar.gz; when extracted, a self-consistent ODL installation with Managed projects is created.

  • Top Level: Yes.

  • User Facing: Yes.

  • Experimental: No.

  • CSIT Test: CSIT

Full distribution archive
  • Gitweb URL: Distribution archive

  • Description: Zip or tar.gz; when extracted, a self-consistent ODL installation with all projects is created.

  • Top Level: Yes.

  • User Facing: Yes.

  • Experimental: No.

  • CSIT Test: CSIT

Documentation
Security Considerations
  • Karaf 4 exposes an SSH console on port 8101. The security is basically the same as in upstream Karaf of the corresponding version, except for the library version overrides implemented in odlparent:karaf-parent. See Securing the Karaf container.

Quality Assurance
Migration

Every major distribution release comes with new and deprecated project features, as well as a new Karaf version. Because of this, performing a fresh ODL installation is recommended.

Compatibility

Test features change every release, but these are only intended for distribution testing.

Bugs Fixed

No significant bugs were fixed in this release.

Known Issues
  • ODLPARENT-110

    Successive feature installation from the Karaf 4 console causes bundle refreshes.

    Workaround:

    • Use the --no-auto-refresh option in the karaf feature install command.

      feature:install --no-auto-refresh odl-netconf-topology
      
    • List all the features you need in the karaf config boot file.

    • Install all features at once in console, for example:

      feature:install odl-restconf odl-netconf-mdsal odl-mdsal-apidocs odl-clustering-test-app odl-netconf-topology
      
  • ODLPARENT-113

    The ssh-dss key algorithm is used by the Karaf SSH console, but it is no longer supported by clients such as OpenSSH.

    Workaround:

    • Use the bin/client script, which uses karaf:karaf as the default credentials.

    • Use this ssh option:

      ssh -oHostKeyAlgorithms=+ssh-dss -p 8101 karaf@localhost
      

    After restart, Karaf is unable to re-use the generated host.key file.

    Workaround: Delete the etc/host.key file before starting Karaf again.

End-of-life
  • List of features/APIs that were EOLed, deprecated, and/or removed from this release.

    • None

Standards

No standards are implemented directly (see upstream projects).

Release Mechanics

See Managed Release

Genius

The Genius project provides generic network interfaces, utilities and services. Any ODL application can use these to achieve interference-free co-existence with other applications using Genius. OpenDaylight Neon Genius provides the following modules:

  • Interface (logical port) Manager: Allows bindings/registration of multiple services to logical ports/interfaces.

  • Overlay Tunnel Manager: Creates and maintains overlay tunnels between configured tunnel endpoints.

  • Aliveness Monitor: Provides tunnel/nexthop aliveness monitoring services.

  • ID Manager: Generates cluster-wide persistent unique integer IDs.

  • MD-SAL Utils: Provides common generic APIs for interaction with MD-SAL.

  • Resource Manager: Provides a resource sharing framework for applications sharing common resources, e.g. table IDs, group IDs, etc.

  • FCAPS Application: Generates various alarms and counters for the different Genius modules.

  • FCAPS Framework: Collectively fetches all data generated by the FCAPS application. Any underlying infrastructure can subscribe to its events to have a generic overview of the various alarms and counters.

Major Features
odl-genius-api
  • Feature URL: API

  • Feature Description: This feature includes API for all the functionalities provided by Genius.

  • Top Level: Yes

  • User Facing: Yes

  • Experimental: No

  • CSIT Test:

odl-genius
  • Feature URL: Genius

  • Feature Description: This feature provides all the functionality of the Genius modules, including the interface manager, tunnel manager, resource manager, ID manager and MD-SAL utils. It includes the Genius APIs and implementation.

  • Top Level: Yes

  • User Facing: Yes

  • Experimental: No

  • CSIT Test:

This feature was also tested by the netvirt CSIT suites.

odl-genius-rest
  • Feature URL: Genius Rest

  • Feature Description: This feature includes RESTCONF together with the odl-genius feature.

  • Top Level: Yes

  • User Facing: Yes

  • Experimental: No

  • CSIT Test:

odl-genius-fcaps-application
  • Feature URL: FCAPS Application

  • Feature Description: Includes genius FCAPS application.

  • Top Level: Yes

  • User Facing: Yes

  • Experimental: Yes

  • CSIT Test: None

odl-genius-fcaps-framework
  • Feature URL: FCAPS Framework

  • Feature Description: Includes genius FCAPS framework.

  • Top Level: Yes

  • User Facing: Yes

  • Experimental: Yes

  • CSIT Test: None

Documentation
Security Considerations
  • Do you have any external interfaces other than RESTCONF?

    • No

  • Other security issues?

    • N/A

Quality Assurance
Migration
  • Is it possible to migrate from the previous release? If so, how?

    • Yes, a normal upgrade of the software should work.

Compatibility
  • Is this release compatible with the previous release?

    • Yes

  • Any API changes?

    • No

  • Any configuration changes?

    • No

Bugs Fixed
  • List of bugs fixed since the previous release.

Known Issues
  • List key known issues with workarounds.

End-of-life
  • List of features/APIs that were EOLed, deprecated, and/or removed from this release.

Release Mechanics
Infrautils

Infrautils project provides low level utilities for use by other OpenDaylight projects, including:

  • @Inject DI

  • Utils incl. org.opendaylight.infrautils.utils.concurrent

  • Test Utilities

  • Job Coordinator

  • Ready Service

  • Integration Test Utilities (itestutils)

  • Caches

  • Diagstatus

  • Metrics

Major Features
odl-infrautils-all
  • Feature URL: All features

  • Feature Description: This feature exposes all infrautils framework features.

  • Top Level: Yes

  • User Facing: No

  • Experimental: Yes

  • CSIT Test:

odl-infrautils-jobcoordinator
  • Feature URL: Jobcoordinator

  • Feature Description: This feature provides technical utilities and infrastructures for other projects to use.

  • Top Level: Yes

  • User Facing: No

  • Experimental: No

  • CSIT Test: Covered by Netvirt and Genius CSITs

odl-infrautils-metrics
  • Feature URL: Metrics

  • Feature Description: This feature exposes the new infrautils.metrics API with labels and first implementation based on Dropwizard incl. thread watcher.

  • Top Level: Yes

  • User Facing: No

  • Experimental: Yes

  • CSIT Test: Covered by Netvirt and Genius CSITs.

odl-infrautils-ready
  • Feature URL: Ready

  • Feature Description: This feature exposes the system readiness framework.

  • Top Level: Yes

  • User Facing: Yes

  • Experimental: No

  • CSIT Test: Covered by Netvirt and Genius CSITs.

odl-infrautils-caches
  • Feature URL: Cache

  • Feature Description: This feature exposes new infrautils.caches API, CLI commands for monitoring, and first implementation based on Guava.

  • Top Level: Yes

  • User Facing: Yes

  • Experimental: Yes

  • CSIT Test: Covered by Netvirt and Genius CSITs.

odl-infrautils-diagstatus
  • Feature URL: Diagstatus

  • Feature Description: This feature exposes the status and diagnostics framework.

  • Top Level: Yes

  • User Facing: Yes

  • Experimental: No

  • CSIT Test: Covered by Netvirt and Genius CSITs.

odl-infrautils-metrics-prometheus
  • Feature URL: Prometheus

  • Feature Description: This feature exposes metrics by HTTP on /metrics/prometheus from the local ODL to an external Prometheus setup.

  • Top Level: Yes

  • User Facing: No

  • Experimental: Yes

  • CSIT Test: None

Documentation
Security Considerations
Quality Assurance
Migration
  • Is it possible to migrate from the previous release? If so, how?

    • Yes, a normal upgrade of the software should work.

Compatibility
  • Is this release compatible with the previous release?

    • Yes

  • Any API changes?

    • No

  • Any configuration changes?

    • No

Bugs Fixed
  • List of bugs fixed since the previous release

    • Fixed 5 major bugs related to diagstatus (INFRAUTILS-44, INFRAUTILS-39, INFRAUTILS-38, INFRAUTILS-36, INFRAUTILS-37).

    • Added 2 noteworthy improvements (INFRAUTILS-33, INFRAUTILS-31) related to diagstatus.

    • Also fixed many minor bugs and made technical enhancements.

Known Issues
End-of-life
  • List of features/APIs that were EOLed, deprecated, and/or removed from this release.

    • Counters infrastructure (replaced by metrics).

Standards
  • List of standards implemented and to what extent.

    • N/A

Release Mechanics
LISP Flow Mapping
Major Features
odl-lispflowmapping-msmr
  • Feature URL: MSMR

  • Feature Description: This is the core feature that provides the Mapping Services and includes the LISP southbound plugin feature as well.

  • Top Level: Yes

  • User Facing: Yes

  • Experimental: No

  • CSIT Test: CSIT

odl-lispflowmapping-neutron
  • Feature URL: Neutron

  • Feature Description: This feature provides Neutron integration.

  • Top Level: Yes

  • User Facing: Yes

  • Experimental: Yes

Documentation
Security Considerations
  • Do you have any external interfaces other than RESTCONF?

  • Yes, the southbound plugin.

    • If so, how are they secure?

      • LISP southbound plugin follows LISP RFC6833 security guidelines.

    • What port numbers do they use?

      • Port used: 4342

  • Other security issues?

    • None

Quality Assurance
  • Link to Sonar Report (59.6%)

  • Link to CSIT Jobs

  • All modules have been unit tested. Integration tests have been performed for all major features. System tests have been performed on most major features.

  • Registering and retrieval of basic mappings have been tested more thoroughly. More complicated mapping policies have gone through less testing.

Migration
  • Is it possible to migrate from the previous release? If so, how?

    • LISP Flow Mapping service will auto-populate the data structures from existing MD-SAL data upon service start if the data has already been migrated separately. No automated way for transferring the data is provided in this release.

Compatibility
  • Is this release compatible with the previous release?

    • Yes

  • Any API changes?

    • No

  • Any configuration changes?

    • No

Bugs Fixed
  • None

Known Issues
  • Clustering is still an experimental feature and may have some issues particularly related to operational datastore consistency.

  • Link to Open Bugs

End-of-life
  • List of features/APIs that were EOLed, deprecated, and/or removed from this release.

Standards
  • The LISP implementation module and southbound plugin conform to IETF RFC 6830 and RFC 6833, with the following exceptions:

    • In the Map-Request message, the M bit (indicating that a Map-Reply Record exists in the Map-Request) is processed, but any mapping data at the bottom of a Map-Request is discarded.

    • LISP LCAFs are limited to only up to one level of recursion, as described in the IETF LISP YANG draft.

    • No standards exist for the LISP Mapping System northbound API as of this date.

Release Mechanics
NETCONF
Major Features

For each top-level feature, identify the name, URL, description, etc. User-facing features are used directly by end users.

odl-netconf-topology
  • Feature URL: NETCONF Topology

  • Feature Description: NETCONF southbound plugin single-node, configuration through MD-SAL.

  • Top Level: Yes

  • User Facing: Yes

  • Experimental: No

  • CSIT Test: NETCONF CSIT

odl-netconf-clustered-topology
  • Feature URL: Clustered Topology

  • Feature Description: NETCONF southbound plugin clustered, configuration through MD-SAL.

  • Top Level: Yes

  • User Facing: Yes

  • Experimental: Yes

  • CSIT Test: Cluster CSIT

odl-netconf-console
  • Feature URL: Console

  • Feature Description: NETCONF southbound configuration with Karaf CLI.

  • Top Level: Yes

  • User Facing: Yes

  • Experimental: Yes

odl-netconf-mdsal
  • Feature URL: MD-SAL

  • Feature Description: NETCONF server for MD-SAL.

  • Top Level: Yes

  • User Facing: Yes

  • Experimental: No

  • CSIT Test: MD-SAL CSIT

odl-restconf
  • Feature URL: RESTCONF

  • Feature Description: RESTCONF

  • Top Level: Yes

  • User Facing: Yes

  • Experimental: No

  • CSIT Test: Tested by any suite that uses RESTCONF.

odl-mdsal-apidocs
  • Feature URL: API Docs

  • Feature Description: MD-SAL - apidocs

  • Top Level: Yes

  • User Facing: Yes

  • Experimental: No

odl-yanglib
  • Feature URL: YANG Lib

  • Feature Description: Yanglib server.

  • Top Level: Yes

  • User Facing: Yes

  • Experimental: No

odl-netconf-callhome-ssh
  • Feature URL: Call Home SSH

  • Feature Description: NETCONF Call Home.

  • Top Level: Yes

  • User Facing: Yes

  • Experimental: No

  • CSIT Test: Call Home CSIT.

Documentation
Security Considerations
  • Do you have any external interfaces other than RESTCONF?

    Yes, we have the MD-SAL and CSS NETCONF servers, as well as a server for NETCONF Call Home.

    • If so, how are they secure?

      • NETCONF over SSH

    • What port numbers do they use?

      • Refer to Ports. NETCONF Call Home uses TCP port 6666.

  • Other security issues?

    • None

Quality Assurance
Migration
  • Is it possible to migrate from the previous release? If so, how?

    • Yes. No additional steps required.

Compatibility
  • Is this release compatible with the previous release?

    • Yes

  • Any API changes?

    • No

  • Any configuration changes?

    • No

Bugs Fixed
  • List of bugs fixed since the previous release.

Known Issues
  • List key known issues with workarounds.

End-of-life

List of features/APIs that were EOLed, deprecated, and/or removed from this release:

  • The RFC 8040 RESTCONF endpoint has been promoted to stable, marking its first stable release. Its un-authenticated feature has been removed.

  • Since this endpoint corresponds to a published standard and supports various YANG 1.1 features, we will be transitioning to it.

  • To that effect, the bierman02 endpoint is now deprecated, and users should start testing and migrating to the RFC8040 endpoint.

  • The old endpoint will not be removed for at least two releases, after which we will re-evaluate the cost of carrying this code.
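
As a hedged illustration of the endpoint difference (the default mount points commonly used are /restconf for bierman02 and /rests for RFC 8040; adjust credentials and paths to your deployment):

# bierman02 (deprecated) endpoint
curl -u admin:admin http://localhost:8181/restconf/config/network-topology:network-topology
# RFC 8040 endpoint
curl -u admin:admin "http://localhost:8181/rests/data/network-topology:network-topology?content=config"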

Standards
Release Mechanics
NetVirt
Major Features
Feature Name
  • Feature Name: odl-netvirt-openstack

  • Feature URL: odl-netvirt-openstack

  • Feature Description: NetVirt is a network virtualization solution that includes the following components:

    • Open vSwitch based virtualization for software switches.

    • Hardware VTEP for hardware switches.

    • Service Function Chaining support within a virtualized environment.

    • Support for OVS and DPDK-accelerated OVS data paths.

    • L3VPN (BGPVPN), EVPN, ELAN, distributed L2 and L3, NAT and Floating IPs, IPv6, Security Groups, MAC and IP learning.

  • Top Level: Yes

  • User Facing: Yes

  • Experimental: No

  • CSIT Test: NetVirt CSIT

Documentation
Security Considerations

No known issues.

Quality Assurance
Migration

Nothing beyond general migration requirements.

Compatibility

Nothing beyond general compatibility requirements.

Bugs Fixed
  • List of bugs fixed since the previous release.

Known Issues
  • List key known issues with workarounds.

End-of-life
  • List of features/APIs that were EOLed, deprecated, and/or removed from this release.

    • None

Standards

N/A

Release Mechanics
OpenFlowPlugin Project
New Features
  • No new features are introduced in the Neon release.

Improvements

The Neon release contains the following improvements:

  • Blueprint improvements (moving from XML to annotations, Blueprint XML cleanup).

  • Code cleanup (related to Guava and deprecated JDK features).

  • Migration away from deprecated MD-SAL APIs (Entity Ownership Service APIs).

  • Documentation improvements.

  • Multiple bug fixes.

odl-openflowjava-protocol
  • Feature URL: JAVA Protocol

  • Feature Description: OpenFlow protocol implementation.

  • Top Level: Yes

  • User Facing: No

  • Experimental: No

  • CSIT Test: JAVA CSIT

odl-openflowplugin-app-config-pusher
  • Feature URL: Config Pusher

  • Feature Description: Pushes node configuration changes to OpenFlow device.

  • Top Level: Yes

  • User Facing: No

  • Experimental: No

  • CSIT Test: Pusher CSIT

odl-openflowplugin-app-forwardingrules-manager
  • Feature URL: Forwarding Rules Manager

  • Feature Description: Sends changes in config datastore to OpenFlow device incrementally. forwardingrules-manager can be replaced with forwardingrules-sync and vice versa.

  • Top Level: Yes

  • User Facing: No

  • Experimental: No

  • CSIT Test: FR Manager CSIT

odl-openflowplugin-app-forwardingrules-sync
  • Feature URL: Forwarding Rules Sync

  • Feature Description: Sends changes in config datastore to OpenFlow devices taking previous state in account and doing diffs between previous and new state. forwardingrules-sync can be replaced with forwardingrules-manager and vice versa.

  • Top Level: Yes

  • User Facing: No

  • Experimental: Yes

  • CSIT Test: FR Sync CSIT

odl-openflowplugin-app-table-miss-enforcer
  • Feature URL: Miss Enforcer

  • Feature Description: Sends table miss flows to OpenFlow device when it connects.

  • Top Level: Yes

  • User Facing: No

  • Experimental: No

  • CSIT Test: Enforcer CSIT

odl-openflowplugin-app-topology
  • Feature URL: App Topology

  • Feature Description: Discovers the topology of connected OpenFlow devices. It is a wrapper feature that loads the following features:

    • odl-openflowplugin-app-lldp-speaker

    • odl-openflowplugin-app-topology-lldp-discovery

    • odl-openflowplugin-app-topology-manager

  • Top Level: Yes

  • User Facing: No

  • Experimental: No

  • CSIT Test: App Topology CSIT

odl-openflowplugin-app-lldp-speaker
  • Feature URL: LLDP Speaker

  • Feature Description: Sends periodic LLDP packets on all ports of all connected OpenFlow devices.

  • Top Level: Yes

  • User Facing: No

  • Experimental: No

  • CSIT Test: LLDP Speaker CSIT

odl-openflowplugin-app-topology-lldp-discovery
  • Feature URL: LLDP Discovery

  • Feature Description: Receives the LLDP packets sent by the LLDP speaker service, generates link information, and publishes it to downstream services looking for link notifications.

  • Top Level: Yes

  • User Facing: No

  • Experimental: No

  • CSIT Test: LLDP Discovery CSIT

odl-openflowplugin-app-topology-manager
  • Feature URL: Topology Manager

  • Feature Description: Listens to link added/removed and node connect/disconnect notifications and updates the link information in the OpenFlow topology.

  • Top Level: Yes

  • User Facing: No

  • Experimental: No

  • CSIT Test: Topology Manager CSIT

odl-openflowplugin-nxm-extensions
  • Feature URL: NXM Extensions

  • Feature Description: Support for OpenFlow Nicira Extensions.

  • Top Level: Yes

  • User Facing: Yes

  • Experimental: No

  • CSIT Test: NXM Extensions CSIT

odl-openflowplugin-onf-extensions
  • Feature URL: ONF Extensions

  • Feature Description: Support for Open Networking Foundation Extensions.

  • Top Level: Yes

  • User Facing: Yes

  • Experimental: Yes

  • CSIT Test: No

odl-openflowplugin-flow-services
  • Feature URL: Flow Services

  • Feature Description: Wrapper feature for standard applications.

  • Top Level: Yes

  • User Facing: Yes

  • Experimental: No

  • CSIT Test: Flow Services CSIT

odl-openflowplugin-flow-services-rest
odl-openflowplugin-flow-services-ui
  • Feature URL: Services UI

  • Feature Description: Wrapper + REST interface + UI.

  • Top Level: Yes

  • User Facing: Yes

  • Experimental: No

  • CSIT Test: Flow Services UI CSIT

odl-openflowplugin-nsf-model
  • Feature URL: NSF Model

  • Feature Description: OpenFlowPlugin YANG models.

  • Top Level: Yes

  • User Facing: No

  • Experimental: No

  • CSIT Test: NSF CSIT

odl-openflowplugin-southbound
  • Feature URL: Southbound

  • Feature Description: Southbound API implementation.

  • Top Level: Yes

  • User Facing: No

  • Experimental: No

  • CSIT Test: Southbound CSIT

Documentation
Security Considerations
  • Do you have any external interfaces other than RESTCONF?

    • Yes, OpenFlow devices

  • Other security issues?

    • Insecure OpenFlowPlugin <-> OpenFlow device connections

    • Topology spoofing: OpenFlowPlugin uses non-authenticated LLDP packets to detect links between switches, which makes it vulnerable to a number of attacks, one of which is topology spoofing. The problem is that all controllers we have tested set the chassisSubtype value to the MAC address of the local port of the switch, which makes it easy for an adversary to spoof that switch, since controllers use that MAC address as a unique identifier of the switch. By intercepting cleartext LLDP packets containing MAC addresses, a malicious switch can spoof other switches to falsify the controller’s topology graph.

    • DoS: An adversarial switch could generate an LLDP flood, bringing down the OpenFlow network.

    • Refer to the DoS attack in which the switch refuses to receive packets from the controller: DoS Attacks

Quality Assurance
Migration
  • Is it possible to migrate from the previous release? If so, how?

    • Yes, APIs from the Fluorine release are supported in the Neon release.

Compatibility
  • Is this release compatible with the previous release? Yes

Bugs Fixed

List of bugs fixed since the previous release.

Known Issues
  • List key known issues with workarounds:

    • Under heavy load, when many devices (40+) are connected and the user is trying to install 100K+ flows, devices sometimes proactively disconnect because the controller cannot respond to echo requests in time. To work around this issue, set the echo time interval on the switch to a high value (for example, 30 seconds).

  • Open Bugs

End-of-life
  • List of features/APIs that were EOLed, deprecated, and/or removed from this release.

    • None

Standards

OpenFlow versions:

Release Mechanics
OVSDB Project
Major Features
odl-ovsdb-southbound-api
  • Feature URL: Southbound API

  • Feature Description: This feature provides the YANG models for northbound users to configure OVSDB devices. These YANG models are designed based on the OVSDB schema. This feature does not provide an implementation of the YANG models; users or developers who prefer to write their own implementation can use this feature to load the YANG models into the controller.

  • Top Level: Yes

  • User Facing: Yes

  • Experimental: No

  • CSIT Test:

odl-ovsdb-southbound-impl
  • Feature URL: Southbound IMPL

  • Feature Description: This feature is the main feature of the OVSDB Southbound plugin. The plugin handles OVS devices that support the OVSDB schema and use the OVSDB protocol, and it provides the implementation of the defined YANG models. Developers writing in-controller applications that want to leverage OVSDB for device configuration can add a dependency on this feature, and all required modules will be loaded.

  • Top Level: Yes

  • User Facing: No

  • Experimental: No

  • CSIT Test:

odl-ovsdb-southbound-impl-rest
  • Feature URL: Southbound IMPL Rest

  • Feature Description: This wrapper feature installs the odl-ovsdb-southbound-api and odl-ovsdb-southbound-impl features, along with the other features required for RESTCONF access, to provide a functional OVSDB southbound plugin. Users who want to develop applications that manage OVSDB-supported devices, but want to run the applications outside of the OpenDaylight controller, must install this feature.

  • Top Level: Yes

  • User Facing: Yes

  • Experimental: No

  • CSIT Test:

odl-ovsdb-hwvtepsouthbound-api
  • Feature URL: HWVT Southbound API

  • Feature Description: This feature provides the YANG models for northbound users to configure devices that support the OVSDB Hardware vTEP schema. These YANG models are designed based on the OVSDB Hardware vTEP schema. This feature does not provide an implementation of the YANG models; users or developers who prefer to write their own implementation can use this feature to install the YANG models in the controller.

  • Top Level: Yes

  • User Facing: Yes

  • Experimental: Yes

  • CSIT Test: A minimal set of CSIT tests is already in place. More work is in progress and will provide better functional coverage in a future release: CSIT

odl-ovsdb-hwvtepsouthbound
  • Feature URL: HWVTEP Southbound

  • Feature Description: This feature is the main feature of the OVSDB Hardware vTep Southbound plugin. This plugin handles the OVS device that supports the OVSDB Hardware vTEP schema and uses the OVSDB protocol. This feature provides the implementation of the defined YANG models. Developers developing the in-controller application that want to leverage OVSDB Hardware vTEP plugin for device configuration can add a dependency on this feature, and all the required modules will be loaded.

  • Top Level: Yes

  • User Facing: No

  • Experimental: Yes

  • CSIT Test: A minimal set of CSIT tests is already in place. More work is in progress and will provide better functional coverage in a future release. CSIT

odl-ovsdb-hwvtepsouthbound-rest
  • Feature URL: HWVTEP Southbound Rest

  • Feature Description: This feature is the wrapper feature that installs the odl-ovsdb-hwvtepsouthbound-api & odl-ovsdb-hwvtepsouthbound features with other required features for restconf access to provide a functional OVSDB Hardware vTEP plugin. Users who want to develop applications that manage the Hardware vTEP supported devices, but want to run the applications outside of the OpenDaylight controller must install this feature.

  • Top Level: Yes

  • User Facing: Yes

  • Experimental: Yes

  • CSIT Test: A minimal set of CSIT tests is already in place. More work is in progress and will provide better functional coverage in a future release. CSIT

odl-ovsdb-library
  • Feature URL: Library

  • Feature Description: Encode/decoder library for OVSDB and Hardware vTEP schema.

  • Top Level: Yes

  • User Facing: No

  • Experimental: No

  • CSIT Test:

Documentation
Security Considerations
  • Do you have any external interfaces other than RESTCONF? Yes, Southbound Connection to OVSDB/Hardware vTEP devices.

  • Other security issues?

    • The plugin's connection to the device is unsecured by default. Users need to explicitly enable TLS support through the OVSDB library configuration file. Refer to the wiki page for instructions.

Quality Assurance
  • Link to Sonar Report (57%)

  • Link to CSIT Jobs

  • The OVSDB southbound plugin is extensively tested through unit tests, integration tests and system tests, in both single-node and three-node cluster setups. The Hardware vTEP plugin is currently tested through:

    • Unit testing

    • CSIT testing

    • NetVirt project L2 Gateway features CSIT tests

    • Manual testing

Migration
  • Is it possible to migrate from the previous release? If so, how?

    • Yes. User-facing features and interfaces are unchanged; only enhancements have been made.

Compatibility
  • Is this release compatible with the previous release?

    • Yes

  • Any API changes?

    • No changes in the YANG models from previous release.

  • Any configuration changes?

    • No

Bugs Fixed
  • List of bugs fixed since the previous release.

Known Issues
  • List key known issues with workarounds.

End-of-life
  • List of features/APIs that were EOLed, deprecated, and/or removed from this release.

Release Mechanics
SERVICEUTILS

ServiceUtils is an infrastructure project for OpenDaylight aimed at providing utilities that assist in the operation and maintenance of the different services provided by OpenDaylight. A service is functionality provided by the ODL controller as seen by the operator. These services can be categorized as networking services (e.g. L2, L3VPN, NAT) and infra services (e.g. OpenFlow). They are provided by different ODL projects such as NetVirt, Genius and OpenFlowPlugin, and comprise a set of Java Karaf bundles and associated MD-SAL datastores.

Major Features
odl-serviceutils-srm
  • Feature URL: SRM

  • Feature Description: This feature provides service recovery functionality for ODL services.

  • Top Level: Yes

  • User Facing: Yes

  • Experimental: No

  • CSIT Test:

odl-serviceutils-tools
  • Feature URL: Tools

  • Feature Description: This feature currently provides utilities for data tree listeners, as well as upgrade support.

  • Top Level: Yes

  • User Facing: Yes

  • Experimental: No

  • CSIT Test: Does not have CSIT on its own, but heavily tested by Genius and Netvirt CSITs.

Documentation
Security Considerations
  • Do you have any external interfaces other than RESTCONF?

    • No

  • Other security issues?

    • N/A

Quality Assurance
Migration
  • Is it possible to migrate from the previous release? If so, how?

    • Yes, a normal upgrade of the software should work.

Compatibility
  • Is this release compatible with the previous release?

    • Yes

  • Any API changes?

    • No

  • Any configuration changes?

    • No

Bugs Fixed
  • List of bugs fixed since the previous release.

Known Issues
  • List key known issues with workarounds

    • None

End-of-life
  • List of features/APIs that were EOLed, deprecated, and/or removed from this release.

    • None

Standards
  • List of standards implemented and to what extent.

    • N/A

Release Mechanics
Service Release Notes
Neon-SR1 Release Notes

This page details changes and bug fixes between the Neon Release and the Neon Stability Release 1 (Neon-SR1) of OpenDaylight.

Projects with No Noteworthy Changes
aaa
bgpcep
controller
coe
  • 367b394 : Bump mdsal to 3.0.8

  • d6cd13e : Bump odlparent to 4.0.10

  • dabada1 : Migrate from godep to go modules

daexim
genius
  • 470a29f3 : Bump mdsal to 3.0.8

  • 07b6324a : Bump yangtools to 2.1.10

  • b01dd65e : Bump odlparent to 4.0.10

  • 0d05116c : Fix build with Java 11

  • 6b639640 : Use scope=provided for karaf shell

  • 77a806e5 : Drop Karaf console dependency from genius-api

  • ce8862ec NETVIRT-1262 : NETVIRT-1262: Tep’s are not part of transport zone when a transport zone is removed and added from NB

infrautils
  • 75cc0989 : Bump odlparent to 4.0.10

  • b4e9bd63 : Allow NamedLocks to be instantiated

  • c379c2ce : diagstatus : remote status summary enhancement

integration/distribution
  • b622dbb : Bump mdsal to 3.0.8

  • 2e4f90c : Bump yangtools to 2.1.10

  • 74ea66f : Bump odlparent to 4.0.10

  • 8ae15a8 : Add SNMP4SDN to distribution for Neon SR1

  • 9e5ad0e : add telemetry to distribution for Neon SR1 release

  • 9b134b9 : Enable TPCE in Neon distribution

  • e80da7d : Add a dependency-convergence profile

  • 60f2456 : Update version after neon release

  • 3bf100e : Update Neon platform version

  • 8763647 : Adjust common distribution for neon release

lispflowmapping
netconf
netvirt
neutron
openflowplugin
ovsdb
serviceutils
sfc
Neon-SR2 Release Notes

This page details changes and bug fixes between the Neon Stability Release 1 (Neon-SR1) and the Neon Stability Release 2 (Neon-SR2) of OpenDaylight.

Projects with No Noteworthy Changes
aaa
bgpcep
controller
coe
daexim
  • 71d583a : Bump mdsal to 3.0.10

  • f001920 : Bump mdsal to 3.0.9

  • 3891e67 : Bump odlparent to 4.0.11

  • 8d13ed3 : Derive artifacts from odlparent-lite

genius
infrautils
integration/distribution
  • e68f4ea : Remove telemetry from distribution

  • d9498f4 : Bump mdsal to 3.0.10

  • faf31f7 : Bump mdsal to 3.0.9

  • 9e123f4 : Bump yangtools to 2.1.11

  • 01f5e51 : Bump odlparent to 4.0.11

  • e169330 : Update version after neon SR1

  • d05151d : Update versions to reflect Neon SR1 release

  • 8667149 : Remove SNMP4SDN from Neon distribution

  • 4a69491 : Pin pygments to 2.3.1

lispflowmapping
netconf
netvirt
neutron
openflowplugin
ovsdb
serviceutils
sfc
Neon-SR3 Release Notes

This page details changes and bug fixes between the Neon Stability Release 2 (Neon-SR2) and the Neon Stability Release 3 (Neon-SR3) of OpenDaylight.

Projects with No Noteworthy Changes
aaa
  • c219e358 : Add the support for token to the script

  • 7ba1a066 : Remove comons-beanutils overrides

  • d610923a : Bump odlparent/yangtools/mdsal to 4.0.14/2.1.14/3.0.13

  • 14e6eddd AAA-114 : Fix idmtool.py for handling errors

  • 699a7ad0 : Remove install/deploy plugin configuration

  • aaac0e26 : Fixup aaa-cert-mdsal pyang warnings

  • d5bad7b0 : Bump to odlparent-4.0.13/yangtools-2.1.13/mdsal-3.0.7

  • 1362995d : Fix checkstyle

bgpcep
controller
coe
  • 74a2904 : Bump odlparent/yangtools/mdsal to 4.0.14/2.1.14/3.0.13

  • bd83b0f : Bump to odlparent-4.0.13/yangtools-2.1.13/mdsal-3.0.7

daexim
  • f128af1 DAEXIM-15 : On daexim boot import, check models only if models file is present

  • 6fcb0db : Bump odlparent/yangtools/mdsal to 4.0.14/2.1.14/3.0.13

  • 155c79b : Bump to odlparent-4.0.13/yangtools-2.1.13/mdsal-3.0.7

genius
  • 1d7267b4 : Tunnel mesh was not fully created.

  • dd41ae56 MDSAL-389 : Expose TypedReadTransaction.exists(InstanceIdentifier)

  • 110d4244 : Bump odlparent/yangtools/mdsal to 4.0.14/2.1.14/3.0.13

  • 6c0c5798 : Bump to odlparent-4.0.13/yangtools-2.1.13/mdsal-3.0.7

  • fffd7ec5 : Fix checkstyle

  • 0763fc5b : ITM recovery is not working

  • ddd42ea0 : Fix for NPE in ArpUtil

  • 45e20ebe : In ITM Scale-In-Out when same IP is used, some stale tunnels hanging in new mesh creation.

infrautils
integration/distribution
  • 688ed90 : Restore ONAP distribution version

  • a7b2167 : Fix versions in ONAP distribution

  • 0d49640 : Bump odlparent/yangtools/mdsal to 4.0.14/2.1.14/3.0.13

  • 356f3ca : Add missing packaging pom

  • 3a88e44 INIDIST-106 : Add Neon ONAP distribution

  • 1df8fe3 : Bump MRI versions for Neon SR3

  • d136876 : Update common dist version after Neon SR2

lispflowmapping
  • 60880e6c : Fix junit-addons scope

  • e150d1c5 : Bump odlparent/yangtools/mdsal to 4.0.14/2.1.14/3.0.13

  • e2bcab7c : Bump to odlparent-4.0.13/yangtools-2.1.13/mdsal-3.0.7

  • 48f73900 : Fix checkstyle violations

netconf
netvirt
neutron
  • 5d6951e3 : Bump odlparent/yangtools/mdsal to 4.0.14/2.1.14/3.0.13

  • 6c6768e3 : Bump to odlparent-4.0.13/yangtools-2.1.13/mdsal-3.0.7

openflowplugin
ovsdb
  • e99c2d0f1 OVSDB-428 : Eliminate TransactionInvokerImpl.successfulTransactionQueue

  • c6337f51a OVSDB-428 : Speed up inputQueue interaction

  • ff4883df8 OVSDB-454 : Get rid of useless (Hwvtep)SouthboundProvider thread

  • 998935819 OVSDB-454 : Migrate OvsdbDataTreeChangeListenerTest

  • dbccf7846 OVSDB-454 : Eliminate server startup threads

  • 04a54e4f9 OVSDB-331 : Add support for using epoll Netty transport

  • 6e8667671 OVSDB-411 : Add NettyBootstrapFactory to hold OVSDB network threads

  • 335bc7a16 : Reuse StringEncoders for all connections

  • 171549eec : Reuse MappingJsonFactory across all sessions

  • f38993d13 : Do not use reflection in TransactCommandAggregator

  • 4e5ef7e02 : Fix NPEs in HwvtepOperGlobalListener

  • dc4092fdd : RowUpdate should be a static class

  • b671c750f : Eliminate OvsdbClientImpl duplication

  • 290de0d77 : Cleanup HwvtepConnectionManager.getHwvtepGlobalTableEntry()

  • e001c3152 : Do not allow DatabaseSchema name/version to be mutated

  • 22b98085d : Do not allow TableSchema columns to be directly set

  • ad191d470 : Refactor ColumnType

  • 13e4abcc1 : Cleanup ColumnSchema

  • cd57e8b5a : Add generated serialVersionUUID to exceptions

  • 9e91f3643 : Make GenericTableSchema.fromJson() a factory method

  • 2f39dd9ce : Move ObjectMapper to JsonRpcEndpoint

  • 56e02b931 : Improve schemas population

  • 1aa41b470 : Remove use of deprecated Guava methods

  • 1c80d0ab0 : Turn JsonRpcEndpoint into a proper OvsdbRPC implementation

  • 1c01dbf48 : Reuse ObjectMapper across all connections

  • f409d6603 : Use a constant ObjectMapper in UpdateNotificationDeser

  • 7aac912b2 : Use proper constant in JsonUtils

  • c47ba4d82 : Do not reconfigure ObjectMapper in FutureTransformUtils

  • decb716b5 : Bump odlparent/yangtools/mdsal to 4.0.14/2.1.14/3.0.13

  • f326d04bf : Bump to odlparent-4.0.13/yangtools-2.1.13/mdsal-3.0.7

  • ba42de715 : Fix checkstyle

  • 84efe1721 : Do not use Foo.toString() when logging

serviceutils
  • edd74f7 : Bump odlparent/yangtools/mdsal to 4.0.14/2.1.14/3.0.13

  • 242696d : Bump to odlparent-4.0.13/yangtools-2.1.13/mdsal-3.0.7

  • ab46790 : Fix checkstyle

  • b0fd12f : Fix a parent mis-reference

sfc
  • 07eeaa5a : Bump odlparent/yangtools/mdsal to 4.0.14/2.1.14/3.0.13

  • 108742ce : Bump to odlparent-4.0.13/yangtools-2.1.13/mdsal-3.0.7

  • 2bc2fcdc : Fix checkstyle

Getting Started Guide

Introduction

The OpenDaylight project is an open source platform for Software Defined Networking (SDN) that uses open protocols to provide centralized, programmatic control and network device monitoring.

Much as your operating system provides an interface for the devices that comprise your computer, OpenDaylight provides an interface that allows you to control and manage network devices.

What’s different about OpenDaylight

Major distinctions of OpenDaylight’s SDN compared to other SDN options are the following:

  • A microservices architecture, in which a “microservice” is a particular protocol or service that a user wants to enable within their installation of the OpenDaylight controller, for example:

    • A plugin that provides connectivity to devices via the OpenFlow protocols (openflowplugin).

    • A platform service such as Authentication, Authorization, and Accounting (AAA).

    • A network service providing VM connectivity for OpenStack (netvirt).

  • Support for a wide and growing range of network protocols: OpenFlow, P4, BGP, PCEP, LISP, NETCONF, OVSDB, SNMP, and more.

  • Model Driven Service Abstraction Layer (MD-SAL). Yang models play a key role in OpenDaylight and are used for:

    • Creating datastore schemas (tree based structure).

    • Generating application REST API (RESTCONF).

    • Automatic code generation (Java interfaces and Data Transfer Objects).

OpenDaylight concepts and tools

In this section we discuss some of the concepts and tools you encounter with basic use of OpenDaylight. The guide walks you through the installation process in a subsequent section, but for now familiarize yourself with the information below.

  • To date, OpenDaylight developers have formed more than 50 projects to address ways to extend network functionality. The projects are a formal structure for developers from the community to meet, document release plans, code, and release the functionality they create in an OpenDaylight release.

    The typical OpenDaylight user will not join a project team, but you should know what projects are as we refer to their activities and the functionality they create. The Karaf features to install that functionality often share the project team’s name.

  • Apache Karaf provides a lightweight runtime to install the Karaf features you want to implement and is included in the OpenDaylight platform software. By default, OpenDaylight has no pre-installed features.

    Features and feature repositories can be managed in the Karaf configuration file etc/org.apache.karaf.features.cfg using the featuresRepositories and featuresBoot variables (see the illustrative excerpt after this list).

  • Model-Driven Service Abstraction Layer (MD-SAL) is the OpenDaylight framework that allows developers to create new Karaf features in the form of services and protocol drivers and connects them to one another. You can think of the MD-SAL as having the following two components:

    1. A shared datastore that maintains the following tree-based structures:

      1. The Config Datastore, which maintains a representation of the desired network state.

      2. The Operational Datastore, which is a representation of the actual network state based on data from the managed network elements.

    2. A message bus that provides a way for the various services and protocol drivers to notify and communicate with one another.

  • If you’re interacting with OpenDaylight through the REST APIs while using the OpenDaylight interfaces, the microservices architecture allows you to select available services, protocols, and REST APIs.
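For illustration, here is a minimal sketch of how the featuresRepositories and featuresBoot variables mentioned above might look in etc/org.apache.karaf.features.cfg. The values are placeholders rather than content from a real distribution; odl-restconf is simply one example of a feature you might boot automatically.

featuresRepositories = mvn:<groupId>/<artifactId>/<version>/xml/features, <additional repositories>
featuresBoot = <features preconfigured by the distribution>, odl-restconf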

Installing OpenDaylight

You complete the following steps to install your networking environment, with specific instructions provided in the subsections below.

Before detailing the instructions for these, we address the following:

  • Java Runtime Environment (JRE) and operating system information

  • Target environment

  • Known issues and limitations

Install OpenDaylight
Downloading and installing OpenDaylight

The default distribution can be found on the OpenDaylight software download page: http://www.opendaylight.org/software/downloads

The Karaf distribution has no features enabled by default. However, all of the features are available to be installed.

Note

For compatibility reasons, you cannot enable all the features simultaneously. We try to document known incompatibilities in the Install the Karaf features section below.

Running the karaf distribution

To run the Karaf distribution:

  1. Unzip the zip file.

  2. Navigate to the directory.

  3. Run ./bin/karaf.

For Example:

$ ls karaf-0.8.x-Oxygen.zip
karaf-0.8.x-Oxygen.zip
$ unzip karaf-0.8.x-Oxygen.zip
Archive:  karaf-0.8.x-Oxygen.zip
   creating: karaf-0.8.x-Oxygen/
   creating: karaf-0.8.x-Oxygen/configuration/
   creating: karaf-0.8.x-Oxygen/data/
   creating: karaf-0.8.x-Oxygen/data/tmp/
   creating: karaf-0.8.x-Oxygen/deploy/
   creating: karaf-0.8.x-Oxygen/etc/
   creating: karaf-0.8.x-Oxygen/externalapps/
   ...
   inflating: karaf-0.8.x-Oxygen/bin/start.bat
   inflating: karaf-0.8.x-Oxygen/bin/status.bat
   inflating: karaf-0.8.x-Oxygen/bin/stop.bat
$ cd karaf-0.8.x-Oxygen
$ ./bin/karaf

    ________                       ________                .__  .__       .__     __
    \_____  \ ______   ____   ____ \______ \ _____  ___.__.|  | |__| ____ |  |___/  |_
     /   |   \\____ \_/ __ \ /    \ |    |  \\__  \<   |  ||  | |  |/ ___\|  |  \   __\
    /    |    \  |_> >  ___/|   |  \|    `   \/ __ \\___  ||  |_|  / /_/  >   Y  \  |
    \_______  /   __/ \___  >___|  /_______  (____  / ____||____/__\___  /|___|  /__|
            \/|__|        \/     \/        \/     \/\/            /_____/      \/
  • Press tab for a list of available commands

  • Typing [cmd] --help will show help for a specific command.

  • Press ctrl-d or type system:shutdown or logout to shut down OpenDaylight.

Note

Please take a look at the Deployment Recommendations and following sections under Security Considerations if you’re planning on running OpenDaylight outside of an isolated test lab environment.

Install the Karaf features

To install a feature, use the following command, where feature1 is the feature name listed in the table below:

feature:install <feature1>

You can install multiple features using the following command:

feature:install <feature1> <feature2> ... <featureN-name>

Note

For compatibility reasons, you cannot enable all Karaf features simultaneously. The table below documents feature installation names and known incompatibilities. Compatibility values indicate the following:

  • all - the feature can be run with other features.

  • self+all - the feature can be installed with other features with a value of all, but may interact badly with other features that have a value of self+all. Not every combination has been tested.

Uninstalling features

To uninstall a feature, you must shut down OpenDaylight, delete the data directory, and start OpenDaylight up again.
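A minimal sketch of that procedure, assuming you run it from the unzipped distribution directory (the data directory shown in the unzip listing above holds the runtime state, including installed features):

# Stop OpenDaylight first, for example with system:shutdown from the Karaf console, then:
rm -rf data/
./bin/karaf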

Important

Uninstalling a feature using the Karaf feature:uninstall command is not supported and can cause unexpected and undesirable behavior.

Listing available features

To find the complete list of Karaf features, run the following command:

feature:list

To list the installed Karaf features, run the following command:

feature:list -i

The description of these features is in the Project-specific Release Notes section.

Karaf running on Windows 10

Windows 10 cannot be identified by Karaf (Equinox). The issue occurs during installation of Karaf features, for example:

opendaylight-user@root>feature:install odl-restconf
Error executing command: Can't install feature odl-restconf/0.0.0:
Could not start bundle mvn:org.fusesource.leveldbjni/leveldbjni-all/1.8-odl in feature(s) odl-akka-leveldb-0.7: The bundle "org.fusesource.leveldbjni.leveldbjni-all_1.8.0 [300]" could not be resolved. Reason: No match found for native code: META-INF/native/windows32/leveldbjni.dll; processor=x86; osname=Win32, META-INF/native/windows64/leveldbjni.dll; processor=x86-64; osname=Win32, META-INF/native/osx/libleveldbjni.jnilib; processor=x86; osname=macosx, META-INF/native/osx/libleveldbjni.jnilib; processor=x86-64; osname=macosx, META-INF/native/linux32/libleveldbjni.so; processor=x86; osname=Linux, META-INF/native/linux64/libleveldbjni.so; processor=x86-64; osname=Linux, META-INF/native/sunos64/amd64/libleveldbjni.so; processor=x86-64; osname=SunOS, META-INF/native/sunos64/sparcv9/libleveldbjni.so; processor=sparcv9; osname=SunOS

The workaround is to add

org.osgi.framework.os.name = Win32

to the Karaf file

etc/system.properties

The workaround and further info are in this thread: http://stackoverflow.com/questions/35679852/karaf-exception-is-thrown-while-installing-org-fusesource-leveldbjni

Setting Up Clustering

Clustering Overview

Clustering is a mechanism that enables multiple processes and programs to work together as one entity. For example, when you search for something on google.com, it may seem like your search request is processed by only one web server. In reality, your search request is processed by many web servers connected in a cluster. Similarly, you can have multiple instances of OpenDaylight working together as one entity.

Advantages of clustering are:

  • Scaling: If you have multiple instances of OpenDaylight running, you can potentially do more work and store more data than you could with only one instance. You can also break up your data into smaller chunks (shards) and either distribute that data across the cluster or perform certain operations on certain members of the cluster.

  • High Availability: If you have multiple instances of OpenDaylight running and one of them crashes, you will still have the other instances working and available.

  • Data Persistence: You will not lose any data stored in OpenDaylight after a manual restart or a crash.

The following sections describe how to set up clustering on both individual and multiple OpenDaylight instances.

Multiple Node Clustering

The following sections describe how to set up multiple node clusters in OpenDaylight.

Deployment Considerations

To implement clustering, the deployment considerations are as follows:

  • To set up a cluster with multiple nodes, we recommend that you use a minimum of three machines. You can set up a cluster with just two nodes. However, if one of the two nodes fails, the cluster will not be operational.

    Note

    This is because clustering in OpenDaylight requires a majority of the nodes to be up and one node cannot be a majority of two nodes.

  • Every device that belongs to a cluster needs to have an identifier. OpenDaylight uses the node’s role for this purpose. After you define the first node’s role as member-1 in the akka.conf file, OpenDaylight uses member-1 to identify that node.

  • Data shards are used to contain all or a certain segment of the OpenDaylight MD-SAL datastore. For example, one shard can contain all the inventory data while another shard contains all of the topology data.

    If you do not specify a module in the modules.conf file and do not specify a shard in module-shards.conf, then (by default) all the data is placed in the default shard (which must also be defined in module-shards.conf file). Each shard has replicas configured. You can specify the details of where the replicas reside in module-shards.conf file.

  • If you have a three node cluster and would like to be able to tolerate any single node crashing, a replica of every defined data shard must be running on all three cluster nodes.

    Note

    This is because OpenDaylight’s clustering implementation requires a majority of the defined shard replicas to be running in order to function. If you define data shard replicas on two of the cluster nodes and one of those nodes goes down, the corresponding data shards will not function.

  • If you have a three node cluster and have defined replicas for a data shard on each of those nodes, that shard will still function even if only two of the cluster nodes are running. Note that if one of those remaining two nodes goes down, the shard will not be operational.

  • It is recommended that you have multiple seed nodes configured. After a cluster member is started, it sends a message to all of its seed nodes. The cluster member then sends a join command to the first seed node that responds. If none of its seed nodes reply, the cluster member repeats this process until it successfully establishes a connection or it is shut down.

  • After a node becomes unreachable, it remains down for a configurable period of time (10 seconds, by default). Once a node goes down, you need to restart it so that it can rejoin the cluster. Once a restarted node joins a cluster, it will synchronize with the lead node automatically.

Clustering Scripts

OpenDaylight includes some scripts to help with the clustering configuration.

Note

Scripts are stored in the OpenDaylight distribution/bin folder, and maintained in the distribution project repository in the folder distribution-karaf/src/main/assembly/bin/.

Configure Cluster Script

This script is used to configure the cluster parameters (e.g. akka.conf, module-shards.conf) on a member of the controller cluster. The user should restart the node to apply the changes.

Note

The script can be used at any time, even before the controller is started for the first time.

Usage:

bin/configure_cluster.sh <index> <seed_nodes_list>
  • index: Integer within 1..N, where N is the number of seed nodes. This indicates which controller node (1..N) is configured by the script.

  • seed_nodes_list: List of seed nodes (IP address), separated by comma or space.

The IP address at the provided index should belong to the member executing the script. When running this script on multiple seed nodes, keep the seed_node_list the same, and vary the index from 1 through N.

Optionally, shards can be configured in a more granular way by modifying the file “custom_shard_configs.txt” in the same folder as this tool. Please see that file for more details.

Example:

bin/configure_cluster.sh 2 192.168.0.1 192.168.0.2 192.168.0.3

The above command will configure member 2 (IP address 192.168.0.2) of a cluster made up of 192.168.0.1, 192.168.0.2, and 192.168.0.3.

Setting Up a Multiple Node Cluster

To run OpenDaylight in a three node cluster, perform the following:

First, determine the three machines that will make up the cluster. After that, do the following on each machine:

  1. Copy the OpenDaylight distribution zip file to the machine.

  2. Unzip the distribution.

  3. Open the following .conf files:

    • configuration/initial/akka.conf

    • configuration/initial/module-shards.conf

  4. In each configuration file, make the following changes:

    Find every instance of the following lines and replace 127.0.0.1 with the hostname or IP address of the machine on which this file resides and OpenDaylight will run:

    netty.tcp {
      hostname = "127.0.0.1"
    

    Note

    The value you need to specify will be different for each node in the cluster.

  5. Find the following lines and replace 127.0.0.1 with the hostname or IP address of any of the machines that will be part of the cluster:

    cluster {
      seed-nodes = ["akka.tcp://opendaylight-cluster-data@${IP_OF_MEMBER1}:2550",
                    <url-to-cluster-member-2>,
                    <url-to-cluster-member-3>]
    
  6. Find the following section and specify the role for each member node. Here we assign the first node with the member-1 role, the second node with the member-2 role, and the third node with the member-3 role:

    roles = [
      "member-1"
    ]
    

    Note

    This step should use a different role on each node.

  7. Open the configuration/initial/module-shards.conf file and update the replicas so that each shard is replicated to all three nodes:

    replicas = [
        "member-1",
        "member-2",
        "member-3"
    ]
    

    For reference, view the sample config files below.

  8. Move into the <karaf-distribution-directory>/bin directory.

  9. Run the following command:

    JAVA_MAX_MEM=4G JAVA_MAX_PERM_MEM=512m ./karaf
    
  10. Enable clustering by running the following command at the Karaf command line:

    feature:install odl-mdsal-clustering
    

OpenDaylight should now be running in a three node cluster. You can use any of the three member nodes to access the data residing in the datastore.

Sample Config Files

Sample akka.conf file:

odl-cluster-data {
  bounded-mailbox {
    mailbox-type = "org.opendaylight.controller.cluster.common.actor.MeteredBoundedMailbox"
    mailbox-capacity = 1000
    mailbox-push-timeout-time = 100ms
  }

  metric-capture-enabled = true

  akka {
    loglevel = "DEBUG"
    loggers = ["akka.event.slf4j.Slf4jLogger"]

    actor {

      provider = "akka.cluster.ClusterActorRefProvider"
      serializers {
                java = "akka.serialization.JavaSerializer"
                proto = "akka.remote.serialization.ProtobufSerializer"
              }

              serialization-bindings {
                  "com.google.protobuf.Message" = proto

              }
    }
    remote {
      log-remote-lifecycle-events = off
      netty.tcp {
        hostname = "10.194.189.96"
        port = 2550
        maximum-frame-size = 419430400
        send-buffer-size = 52428800
        receive-buffer-size = 52428800
      }
    }

    cluster {
      seed-nodes = ["akka.tcp://opendaylight-cluster-data@10.194.189.96:2550",
                    "akka.tcp://opendaylight-cluster-data@10.194.189.98:2550",
                    "akka.tcp://opendaylight-cluster-data@10.194.189.101:2550"]

      auto-down-unreachable-after = 10s

      roles = [
        "member-2"
      ]

    }
  }
}

odl-cluster-rpc {
  bounded-mailbox {
    mailbox-type = "org.opendaylight.controller.cluster.common.actor.MeteredBoundedMailbox"
    mailbox-capacity = 1000
    mailbox-push-timeout-time = 100ms
  }

  metric-capture-enabled = true

  akka {
    loglevel = "INFO"
    loggers = ["akka.event.slf4j.Slf4jLogger"]

    actor {
      provider = "akka.cluster.ClusterActorRefProvider"

    }
    remote {
      log-remote-lifecycle-events = off
      netty.tcp {
        hostname = "10.194.189.96"
        port = 2551
      }
    }

    cluster {
      seed-nodes = ["akka.tcp://opendaylight-cluster-rpc@10.194.189.96:2551"]

      auto-down-unreachable-after = 10s
    }
  }
}

Sample module-shards.conf file:

module-shards = [
    {
        name = "default"
        shards = [
            {
                name="default"
                replicas = [
                    "member-1",
                    "member-2",
                    "member-3"
                ]
            }
        ]
    },
    {
        name = "topology"
        shards = [
            {
                name="topology"
                replicas = [
                    "member-1",
                    "member-2",
                    "member-3"
                ]
            }
        ]
    },
    {
        name = "inventory"
        shards = [
            {
                name="inventory"
                replicas = [
                    "member-1",
                    "member-2",
                    "member-3"
                ]
            }
        ]
    },
    {
         name = "toaster"
         shards = [
             {
                 name="toaster"
                 replicas = [
                    "member-1",
                    "member-2",
                    "member-3"
                 ]
             }
         ]
    }
]
Cluster Monitoring

OpenDaylight exposes shard information via MBeans, which can be explored with JConsole, VisualVM, or other JMX clients, or exposed via a REST API using Jolokia, provided by the odl-jolokia Karaf feature. This is convenient, due to a significant focus on REST in OpenDaylight.

The basic URI that lists a schema of all available MBeans (but not their content) is:

GET  /jolokia/list
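As a sketch of how this might be queried from the command line (assuming the odl-jolokia feature is installed from the Karaf console, the default port 8181, and the default admin/admin credentials):

feature:install odl-jolokia
curl -u admin:admin http://localhost:8181/jolokia/list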

To read the information about the shards local to the queried OpenDaylight instance use the following REST calls. For the config datastore:

GET  /jolokia/read/org.opendaylight.controller:type=DistributedConfigDatastore,Category=ShardManager,name=shard-manager-config

For the operational datastore:

GET  /jolokia/read/org.opendaylight.controller:type=DistributedOperationalDatastore,Category=ShardManager,name=shard-manager-operational

The output contains information on shards present on the node:

{
  "request": {
    "mbean": "org.opendaylight.controller:Category=ShardManager,name=shard-manager-operational,type=DistributedOperationalDatastore",
    "type": "read"
  },
  "value": {
    "LocalShards": [
      "member-1-shard-default-operational",
      "member-1-shard-entity-ownership-operational",
      "member-1-shard-topology-operational",
      "member-1-shard-inventory-operational",
      "member-1-shard-toaster-operational"
    ],
    "SyncStatus": true,
    "MemberName": "member-1"
  },
  "timestamp": 1483738005,
  "status": 200
}

The exact names from the "LocalShards" list are needed for further exploration, as they will be used as part of the URI to look up detailed info on a particular shard. An example output for the member-1-shard-default-operational looks like this:

{
  "request": {
    "mbean": "org.opendaylight.controller:Category=Shards,name=member-1-shard-default-operational,type=DistributedOperationalDatastore",
    "type": "read"
  },
  "value": {
    "ReadWriteTransactionCount": 0,
    "SnapshotIndex": 4,
    "InMemoryJournalLogSize": 1,
    "ReplicatedToAllIndex": 4,
    "Leader": "member-1-shard-default-operational",
    "LastIndex": 5,
    "RaftState": "Leader",
    "LastCommittedTransactionTime": "2017-01-06 13:19:00.135",
    "LastApplied": 5,
    "LastLeadershipChangeTime": "2017-01-06 13:18:37.605",
    "LastLogIndex": 5,
    "PeerAddresses": "member-3-shard-default-operational: akka.tcp://opendaylight-cluster-data@192.168.16.3:2550/user/shardmanager-operational/member-3-shard-default-operational, member-2-shard-default-operational: akka.tcp://opendaylight-cluster-data@192.168.16.2:2550/user/shardmanager-operational/member-2-shard-default-operational",
    "WriteOnlyTransactionCount": 0,
    "FollowerInitialSyncStatus": false,
    "FollowerInfo": [
      {
        "timeSinceLastActivity": "00:00:00.320",
        "active": true,
        "matchIndex": 5,
        "voting": true,
        "id": "member-3-shard-default-operational",
        "nextIndex": 6
      },
      {
        "timeSinceLastActivity": "00:00:00.320",
        "active": true,
        "matchIndex": 5,
        "voting": true,
        "id": "member-2-shard-default-operational",
        "nextIndex": 6
      }
    ],
    "FailedReadTransactionsCount": 0,
    "StatRetrievalTime": "810.5 μs",
    "Voting": true,
    "CurrentTerm": 1,
    "LastTerm": 1,
    "FailedTransactionsCount": 0,
    "PendingTxCommitQueueSize": 0,
    "VotedFor": "member-1-shard-default-operational",
    "SnapshotCaptureInitiated": false,
    "CommittedTransactionsCount": 6,
    "TxCohortCacheSize": 0,
    "PeerVotingStates": "member-3-shard-default-operational: true, member-2-shard-default-operational: true",
    "LastLogTerm": 1,
    "StatRetrievalError": null,
    "CommitIndex": 5,
    "SnapshotTerm": 1,
    "AbortTransactionsCount": 0,
    "ReadOnlyTransactionCount": 0,
    "ShardName": "member-1-shard-default-operational",
    "LeadershipChangeCount": 1,
    "InMemoryJournalDataSize": 450
  },
  "timestamp": 1483740350,
  "status": 200
}

The output helps identify the shard state (leader/follower, voting/non-voting), peers, follower details if the shard is a leader, and other statistics/counters.

The ODLTools team maintains a Python-based tool that takes advantage of the above MBeans exposed via Jolokia.

Geo-distributed Active/Backup Setup

An OpenDaylight cluster works best when the latency between the nodes is very small, which in practice means they should be in the same datacenter. It is, however, desirable to be able to fail over to a different datacenter in case all nodes become unreachable. To achieve that, the cluster can be expanded with nodes in a different datacenter, but in a way that does not affect the latency of the primary nodes. To do that, shards on the backup nodes must be in the "non-voting" state.

The API to manipulate voting states on shards is defined as RPCs in the cluster-admin.yang file in the controller project, which is well documented. A summary is provided below.

Note

Unless otherwise indicated, the below POST requests are to be sent to any single cluster node.

To create an active/backup setup with a 6 node cluster (3 active and 3 backup nodes in two locations) there is an RPC to set voting states of all shards on a list of nodes to a given state:

POST  /restconf/operations/cluster-admin:change-member-voting-states-for-all-shards

This RPC needs the list of nodes and the desired voting state as input. For creating the backup nodes, this example input can be used:

{
  "input": {
    "member-voting-state": [
      {
        "member-name": "member-4",
        "voting": false
      },
      {
        "member-name": "member-5",
        "voting": false
      },
      {
        "member-name": "member-6",
        "voting": false
      }
    ]
  }
}
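As a sketch, assuming the default RESTCONF port 8181 and the default admin/admin credentials, the RPC could be invoked with curl, with the JSON above saved to a hypothetical file named backup-voting.json:

curl -u admin:admin -H "Content-Type: application/json" -X POST \
     -d @backup-voting.json \
     http://localhost:8181/restconf/operations/cluster-admin:change-member-voting-states-for-all-shards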

When an active/backup deployment already exists, with shards on the backup nodes in non-voting state, all that is needed for a fail-over from the active “sub-cluster” to backup “sub-cluster” is to flip the voting state of each shard (on each node, active AND backup). That can be easily achieved with the following RPC call (no parameters needed):

POST  /restconf/operations/cluster-admin:flip-member-voting-states-for-all-shards

If it's an unplanned outage where the primary voting nodes are down, the "flip" RPC must be sent to a backup (non-voting) node. In this case there are no shard leaders to carry out the voting changes. However, there is a special case: if the node that receives the RPC is non-voting, is to be changed to voting, and there is no leader, it will apply the voting changes locally and attempt to become the leader. If successful, it persists the voting changes and replicates them to the remaining nodes.

When the primary site is fixed and you want to fail back to it, care must be taken when bringing the site back up. Because it was down when the voting states were flipped on the secondary, its persisted database won't contain those changes. If brought back up in that state, the nodes will think they're still voting. If the nodes have connectivity to the secondary site, they should follow the leader in the secondary site and sync with it. However, if this does not happen, the primary site may elect its own leader, thereby partitioning the cluster into two, which can lead to undesirable results. Therefore it is recommended to either clean the databases (i.e., the journal and snapshots directories) on the primary nodes before bringing them back up, or restore them from a recent backup of the secondary site (see the section Backing Up and Restoring the Datastore).

It is also possible to gracefully remove a node from a cluster, with the following RPC:

POST  /restconf/operations/cluster-admin:remove-all-shard-replicas

and example input:

{
  "input": {
    "member-name": "member-1"
  }
}

or just one particular shard:

POST  /restconf/operations/cluster-admin:remove-shard-replica

with example input:

{
  "input": {
    "shard-name": "default",
    "member-name": "member-2",
    "data-store-type": "config"
  }
}

Now that a (potentially dead/unrecoverable) node has been removed, another one can be added at runtime, without changing the configuration files of the healthy nodes (which would otherwise require a reboot):

POST  /restconf/operations/cluster-admin:add-replicas-for-all-shards

No input is required, but this RPC needs to be sent to the new node to instruct it to replicate all shards from the cluster.

Note

While the cluster admin API allows adding and removing shards dynamically, the module-shards.conf and modules.conf files are still used on startup to define the initial configuration of shards. Modifications from the use of the API are not stored to those static files, but to the journal.

Extra Configuration Options

  • max-shard-data-change-executor-queue-size (uint32, 1..max; default: 1000) - The maximum queue size for each shard's data store data change notification executor.

  • max-shard-data-change-executor-pool-size (uint32, 1..max; default: 20) - The maximum thread pool size for each shard's data store data change notification executor.

  • max-shard-data-change-listener-queue-size (uint32, 1..max; default: 1000) - The maximum queue size for each shard's data store data change listener.

  • max-shard-data-store-executor-queue-size (uint32, 1..max; default: 5000) - The maximum queue size for each shard's data store executor.

  • shard-transaction-idle-timeout-in-minutes (uint32, 1..max; default: 10) - The maximum amount of time a shard transaction can be idle without receiving any messages before it self-destructs.

  • shard-snapshot-batch-count (uint32, 1..max; default: 20000) - The minimum number of entries to be present in the in-memory journal log before a snapshot is to be taken.

  • shard-snapshot-data-threshold-percentage (uint8, 1..100; default: 12) - The percentage of Runtime.totalMemory() used by the in-memory journal log before a snapshot is to be taken.

  • shard-heartbeat-interval-in-millis (uint16, 100..max; default: 500) - The interval at which a shard will send a heartbeat message to its remote shard.

  • operation-timeout-in-seconds (uint16, 5..max; default: 5) - The maximum amount of time for akka operations (remote or local) to complete before failing.

  • shard-journal-recovery-log-batch-size (uint32, 1..max; default: 5000) - The maximum number of journal log entries to batch on recovery for a shard before committing to the data store.

  • shard-transaction-commit-timeout-in-seconds (uint32, 1..max; default: 30) - The maximum amount of time a shard transaction three-phase commit can be idle without receiving the next message before it aborts the transaction.

  • shard-transaction-commit-queue-capacity (uint32, 1..max; default: 20000) - The maximum allowed capacity for each shard's transaction commit queue.

  • shard-initialization-timeout-in-seconds (uint32, 1..max; default: 300) - The maximum amount of time to wait for a shard to initialize from persistence on startup before failing an operation (e.g. transaction create or change listener registration).

  • shard-leader-election-timeout-in-seconds (uint32, 1..max; default: 30) - The maximum amount of time to wait for a shard to elect a leader before failing an operation (e.g. transaction create).

  • enable-metric-capture (boolean; default: false) - Enable or disable metric capture.

  • bounded-mailbox-capacity (uint32, 1..max; default: 1000) - The maximum queue size that an actor's mailbox can reach.

  • persistent (boolean; default: true) - Enable or disable data persistence.

  • shard-isolated-leader-check-interval-in-millis (uint32, 1..max; default: 5000) - The interval at which the shard leader checks whether a majority of its followers are active; if not, the leader considers itself isolated.

These configuration options are included in the etc/org.opendaylight.controller.cluster.datastore.cfg configuration file.
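For illustration, a couple of these options could be set in that file as follows; the values shown are examples only, not recommendations:

# etc/org.opendaylight.controller.cluster.datastore.cfg (illustrative excerpt)
enable-metric-capture=true
shard-transaction-commit-queue-capacity=50000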

Persistence and Backup

Set Persistence Script

This script is used to enable or disable the config datastore persistence. The default state is enabled but there are cases where persistence may not be required or even desired. The user should restart the node to apply the changes.

Note

The script can be used at any time, even before the controller is started for the first time.

Usage:

bin/set_persistence.sh <on/off>

Example:

bin/set_persistence.sh off

The above command will disable the config datastore persistence.

Backing Up and Restoring the Datastore

The same cluster-admin API described in the cluster guide for managing shard voting states has an RPC allowing backup of the datastore in a single node, taking only the file name as a parameter:

POST  /restconf/operations/cluster-admin:backup-datastore

RPC input JSON:

{
  "input": {
    "file-path": "/tmp/datastore_backup"
  }
}
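As a sketch, assuming the default RESTCONF port 8181 and the default admin/admin credentials, the backup RPC could be invoked with curl as follows:

curl -u admin:admin -H "Content-Type: application/json" -X POST \
     -d '{"input": {"file-path": "/tmp/datastore_backup"}}' \
     http://localhost:8181/restconf/operations/cluster-admin:backup-datastore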

Note

This backup can only be restored if the YANG models of the backed-up data are identical in the OpenDaylight instance that produced the backup and in the restore target instance.

To restore the backup on the target node the file needs to be placed into the $KARAF_HOME/clustered-datastore-restore directory, and then the node restarted. If the directory does not exist (which is quite likely if this is a first-time restore) it needs to be created. On startup, ODL checks if the journal and snapshots directories in $KARAF_HOME are empty, and only then tries to read the contents of the clustered-datastore-restore directory, if it exists. So for a successful restore, those two directories should be empty. The backup file name itself does not matter, and the startup process will delete it after a successful restore.
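A minimal sketch of the restore steps described above, run on the stopped target node from its Karaf home directory ($KARAF_HOME):

mkdir -p clustered-datastore-restore
cp /tmp/datastore_backup clustered-datastore-restore/
rm -rf journal snapshots        # both directories must be empty for the restore to be attempted
./bin/karaf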

The backup is node independent, so when restoring a 3 node cluster, it is best to restore it on each node for consistency. For example, if restoring on one node only, it can happen that the other two empty nodes form a majority and the cluster comes up with no data.

Security Considerations

This document discusses the various security issues that might affect OpenDaylight. The document also lists specific recommendations to mitigate security risks.

This document also contains information about the corrective steps you can take if you discover a security issue with OpenDaylight, and if necessary, contact the Security Response Team, which is tasked with identifying and resolving security threats.

Overview of OpenDaylight Security

There are many different kinds of security vulnerabilities that could affect an OpenDaylight deployment, but this guide focuses on those where (a) the servers, virtual machines or other devices running OpenDaylight have been properly physically (or virtually in the case of VMs) secured against untrusted individuals and (b) individuals who have access, either via remote logins or physically, will not attempt to attack or subvert the deployment intentionally or otherwise.

While those attack vectors are real, they are out of the scope of this document.

What remains in scope is attacks launched from a server, virtual machine, or device other than the one running OpenDaylight where the attack does not have valid credentials to access the OpenDaylight deployment.

The rest of this document gives specific recommendations for deploying OpenDaylight in a secure manner, but first we highlight some high-level security advantages of OpenDaylight.

  • Separating the control and management planes from the data plane (both logically and, in many cases, physically) allows possible security threats to be forced into a smaller attack surface.

  • Having centralized information and network control gives network administrators more visibility and control over the entire network, enabling them to make better decisions faster. At the same time, centralization of network control can be an advantage only if access to that control is secure.

    Note

    While both previous advantages improve security, they also make an OpenDaylight deployment an attractive target for attack, which makes understanding these security considerations even more important.

  • The ability to more rapidly evolve southbound protocols and how they are used provides more and faster mechanisms to enact appropriate security mitigations and remediations.

  • OpenDaylight is built from OSGi bundles and the Karaf Java container. Both Karaf and OSGi provide some level of isolation with explicit code boundaries, package imports, package exports, and other security-related features.

  • OpenDaylight has a history of rapidly addressing known vulnerabilities and a well-defined process for reporting and dealing with them.

OpenDaylight Security Resources
Deployment Recommendations

We recommend that you follow the deployment guidelines in setting up OpenDaylight to minimize security threats.

  • The default credentials should be changed before deploying OpenDaylight.

  • OpenDaylight should be deployed in a private network that cannot be accessed from the internet.

  • Separate the data network (that connects devices using the network) from the management network (that connects the network devices to OpenDaylight).

    Note

    Deploying OpenDaylight on a separate, private management network does not eliminate threats, but only mitigates them. By construction, some messages must flow from the data network to the management network, e.g., OpenFlow packet_in messages, and these create an attack surface even if it is a small one.

  • Implement an authentication policy for devices that connect to both the data and management network. These are the devices which bridge, likely untrusted, traffic from the data network to the management network.

Securing OSGi bundles

OSGi is a Java-specific framework that improves the way that Java classes interact within a single JVM. It provides an enhanced version of the java.lang.SecurityManager (ConditionalPermissionAdmin) in terms of security.

Java provides a security framework that allows a security policy to grant permissions, such as reading a file or opening a network connection, to specific code. The code may be classes from a jarfile loaded from a specific URL, or a class signed by a specific key. OSGi builds on the standard Java security model to add the following features:

  • A set of OSGi-specific permission types, such as one that grants the right to register an OSGi service or get an OSGi service from the service registry.

  • The ability to dynamically modify permissions at runtime. This includes the ability to specify permissions by using code rather than a text configuration file.

  • A flexible predicate-based approach to determining which rules are applicable to which ProtectionDomain. This approach is much more powerful than the standard Java security policy which can only grant rights based on a jarfile URL or class signature. A few standard predicates are provided, including selecting rules based upon bundle symbolic-name.

  • Support for bundle local permissions policies with optional further constraints such as DENY operations. Most of this functionality is accessed by using the OSGi ConditionalPermissionAdmin service which is part of the OSGi core and can be obtained from the OSGi service registry. The ConditionalPermissionAdmin API replaces the earlier PermissionAdmin API.

For more information, refer to http://www.osgi.org/Main/HomePage.
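As a hedged illustration of the ConditionalPermissionAdmin API described above (this is not OpenDaylight code; the bundle location and permission are hypothetical), the following Java sketch grants a single bundle the right to register OSGi services:

import java.util.List;

import org.osgi.framework.BundleContext;
import org.osgi.service.condpermadmin.ConditionInfo;
import org.osgi.service.condpermadmin.ConditionalPermissionAdmin;
import org.osgi.service.condpermadmin.ConditionalPermissionInfo;
import org.osgi.service.condpermadmin.ConditionalPermissionUpdate;
import org.osgi.service.permissionadmin.PermissionInfo;

public final class GrantServiceRegistration {

    public static void grant(BundleContext ctx) {
        // Obtain the ConditionalPermissionAdmin service from the OSGi service registry.
        ConditionalPermissionAdmin cpa =
            ctx.getService(ctx.getServiceReference(ConditionalPermissionAdmin.class));

        // Start an atomic update of the permission table.
        ConditionalPermissionUpdate update = cpa.newConditionalPermissionUpdate();
        List<ConditionalPermissionInfo> rows = update.getConditionalPermissionInfos();

        // Condition: the rule applies only to the bundle at this (hypothetical) location.
        ConditionInfo location = new ConditionInfo(
            "org.osgi.service.condpermadmin.BundleLocationCondition",
            new String[] {"mvn:org.example/example-bundle/1.0.0"});

        // Permission: allow registering any service.
        PermissionInfo registerService = new PermissionInfo(
            "org.osgi.framework.ServicePermission", "*", "register");

        rows.add(cpa.newConditionalPermissionInfo(
            "allow-example-service-registration",
            new ConditionInfo[] {location},
            new PermissionInfo[] {registerService},
            ConditionalPermissionInfo.ALLOW));

        // Commit the update; returns false if the table was changed concurrently.
        update.commit();
    }
}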

Securing the Karaf container

Apache Karaf is an OSGi-based runtime platform which provides a lightweight container for OpenDaylight and applications. Apache Karaf uses either the Apache Felix or Eclipse Equinox OSGi framework, and provides additional features on top of the framework.

Apache Karaf provides a security framework based on Java Authentication and Authorization Service (JAAS) in compliance with OSGi recommendations, while providing RBAC (Role-Based Access Control) mechanism for the console and Java Management Extensions (JMX).

The Apache Karaf security framework is used internally to control the access to the following components:

  • OSGi services

  • console commands

  • JMX layer

  • WebConsole

The remote management capabilities are present in Apache Karaf by default; however, they can be disabled through various configuration alterations. These configuration options may be applied to the OpenDaylight Karaf distribution.

Note

Refer to the following list of publications for more information on implementing security for the Karaf container.

Disabling the remote shutdown port

You can lock down your deployment post installation. Set karaf.shutdown.port=-1 in etc/custom.properties or etc/config.properties to disable the remote shutdown port.
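For example, the relevant line in etc/custom.properties would simply be:

karaf.shutdown.port=-1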

Securing Southbound Plugins

Many individual southbound plugins provide mechanisms to secure their communication with network devices. For example, the OpenFlow plugin supports TLS connections with bi-directional authentication and the NETCONF plugin supports connecting over SSH. Meanwhile, the Unified Secure Channel plugin provides a way to form secure, remote connections for supported devices.

When deploying OpenDaylight, you should carefully investigate the secure mechanisms to connect to devices using the relevant plugins.

Securing OpenDaylight using AAA

AAA stands for Authentication, Authorization, and Accounting. All three of these services can help improve the security posture of an OpenDaylight deployment.

The vast majority of OpenDaylight's northbound APIs (and all RESTCONF APIs) are protected by AAA by default when installing the odl-restconf feature. In cases where APIs are not protected by AAA, this will be noted in the per-project release notes.

By default, OpenDaylight has only one user account with the username and password admin. This should be changed before deploying OpenDaylight.

Securing RESTCONF using HTTPS

To secure Jetty RESTful services, including RESTCONF, you must configure the Jetty server to utilize SSL by performing the following steps.

  1. Issue the following command sequence to create a self-signed certificate for use by the ODL deployment.

    keytool -keystore .keystore -alias jetty -genkey -keyalg RSA
     Enter keystore password:  123456
    What is your first and last name?
      [Unknown]:  odl
    What is the name of your organizational unit?
      [Unknown]:  odl
    What is the name of your organization?
      [Unknown]:  odl
    What is the name of your City or Locality?
      [Unknown]:
    What is the name of your State or Province?
      [Unknown]:
    What is the two-letter country code for this unit?
      [Unknown]:
    Is CN=odl, OU=odl, O=odl,
    L=Unknown, ST=Unknown, C=Unknown correct?
      [no]:  yes
    
  2. After the key has been obtained, make the following changes to the etc/custom.properties file to set a few default properties.

    org.osgi.service.http.secure.enabled=true
    org.osgi.service.http.port.secure=8443
    org.ops4j.pax.web.ssl.keystore=./etc/.keystore
    org.ops4j.pax.web.ssl.password=123456
    org.ops4j.pax.web.ssl.keypassword=123456
    
  3. Then edit the etc/jetty.xml file with the appropriate HTTP connectors.

    For example:

    <?xml version="1.0"?>
    <!--
     Licensed to the Apache Software Foundation (ASF) under one
     or more contributor license agreements.  See the NOTICE file
     distributed with this work for additional information
     regarding copyright ownership.  The ASF licenses this file
     to you under the Apache License, Version 2.0 (the
     "License"); you may not use this file except in compliance
     with the License.  You may obtain a copy of the License at
    
       http://www.apache.org/licenses/LICENSE-2.0
    
    Unless required by applicable law or agreed to in writing,
    software distributed under the License is distributed on an
    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
     KIND, either express or implied.  See the License for the
     specific language governing permissions and limitations
     under the License.
    -->
    <!DOCTYPE Configure PUBLIC "-//Mort Bay Consulting//
    DTD Configure//EN" "http://jetty.mortbay.org/configure.dtd">
    
    <Configure id="Server" class="org.eclipse.jetty.server.Server">
    
        <!-- Use this connector for many frequently idle connections and for
            threadless continuations. -->
        <New id="http-default" class="org.eclipse.jetty.server.HttpConfiguration">
            <Set name="secureScheme">https</Set>
            <Set name="securePort">
                <Property name="jetty.secure.port" default="8443" />
            </Set>
            <Set name="outputBufferSize">32768</Set>
            <Set name="requestHeaderSize">8192</Set>
            <Set name="responseHeaderSize">8192</Set>
    
            <!-- Default security setting: do not leak our version -->
            <Set name="sendServerVersion">false</Set>
    
            <Set name="sendDateHeader">false</Set>
            <Set name="headerCacheSize">512</Set>
        </New>
    
        <Call name="addConnector">
            <Arg>
                <New class="org.eclipse.jetty.server.ServerConnector">
                    <Arg name="server">
                        <Ref refid="Server" />
                    </Arg>
                    <Arg name="factories">
                        <Array type="org.eclipse.jetty.server.ConnectionFactory">
                            <Item>
                                <New class="org.eclipse.jetty.server.HttpConnectionFactory">
                                    <Arg name="config">
                                        <Ref refid="http-default"/>
                                    </Arg>
                                </New>
                            </Item>
                        </Array>
                    </Arg>
                    <Set name="host">
                        <Property name="jetty.host"/>
                    </Set>
                    <Set name="port">
                        <Property name="jetty.port" default="8181"/>
                    </Set>
                    <Set name="idleTimeout">
                        <Property name="http.timeout" default="300000"/>
                    </Set>
                    <Set name="name">jetty-default</Set>
                </New>
            </Arg>
        </Call>
    
        <!-- =========================================================== -->
        <!-- Configure Authentication Realms -->
        <!-- Realms may be configured for the entire server here, or -->
        <!-- they can be configured for a specific web app in a context -->
        <!-- configuration (see $(jetty.home)/contexts/test.xml for an -->
        <!-- example). -->
        <!-- =========================================================== -->
        <Call name="addBean">
            <Arg>
                <New class="org.eclipse.jetty.jaas.JAASLoginService">
                    <Set name="name">karaf</Set>
                    <Set name="loginModuleName">karaf</Set>
                    <Set name="roleClassNames">
                        <Array type="java.lang.String">
                            <Item>org.apache.karaf.jaas.boot.principal.RolePrincipal</Item>
                        </Array>
                    </Set>
                </New>
            </Arg>
        </Call>
        <Call name="addBean">
            <Arg>
               <New class="org.eclipse.jetty.jaas.JAASLoginService">
                    <Set name="name">default</Set>
                    <Set name="loginModuleName">karaf</Set>
                    <Set name="roleClassNames">
                        <Array type="java.lang.String">
                            <Item>org.apache.karaf.jaas.boot.principal.RolePrincipal</Item>
                        </Array>
                    </Set>
                </New>
            </Arg>
        </Call>
    </Configure>
    

The configuration snippet above adds a connector that is protected by SSL on port 8443. You can test that the changes have succeeded by restarting Karaf, issuing the following curl command, and ensuring that a 2XX HTTP status code appears in the returned message.

curl -u admin:admin -v -k https://localhost:8443/restconf/modules

Security Considerations for Clustering

While OpenDaylight clustering provides many benefits including high availability, scale-out performance, and data durability, it also opens a new attack surface in the form of the messages exchanged between the various instances of OpenDaylight in the cluster. In the current OpenDaylight release, these messages are neither encrypted nor authenticated meaning that anyone with access to the management network where OpenDaylight exchanges these clustering messages can forge and/or read the messages. This means that if clustering is enabled, it is even more important that the management network be kept secure from any untrusted entities.

What to Do with OpenDaylight

OpenDaylight (ODL) is a modular open platform for customizing and automating networks of any size and scale.

The following section provides links to documentation with examples of OpenDaylight deployment use cases.

Note

If you are an OpenDaylight contributor, we encourage you to add links to documentation with examples of interesting OpenDaylight deployment use cases in this section.

How to Get Help

Users and developers can get support from the OpenDaylight community through the mailing lists, IRC and forums.

  1. Create your question on ServerFault or Stackoverflow with the tag #opendaylight.

    Note

    It is important to tag questions correctly to ensure that the questions reach individuals subscribed to the tag.

  2. Mail discuss@lists.opendaylight.org or dev@lists.opendaylight.org.

  3. Directly mail the PTL as indicated on the specific projects page.

  4. IRC: Connect to #opendaylight or #opendaylight-meeting channel on freenode. The Linux Foundation’s IRC guide may be helpful. You’ll need an IRC client, or can use the freenode webchat, or perhaps you’ll like IRCCloud.

  5. For infrastructure and release engineering queries, mail helpdesk@opendaylight.org. IRC: Connect to #lf-releng channel on freenode.

Developing Apps on the OpenDaylight controller

This section provides information that is required to develop apps on the OpenDaylight controller.

You can either develop apps within the controller using the model-driven SAL (MD-SAL) archetype or develop external apps and use the RESTCONF to communicate with the controller.

Overview

This section enables you to get started with app development within the OpenDaylight controller. In this example, you perform the following steps to develop an app.

  1. Create a local repository for the code using a simple build process.

  2. Start the OpenDaylight controller.

  3. Test a simple remote procedure call (RPC) which you have created based on the principle of hello world.

Prerequisites

This example requires the following.

  • A development environment with following set up and working correctly from the shell:

    • Maven 3.5.2 or later

    • Java 8-compliant JDK

    • An appropriate Maven settings.xml file. A simple way to get the default OpenDaylight settings.xml file is:

      cp -n ~/.m2/settings.xml{,.orig} ; wget -q -O - https://raw.githubusercontent.com/opendaylight/odlparent/master/settings.xml > ~/.m2/settings.xml
      

Note

If you are using Linux or Mac OS X as your development OS, your local repository is ~/.m2/repository. For other platforms the local repository location will vary.

Building an example module

To develop an app perform the following steps.

  1. Create an Example project using Maven and an archetype called the opendaylight-startup-archetype. If you are downloading this project for the first time, it will take some time to pull all the code from the remote repository.

    mvn archetype:generate -DarchetypeGroupId=org.opendaylight.archetypes -DarchetypeArtifactId=opendaylight-startup-archetype \
    -DarchetypeCatalog=remote -DarchetypeVersion=<VERSION>
    

    The correct VERSION depends on desired Simultaneous Release:

    Archetype versions (opendaylight-startup-archetype version per OpenDaylight Simultaneous Release):

    • Neon: 1.1.0

    • Neon SR1: 1.1.1

    • Neon SR2: 1.1.2

    • Neon SR3: 1.1.3

  2. Update the properties values as follows. Ensure that the values for the groupId and the artifactId are in lower case.

    Define value for property 'groupId': : org.opendaylight.example
    Define value for property 'artifactId': : example
    Define value for property 'version':  1.0-SNAPSHOT: : 1.0.0-SNAPSHOT
    Define value for property 'package':  org.opendaylight.example: :
    Define value for property 'classPrefix':  ${artifactId.substring(0,1).toUpperCase()}${artifactId.substring(1)}
    Define value for property 'copyright': : Copyright (c) 2015 Yoyodyne, Inc.
    
  3. Accept the default value of classPrefix that is, (${artifactId.substring(0,1).toUpperCase()}${artifactId.substring(1)}). The classPrefix creates a Java Class Prefix by capitalizing the first character of the artifactId.

    Note

    In this scenario, the classPrefix used is “Example”. Create a top-level directory for the archetype.

    ${artifactId}/
    example/
    cd example/
    api/
    artifacts/
    features/
    impl/
    karaf/
    pom.xml
    
  4. Build the example project.

    Note

    Depending on your development machine’s specification this might take a little while. Ensure that you are in the project’s root directory, example/, and then issue the build command, shown below.

    mvn clean install
    
  5. Start the example project for the first time.

    cd karaf/target/assembly/bin
    ls
    ./karaf
    
  6. Wait for the Karaf CLI to appear, as shown below. Then wait for OpenDaylight to fully load all the components; this can take a minute or two after the prompt appears. Check the CPU usage on your development machine, specifically the Java process, to see when it calms down.

    opendaylight-user@root>
    
  7. Verify that the "example" module is built by searching the log for an entry that includes ExampleProvider Session Initiated.

    log:display | grep Example
    
  8. Shut down OpenDaylight through the console by using the following command.

    shutdown -f
    

Defining a Simple Hello World RPC

  1. Build a hello example from the Maven archetype opendaylight-startup-archetype, same as above.
  2. Now view the entry point to understand where the log line came from. The entry point is in the impl project:

    impl/src/main/java/org/opendaylight/hello/impl/HelloProvider.java
    
  3. Add anything new that your implementation does in the HelloProvider.onSessionInitiated method. It is analogous to an Activator.

    @Override
    public void onSessionInitiated(ProviderContext session) {
        LOG.info("HelloProvider Session Initiated");
    }
    

Add a simple HelloWorld RPC API

  1. Navigate to the file.

    Edit
    api/src/main/yang/hello.yang
    
  2. Edit this file as follows. In the following example, we are adding the code in a YANG module to define the hello-world RPC:

    module hello {
        yang-version 1;
        namespace "urn:opendaylight:params:xml:ns:yang:hello";
        prefix "hello";
        revision "2015-01-05" {
            description "Initial revision of hello model";
        }
        rpc hello-world {
            input {
                leaf name {
                    type string;
                }
            }
            output {
                leaf greeting {
                    type string;
                }
            }
        }
    }
    
  3. Return to the hello/api directory and build your API as follows.

    cd ../../../
    mvn clean install
    

Implement the HelloWorld RPC API

  1. Define the HelloService, which is invoked through the hello-world API.

    cd ../impl/src/main/java/org/opendaylight/hello/impl/
    
  2. Create a new file called HelloWorldImpl.java and add in the code below.

    package org.opendaylight.hello.impl;
    
    import com.google.common.util.concurrent.ListenableFuture;
    import org.opendaylight.yang.gen.v1.urn.opendaylight.params.xml.ns.yang.hello.rev150105.HelloService;
    import org.opendaylight.yang.gen.v1.urn.opendaylight.params.xml.ns.yang.hello.rev150105.HelloWorldInput;
    import org.opendaylight.yang.gen.v1.urn.opendaylight.params.xml.ns.yang.hello.rev150105.HelloWorldOutput;
    import org.opendaylight.yang.gen.v1.urn.opendaylight.params.xml.ns.yang.hello.rev150105.HelloWorldOutputBuilder;
    import org.opendaylight.yangtools.yang.common.RpcResult;
    import org.opendaylight.yangtools.yang.common.RpcResultBuilder;
    
    public class HelloWorldImpl implements HelloService {
    
        @Override
        public ListenableFuture<RpcResult<HelloWorldOutput>> helloWorld(HelloWorldInput input) {
            HelloWorldOutputBuilder helloBuilder = new HelloWorldOutputBuilder();
            helloBuilder.setGreeting("Hello " + input.getName());
            return RpcResultBuilder.success(helloBuilder.build()).buildFuture();
        }
    }
    
  3. The HelloProvider.java file is in the current directory. Register the RPC that you created in the hello.yang file in HelloProvider.java. You can either edit HelloProvider.java to match the code below, or simply replace it with the code below.

    /*
     * Copyright(c) Yoyodyne, Inc. and others.  All rights reserved.
     *
     * This program and the accompanying materials are made available under the
     * terms of the Eclipse Public License v1.0 which accompanies this distribution,
     * and is available at http://www.eclipse.org/legal/epl-v10.html
     */
    package org.opendaylight.hello.impl;
    
    import org.opendaylight.controller.sal.binding.api.BindingAwareBroker.ProviderContext;
    import org.opendaylight.controller.sal.binding.api.BindingAwareBroker.RpcRegistration;
    import org.opendaylight.controller.sal.binding.api.BindingAwareProvider;
    import org.opendaylight.yang.gen.v1.urn.opendaylight.params.xml.ns.yang.hello.rev150105.HelloService;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;
    
    public class HelloProvider implements BindingAwareProvider, AutoCloseable {
    
        private static final Logger LOG = LoggerFactory.getLogger(HelloProvider.class);
        private RpcRegistration<HelloService> helloService;
    
        @Override
        public void onSessionInitiated(ProviderContext session) {
            LOG.info("HelloProvider Session Initiated");
            helloService = session.addRpcImplementation(HelloService.class, new HelloWorldImpl());
        }
    
        @Override
        public void close() throws Exception {
            LOG.info("HelloProvider Closed");
            if (helloService != null) {
                helloService.close();
            }
        }
    }
    
  4. Optionally, you can also build the Java classes which will register the new RPC. This is useful to test the edits you have made to HelloProvider.java and HelloWorldImpl.java.

    cd ../../../../../../../
    mvn clean install
    
  5. Return to the top level directory

    cd ../
    
  6. Build the entire hello project again, which will pick up the changes you have made and build them into your project:

    mvn clean install
    

Execute the hello project for the first time

  1. Run karaf

    cd ../karaf/target/assembly/bin
    ./karaf
    
  2. Wait for the project to load completely. Then view the log to see the loaded Hello Module:

    log:display | grep Hello
    

Test the hello-world RPC via REST

There are many ways to test your RPC. The following are some examples.

  1. Using the API Explorer through HTTP

  2. Using a browser REST client

Using the API Explorer through HTTP
  1. Navigate to the apidoc UI with your web browser.
    NOTE: In the apidoc URL, change localhost to the IP address or host name of your development machine.
  2. Select

    hello(2015-01-05)
    
  3. Select

    POST /operations/hello:hello-world
    
  4. Provide the required value.

    {"hello:input": { "name":"Your Name"}}
    
  5. Click the button.

  6. Enter the username and password; by default, the credentials are admin/admin.

  7. In the response body, you should see:

    {
      "output": {
        "greeting": "Hello Your Name"
      }
    }
    
Using a browser REST client
For example, use the following information in the Firefox plugin RESTClient (https://github.com/chao/RESTClient):
POST: http://192.168.1.43:8181/restconf/operations/hello:hello-world

Header:

Content-Type: application/json

Body:

{"input": {
    "name": "Andrew"
  }
}
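
Alternatively, you can exercise the same RPC from the command line with curl. The following is a minimal sketch assuming the controller is running on localhost with the default admin/admin credentials:

curl -u admin:admin -H "Content-Type: application/json" -X POST \
  -d '{"input": {"name": "Andrew"}}' \
  http://localhost:8181/restconf/operations/hello:hello-world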

Troubleshooting

If you get a response code 501 while attempting to POST /operations/hello:hello-world, check HelloProvider.java and make sure the helloService member is being set. If session.addRpcImplementation() is never invoked, the REST API cannot map the /operations/hello:hello-world URL to HelloWorldImpl.

OpenDaylight Contributor Guides

Documentation Guide

This guide provides details on how to contribute to the OpenDaylight documentation. OpenDaylight currently uses reStructuredText for documentation and Sphinx to build it. These documentation tools are widely used in open source communities to produce both HTML and PDF documentation and can be easily versioned alongside the code. reStructuredText also offers similar syntax to Markdown, which is familiar to many developers.

Style Guide

This section serves two purposes:

  1. A guide for those writing documentation.

  2. A guide for those reviewing documentation.

Note

When reviewing content, assuming that the content is usable, the documentation team is biased toward merging the content rather than blocking it due to relatively minor editorial issues.

Formatting Preferences

In general, when reviewing content, the documentation team ensures that it is comprehensible but tries not to be overly pedantic. Along those lines, while it is preferred that the following formatting preferences are followed, they are generally not an exclusive reason to give a “-1” reply to a patch in Gerrit:

  • No trailing whitespace

  • Line wrapping at something reasonable, that is, 72–100 characters

Key terms
  • Functionality: something useful a project provides abstractly

  • Feature: a Karaf feature that somebody could install

  • Project: a project within OpenDaylight; projects ship features to provide functionality

  • OpenDaylight: the software we release; use this in place of “OpenDaylight controller” or “the OpenDaylight controller”; do not use ODL or ODC

    • Since there is a controller project within OpenDaylight, using other terms is hard.

Common writing style mistakes
  • In per-project user documentation, you should never say git clone, but should assume people have downloaded and installed the controller per the getting started guide and start with feature:install <something>

  • Avoid statements which are true about part of OpenDaylight, but not generally true.

    • For example: “OpenDaylight is a NETCONF controller.” It is, but that is not all it is.

  • In general, developer documentation should target developers external to your project, so it should talk about what APIs you have and how they could be used. It should not document how to contribute to your project.

Grammar Preferences
  • Avoid contractions: Use “cannot” instead of “can’t”, “it is” instead of “it’s”, and so on.

Word Choice

Note

The following word choice guidelines apply when using these terms in text. If these terms are used as part of a URL, class name, or any instance where modifying the case would create issues, use the exact capitalization and spacing associated with the URL or class name.

  • ACL: not Acl or acl

  • API: not api

  • ARP: not Arp or arp

  • datastore: not data store, Data Store, or DataStore (unless it is a class/object name)

  • IPsec, not IPSEC or ipsec

  • IPv4 or IPv6: not Ipv4, Ipv6, ipv4, ipv6, IPV4, or IPV6

  • Karaf: not karaf

  • Linux: not LINUX or linux

  • NETCONF: not Netconf or netconf

  • Neutron: not neutron

  • OSGi: not osgi or OSGI

  • Open vSwitch: not OpenvSwitch, OpenVSwitch, or Open V Switch.

  • OpenDaylight: not Opendaylight, Open Daylight, or OpenDayLight.

    Note

    Also, avoid Opendaylight abbreviations like ODL and ODC.

  • OpenFlow: not Openflow, Open Flow, or openflow.

  • OpenStack: not Open Stack or Openstack

  • QoS: not Qos, QOS, or qos

  • RESTCONF: not Restconf or restconf

  • RPC not Rpc or rpc

  • URL not Url or url

  • VM: not Vm or vm

  • YANG: not Yang or yang

reStructuredText-based Documentation

When using reStructuredText, follow the Python documentation style guidelines. See: https://docs.python.org/devguide/documenting.html

One of the best references for reStructuredText syntax is the Sphinx Primer on reStructuredText.

To build and review the reStructuredText documentation locally, you must have the following packages installed locally:

  • python

  • python-tox

Note

Both packages should be available in most distribution package managers.
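
For example, on a Debian- or Ubuntu-based system the packages might be installed as follows (package names vary between distributions; installing tox with pip also works):

sudo apt-get install python python-tox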

Then simply run tox and open the HTML it produces in your favorite web browser as follows:

git clone https://git.opendaylight.org/gerrit/docs
cd docs
git submodule update --init
tox
firefox docs/_build/html/index.html
Directory Structure

The directory structure for the reStructuredText documentation is rooted in the docs directory inside the docs git repository.

Note

There are guides hosted directly in the docs git repository and there are guides hosted in remote git repositories. Documentation hosted in remote git repositories is generally provided for project-specific information.

For example, here is the directory layout as of June 28th, 2016:

$ tree -L 2
.
├── Makefile
├── conf.py
├── documentation.rst
├── getting-started-guide
│   ├── api.rst
│   ├── concepts_and_tools.rst
│   ├── experimental_features.rst
│   ├── index.rst
│   ├── installing_opendaylight.rst
│   ├── introduction.rst
│   ├── karaf_features.rst
│   ├── other_features.rst
│   ├── overview.rst
│   └── who_should_use.rst
├── index.rst
├── make.bat
├── opendaylight-with-openstack
│   ├── images
│   ├── index.rst
│   ├── openstack-with-gbp.rst
│   ├── openstack-with-ovsdb.rst
│   └── openstack-with-vtn.rst
└── submodules
    └── releng
        └── builder

The getting-started-guide and opendaylight-with-openstack directories correspond to two guides hosted in the docs repository, while the submodules/releng/builder directory houses documentation for the RelEng/Builder project.

Each guide includes an index.rst file, which uses a toctree directive that includes the other files associated with the guide. For example:

.. toctree::
   :maxdepth: 1

   getting-started-guide/index
   opendaylight-with-openstack/index
   submodules/releng/builder/docs/index

This example creates a table of contents on that page in which each entry is the root heading of the corresponding included file.

Note

When including .rst files using the toctree directive, omit the .rst file extension at the end of the file name.

Adding a submodule

If you want to import a project underneath the documentation project so that the docs can be kept in a separate repo, you can do it by using the git submodule add command as follows:

git submodule add -b master ../integration/packaging docs/submodules/integration/packaging
git commit -s

Note

Most projects will not want to use -b master, but instead use the branch ., which tracks whatever branch of the documentation project you happen to be on.

Unfortunately, -b . does not work, so you have to manually edit the .gitmodules file to add branch = . and then commit it. For example:

<edit the .gitmodules file>
git add .gitmodules
git commit --amend
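
If you prefer not to open an editor, the same edit can be made with git config; this is a sketch using the submodule path from the example above:

git config -f .gitmodules submodule.docs/submodules/integration/packaging.branch .
git add .gitmodules
git commit --amend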

When you are done, you should have a git commit similar to the following:

$ git show
commit 7943ce2cb41cd9d36ce93ee9003510ce3edd7fa9
Author: Daniel Farrell <dfarrell@redhat.com>
Date:   Fri Dec 23 14:45:44 2016 -0500

    Add Int/Pack to git submodules for RTD generation

    Change-Id: I64cd36ca044b8303cb7fc465b2d91470819a9fe6
    Signed-off-by: Daniel Farrell <dfarrell@redhat.com>

diff --git a/.gitmodules b/.gitmodules
index 91201bf6..b56e11c8 100644
--- a/.gitmodules
+++ b/.gitmodules
@@ -38,3 +38,7 @@
        path = docs/submodules/ovsdb
        url = ../ovsdb
        branch = .
+[submodule "docs/submodules/integration/packaging"]
+       path = docs/submodules/integration/packaging
+       url = ../integration/packaging
+       branch = master
diff --git a/docs/submodules/integration/packaging b/docs/submodules/integration/packaging
new file mode 160000
index 00000000..fd5a8185
--- /dev/null
+++ b/docs/submodules/integration/packaging
@@ -0,0 +1 @@
+Subproject commit fd5a81853e71d45945471d0f91bbdac1a1444386

As usual, you can push it to Gerrit with git review.

Important

It is critical that the Gerrit patch be merged before the git commit hash of the submodule changes. Otherwise, Gerrit is not able to automatically keep it up-to-date for you.

Documentation Layout and Style

As mentioned previously, OpenDaylight aims to follow the Python documentation style guidelines, which define a few types of sections:

# with overline, for parts
* with overline, for chapters
=, for sections
-, for subsections
^, for subsubsections
", for paragraphs

OpenDaylight documentation is organized around the following structure based on that recommendation:

docs/index.rst                 -> entry point
docs/____-guide/index.rst      -> part
docs/____-guide/<chapter>.rst  -> chapter

In the ____-guide/index.rst file, we use # with overline at the very top of the file to indicate that it is a part. Within each chapter file, we start the document with a section using * with overline to denote the chapter heading. Everything in the rest of the chapter should then use:

=, for sections
-, for subsections
^, for subsubsections
", for paragraphs
Referencing Sections

This section provides a quick primer for creating references in OpenDaylight documentation. For more information, refer to Cross-referencing documents.

Within a single document, you can reference another section simply by:

This is a reference to `The title of a section`_

Assuming that somewhere else in the same file there is a section title something like:

The title of a section
^^^^^^^^^^^^^^^^^^^^^^

It is typically better to use :ref: syntax and labels to provide links as they work across files and are resilient to sections being renamed. First, you need to create a label something like:

.. _a-label:

The title of a section
^^^^^^^^^^^^^^^^^^^^^^

Note

The underscore (_) before the label is required.

Then you can reference the section anywhere by simply doing:

This is a reference to :ref:`a-label`

or:

This is a reference to :ref:`a section I really liked <a-label>`

Note

When using :ref:-style links, you do not need a trailing underscore (_).

Because the labels have to be unique, a best practice is to prefix the labels with the project name to help share the label space; for example, use sfc-user-guide instead of just user-guide.

Troubleshooting
Nested formatting does not work

As stated in the reStructuredText guide, inline markup for bold, italic, and fixed-width font cannot be nested. Furthermore, inline markup cannot be mixed with hyperlinks, so you cannot have a link with bold text.

This is tracked in a Docutils FAQ question, but there is no clear current plan to fix this.

Make sure you have cloned submodules

If you see an error like this:

./build-integration-robot-libdoc.sh: line 6: cd: submodules/integration/test/csit/libraries: No such file or directory
Resource file '*.robot' does not exist.

It means that you have not pulled down the git submodule for the integration/test project. The fastest way to do that is:

git submodule update --init

In some cases, you might wind up with submodules which are somehow out-of-sync. In that case, the easiest way to fix them is to delete the submodules directory and then re-clone the submodules:

rm -rf docs/submodules/
git submodule update --init

Warning

These steps delete any local changes or information you made in the submodules, which would only occur if you manually edited files in that directory.
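
An alternative that lets Git handle the cleanup (assuming a reasonably recent Git version) is to deinitialize all submodules and initialize them again:

git submodule deinit -f --all
git submodule update --init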

Clear your tox directory and try again

Sometimes, tox will not detect when your requirements.txt file has changed and so will try to run things without the correct dependencies. This issue usually manifests as No module named X errors or an ExtensionError and can be fixed by deleting the .tox directory and building again:

rm -rf .tox
tox
Builds on Read the Docs

Read the Docs builds do not automatically clear the file structure between builds and clones. The result is that you may have to clean up state left behind by previous runs of the build script.

As an example, refer to the following patch: https://git.opendaylight.org/gerrit/41679

This patch fixed builds that were failing because they took too long while removing directories of generated javadoc files left over from previous runs.

Errors from Coala

As part of running tox, two environments run: coala which does a variety of reStructuredText (and other) linting, and docs, which runs Sphinx to build HTML and PDF documentation. You can run them independently by doing tox -ecoala or tox -edocs.
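
For example, to run only the linting or only the documentation build:

tox -ecoala   # reStructuredText (and other) linting only
tox -edocs    # Sphinx HTML/PDF build only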

The coala linter for reStructuredText is not always the most helpful in explaining why it failed, so here are some common errors. There should also be Jenkins Failure Cause Management rules that will highlight these for you.

Git Commit Message Errors

Coala checks that git commit messages adhere to the following rules:

  • Shortlog (1st line of commit message) is less than 50 characters

  • Shortlog (1st line of commit message) is in the imperative mood. For example, “Add foo unit test” is good, but “Adding foo unit test” is bad.

  • Body (all lines but 1st line of commit message) are less than 72 characters. Some exceptions seem to exist, such as for long URLs.

Some examples of those being logged are:

Project wide: | | [NORMAL] GitCommitBear: | | Shortlog of HEAD commit isn’t in imperative mood! Bad words are ‘Adding’

Project wide: | | [NORMAL] GitCommitBear: | | Body of HEAD commit contains too long lines. Commit body lines should not exceed 72 characters.
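
For reference, a commit created along the following lines (a made-up example) satisfies all three rules:

git commit -s -m "Add foo unit test" \
  -m "Explain why the test is needed, wrapping body lines at 72 characters."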

Error in “code-block” directive

If you see an error like this:

docs/gerrit.rst | 89| ···..·code-block::·bash | | [MAJOR] RSTcheckBear: | | (ERROR/3) Error in “code-block” directive:

It means that the relevant code-block is not valid for the language specified, in this case bash.

Note

If you do not specify a language, the default language is Python. If you do not want the code block to be parsed as any particular language, use the :: directive instead. For example:

::

    This is a code block that will not be parsed in any particular language

Project Documentation Requirements

Submitting Documentation Outlines (M2)
  1. Determine the features your project will have and which ones will be user-facing.

    • In general, a feature is user-facing if it creates functionality that a user would directly interact with.

    • For example, odl-openflowplugin-flow-services-ui is likely user-facing since it installs user-facing OpenFlow features, while odl-openflowplugin-flow-services is not because it provides only developer-facing features.

  2. Determine pieces of documentation that you need to provide based on the features your project will have and which ones will be user-facing.

    Note

    You might need to create multiple documents for the same kind of documentation. For example, the controller project will likely want to have a developer section for the config subsystem as well as for the MD-SAL.

  3. Clone the docs repo: git clone https://git.opendaylight.org/gerrit/docs

  4. For each piece of documentation find the corresponding template in the docs repo.

    • For user documentation: docs.git/docs/templates/template-user-guide.rst

    • For developer documentation: docs.git/docs/templates/template-developer-guide.rst

    • For installation documentation (if any): docs.git/docs/templates/template-install-guide.rst

    Note

    You can find the rendered templates here:

    <Feature> User Guide

    Refer to this template to identify the required sections and information that you should provide for a User Guide. The user guide should contain configuration, administration, management, usage, and troubleshooting sections for the feature.

    Overview

    Provide an overview of the feature and the use case. Also include the audience who will use the feature. For example, the audience can be network administrators, cloud administrators, network engineers, system administrators, and so on.

    <Feature> Architecture

    Provide information about feature components and how they work together. Also include information about how the feature integrates with OpenDaylight. An architecture diagram could help.

    Note

    Please do not include detailed internals that somebody using the feature would not care about. For example, the fact that there are four layers of APIs between a user command and a message being sent to a device is probably not useful to know unless they have some way to influence how those layers work and a reason to do so.

    Configuring <feature>

    Describe how to configure the feature or the project after installation. Configuration information could include day-one activities for a project such as configuring users, configuring clients/servers and so on.

    Administering or Managing <feature>

    Include related command reference or operations that you could perform using the feature. For example viewing network statistics, monitoring the network, generating reports, and so on.

    For example:

    To configure L2switch components perform the following steps.

    1. Step 1:

    2. Step 2:

    3. Step 3:

    Tutorials

    optional

    If there is only one tutorial, you can skip the “Tutorials” section and instead just lead with the single tutorial’s name. If you do, also increase the header level by one, i.e., replace the carets (^^^) with dashes (---) and the dashes with equals signs (===).

    <Tutorial Name>

    Ensure that the title starts with a gerund. For example using, monitoring, creating, and so on.

    Overview

    An overview of the use case.

    Prerequisites

    Provide any prerequisite information, assumed knowledge, or environment required to execute the use case.

    Target Environment

    Include any topology requirement for the use case. Ideally, provide an abstract visual layout of the network in diagrams and any other useful visual aids.

    Instructions

    A use case could be a set of configuration procedures. Including screenshots to demonstrate what is happening is especially useful. Ensure that you specify the procedures separately. For example:

    Setting up the VM

    To set up a VM perform the following steps.

    1. Step 1

    2. Step 2

    3. Step 3

    Installing the feature

    To install the feature perform the following steps.

    1. Step 1

    2. Step 2

    3. Step 3

    Configuring the environment

    To configure the system perform the following steps.

    1. Step 1

    2. Step 2

    3. Step 3

    <Feature> Developer Guide
    Overview

    Provide an overview of the feature, what logical functionality it provides, and why you might use it as a developer. To be clear, the target audience for this guide is a developer who will use the feature to build something separate, not somebody who will develop code for this feature itself.

    Note

    More so than with user guides, the guide may cover more than one feature. If that is the case, be sure to list all of the features this covers.

    <Feature> Architecture

    Provide information about feature components and how they work together. Also include information about how the feature integrates with OpenDaylight. An architecture diagram could help. This may be the same as the diagram used in the user guide, but it should likely be less abstract and provide more information that would be applicable to a developer.

    Key APIs and Interfaces

    Document the key things a user would want to use. For some features, there will only be one logical grouping of APIs. For others there may be more than one grouping.

    Assuming the API is MD-SAL- and YANG-based, the APIs will be available both via RESTCONF and via Java APIs. Giving a few examples using each is likely a good idea.

    API Group 1

    Provide a description of what the API does and some examples of how to use it.

    API Group 2

    Provide a description of what the API does and some examples of how to use it.

    API Reference Documentation

    Provide links to JavaDoc, REST API documentation, etc.

    <Feature> Installation Guide

    Note

    Only use this template if installation is more complicated than simply installing a feature in the Karaf distribution. Otherwise simply provide the names of all user-facing features in your M3 readout.

    This is a template for installing a feature or a project developed in the ODL project. The feature could be interfaces, protocol plug-ins, or applications.

    Overview

    Add an overview of the feature. Include an architecture diagram and the positioning of this feature in the overall controller architecture. Highlighting the feature in a different color within the overall architecture can help. Also describe whether the project is part of the OpenDaylight installation package or must be installed separately.

    Prerequisites for Installing <Feature>
    • Hardware Requirements

    • Software Requirements

    Preparing for Installation

    Include any pre-configuration, database, or other software downloads required to install <feature>.

    Installing <Feature>

    Include separate procedures for Windows and Linux, if applicable.

    Verifying your Installation

    Describe how to verify the installation.

    Troubleshooting

    optional

    Text goes here.

    Post Installation Configuration

    The Post Installation Configuration section must include any basic (must-do) procedures required to get started.

    Mandatory instructions to get started with the product:

    • Logging in

    • Getting Started

    • Integration points with controller

    Upgrading From a Previous Release

    Text goes here.

    Uninstalling <Feature>

    Text goes here.

  5. Copy the template into the appropriate directory for your project.

    • For user documentation: docs.git/docs/user-guide/${feature-name}-user-guide.rst

    • For developer documentation: docs.git/docs/developer-guide/${feature-name}-developer-guide.rst

    • For installation documentation (if any): docs.git/docs/getting-started-guide/project-specific-guides/${project-name}.rst

    Note

    These naming conventions are not set in stone, but are used to maintain a consistent document taxonomy. If these conventions are not appropriate or do not make sense for a document in development, use the convention that you think is more appropriate and the documentation team will review it and give feedback on the Gerrit patch.

  6. Edit the template to fill in the outline of what you will provide using the suggestions in the template. If you feel like a section is not needed, feel free to omit it.

  7. Link the template into the appropriate core .rst file.

    • For user documentation: docs.git/docs/user-guide/index.rst

    • For developer documentation: docs.git/docs/developer-guide/index.rst

    • For installation documentation (if any): docs.git/docs/getting-started-guide/project-specific-guides/index.rst

    • In each file, it should be pretty clear what line you need to add. In general, if you have an .rst file named project-name.rst, you include it by adding a new line with project-name, omitting the .rst at the end.

  8. Make sure the documentation project still builds.

  9. Commit and submit the patch.

    1. Commit using:

      git add --all && git commit -sm "Documentation outline for ${project-shortname}"
      
    2. Submit using:

      git review
      

      See the Git-review Workflow page if you do not have git-review installed.

  10. Wait for the patch to be merged or to get feedback

    • If you get feedback, make the requested changes and resubmit the patch.

    • When you resubmit the patch, it is helpful if you also post a “+0” reply to the patch in Gerrit, stating what patch set you just submitted and what you fixed in the patch set.

Expected Output From Documentation Project

The expected output is (at least) 3 PDFs and equivalent web-based documentation:

  • User/Operator Guide

  • Developer Guide

  • Installation Guide

These guides will consist of “front matter” produced by the documentation group and the per-project/per-feature documentation provided by the projects.

Note

This requirement is intended for the person responsible for the documentation and should not be interpreted as preventing people not normally in the documentation group from helping with front matter nor preventing people from the documentation group from helping with per-project/per-feature documentation.

Project Documentation Requirements
Content Types

These are the expected kinds of documentation and target audiences for each kind.

  • User/Operator: for people looking to use the feature without writing code

    • Should include an overview of the project/feature

    • Should include a description of available configuration options and what they do

  • Developer: for people looking to use the feature in code without modifying it

    • Should include API documentation, such as enunciate for REST, Javadoc for Java, ??? for RESTCONF/models

  • Contributor: for people looking to extend or modify the feature’s source code

    Note

    You can find this information on the wiki.

  • Installation: for people looking for instructions to install the feature after they have downloaded the ODL release

    Note

    The audience for this content is the same as User/Operator docs

    • For most projects, this will be just a list of top-level features and options

      • As an example, l2switch-switch as the top-level feature with the -rest and -ui options

      • Features should also note if the options should be checkboxes (that is, they can each be turned on/off independently) or a drop down (that is, at most one can be selected)

      • What other top-level features in the release are incompatible with each feature

      • This will likely be presented as a table in the documentation and the data will likely also be consumed by automated installers/configurators/downloaders

    • For some projects, there are extra installation instructions (for external components) and/or configuration

      • In that case, there will be a (sub)section in the documentation describing this process.

  • HowTo/Tutorial: walk-throughs and examples that are not general-purpose documentation

    • Generally, these should be done as a (sub)section of either user/operator or developer documentation.

    • If they are especially long or complex, they may belong on their own

  • Release Notes:

    • Release notes are required as part of each project’s release review. They must also be translated into reStructuredText for inclusion in the formal documentation.

Requirements for projects
  • Projects must provide reStructuredText documentation including:

    • Developer documentation for every feature

      • Most projects will want to logically nest the documentation for individual features under a single project-wide chapter or section

      • The feature documentation can be provided as a single .rst file or multiple .rst files if the features fall into different groups

      • Feature documentation should start with an approximately 300-word overview of the project and include references to any automatically-generated API documentation as well as more general developer information (see Content Types).

    • User/Operator documentation for every user-facing feature (if any)

      • This documentation should be per-feature, not per-project. Users should not have to know which project a feature came from.

      • Intimately related features can be documented together. For example, l2switch-switch, l2switch-switch-rest, and l2switch-switch-ui can be documented as one, noting the differences.

      • This documentation can be provided as a single .rst file or multiple .rst files if the features fall into different groups

    • Installation documentation

      • Most projects will simply provide a list of user-facing features and options. See Content Types above.

    • Release Notes (both on the wiki and reStructuredText) as part of the release review.

  • Documentation must be contributed to the docs repo (or possibly imported from the project’s own repo with tooling that is under development)

    • Projects may be encouraged to instead provide this from their own repository if the tooling is developed

    • Projects choosing to meet the requirement in this way must provide a patch to docs repo to import the project’s documentation

  • Projects must cooperate with the documentation group on edits and enhancements to documentation

Timeline for Deliverables from Projects
  • M2: Documentation Started

    The following tasks for documentation deliverables must be completed for the M2 readout:

    • The kinds of documentation that will be provided and for what features must be identified.

      Note

      Release Notes are not required until release reviews at RC2

    • The appropriate .rst files must be created in the docs repository (or their own repository if the tooling is available).

    • An outline for the expected documentation must be completed in those .rst files including the relevant (sub)sections and a sentence or two explaining what will be contained in these sections.

      Note

      If an outline is not provided, delivering actual documentation in the (sub)sections meets this requirement.

    • M2 readouts should include

      1. the list of kinds of documentation

      2. the list of corresponding .rst files and their location, including repo and path

      3. the list of commits creating those .rst files

      4. the current word counts of those .rst files (a sketch for generating these follows this list)

  • M3: Documentation Continues

    • The readout at M3 should include the word counts of all .rst files with links to commits

    • The goal is to have draft documentation complete at the M3 readout so that the documentation group can comment on it.

  • M4: Documentation Complete

    • All (sub)sections in all .rst files have complete, readable, usable content.

    • Ideally, there should have been some interaction with the documentation group about any suggested edits and enhancements

  • RC2: Release notes

    • Projects must provide release notes in .rst format pushed to integration (or locally in the project’s repository if the tooling is developed)
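
For the word counts requested in the M2 and M3 readouts, a simple sketch (assuming the outline files follow the naming conventions above and live in the docs repository) is:

wc -w docs/user-guide/*-user-guide.rst docs/developer-guide/*-developer-guide.rst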

OpenDaylight Release Process Guide

Overview

This guide provides details on the various release processes related to OpenDaylight. It documents the steps used by OpenDaylight release engineers when performing release operations.

Release Planning

Managed Release
Managed Release Summary

The Managed Release Process will facilitate timely, stable OpenDaylight releases by allowing the release team to focus on closely managing a small set of core OpenDaylight projects while not imposing undue requirements on projects that prefer more autonomy.

Managed Release Goals
Reduce Overhead on Release Team

The Managed Release Model will allow the release team to focus their efforts on a smaller set of more stable, more responsive projects.

Reduce Overhead on Projects

The Managed Release Model will reduce the overhead both on projects taking part in the Managed Release and Self-Managed Projects.

Managed Projects will have fewer, smaller checkpoints consisting of only information that is maximally helpful for driving the release process. Much of the information collected at checkpoints will be automatically scraped, requiring minimal to no effort from projects. Additionally, Managed Release projects should have a more stable development environment, as the projects that can break the jobs they depend on will be a smaller set, more stable and more responsive.

Projects that are Self-Managed will have less overhead and reporting. They will be free to develop in their own way, providing their artifacts to include in the final release or choosing to release on their own schedule. They will not be required to submit any checkpoints and will not be expected to work closely with the rest of the OpenDaylight community to resolve breakages.

Enable Timely Releases

The Managed Release Process will reduce the set of projects that must simultaneously become stable at release time. The release and test teams will be able to focus on orchestrating a quality release for a smaller set of more stable, more responsive projects. The release team will also have greater latitude to step in and help projects that are required for dependency reasons but may not have a large set of active contributors.

Managed Projects
Managed Projects Summary

Managed Projects are either required by most applications for dependency reasons or are mature, stable, responsive projects that are consistently able to take part in releases without jeopardizing them. Managed Projects will receive additional support from the test and release teams to further their stability and make sure OpenDaylight releases go out on-time. To enable this extra support, Managed Projects will be given less autonomy than OpenDaylight projects have historically been granted.

Managed Projects for Dependency Reasons

Some projects are required by almost all other OpenDaylight projects. These projects must be in the Managed Release for it to support almost every OpenDaylight use case. Such projects will not have a choice about being in the Managed Release; the TSC will decide they are critical to the OpenDaylight platform and include them. They may not always meet the requirements that would normally be imposed on projects that wish to join the Managed Release. Since they cannot be kicked out of the release, the TSC, test and release teams will do their best to help them meet the Managed Release Requirements. This may involve giving Linux Foundation staff temporary committer rights to merge patches on behalf of unresponsive projects, or appointing committers if projects continue to remain unresponsive. The TSC will strive to work with projects and member companies to proactively keep projects healthy and find active contributors who can become committers in the normal way without the need to appoint them in times of crisis.

Managed Release Integrated Projects

Some Managed Projects may decide to release on their own, not as a part of the Simultaneous Release with Snapshot Integrated Projects. Such Release Integrated projects will still be subject to Managed Release Requirements, but will need to follow a different release process.

For implementation reasons, the projects that are able to release independently must depend only on other projects that release independently. Therefore the Release Integrated Projects will form a tree starting from odlparent. Currently odlparent, yangtools and mdsal are the only Release Integrated Projects, but others may join them in the future.

Requirements for Managed Projects
Healthy Community

Managed Projects should strive to have a healthy community.

Responsiveness

Managed Projects should be responsive over email, IRC, Gerrit, Jira and in regular meetings.

All committers should be subscribed to their project’s mailing list and the release mailing list.

For the following particularly time-sensitive events, an appropriate response is expected within two business days.

  • RC or SR candidate feedback.

  • Major disruptions to other projects where a Jira weather item was not present and the pending breakage was not reported to the release mailing list.

If anyone feels that a Managed Project is not responsive, a grievance process is in place to clearly handle the situation and keep a record for future consideration by the TSC.

Active Committers

Managed Projects should have sufficient active committers to review contributions in a timely manner, support potential contributors, keep CSIT healthy and generally effectively drive the project.

If a project that the TSC deems critical to the Managed Release is shown not to have sufficient active committers, the TSC may step in and appoint additional committers. Projects that can be dropped from the Managed Release will be dropped instead of having additional committers appointed.

Managed Projects should regularly prune their committer list to remove inactive committers, following the Committer Removal Process.

TSC Attendance

Managed Projects are required to send a representative to attend TSC meetings.

To facilitate quickly acting on problems identified during TSC meetings, representatives must be a committer to the project they are representing. A single person can represent any number of projects.

Representatives will make the following entry into the meeting minutes to record their presence:

#project <project ID>

TSC minutes will be scraped per-release to gather attendance statistics. If a project does not provide a representative for at least half of TSC meetings a grievance will be filed for future consideration.

Checkpoints Submitted On-Time

Managed Projects must submit information required for checkpoints on-time. Submissions must be correct and adequate, as judged by the release team and the TSC. Inadequate or missing submissions will result in a grievance.

Jobs Required for Managed Projects Running

Managed Projects are required to have the following jobs running and healthy.

  • Distribution check job (voting)

  • Validate autorelease job (voting)

  • Merge job (non-voting)

  • Sonar job (non-voting)

  • CLM job (non-voting)

Depend only on Managed Projects

Managed Projects should only depend on other Managed Projects.

If a project wants to be Managed but depends on Self-Managed Projects, they should work with their dependencies to become Managed at the same time or drop any Self-Managed dependencies.

Documentation

Managed Projects are required to produce a user guide, developer guide and release notes for each release.

CLM

Managed Projects are required to handle CLM (Component Lifecycle Management) violations in a timely manner.

Managed Release Process
Managed Release Checkpoints

Checkpoints are designed to be mostly automated, to be maximally effective at driving the release process and to impose as little overhead on projects as possible.

There will be an initial checkpoint two weeks after the start of the release, a midway checkpoint one month before code freeze, and a final checkpoint at code freeze.

Initial Checkpoint

An initial checkpoint will be collected two weeks after the start of each release. The release team will review the information collected and report it to the TSC at the next TSC meeting.

Projects will need to create the following artifacts:

  • High-level, human-readable description of what the project plans to do in this release. This should be submitted as a Jira Project Plan issue against the TSC project.

    • Select your project in the ODL Project field

    • Select the release version in the ODL Release field

    • Select the appropriate value in the ODL Participation field: SNAPSHOT_Integrated (Managed) or RELEASE_Integrated (Managed)

    • Select the value Initial in the ODL Checkpoint field

    • In the Summary field, put something like: Project-X Fluorine Release Plan

    • In the Description field, fill in the details of your plan:

      This should list a high-level, human-readable summary of what a project
      plans to do in a release. It should cover the project's planned major
      accomplishments during the release, such as features, bugfixes, scale,
      stability or longevity improvements, additional test coverage, better
      documentation or other improvements. It may cover challenges the project
      is facing and needs help with from other projects, the TSC or the LFN
      umbrella. It should be written in a way that makes it amenable to use
      for external communication, such as marketing to users or a report to
      other LFN projects or the LFN Board.
      
  • If a project is transitioning from Self-Managed to Managed or applying for the first time into a Managed release, raise a Jira Project Plan issue against the TSC project highlighting the request.

    • Select your project in the ODL Project field

    • Select the release version in the ODL Release field

    • Select the NOT_Integrated (Self-Managed) value in the ODL Participation field

    • Select the appropriate value in the ODL New Participation field: SNAPSHOT_Integrated (Managed) or RELEASE_Integrated (Managed)

    • In the Summary field, put something like: Project-X joining/moving to Managed Release for Fluorine

    • In the Description field, fill in the details using the template below:

      Summary
      This is an example of a request for a project to move from Self-Managed
      to Managed. It should be submitted no later than the start of the
      release. The request should make it clear that the requesting project
      meets all of the Managed Release Requirements.
      
      Healthy Community
      The request should make it clear that the requesting project has a
      healthy community. The request may also highlight a history of having a
      healthy community.
      
      Responsiveness
      The request should make it clear that the requesting project is
      responsive over email, IRC, Jira and in regular meetings. All committers
      should be subscribed to the project's mailing list and the release
      mailing list. The request may also highlight a history of
      responsiveness.
      
      Active Committers
      The request should make it clear that the requesting project has a
      sufficient number of active committers to review contributions in a
      timely manner, support potential contributors, keep CSIT healthy and
      generally effectively drive the project. The requesting project should
      also make it clear that they have pruned any inactive committers. The
      request may also highlight a history of having sufficient active
      committers and few inactive committers.
      
      TSC Attendance
      The request should acknowledge that the requesting project is required
      to send a committer to represent the project to at least 50% of TSC
      meetings. The request may also highlight a history of sending
      representatives to attend TSC meetings.
      
      Checkpoints Submitted On-Time
      The request should acknowledge that the requesting project is required
      to submit checkpoints on time. The request may also highlight a history
      of providing deliverables on time.
      
      Jobs Required for Managed Projects Running
      The request should show that the requesting project has the required
      jobs for Managed Projects running and healthy. Links should be provided.
      
      Depend only on Managed Projects
      The request should show that the requesting project only depends on
      Managed Projects.
      
      Documentation
      The request should acknowledge that the requesting project is required
      to produce a user guide, developer guide and release notes for each
      release. The request may also highlight a history of providing quality
      documentation.
      
      CLM
      The request should acknowledge that the requesting project is required
      to handle Component Lifecycle Violations in a timely manner. The request
      should show that the project's CLM job is currently healthy. The request
      may also show that the project has a history of dealing with CLM
      violations in a timely manner.
      
  • If a project is transitioning from Managed to Self-Managed, raise a Jira Project Plan issue against the TSC project highlighting the request.

    • Select your project in the ODL Project field

    • Select the release version in the ODL Release field

    • Select the appropriate value in the ODL Participation field: SNAPSHOT_Integrated (Managed) or RELEASE_Integrated (Managed)

    • Select the NOT_Integrated (Self-Managed) value in the ODL New Participation field

    • In the Summary field, put something like: Project-X joining/moving to Self-Managed for Fluorine

    • In the Description field, fill in the details:

      This is a request for a project to move from Managed to Self-Managed. It
      should be submitted no later than the start of the release. The request
      does not require any additional information, but it may be helpful for
      the requesting project to provide some background and their reasoning.
      
  • Weather items that may impact other projects should be submitted as Jira issues. For a weather item, raise a Jira Weather Item issue against the TSC project highlighting the details.

    • Select your project in the ODL Project field

    • Select the release version in the ODL Release field

    • For the ODL Impacted Projects field, fill in the impacted projects using label values - each label value should correspond to the respective project prefix in Jira, e.g. netvirt is NETVIRT. If all projects are impacted, use the label value ALL.

    • Fill in the expected date of weather event in the ODL Expected Date field

    • Select the appropriate value for ODL Checkpoint (may skip)

    • In the Summary field, summarize the weather event

    • In the Description field, provide the details of the weather event. Provide as much relevant information as possible.

The remaining artifacts will be automatically scraped:

  • Blocker bugs that were raised between the previous code freeze and release.

  • Grievances raised against the project during the last release.

Midway Checkpoint

One month before code freeze, a midway checkpoint will be collected. The release team will review the information collected and report it to the TSC at the next TSC meeting. All information for midway checkpoint will be automatically collected.

  • Open Jira bugs marked as blockers.

  • Open Jira issues tracking weather items.

  • Statistics about jobs:

    • Autorelease failures per-project.

    • CLM violations.

  • Grievances raised against the project since the last checkpoint.

Since the midway checkpoint is fully automated, the release team may collect this information more frequently, to provide trends over time.

Final Checkpoint

Two weeks after code freeze, a final checkpoint will be collected by the release team and presented to the TSC at the next TSC meeting.

Projects will need to create the following artifacts:

  • High-level, human-readable description of what the project did in this release. This should be submitted as a Jira Project Plan issue against the TSC project. This will be reused for external communication/marketing for the release.

    • Select your project in the ODL Project field

    • Select the release version in the ODL Release field

    • Select the appropriate value in the ODL Participation field: SNAPSHOT_Integrated (Managed) or RELEASE_Integrated (Managed)

    • Select the value Final in the ODL Checkpoint field

    • In the Summary field, put something like: Project-X Fluorine Release details

    • In the Description field, fill in the details of your accomplishments:

      This should be a high-level, human-readable summary of what a project
      did during a release. It should cover the project's major
      accomplishments, such as features, bugfixes, scale, stability or
      longevity improvements, additional test coverage, better documentation
      or other improvements. It may cover challenges the project has faced
      and needs help in the future from other projects, the TSC or the LFN
      umbrella. It should be written in a way that makes it amenable to use
      for external communication, such as marketing to users or a report to
      other LFN projects or the LFN Board.
      
    • In the ODL Gerrit Patch field, fill in the Gerrit patch URL to your project release notes

  • Release notes, user guide, developer guide submitted to the docs project.

The remaining artifacts will be automatically scraped:

  • Open Jira bugs marked as blockers.

  • Open Jira issues tracking weather items.

  • Statistics about jobs:

    • Autorelease failures per-project.

  • Statistics about patches:

    • Number of patches submitted during the release.

    • Number of patches merged during the release.

    • Number of reviews per reviewer.

  • Grievances raised against the project since the start of the release.

Managed Release Integrated Release Process

Managed Projects that release independently (Release Integrated Projects), not as a part of the Simultaneous Release with Snapshot Integrated Projects, will need to follow a different release process.

Managed Release Integrated (MRI) Projects will provide the releases they want the Managed Snapshot Integrated (MSI) Projects to consume no later than two weeks after the start of the Managed Release. The TSC will decide by a majority vote whether to bump MSI versions to consume the new MRI releases. This should happen as early in the release as possible to get integration woes out of the way and allow projects to focus on developing against a stable base. If the TSC decides to consume the proposed MRI releases, all MSI Projects are required to bump to the new versions within a two-day window. If some projects fail to merge version bump patches in time, the TSC will instruct Linux Foundation staff to temporarily wield committer rights and merge version bump patches. The TSC may vote at any time to back out of a version bump if the new releases are found to be unsuitable.

MRI Projects are expected to provide bugfixes via minor or patch version updates during the release, but should strive to not expect MSI Projects to consume another major version update during the release.

MRI Projects are free to follow their own release cadence as they develop new features during the Managed Release. They need only have a stable version ready for the MSI Projects to consume by the next integration point.

Managed Release Integrated Checkpoints

The MRI Projects will follow similar checkpoints as the MSI Projects, but the timing will be different. At the time MRI Projects provide the releases they wish MSI Projects to consume for the next release, they will also provide their final checkpoints. Their midway checkpoints will be scraped one month before the deadline for them to deliver their artifacts to the MSI Projects. Their initial checkpoints will be due no later than two weeks following the deadline for their delivery of artifacts to the MSI Projects. Their initial checkpoints will cover everything they expect to do in the next Managed Release, which may encompass any number of major version bumps for the MRI Projects.

Moving a Project from Self-Managed to Managed

Self-Managed Projects can request to become Managed by submitting a Project_Plan issue to the TSC project in Jira. See details as described under the Initial Checkpoint section above. Requests should be submitted before the start of a release. The requesting project should make it clear that they meet the Managed Release Project Requirements.

The TSC will evaluate requests to become Managed and inform projects of the result and the TSC’s reasoning no later than the start of the release or one week after the request was submitted, whichever comes last.

For the first release, the TSC will bootstrap the Managed Release with projects that are critical to the OpenDaylight platform. Other projects will need to follow the normal application process defined above.

The following projects are deemed critical to the OpenDaylight platform:

  • aaa

  • controller

  • infrautils

  • mdsal

  • netconf

  • odlparent

  • yangtools

Self-Managed Projects

In general there are two types of Self-Managed (SM) projects:

  1. Self-Managed projects that want to participate in the formal (major or service) OpenDaylight release distribution. This section includes the requirements and release process for these projects.

  2. Self-Managed projects that want to manage their own release schedule or provide their release distribution and installation instructions by the time of the release. There are no specific requirements for these projects.

Requirements for SM projects participating in the release distribution
Use of SNAPSHOT versions

Self-Managed Projects can consume whichever version of their upstream dependencies they want during most of the release cycle, but if they want to be included in the formal (major or service) release distribution they must have their upstream versions bumped to SNAPSHOT and build successfully no later than one week before the first Managed release candidate (RC) is created. Since bumping and integrating with upstream takes time, it is strongly recommended that Self-Managed projects start this work early: no later than the midway checkpoint if they want to be in a major release, or by the previous release if they want to be in a service release (for example, by the major release date if they want to be in SR1).

Note

To help with the integration effort, the Weather Page includes API and other important changes during the release cycle. After the formal release, the release notes also include this information.

Add to Common Distribution

In order to be included in the formal (major or service) release distribution, Self-Managed Projects must be in the common distribution pom.xml file and the distribution sanity test (see Add Projects to distribution) no later than one week before the first Managed release candidate (RC) is created. Projects should only be added to the final distribution pom.xml after they have successfully published artifacts using upstream SNAPSHOTs. See Use of SNAPSHOT versions.

Note

It is very important that Self-Managed projects do not miss the deadlines for upstream integration and the final distribution check; otherwise, they are very likely to miss the formal release distribution. See Release the project artifacts.

Cut Stable Branch

Self-Managed projects wanting to use the existing release job to release their artifacts (see Release the project artifacts) must have a stable branch in the major release (fluorine, neon, etc.) they are targeting. It is highly recommended to cut the stable branch before the first Managed release candidate (RC) is created.

After creating the stable branch Self-Managed projects should:

  • Bump the master branch version to X.Y+1.0-SNAPSHOT, so that any new merge in master will not interfere with the newly created stable branch artifacts (see the sketch after this list).

  • Update .gitreview on the stable branch: change defaultbranch=master to the stable branch, so that anyone running "git review" gets the right branch.

  • Update their Jenkins jobs: the current release should point to the newly created stable branch and the next release should point to the master branch. If you do not know how to do this, please open a ticket with the OpenDaylight Helpdesk.
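
A rough sketch of the first two steps (the version and branch name are only examples; the exact mechanism each project uses to set versions may differ):

# On master: bump to the next minor development version
mvn versions:set -DnewVersion=0.9.0-SNAPSHOT -DgenerateBackupPoms=false

# On the stable branch: point .gitreview at the new branch
sed -i -e 's#defaultbranch=master#defaultbranch=stable/neon#' .gitreview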

Release the project artifacts

Self-Managed projects wanting to participate in the formal (major or service) release distribution must release and publish their artifacts to nexus in the week after the Managed release is published to nexus.

Self-Managed projects having a stable branch with the latest upstream SNAPSHOTs (see previous requirements) can use the release job in Project Standalone Release to release their artifacts.

After creating the release, Self-Managed projects should bump the stable branch version to X.Y.Z+1-SNAPSHOT, so that any new merge in the stable branch will not interfere with the release artifacts.

Note

Self-Managed Projects will not have any leeway for missing deadlines. If projects are not in the final distribution in the allocated time (normally one week) after the Managed projects release, they will not be included in the release distribution.

Checkpoints

There are no checkpoints for Self-Managed Projects.

Moving a Project from Managed to Self-Managed

Managed Projects that are not required for dependency reasons can submit a Project_Plan issue to be Self-Managed to the TSC project in Jira. See details in the Initial Checkpoint section above. Requests should be submitted before the start of a release. Requests will be evaluated by the TSC.

The TSC may withdraw a project from the Managed Release at any time.

Installing Features from Self-Managed Projects

Self-Managed Projects will have their artifacts included in the final release if they are available on time, but they will not be installable until the user does a feature:repo-add.

To install a Self-Managed Project feature, find the feature description in the system directory. For example, NetVirt’s main feature:

system/org/opendaylight/netvirt/odl-netvirt-openstack/0.6.0-SNAPSHOT/

Then use the Karaf shell to repo:add the feature:

feature:repo-add mvn:org.opendaylight.netvirt/odl-netvirt-openstack/0.6.0-SNAPSHOT/xml/features
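
Once the repository has been added, the feature itself can be installed from the same Karaf shell, for example:

feature:install odl-netvirt-openstack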

Grievances

For requirements where it is difficult to automatically ascertain whether a Managed Project is complying, there should be a clear reporting process.

Grievance reports should be filed against the TSC project in Jira. Very urgent grievances can additionally be brought to the TSC’s attention via the TSC’s mailing list.

Process for Reporting Unresponsive Projects

If a Managed Project does not meet the Responsiveness Requirements, a Grievance issue should be filed against the TSC project in Jira.

Unresponsive project reports should include (at least):

  • Select the project being reported in the ODL_Project field

  • Select the release version in the ODL_Release field

  • In the Summary field, put something like: Grievance against Project-X

  • In the Description field, fill in the details:

    Document the details that show ExampleProject was slow to review a change.
    The report should include as much relevant information as possible,
    including a description of the situation, relevant Gerrit change IDs and
    relevant public email list threads.
    
  • In the ODL_Gerrit_Patch field, put a URL to the Gerrit patch, if applicable

Vocabulary Reference
  • Managed Release Process: The release process described in this document.

  • Managed Project: A project taking part in the Managed Release Process.

  • Self-Managed Project: A project not taking part in the Managed Release Process.

  • Simultaneous Release: Event wherein all Snapshot Integrated Project versions are rewritten to release versions and release artifacts are produced.

  • Snapshot Integrated Project: Project that integrates with OpenDaylight projects using snapshot version numbers. These projects release together in the Simultaneous Release.

  • Release Integrated Project: Project that releases independently of the Simultaneous Release. These projects are consumed by Snapshot Integrated Projects based on release version numbers, not snapshot versions.

Release Schedule

OpenDaylight releases twice per year. The six-month cadence is designed to synchronize OpenDaylight releases with OpenStack and OPNFV releases. Dates are adjusted to match current resources and requirements from the current OpenDaylight users. Dates are also adjusted when they conflict with holidays, overlap with other releases or are otherwise problematic. Dates include the release of both managed and self-managed projects.

Each event is listed as "Event: Neon date (relative date; date relative to Start Date)", followed by its description.

Release Start: 2018-09-06 (Start Date; Start Date +0)
  Declare Intention: Submit Project_Plan Jira item in TSC project.

Initial Checkpoint: 2018-09-20 (Start Date +2 weeks)
  Initial Checkpoint. All Managed Projects must have completed Project_Plan Jira items in TSC project.

Release Integrated Deadline: 2018-10-04 (Initial Checkpoint +2 weeks; Start Date +4 weeks)
  Deadline for Release Integrated Projects (currently ODLPARENT, YANGTOOLS and MDSAL) to provide the desired version deliverables for downstream Snapshot Integrated Projects to consume. For Sodium, this is +1 more week to resolve a conflict with ONS NA 2019.

Version Bump: 2018-10-05 (Release Integrated Deadline +1 day; Start Date +4 weeks 1 day)
  Prepare version bump patches and merge them in (RelEng team). Spend the next 2 weeks getting a green build for all MSI Projects and a healthy distribution.

Version Bump Checkpoint: 2018-10-18 (Release Integrated Deadline +2 weeks; Start Date +6 weeks)
  Check the status of MSI Projects to see if we have green builds and a healthy distribution. Revert the MRI deliverables if deemed necessary.

CSIT Checkpoint: 2018-11-01 (Version Bump Checkpoint +2 weeks; Start Date +8 weeks)
  All Managed Release CSIT should be in good shape - get all MSI Projects' CSIT results as they were before the version bump. This is the final opportunity to revert the MRI deliverables if deemed necessary.

Middle Checkpoint: 2019-01-10 (CSIT Checkpoint +8 weeks; Start Date +16 weeks; sometimes +2 weeks to avoid December holidays)
  Checkpoint for the status of Managed Projects - especially Snapshot Integrated Projects.

Code Freeze: 2019-01-24 (Middle Checkpoint +4 weeks; Start Date +20 weeks)
  Code freeze for all Managed Projects - cut and lock the release branch. Only allow blocker bugfixes in the release branch.

Final Checkpoint: 2019-02-07 (Code Freeze +2 weeks; Start Date +22 weeks)
  Final Checkpoint for all Managed Projects.

Formal Release: 2019-03-25 (Start Date +6 months)
  Formal release.

Service Release 1: 2019-05-16 (Formal Release +1.5 months; Start Date +7.5 months)
  Service Release 1 (SR1).

Service Release 2: 2019-09-06 (SR1 +3 months; Start Date +10.5 months)
  Service Release 2 (SR2).

Service Release 3: 2019-12-05 (SR2 +4 months; Start Date +14 months)
  Service Release 3 (SR3) - Final Service Release.

Service Release 4: N/A (not available anymore)
  Service Release 4 (SR4) - N/A.

Release End of Life: 2020-03-25 (SR3 +4 months; Start Date +18 months)
  End of Life - coincides with the Formal Release of the current release+2 versions and the start of the current release+3 versions.

Processes

Project Standalone Release

This page explains how a project can release independently outside of the OpenDaylight simultaneous release.

Preparing your project for release

A project can produce a staging repository by using one of the following methods against the {project-name}-maven-stage-{stream} job:

  • Leave a comment stage-release against any patch for the stream to build

  • Click Build with Parameters in Jenkins Web UI for the job

This job performs the following duties:

  1. Removes -SNAPSHOT from all pom files

  2. Produces a taglist.log, project.patch, and project.bundle files

  3. Runs a mvn clean deploy to a local staging repo

  4. Pushes the staging repo to a Nexus staging repo https://nexus.opendaylight.org/content/repositories/<REPO_ID> (REPO_ID is saved to staging-repo.txt on the log server)

  5. Archives taglist.log, project.patch, and project.bundle files to log server

The taglist.log and project.bundle files can be used later, at release time, to reproduce a byte-exact copy of the commit that was built by the Jenkins job and to tag the release.
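
For example, one possible way to reproduce that commit locally (a sketch; the actual paths, refs and SHA come from the job output) is to fetch the bundle and check out the SHA recorded in taglist.log:

git clone https://git.opendaylight.org/gerrit/<project>
cd <project>
# See which refs the bundle carries, then fetch the one you need
git bundle list-heads /path/to/project.bundle
git fetch /path/to/project.bundle <ref-from-list-heads>
# Check out the exact commit recorded in taglist.log
git checkout <sha-from-taglist.log>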

Releasing your project

Once testing against the staging repo has been completed and the project has determined that the staged repo is ready for release, a release can then be performed using the self-serve release process: https://docs.releng.linuxfoundation.org/projects/global-jjb/en/latest/jjb/lf-release-jobs.html

  1. Ask Helpdesk for the necessary rights on Jenkins if you do not have them

  2. Log on to https://jenkins.opendaylight.org/

  3. Choose your project dashboard

  4. Check that your release branch has been successfully staged and note the corresponding log folder

  5. Go back to the dashboard and choose the release-merge job

  6. Click on Build with Parameters

  7. Fill in the form:

  • GERRIT_BRANCH must be changed to the branch name you want to release (e.g. stable/sodium)

  • VERSION with your corresponding project version (e.g. 0.4.1)

  • LOG_DIR with the relative path of the log from the stage release job (e.g. project-maven-stage-master/17/)

  • Choose maven as the DISTRIBUTION_TYPE in the select box

  • Uncheck the USE_RELEASE_FILE box

  8. Launch the Jenkins job

This job performs the following duties:

  • Download and patch your project repository

  • Build the project

  • Publish the artifacts on Nexus

  • Tag and sign the release on Gerrit

Autorelease

The Release Engineering - Autorelease project is targeted at building the artifacts that are used in the release candidates and final full release.

Cloning Autorelease

To clone the autorelease repo including its submodules, simply run the clone command with the --recursive parameter.

git clone --recursive https://git.opendaylight.org/gerrit/releng/autorelease

If you forgot to add the --recursive parameter to your git clone, you can pull the submodules afterwards with the following commands.

git submodule init
git submodule update
Creating Autorelease - Release and RC build

An autorelease release build comes from the autorelease-release-<branch> job which can be found on the autorelease tab in the releng master:

For example, to create a Boron release candidate build, launch a build from the autorelease-release-boron job by clicking the Build with Parameters button on the left-hand menu:

Note

The only field that needs to be filled in is RELEASE_TAG; leave all other fields at their default settings. Set this to Boron, Boron-RC0, Boron-RC1, etc., depending on the build you'd like to create.

Adding Autorelease staging repo to settings.xml

If you are building or testing this release in a way that requires pulling some of the artifacts from the Nexus repo, you may need to modify your settings.xml to include the staging repo URL, as this URL is not part of ODL Nexus' public or snapshot groups. If you've already cloned the recommended settings.xml for building ODL, you will need to add an additional profile and activate it by adding these sections to the "<profiles>" and "<activeProfiles>" sections (please adjust accordingly).

Note

  • This is an example; you need to add these example sections to your settings.xml, not delete your existing sections.

  • The URLs in the <repository> and <pluginRepository> sections will also need to be updated with the staging repo you want to test.

<profiles>
  <profile>
    <id>opendaylight-staging</id>
    <repositories>
      <repository>
        <id>opendaylight-staging</id>
        <name>opendaylight-staging</name>
        <url>https://nexus.opendaylight.org/content/repositories/automatedweeklyreleases-1062</url>
        <releases>
          <enabled>true</enabled>
          <updatePolicy>never</updatePolicy>
        </releases>
        <snapshots>
          <enabled>false</enabled>
        </snapshots>
      </repository>
    </repositories>
    <pluginRepositories>
      <pluginRepository>
        <id>opendaylight-staging</id>
        <name>opendaylight-staging</name>
        <url>https://nexus.opendaylight.org/content/repositories/automatedweeklyreleases-1062</url>
        <releases>
          <enabled>true</enabled>
          <updatePolicy>never</updatePolicy>
        </releases>
        <snapshots>
          <enabled>false</enabled>
        </snapshots>
      </pluginRepository>
    </pluginRepositories>
  </profile>
</profiles>

<activeProfiles>
  <activeProfile>opendaylight-staging</activeProfile>
</activeProfiles>
Project lifecycle

This page documents the current rules to follow when adding a particular project to, or removing it from, the Simultaneous Release (SR).

List of states for projects in autorelease

The state names are short negative phrases describing what is missing to progress to the following state.

  • non-existent The project is not recognized by Technical Steering Committee (TSC) to be part of OpenDaylight (ODL).

  • non-participating The project is recognized by the TSC to be an ODL project, but the project has not confirmed participation in SR for the given release cycle.

  • non-building The recognized project is willing to participate, but its current codebase is not passing its own merge job, or the project artifacts are otherwise unavailable in Nexus.

  • not-in-autorelease Project merge job passes, but the project is not added to autorelease (git submodule, maven module, validate-autorelease job passes).

  • failing-autorelease The project is added to autorelease (git submodule, maven module, validate-autorelease job passes), but autorelease build fails when building project’s artifact. Temporary state, timing out into not-in-autorelease.

  • repo-not-in-integration Project is successfully built within autorelease, but integration/distribution:features-index is not listing all its public feature repositories.

  • feature-not-in-integration Feature repositories are referenced, distribution-check job is passing, but some user-facing features are absent from integration/distribution:features-test (possibly because adding them does not pass distribution SingleFeatureTest).

  • distribution-check-not-passing Features are in distribution, but distribution-check job is either not running, or it is failing for any reason. Temporary state, timing out into feature-not-in-integration.

  • feature-is-experimental All user-facing features are in features-test, but at least one of the corresponding functional CSIT jobs does not meet Integration/Test requirements.

  • feature-is-not-stable Feature does meet Integration/Test requirements, but it does not meet all requirements for stable features.

  • feature-is-stable

Note

A project may change its state in both directions. This list is to make sure a project is not left in an invalid state, for example the distribution referencing feature repositories without passing the distribution-check job.

Note

Projects can participate in Simultaneous Release even if they are not included in autorelease. Nitrogen example: Odlparent. FIXME: Clarify states for such projects (per version, if they released multiple times within the same cycle).

Branch Cutting

This page documents the branch cutting tasks that need to be performed at RC0; the team with the necessary permissions to perform each task is noted in parentheses.

JJB (releng/builder)
  1. Export ${NEXT_RELEASE} and ${CURR_RELEASE} with new and current release names. (releng/builder committers)

    export NEXT_RELEASE="Neon"
    export CURR_RELEASE="Fluorine"
    
  2. Change the JJB yaml files' stream: fluorine branch pointer from master -> stable/${CURR_RELEASE,,} and create a new stream: ${NEXT_RELEASE,,} branch pointer to the master branch. This requires handling two different file formats interspersed within the autorelease projects. (releng/builder committers)

    stream:
      - Neon:
          branch: master
      - Fluorine:
          branch: stable/fluorine
    
    - project:
        name: aaa-neon
        jobs:
          - '{project-name}-verify-{stream}-{maven}-{jdks}'
        stream: neon
        branch: master
    
    • The above manual process of updating individual files is automated with the following script. (releng/builder committers)

      cd builder/scripts/branch_cut
      ./branch_cutter.sh -n $NEXT_RELEASE -c $CURR_RELEASE
      
  3. Review and submit the changes to releng/builder project. (releng/builder committers)

Autorelease
  1. Block submit permissions for registered users and elevate RE’s committer rights on gerrit. (Helpdesk)

    _images/gerrit-update-committer-rights.png

    Note

    Enable Exclusive checkbox for the submit button to override any existing permissions.

  2. Enable create reference permissions on gerrit for RE’s to submit .gitreview patches. (Helpdesk)

    _images/gerrit-update-create-reference.png

    Note

    Enable the Exclusive checkbox to override any existing permissions.

  3. Start the branch cut job or use the manual steps below for branch cutting autorelease. (Release Engineering Team)

  4. Start the version bump job or use the manual steps below for version bump autorelease. (Release Engineering Team)

  5. Merge all .gitreview patches submitted through the job or manually. (Release Engineering Team)

  6. Remove create reference permissions set on gerrit for RE’s. (Helpdesk)

  7. Merge all version bump patches in the order of dependencies. (Release Engineering Team)

  8. Re-enable submit permissions for registered users and disable elevated RE committer rights on gerrit. (Helpdesk)

  9. Notify release list on branch cutting work completion. (Release Engineering Team)

Branch cut job (Autorelease)

Branch cutting can be performed either through the job or manually.

  1. Start the autorelease-branch-cut job (Release Engineering Team)

Manual steps to branch cut (Autorelease)
  1. Setup releng/autorelease repository. (Release Engineering Team)

    git review -s
    git submodule foreach 'git review -s'
    git checkout master
    git submodule foreach 'git checkout master'
    git pull --rebase
    git submodule foreach 'git pull --rebase'
    
  2. Enable create reference permissions on gerrit for RE’s to submit .gitreview patches. (Helpdesk)

    _images/gerrit-update-create-reference.png

    Note

    Enable the Exclusive checkbox to override any existing permissions.

  3. Create stable/${CURR_RELEASE} branches based on HEAD master. (Release Engineering Team)

    git checkout -b stable/${CURR_RELEASE,,} origin/master
    git submodule foreach 'git checkout -b stable/${CURR_RELEASE,,} origin/master'
    git push gerrit stable/${CURR_RELEASE,,}
    git submodule foreach 'git push gerrit stable/${CURR_RELEASE,,}'
    
  4. Contribute .gitreview updates to stable/${CURR_RELEASE,,}. (Release Engineering Team)

    git submodule foreach sed -i -e "s#defaultbranch=master#defaultbranch=stable/${CURR_RELEASE,,}#" .gitreview
    git submodule foreach git commit -asm "Update .gitreview to stable/${CURR_RELEASE,,}"
    git submodule foreach 'git review -t ${CURR_RELEASE,,}-branch-cut'
    sed -i -e "s#defaultbranch=master#defaultbranch=stable/${CURR_RELEASE,,}#" .gitreview
    git add .gitreview
    git commit -s -v -m "Update .gitreview to stable/${CURR_RELEASE,,}"
    git review -t  ${CURR_RELEASE,,}-branch-cut
    
Version bump job (Autorelease)

Version bump can be performed either through the job or manually.

  1. Start the autorelease-version-bump-${NEXT_RELEASE,,} job (Release Engineering Team)

    Note

    Enable BRANCH_CUT and disable DRY_RUN to run the job for the branch cut workflow. The version bump job can be run only on the master branch.

Manual steps to version bump (Autorelease)
  1. Version bump master by x.(y+1).z. (Release Engineering Team)

    git checkout master
    git submodule foreach 'git checkout master'
    pip install lftools
    lftools version bump ${CURR_RELEASE}
    
  2. Make sure the version bump changes do not modify anything under scripts or pom.xml. (Release Engineering Team)

    git checkout pom.xml scripts/
    
  3. Push version bump master changes to gerrit. (Release Engineering Team)

    git submodule foreach 'git commit -asm "Bump versions by x.(y+1).z for next dev cycle"'
    git submodule foreach 'git review -t ${CURR_RELEASE,,}-branch-cut'
    
  4. Merge the patches in order according to the merge-order.log file found in autorelease jobs. (Release Engineering Team)

    Note

    The version bump patches can be merged more quickly by performing a local build with mvn clean deploy -DskipTests to prime Nexus with the new version updates.

Documentation post branch tasks
  1. Git remove all files/directories from the docs/release-notes/* directory. (Release Engineering Team)

git checkout master
git rm -rf docs/release-notes/<project file and/or folder>
git commit -sm "Reset release notes for next dev cycle"
git review
Simultaneous Release

This page explains how the OpenDaylight release process works once the TSC has approved a release.

Code Freeze

At the first Release Candidate (RC) the Submit button is disabled on the stable branch to prevent projects from merging non-blocking patches into the release.

  1. Disable Submit for Registered Users and allow permission to the Release Engineering Team (Helpdesk)

    _images/gerrit-update-committer-rights.png

    Important

    DO NOT enable Code-Review+2 and Verified+1 for the Release Engineering Team during code freeze.

    Note

    Enable the Exclusive checkbox for the submit button to override any existing permissions. Code-Review and Verify permissions are only needed during version bumping.

Release Preparations

After the release candidate is built, GPG-sign the artifacts using the lftools sign command.

STAGING_REPO=autorelease-1903
STAGING_PROFILE_ID=abc123def456  # This Profile ID is listed in Nexus > Staging Profiles
lftools sign deploy-nexus https://nexus.opendaylight.org $STAGING_REPO $STAGING_PROFILE_ID

Verify the distribution-karaf file with the signature.

gpg2 --verify karaf-x.y.z-${RELEASE}.tar.gz.asc karaf-x.y.z-${RELEASE}.tar.gz

Note

Projects such as OpFlex participate in the Simultaneous Release but are not part of the autorelease build. Ping those projects and prep their staging repos as well.

Releasing OpenDaylight

The following describes the Simultaneous Release process for shipping out the binary and source code on release day.

Bulleted actions can be performed in parallel while numbered actions should be done in sequence.

  • Release the Nexus Staging repos (Helpdesk)

    1. Select both the artifacts and signature repos (created previously) and click Release.

    2. Enter Release OpenDaylight $RELEASE for the description and click confirm.

    Perform this step for any additional projects that are participating in the Simultaneous Release but are not part of the autorelease build.

    Tip

    This task takes hours to run so kicking it off early is a good idea.

  • Version bump for next dev cycle (Release Engineering Team)

    1. Run the autorelease-version-bump-${STREAM} job

      Tip

      This task takes hours to run so kicking it off early is a good idea.

    2. Enable Code-Review+2 and Verify+1 voting permissions for the Release Engineering Team (Helpdesk)

      _images/gerrit-update-committer-rights.png

      Note

      Enable the Exclusive checkbox for the submit button to override any existing permissions. Code-Review and Verify permissions are only needed during version bumping. DO NOT enable them during code freeze.

    3. Merge all patches generated by the job

    4. Restore Gerrit permissions for Registered Users and disable elevated Release Engineering Team permissions (Helpdesk)

  • Tag the release (Release Engineering Team)

    1. Install lftools

      lftools contains the version bumping scripts we need to version bump and tag the dev branches. We recommend using a virtualenv for this.

      # Skip mkvirtualenv if you already have an lftools virtualenv
      mkvirtualenv lftools
      workon lftools
      pip install --upgrade lftools
      
    2. Pull latest autorelease repository

      export RELEASE=Nitrogen-SR1
      export STREAM=${RELEASE//-*}
      export BRANCH=origin/stable/${STREAM,,}
      
      # No need to clean if you have already done it.
      git clone --recursive https://git.opendaylight.org/gerrit/releng/autorelease
      cd autorelease
      git fetch origin
      
      # Ensure we are on the right branch. Note that we are wiping out all
      # modifications in the repo so backup unsaved changes before doing this.
      git checkout -f
      git checkout ${BRANCH,,}
      git clean -xdff
      git submodule foreach git checkout -f
      git submodule foreach git clean -xdff
      git submodule update --init
      
      # Ensure git review is setup
      git review -s
      git submodule foreach 'git review -s'
      
    3. Publish release tags

      export BUILD_NUM=55
      export OPENJDKVER="openjdk8"
      export PATCH_URL="https://logs.opendaylight.org/releng/vex-yul-odl-jenkins-1/autorelease-release-${STREAM,,}-mvn35-${OPENJDKVER}/${BUILD_NUM}/patches.tar.gz"
      ./scripts/release-tags.sh "${RELEASE}" /tmp/patches "$PATCH_URL"
      
  • Notify Community and Website teams

    1. Update downloads page

      Submit a patch to the ODL docs project to update the downloads page with the latest binaries and packages (Release Engineering Team)

    2. Email dev/release/tsc mailing lists announcing release binaries location (Release Engineering Team)

    3. Email dev/release/tsc mailing lists to notify of tagging and version bump completion (Release Engineering Team)

      Note

      This step is performed after Version Bump and Tagging steps are complete.

  • Generate Service Release notes

    Warning

    If this is a major release (e.g. Neon) as opposed to a Service Release (e.g. Neon-SR1), skip this step.

    For major releases, the notes come from the projects themselves in the docs repo via the docs/release-notes/projects directory.

    For service releases (SRs) we need to generate service release notes. This can be performed by running the autorelease-generate-release-notes-$STREAM job.

    1. Run the autorelease-generate-release-notes-${STREAM} job (Release Engineering Team)

      Trigger this job by leaving a Gerrit comment generate-release-notes Carbon-SR2

    Release notes can also be manually generated with the script:

    git checkout stable/${STREAM,,}
    ./scripts/release-notes-generator.sh ${RELEASE}
    

    A release-notes.rst will be generated in the working directory. Submit this file as release-notes-sr1.rst (update the sr as necessary) to the docs project.

Super Committers

Super committers are a group of TSC-approved individuals within the OpenDaylight community with the power to merge patches on behalf of projects during approved Release Activities.

Super Committer Activities

Super committers are given super committer powers ONLY during TSC-approved activities; these powers are not active on a regular basis. Once one of the TSC-approved activities is triggered, Helpdesk will enable the permissions listed for the respective activity for the duration of that activity.

Code Freeze

Note

This activity has been pre-approved by the TSC and does not require a TSC vote. Helpdesk should be notified to enable the permissions and again to disable the permissions once activities are complete.

Super committers are granted powers to merge blocking patches for the duration of code freeze, until a release is approved and the code freeze is lifted. This permission is only granted for the specific branch that is frozen.

The following powers are granted:

  • Submit button access

During this time Super Committers can ONLY merge patches that have a +2 Code-Review from a project committer approving the merge and that pass the Jenkins Verify check. If either of these conditions is not met, DO NOT merge the patch.

Version bumping

Note

This activity has been pre-approved by the TSC and does not require a TSC vote. Helpdesk should be notified to enable the permissions and again to disable the permissions once activities are complete.

Super committers are granted powers to merge version bump related patches for the duration of version bumping activities.

The following powers are granted:

  • Vote Code-Review +2

  • Vote Verified +1

  • Remove Reviewer

  • Submit button access

These permissions are granted to allow super committers to push through version bump patches with haste. The Remove Reviewer permission is to be used only for removing a Jenkins vote caused by a failed distribution-check job, if that failure is caused by a temporary version inconsistency present while the bump activity is being performed.

Version bumping activities come in 2 forms.

  1. Post-release Autorelease version bumping

  2. MRI project version bumping

In case 1, the TSC has approved an official OpenDaylight release; after the binaries are released to the world, all Autorelease-managed projects are version bumped to the next development release number.

In case 2, at the Release Integrated Deadline of the release schedule, MRI projects submit their desired version updates. Once approved by the TSC, Super Committers can merge these patches across the projects.

Ideally, version bumping activities should not include code modifications; if they do, a +2 Code-Review vote should be completed by a committer on the project to indicate that they approve the code changes.

Once version bump patches are merged these permissions are removed.

Exceptional cases

Any activities not in the list above fall under the exceptional case, which requires TSC approval before Super Committers can merge changes. These cases should be brought up to the TSC for voting.

Super Committers

Name                 IRC            Email
Anil Belur           abelur         abelur@linuxfoundation.org
Ariel Adams          aadams         aadam@redhat.com
Daniel Farrell       dfarrell07     dfarrell@redhat.com
Jamo Luhrsen         jamoluhrsen    jluhrsen@gmail.com
Luis Gomez           LuisGomez      ecelgp@gmail.com
Michael Vorburger    vorburger      vorburger@redhat.com
Sam Hague            shague         shague@redhat.com
Stephen Kitt         skitt          skitt@redhat.com
Robert Varga         rovarga        nite@hq.sk
Thanh Ha             zxiiro         thanh.ha@linuxfoundation.org

Supporting Documentation

Identifying Managed Projects in an OpenDaylight Version
What are Managed Projects?

Managed Projects are simply projects that take part in the Managed Release Process. Managed Projects are either core components of OpenDaylight or have demonstrated their maturity and ability to successfully take part in the Managed Release.

For more information, see the full description of Managed Projects.

What is a Managed Distribution?

Managed Projects are aggregated together by a POM file that defines a Managed Distribution. The Managed Distribution is the focus of OpenDaylight development. It’s continuously built, tested, packaged and released into Continuous Delivery pipelines. As prescribed by the Managed Release Process, Managed Distributions are eventually blessed as formal OpenDaylight releases.

NB: OpenDaylight’s Fluorine release actually included Managed and Self-Managed Projects, but the community is working towards the formal release being exactly the Managed Distribution, with an option for Self-Managed Projects to release independently on top of the Managed Distribution later.

Finding the Managed Projects given a Managed Distribution

Given a Managed Distribution (tar.gz, .zip, RPM, Deb), the Managed Projects that constitute it can be found in the taglist.log file in the root of the archive.

taglist.log files are of the format:

<Managed Project> <Git SHA of built commit> <Codename of release>
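
For example, a single entry might look like the following (the values shown are illustrative only, not taken from a real build):

controller 1a2b3c4d5e6f7a8b9c0d1e2f3a4b5c6d7e8f9a0b neon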
Finding the Managed Projects Given a Branch

To find the current set of Managed Projects in a given OpenDaylight branch, examine the integration/distribution/features/repos/index/pom.xml file that defines the Managed Distribution.
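
One quick way to list them from a checkout (a sketch; the clone URL and grep pattern are assumptions and may need adjusting):

git clone https://git.opendaylight.org/gerrit/integration/distribution
cd distribution
grep '<groupId>org.opendaylight' features/repos/index/pom.xml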

The release management team maintains several documents in Google Drive to track releases. These documents can be found at the following link:

https://drive.google.com/drive/folders/0ByPlysxjHHJaUXdfRkJqRGo4aDg

Java API Documentation

Release Integrated Projects

OpenDaylight User Guide

Overview

This first part of the user guide covers the basic user operations of the OpenDaylight Release using the generic base functionality.

OpenDaylight Controller Overview

The OpenDaylight controller is JVM software and can be run from any operating system and hardware as long as it supports Java. The controller is an implementation of the Software Defined Network (SDN) concept and makes use of the following tools:

  • Maven: OpenDaylight uses Maven for easier build automation. Maven uses pom.xml (Project Object Model) to script the dependencies between bundles and also to describe which bundles to load and start.

  • OSGi: This framework is the back-end of OpenDaylight, as it allows dynamically loading bundles and packaged JAR files, and binding bundles together for exchanging information.

  • Java interfaces: Java interfaces are used for event listening, specifications, and forming patterns. This is the main way in which specific bundles implement callback functions for events and also indicate awareness of specific state.

  • REST APIs: These are northbound APIs such as topology manager, host tracker, flow programmer, static routing, and so on.

The controller exposes open northbound APIs which are used by applications. The OSGi framework and bidirectional REST are supported for the northbound APIs. The OSGi framework is used for applications that run in the same address space as the controller while the REST (web-based) API is used for applications that do not run in the same address space (or even the same system) as the controller. The business logic and algorithms reside in the applications. These applications use the controller to gather network intelligence, run its algorithm to do analytics, and then orchestrate the new rules throughout the network. On the southbound, multiple protocols are supported as plugins, e.g. OpenFlow 1.0, OpenFlow 1.3, BGP-LS, and so on. The OpenDaylight controller starts with an OpenFlow 1.0 southbound plugin. Other OpenDaylight contributors begin adding to the controller code. These modules are linked dynamically into a Service Abstraction Layer (SAL).

The SAL exposes services to which the modules north of it are written. The SAL figures out how to fulfill the requested service irrespective of the underlying protocol used between the controller and the network devices. This provides investment protection to the applications as OpenFlow and other protocols evolve over time. For the controller to control devices in its domain, it needs to know about the devices, their capabilities, reachability, and so on. This information is stored and managed by the Topology Manager. The other components like ARP handler, Host Tracker, Device Manager, and Switch Manager help in generating the topology database for the Topology Manager.

For a more detailed overview of the OpenDaylight controller, see the OpenDaylight Developer Guide.

Project-specific User Guides

Distribution Version reporting
Overview

This section provides an overview of the odl-distribution-version feature.

A remote user of OpenDaylight usually has access to the RESTCONF and NETCONF northbound interfaces, but does not have access to the system OpenDaylight is running on. OpenDaylight has released multiple versions, including Service Releases, and there are incompatible changes between them. In order to know which YANG modules to use, which bugs to expect and which workarounds to apply, such a user would need to know the exact version of at least one OpenDaylight component.

There are indirect ways to deduce such a version, but the direct way is enabled by the odl-distribution-version feature. An administrator can specify version strings, which would be available to users via NETCONF, or via RESTCONF if OpenDaylight is configured to initiate a NETCONF connection to its config subsystem northbound interface.

By default, users have write access to the config subsystem, so they can add, modify or delete any version strings present there. Admins can only influence whether the feature is installed, and the initial values.

Config subsystem is local only, not cluster aware, so each member reports versions independently. This is suitable for heterogeneous clusters.

Default config file

Initial version values are set via the config file odl-version.xml, which is created in $KARAF_HOME/etc/opendaylight/karaf/ upon installation of the odl-distribution-version feature. If an admin wants to use different content, a file with the desired content has to be created there before the feature is installed.

By default, the config file defines two config modules, named odl-distribution-version and odl-odlparent-version.

RESTCONF usage

The OpenDaylight config subsystem NETCONF northbound is not made available just by installing odl-distribution-version, but most other feature installations would enable it. RESTCONF interfaces are enabled by installing the odl-restconf feature, but that does not allow access to the config subsystem by itself.

On single node deployments, installation of odl-netconf-connector-ssh is recommended, which configures the controller-config device and its MD-SAL mount point.
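
As with any other feature, this can be installed from the Karaf console:

feature:install odl-netconf-connector-ssh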

For cluster deployments, installing odl-netconf-clustered-topology is recommended. See documentation for clustering on how to create similar devices for each member, as controller-config name is not unique in that context.

Assuming single node deployment and user located on the same system, here is an example curl command accessing odl-odlparent-version config module:

curl 127.0.0.1:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/controller-config/yang-ext:mount/config:modules/module/odl-distribution-version:odl-version/odl-odlparent-version
NEtwork MOdeling (NEMO)

This section describes how to use the NEMO feature in OpenDaylight and contains configuration, administration, and management sections for the feature.

Overview

With the network becoming more complicated, users and applications must handle more complex configurations to deploy new services. The NEMO project aims to simplify the usage of the network by providing a new intent northbound interface (NBI). Instead of tons of APIs, users and applications just need to describe their intent without caring about complex physical devices and implementation means. The intent is translated into detailed configurations on the devices by the NEMO engine. A typical scenario: a user just needs to specify which nodes should implement a VPN, without considering which technique is used.

NEMO Engine Architecture
  • NEMO API: The NEMO API provides users the NEMO model, which guides users on how to construct instances of intent and instances of the predefined types.

  • NEMO REST: NEMO REST provides users REST APIs to access the NEMO engine; that is, users can transmit intent instances to the NEMO engine through basic REST methods.

Installing NEMO engine

To install NEMO engine, download OpenDaylight and use the Karaf console to install the following feature:

odl-nemo-engine-ui
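
For example, from the Karaf console:

feature:install odl-nemo-engine-ui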

Administering or Managing NEMO Engine

After installing the features used by the NEMO engine, users can use NEMO to express their intent with the NEMO UI or the REST APIs in apidoc.

Go to http://{controller-ip}:8181/index.html. In this interface, the user can go to the NEMO UI, use the tabs and input boxes to enter an intent, and see the state of intent deployment in the image.

Go to http://{controller-ip}:8181/apidoc/explorer/index.html. In this interface, the user can use the REST methods "POST", "PUT", "GET" and "DELETE" to deploy an intent or query the state of deployment.

Tutorials

Below are tutorials for NEMO Engine.

Using NEMO Engine

The purpose of the tutorial is to describe how to use the UI to deploy an intent.

Overview

This tutorial will describe how to use the NEMO UI to check the operated resources, the steps to deploy a service, and the resulting state.

Prerequisites

To follow the tutorial well, a physical or virtual network should exist, and OpenDaylight with the NEMO engine must be deployed on one host.

Target Environment

The intent expressed with the NEMO model depends on network resources, so the user needs to have enough resources available; otherwise, the deployment of the intent will fail.

Instructions
  • Run the OpenDaylight distribution and install odl-nemo-engine-ui from the Karaf console.

  • Go to http://{controller-ip}:8181/index.html, and sign in.

  • Go to the NEMO UI interface and register a new user with a user name, password, and tenant.

  • Check the existing resources to see if they are consistent with yours.

  • Deploy the service with the NEMO model via the create intent menu.

Neutron Service User Guide
Overview

This Karaf feature (odl-neutron-service) provides integration support for OpenStack Neutron via the OpenDaylight ML2 mechanism driver. The Neutron Service is only one of the components necessary for OpenStack integration. For the related components, please refer to the documentation of each component:

Use cases and who will use the feature

If you want OpenStack integration with OpenDaylight, you will need this feature together with an OpenDaylight provider feature like netvirt, group based policy, VTN, or lisp mapper. For provider configuration, please refer to each individual provider's documentation. Since the Neutron service only provides the northbound API for the OpenStack Neutron ML2 mechanism driver, the Neutron service itself isn't useful without those provider features.

Neutron Service feature Architecture

The Neutron service provides the northbound API for OpenStack Neutron via RESTCONF and also via its dedicated REST API. It communicates with providers through its YANG model.

Neutron Service Architecture

Configuring Neutron Service feature

As the Karaf feature includes everything necessary for communicating northbound, no special configuration is needed. Usually this feature is used with an OpenDaylight southbound plugin that implements actual network virtualization functionality, and with OpenStack Neutron. The user needs to set up those configurations; refer to the related documentation for each configuration.

Administering or Managing odl-neutron-service

There is no specific configuration for the Neutron service itself. For related configuration, please refer to the OpenStack Neutron configuration and the OpenDaylight related services which are providers for OpenStack.

Installing odl-neutron-service while the controller is running
  1. While OpenDaylight is running, in Karaf prompt, type: feature:install odl-neutron-service.

  2. Wait a while until the initialization is done and the controller stabilizes.

odl-neutron-service provides only a unified interface for OpenStack Neutron. It doesn’t provide actual functionality for network virtualization. Refer to each OpenDaylight project documentation for actual configuration with OpenStack Neutron.

Neutron Logger

Another service, the Neutron Logger, is provided for debugging/logging purposes. It logs changes on Neutron YANG models.

feature:install odl-neutron-logger
OpenFlow Plugin Project User Guide
Overview and Architecture
Overview and Architecture
Overview

OpenFlow is a vendor-neutral standard communications interface defined to enable interaction between the control and forwarding layers of an SDN architecture. The OpenFlow plugin project intends to develop a plugin to support implementations of the OpenFlow specification as it develops and evolves. Specifically the project has developed a plugin aiming to support OpenFlow 1.0 and 1.3.x. It can be extended to add support for subsequent OpenFlow specifications. The plugin is based on the Model Driven Service Abstraction Layer (MD-SAL) architecture (https://wiki.opendaylight.org/view/OpenDaylight_Controller:MD-SAL). This new OpenFlow 1.0/1.3 MD-SAL based plugin is distinct from the old OpenFlow 1.0 plugin which was based on the API driven SAL (AD-SAL) architecture.

Scope
  • Southbound plugin and integration of OpenFlow 1.0/1.3.x library project

  • Ongoing support and integration of the OpenFlow specification

  • The plugin should be implemented in an easily extensible manner

  • Protocol verification activities will be performed on supported OpenFlow specifications

Architecture and Design
Functionality

OpenFlow 1.3 Plugin will support the following functionality

  • Connection Handling

  • Session Management

  • State Management.

  • Error Handling.

  • Mapping function(Infrastructure to OF structures).

  • Connection establishment will be handled by OpenFlow library using opensource netty.io library.

  • Message handling(Ex: Packet in).

  • Event handling and propagation to upper layers.

  • Plugin will support both MD-SAL and Hard SAL.

  • Will be backward compatible with OF 1.0.

Activities in OF plugin module

  • New OF plugin bundle will support both OF 1.0 and OF 1.3.

  • Integration with OpenFlow library.

  • Integration with corresponding MD-SAL infrastructure.

  • Hard SAL will be supported as adapter on top of MD-SAL plugin.

  • OF 1.3 and OF 1.0 plugin will be integrated as single bundle.

Design

Overall Architecture

overall architecture

Security

TLS

It is strongly recommended that any production deployments utilising the OpenFlow Plugin do so with TLS encryption to protect against various man-in-the-middle attacks. Please refer to the Certificate Management section of the user guide for more details. TLS Support in the OpenFlow Plugin is outlined on this wiki page.

Coverage
Intro

This page is to catalog the things that have been tested and confirmed to work:

Coverage

Coverage has been moved to a GoogleDoc Spreadsheet

OF 1.3 Considerations

The baseline model is an OF 1.3 model, and the coverage tables primarily deal with OF 1.3. However, for OF 1.0 we have a column to indicate either N/A if it doesn't apply, or whether it's been confirmed working.

OF 1.0 Considerations

OF 1.0 is being considered as a switch with:

  • 1 Table

  • 0 Groups

  • 0 Meters

  • 1 Instruction (Apply Actions)

  • A limited vocabulary of matches and actions

Tutorial / How-To
Running the controller with the new OpenFlow Plugin

How to start

All Helium features (from features-openflowplugin) are duplicated into features-openflowplugin-li. The duplicates get the suffix -li and provide Lithium codebase functionality.

These are most used:

  • odl-openflowplugin-app-lldp-speaker-li

  • odl-openflowplugin-flow-services-rest-li

  • odl-openflowplugin-drop-test-li

If topology is required, then the first one should be installed.

feature:install odl-openflowplugin-app-lldp-speaker-li

The Li-southbound currently provides:

  • flow management

  • group management

  • meter management

  • statistics polling

What to log

In order to see really low-level messages, enter these in the Karaf console:

log:set TRACE org.opendaylight.openflowplugin.openflow.md.core
log:set TRACE org.opendaylight.openflowplugin.impl

How to enable topology

In order for topology to work (filling dataStore/operational with links), LLDP responses must be delivered back to the controller. This requires table-miss entries. A table-miss entry is a flow in table id=0 with low priority, an empty match, and one output action = send to controller. Having this flow installed on every node enables gathering and exporting links between nodes into dataStore/operational. This is done, for example, if you use the l2switch application.

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<flow xmlns="urn:opendaylight:flow:inventory">
   <barrier>false</barrier>
   <cookie>54</cookie>
   <flags>SEND_FLOW_REM</flags>
   <flow-name>FooXf54</flow-name>
   <hard-timeout>0</hard-timeout>
   <id>4242</id>
   <idle-timeout>0</idle-timeout>
   <installHw>false</installHw>
   <instructions>
       <instruction>
           <apply-actions>
               <action>
                   <output-action>
                       <max-length>65535</max-length>
                       <output-node-connector>CONTROLLER</output-node-connector>
                   </output-action>
                   <order>0</order>
               </action>
           </apply-actions>
           <order>0</order>
       </instruction>
   </instructions>
   <match/>
   <priority>0</priority>
   <strict>false</strict>
   <table_id>0</table_id>
</flow>

Enable RESTCONF and Controller GUI

If you want to use RESTCONF with the openflowplugin project, you have to install the odl-restconf feature to enable that. To install the odl-restconf feature, run the following command:

karaf#>feature:install odl-restconf
OpenFlow 1.3 Enabled Software Switches / Environment
Getting Mininet with OF 1.3

Download the Mininet VM upgraded to OF 1.3 (or the newer mininet-2.1.0 with OVS 2.0 that works with VMware Player; to use this on VirtualBox, import it into VMware Player and then export the .vmdk), or you could build one yourself: Openflow Protocol Library:OpenVirtualSwitch[Instructions for setting up Mininet with OF 1.3].

Installing under VirtualBox
configuring a host-only adapter

For whatever reason, at least on the Mac, NATed interfaces in VirtualBox don’t actually seem to allow for connections from the host to the VM. Instead, you need to configure a host-only network and set it up. Do this by:

  • Go to the VM’s settings in VirtualBox then to network and add a second adapter attached to “Host-only Adapter” (see the screenshot to the right)

  • Edit the /etc/network/interfaces file to configure the adapter properly by adding these two lines

auto eth1
iface eth1 inet dhcp
  • Reboot the VM

At this point you should have two interfaces: one which gives you NATed access to the internet, and another that gives you access between your Mac and the VMs. At least for me, the NATed interface gets a 10.0.2.x address and the host-only interface gets a 192.168.56.x address.

Your simplest choice: Use Vagrant

Download VirtualBox and install it. Download Vagrant and install it.

cd openflowplugin/vagrant/mininet-2.1.0-of-1.3/
vagrant up
vagrant ssh

This will leave you sshed into a fully provisioned Ubuntu Trusty box with mininet-2.1.0 and OVS 2.0 patches to work with OF 1.3.

Setup CPqD Openflow 1.3 Soft Switch

The latest version of Open vSwitch (v2.0.0) doesn't support all OpenFlow 1.3 features, e.g. group multipart statistics requests. An alternative option is the CPqD OpenFlow 1.3 soft switch, which supports most of the OpenFlow 1.3 features.

  • You can set up the switch as per the instructions given at the following URL

https://github.com/CPqD/ofsoftswitch13

  • Run the following command to start the switch

Start the datapath :

$ sudo udatapath/ofdatapath --datapath-id=<dpid> --interfaces=<if-list> ptcp:<port>
 e.g $ sudo udatapath/ofdatapath --datapath-id=000000000001 --interfaces=ethX ptcp:6680

ethX should not be associated with an IP address, and IPv6 should be disabled on it. If you are installing the switch on your local machine, you can use the following command (for Ubuntu) to create a virtual interface.

ip link add link ethX address 00:19:d1:29:d2:58 macvlan0 type macvlan

ethX - Any existing interface.

Or if you are using the Mininet VM for installing this switch, you can simply add one more adapter to your VM.

Start Openflow protocol agent:

$secchan/ofprotocol tcp:<switch-host>:<switch-port> tcp:<ctrl-host>:<ctrl-port>
 e.g $secchan/ofprotocol tcp:127.0.0.1:6680 tcp:127.0.0.1:6653
Commands to add entries to various tables of the switch
  • Add meter

$utilities/dpctl tcp:<switch-host>:<switch-port> meter-mod cmd=add,meter=1 drop:rate=50
  • Add Groups

$utilities/dpctl tcp:127.0.0.1:6680 group-mod cmd=add,type=all,group=1
$utilities/dpctl tcp:127.0.0.1:6680 group-mod cmd=add,type=sel,group=2 weight=10 output:1
  • Create queue

$utilities/dpctl tcp:<ip>:<switch port> queue-mod <port-number> <queue-number> <minimum-bandwidth>
  e.g - $utilities/dpctl tcp:127.0.0.1:6680 queue-mod 1 1 23

"dpctl" --help is not very intuitive, so please keep adding any new commands you figure out while you experiment with the switch.

Using the built-in Wireshark

Mininet comes with Wireshark pre-installed, but for some reason it does not include the OpenFlow protocol dissector. You may want to get it and install it in the /.wireshark/plugins/ directory.

First login to your mininet VM

ssh mininet@<your mininet vm ip> -X

The -X option in ssh enables X sessions over ssh so that the Wireshark window can be shown on your host machine's display. When prompted, enter the password (mininet).

From the mininet vm shell, set the wireshark capture privileges (http://wiki.wireshark.org/CaptureSetup/CapturePrivileges):

sudo chgrp mininet /usr/bin/dumpcap
sudo chmod 754 /usr/bin/dumpcap
sudo setcap 'CAP_NET_RAW+eip CAP_NET_ADMIN+eip' /usr/bin/dumpcap

Finally, start wireshark:

wireshark

The wireshark window should show up.

To see only Openflow packets, you may want to apply the following filter in the Filter window:

tcp.port == 6633 and tcp.flags.push == 1

Start the capture on any port.

Running Mininet with OF 1.3

From within the Mininet VM, run:

sudo mn --topo single,3  --controller 'remote,ip=<your controller ip>,port=6653' --switch ovsk,protocols=OpenFlow13
End to End Inventory
Introduction

The purpose of this page is to walk you through how to see the Inventory Manager working end to end with the openflowplugin using OpenFlow 1.3.

Basically, you will learn how to:

  1. Run the Base/Virtualization/Service provider Edition with the new openflowplugin: OpenDaylight_OpenFlow_Plugin::Running_controller_with_the_new_OF_plugin[Running the controller with the new OpenFlow Plugin]

  2. Start mininet to use OF 1.3: OpenDaylight_OpenFlow_Plugin::Test_Environment[OpenFlow 1.3 Enabled Software Switches / Environment]

  3. Use RESTCONF to see the nodes appear in inventory.

Restconf for Inventory

The REST url for listing all the nodes is:

http://localhost:8181/restconf/operational/opendaylight-inventory:nodes/

You will need to set the Accept header:

Accept: application/xml

You will also need to use HTTP Basic Auth with username: admin password: admin.
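
Putting these together, the same request can be made with curl (assuming the default admin/admin credentials on a local controller):

curl -u admin:admin -H "Accept: application/xml" http://localhost:8181/restconf/operational/opendaylight-inventory:nodes/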

Alternately, if you have a node’s id you can address it as

http://localhost:8181/restconf/operational/opendaylight-inventory:nodes/node/<id>

for example

http://localhost:8181/restconf/operational/opendaylight-inventory:nodes/node/openflow:1
How to hit RestConf with Postman

Install Postman for Chrome

In the chrome browser bar enter

chrome://apps/

And click on Postman.

Enter the URL. Click on the Headers button on the far right. Enter the Accept: header. Click on the Basic Auth Tab at the top and setup the username and password. Send.

Known Bug

If you have not had any switches come up, and thus there are no children for http://localhost:8080/restconf/datastore/opendaylight-inventory:nodes/, an exception will be thrown. I'm pretty sure I know how to fix this bug, just need to get to it :)

End to End Flows
Instructions
Learn End to End for Inventory

See End to End Inventory

Check inventory
Flow Strategy

The current way to flush a flow to the switch looks like this:

  1. Create MD-SAL modeled flow and commit it into dataStore using two phase commit MD-SAL FAQ

  2. FRM gets notified and invokes corresponding rpc (addFlow) on particular service provider (if suitable provider for given node registered)

  3. The provider (plugin in this case) transforms MD-SAL modeled flow into OF-API modeled flow

  4. OF-API modeled flow is then flushed into OFLibrary

  5. OFLibrary encodes flow into particular version of wire protocol and sends it to particular switch

  6. Check on mininet side if flow is set

Push your flow
  • With PostMan:

    • Set headers:

      • Content-Type: application/xml

      • Accept: application/xml

      • Authentication

    • Use URL: “http://<controller IP>:8181/restconf/config/opendaylight-inventory:nodes/node/openflow:1/table/0/flow/1”

    • PUT

    • Use Body:

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<flow xmlns="urn:opendaylight:flow:inventory">
    <priority>2</priority>
    <flow-name>Foo</flow-name>
    <match>
        <ethernet-match>
            <ethernet-type>
                <type>2048</type>
            </ethernet-type>
        </ethernet-match>
        <ipv4-destination>10.0.10.2/24</ipv4-destination>
    </match>
    <id>1</id>
    <table_id>0</table_id>
    <instructions>
        <instruction>
            <order>0</order>
            <apply-actions>
                <action>
                   <order>0</order>
                   <dec-nw-ttl/>
                </action>
            </apply-actions>
        </instruction>
    </instructions>
</flow>
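
The same PUT can also be issued with curl instead of Postman; a hedged sketch, assuming the XML body above is saved as flow.xml and the default admin/admin credentials:

# push the flow to node openflow:1, table 0, flow id 1
curl -u admin:admin -X PUT \
  -H "Content-Type: application/xml" -H "Accept: application/xml" \
  --data-binary @flow.xml \
  http://<controller IP>:8181/restconf/config/opendaylight-inventory:nodes/node/openflow:1/table/0/flow/1

Sending the same URL with -X DELETE (and no body) removes the flow again, which matches the DELETE workaround described in the caveat below.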

Note: If you want to try a different flow id or a different table, make sure the URL and the body stay in sync. For example, if you wanted to try table 2, flow 20, you'd change the URL to:

http://<controller IP>:8181/restconf/config/opendaylight-inventory:nodes/node/openflow:1/table/2/flow/20

but you would also need to update the 20 and 2 in the body of the XML.

Another caveat: there is a known bug with updates, so please only write to a given flow id and table id on a given node once at this time, until it is resolved. Alternatively, you can use the DELETE method with the same URL in PostMan to delete the flow information on the switch and in the controller cache.

Check for your flow on the switch
  • See your flow on your mininet:

mininet@mininet-vm:~$ sudo ovs-ofctl -O OpenFlow13 dump-flows s1
OFPST_FLOW reply (OF1.3) (xid=0x2):
cookie=0x0, duration=7.325s, table=0, n_packets=0, n_bytes=0, idle_timeout=300, hard_timeout=600, send_flow_rem priority=2,ip,nw_dst=10.0.10.0/24 actions=dec_ttl

If you want to see the above information from the mininet prompt, use "sh" instead of "sudo", i.e. use "sh ovs-ofctl -O OpenFlow13 dump-flows s1".

Check for your flow in the controller config via RESTCONF
  • See your configured flow in POSTMAN with

    • URL http://<controller IP>:8181/restconf/config/opendaylight-inventory:nodes/node/openflow:1/table/0/

    • GET

    • You no longer need to set the Accept header

Return Response:

{
  "flow-node-inventory:table": [
    {
      "flow-node-inventory:id": 0,
      "flow-node-inventory:flow": [
        {
          "flow-node-inventory:priority": 1,
          "flow-node-inventory:id": "10b1a23c-5299-4f7b-83d6-563bab472754",
          "flow-node-inventory:table_id": 0,
          "flow-node-inventory:hard-timeout": 0,
          "flow-node-inventory:idle-timeout": 0,
          "flow-node-inventory:instructions": {
            "flow-node-inventory:instruction": [
              {
                "flow-node-inventory:apply-actions": {
                  "flow-node-inventory:action": [
                    {
                      "flow-node-inventory:output-action": {
                        "flow-node-inventory:output-node-connector": "openflow:1:1"
                      },
                      "flow-node-inventory:order": 0
                    }
                  ]
                },
                "flow-node-inventory:order": 0
              }
            ]
          },
          "flow-node-inventory:match": {
            "flow-node-inventory:ethernet-match": {
              "flow-node-inventory:ethernet-type": {
                "flow-node-inventory:type": 2048
              }
            },
            "flow-node-inventory:ipv4-destination": "10.0.0.2"
          },
          "flow-node-inventory:cookie": 0
        },
        {
          "flow-node-inventory:priority": 1,
          "flow-node-inventory:id": "020bf359-1299-4da6-b4f7-368bd83b5841",
          "flow-node-inventory:table_id": 0,
          "flow-node-inventory:hard-timeout": 0,
          "flow-node-inventory:idle-timeout": 0,
          "flow-node-inventory:instructions": {
            "flow-node-inventory:instruction": [
              {
                "flow-node-inventory:apply-actions": {
                  "flow-node-inventory:action": [
                    {
                      "flow-node-inventory:output-action": {
                        "flow-node-inventory:output-node-connector": "openflow:1:1"
                      },
                      "flow-node-inventory:order": 0
                    }
                  ]
                },
                "flow-node-inventory:order": 0
              }
            ]
          },
          "flow-node-inventory:match": {
            "flow-node-inventory:ethernet-match": {
              "flow-node-inventory:ethernet-type": {
                "flow-node-inventory:type": 2048
              }
            },
            "flow-node-inventory:ipv4-destination": "10.0.0.1"
          },
          "flow-node-inventory:cookie": 0
        },
        {
          "flow-node-inventory:priority": 1,
          "flow-node-inventory:id": "42172bfc-9142-4a92-9e90-ee62529b1e85",
          "flow-node-inventory:table_id": 0,
          "flow-node-inventory:hard-timeout": 0,
          "flow-node-inventory:idle-timeout": 0,
          "flow-node-inventory:instructions": {
            "flow-node-inventory:instruction": [
              {
                "flow-node-inventory:apply-actions": {
                  "flow-node-inventory:action": [
                    {
                      "flow-node-inventory:output-action": {
                        "flow-node-inventory:output-node-connector": "openflow:1:1"
                      },
                      "flow-node-inventory:order": 0
                    }
                  ]
                },
                "flow-node-inventory:order": 0
              }
            ]
          },
          "flow-node-inventory:match": {
            "flow-node-inventory:ethernet-match": {
              "flow-node-inventory:ethernet-type": {
                "flow-node-inventory:type": 2048
              }
            },
            "flow-node-inventory:ipv4-destination": "10.0.0.3"
          },
          "flow-node-inventory:cookie": 0
        },
        {
          "flow-node-inventory:priority": 1,
          "flow-node-inventory:id": "99bf566e-89f3-4c6f-ae9e-e26012ceb1e4",
          "flow-node-inventory:table_id": 0,
          "flow-node-inventory:hard-timeout": 0,
          "flow-node-inventory:idle-timeout": 0,
          "flow-node-inventory:instructions": {
            "flow-node-inventory:instruction": [
              {
                "flow-node-inventory:apply-actions": {
                  "flow-node-inventory:action": [
                    {
                      "flow-node-inventory:output-action": {
                        "flow-node-inventory:output-node-connector": "openflow:1:1"
                      },
                      "flow-node-inventory:order": 0
                    }
                  ]
                },
                "flow-node-inventory:order": 0
              }
            ]
          },
          "flow-node-inventory:match": {
            "flow-node-inventory:ethernet-match": {
              "flow-node-inventory:ethernet-type": {
                "flow-node-inventory:type": 2048
              }
            },
            "flow-node-inventory:ipv4-destination": "10.0.0.4"
          },
          "flow-node-inventory:cookie": 0
        },
        {
          "flow-node-inventory:priority": 1,
          "flow-node-inventory:id": "019dcc2e-5b4f-44f0-90cc-de490294b862",
          "flow-node-inventory:table_id": 0,
          "flow-node-inventory:hard-timeout": 0,
          "flow-node-inventory:idle-timeout": 0,
          "flow-node-inventory:instructions": {
            "flow-node-inventory:instruction": [
              {
                "flow-node-inventory:apply-actions": {
                  "flow-node-inventory:action": [
                    {
                      "flow-node-inventory:output-action": {
                        "flow-node-inventory:output-node-connector": "openflow:1:2"
                      },
                      "flow-node-inventory:order": 0
                    }
                  ]
                },
                "flow-node-inventory:order": 0
              }
            ]
          },
          "flow-node-inventory:match": {
            "flow-node-inventory:ethernet-match": {
              "flow-node-inventory:ethernet-type": {
                "flow-node-inventory:type": 2048
              }
            },
            "flow-node-inventory:ipv4-destination": "10.0.0.5"
          },
          "flow-node-inventory:cookie": 0
        },
        {
          "flow-node-inventory:priority": 1,
          "flow-node-inventory:id": "968cf81e-3f16-42f1-8b16-d01ff719c63c",
          "flow-node-inventory:table_id": 0,
          "flow-node-inventory:hard-timeout": 0,
          "flow-node-inventory:idle-timeout": 0,
          "flow-node-inventory:instructions": {
            "flow-node-inventory:instruction": [
              {
                "flow-node-inventory:apply-actions": {
                  "flow-node-inventory:action": [
                    {
                      "flow-node-inventory:output-action": {
                        "flow-node-inventory:output-node-connector": "openflow:1:2"
                      },
                      "flow-node-inventory:order": 0
                    }
                  ]
                },
                "flow-node-inventory:order": 0
              }
            ]
          },
          "flow-node-inventory:match": {
            "flow-node-inventory:ethernet-match": {
              "flow-node-inventory:ethernet-type": {
                "flow-node-inventory:type": 2048
              }
            },
            "flow-node-inventory:ipv4-destination": "10.0.0.8"
          },
          "flow-node-inventory:cookie": 0
        },
        {
          "flow-node-inventory:priority": 1,
          "flow-node-inventory:id": "1c14ea3c-9dcc-4434-b566-7e99033ea252",
          "flow-node-inventory:table_id": 0,
          "flow-node-inventory:hard-timeout": 0,
          "flow-node-inventory:idle-timeout": 0,
          "flow-node-inventory:instructions": {
            "flow-node-inventory:instruction": [
              {
                "flow-node-inventory:apply-actions": {
                  "flow-node-inventory:action": [
                    {
                      "flow-node-inventory:output-action": {
                        "flow-node-inventory:output-node-connector": "openflow:1:2"
                      },
                      "flow-node-inventory:order": 0
                    }
                  ]
                },
                "flow-node-inventory:order": 0
              }
            ]
          },
          "flow-node-inventory:match": {
            "flow-node-inventory:ethernet-match": {
              "flow-node-inventory:ethernet-type": {
                "flow-node-inventory:type": 2048
              }
            },
            "flow-node-inventory:ipv4-destination": "10.0.0.6"
          },
          "flow-node-inventory:cookie": 0
        },
        {
          "flow-node-inventory:priority": 1,
          "flow-node-inventory:id": "ed9deeb2-be8f-4b84-bcd8-9d12049383d6",
          "flow-node-inventory:table_id": 0,
          "flow-node-inventory:hard-timeout": 0,
          "flow-node-inventory:idle-timeout": 0,
          "flow-node-inventory:instructions": {
            "flow-node-inventory:instruction": [
              {
                "flow-node-inventory:apply-actions": {
                  "flow-node-inventory:action": [
                    {
                      "flow-node-inventory:output-action": {
                        "flow-node-inventory:output-node-connector": "openflow:1:2"
                      },
                      "flow-node-inventory:order": 0
                    }
                  ]
                },
                "flow-node-inventory:order": 0
              }
            ]
          },
          "flow-node-inventory:match": {
            "flow-node-inventory:ethernet-match": {
              "flow-node-inventory:ethernet-type": {
                "flow-node-inventory:type": 2048
              }
            },
            "flow-node-inventory:ipv4-destination": "10.0.0.7"
          },
          "flow-node-inventory:cookie": 0
        }
      ]
    }
  ]
}
Look for your flow stats in the controller operational data via RESTCONF

  • See your operational flow stats in POSTMAN with

    • URL “http://<controller IP>:8181/restconf/operational/opendaylight-inventory:nodes/node/openflow:1/table/0/”

    • GET

Return Response:

{
  "flow-node-inventory:table": [
    {
      "flow-node-inventory:id": 0,
      "flow-node-inventory:flow": [
        {
          "flow-node-inventory:id": "10b1a23c-5299-4f7b-83d6-563bab472754",
          "opendaylight-flow-statistics:flow-statistics": {
            "opendaylight-flow-statistics:cookie": 0,
            "opendaylight-flow-statistics:duration": {
              "opendaylight-flow-statistics:nanosecond": 886000000,
              "opendaylight-flow-statistics:second": 2707
            },
            "opendaylight-flow-statistics:hard-timeout": 0,
            "opendaylight-flow-statistics:byte-count": 784,
            "opendaylight-flow-statistics:match": {
              "opendaylight-flow-statistics:ethernet-match": {
                "opendaylight-flow-statistics:ethernet-type": {
                  "opendaylight-flow-statistics:type": 2048
                }
              },
              "opendaylight-flow-statistics:ipv4-destination": "10.0.0.2/32"
            },
            "opendaylight-flow-statistics:priority": 1,
            "opendaylight-flow-statistics:packet-count": 8,
            "opendaylight-flow-statistics:table_id": 0,
            "opendaylight-flow-statistics:idle-timeout": 0,
            "opendaylight-flow-statistics:instructions": {
              "opendaylight-flow-statistics:instruction": [
                {
                  "opendaylight-flow-statistics:order": 0,
                  "opendaylight-flow-statistics:apply-actions": {
                    "opendaylight-flow-statistics:action": [
                      {
                        "opendaylight-flow-statistics:order": 0,
                        "opendaylight-flow-statistics:output-action": {
                          "opendaylight-flow-statistics:output-node-connector": "1",
                          "opendaylight-flow-statistics:max-length": 0
                        }
                      }
                    ]
                  }
                }
              ]
            }
          }
        },
        {
          "flow-node-inventory:id": "020bf359-1299-4da6-b4f7-368bd83b5841",
          "opendaylight-flow-statistics:flow-statistics": {
            "opendaylight-flow-statistics:cookie": 0,
            "opendaylight-flow-statistics:duration": {
              "opendaylight-flow-statistics:nanosecond": 826000000,
              "opendaylight-flow-statistics:second": 2711
            },
            "opendaylight-flow-statistics:hard-timeout": 0,
            "opendaylight-flow-statistics:byte-count": 1568,
            "opendaylight-flow-statistics:match": {
              "opendaylight-flow-statistics:ethernet-match": {
                "opendaylight-flow-statistics:ethernet-type": {
                  "opendaylight-flow-statistics:type": 2048
                }
              },
              "opendaylight-flow-statistics:ipv4-destination": "10.0.0.1/32"
            },
            "opendaylight-flow-statistics:priority": 1,
            "opendaylight-flow-statistics:packet-count": 16,
            "opendaylight-flow-statistics:table_id": 0,
            "opendaylight-flow-statistics:idle-timeout": 0,
            "opendaylight-flow-statistics:instructions": {
              "opendaylight-flow-statistics:instruction": [
                {
                  "opendaylight-flow-statistics:order": 0,
                  "opendaylight-flow-statistics:apply-actions": {
                    "opendaylight-flow-statistics:action": [
                      {
                        "opendaylight-flow-statistics:order": 0,
                        "opendaylight-flow-statistics:output-action": {
                          "opendaylight-flow-statistics:output-node-connector": "1",
                          "opendaylight-flow-statistics:max-length": 0
                        }
                      }
                    ]
                  }
                }
              ]
            }
          }
        },
        {
          "flow-node-inventory:id": "42172bfc-9142-4a92-9e90-ee62529b1e85",
          "opendaylight-flow-statistics:flow-statistics": {
            "opendaylight-flow-statistics:cookie": 0,
            "opendaylight-flow-statistics:duration": {
              "opendaylight-flow-statistics:nanosecond": 548000000,
              "opendaylight-flow-statistics:second": 2708
            },
            "opendaylight-flow-statistics:hard-timeout": 0,
            "opendaylight-flow-statistics:byte-count": 784,
            "opendaylight-flow-statistics:match": {
              "opendaylight-flow-statistics:ethernet-match": {
                "opendaylight-flow-statistics:ethernet-type": {
                  "opendaylight-flow-statistics:type": 2048
                }
              },
              "opendaylight-flow-statistics:ipv4-destination": "10.0.0.3/32"
            },
            "opendaylight-flow-statistics:priority": 1,
            "opendaylight-flow-statistics:packet-count": 8,
            "opendaylight-flow-statistics:table_id": 0,
            "opendaylight-flow-statistics:idle-timeout": 0,
            "opendaylight-flow-statistics:instructions": {
              "opendaylight-flow-statistics:instruction": [
                {
                  "opendaylight-flow-statistics:order": 0,
                  "opendaylight-flow-statistics:apply-actions": {
                    "opendaylight-flow-statistics:action": [
                      {
                        "opendaylight-flow-statistics:order": 0,
                        "opendaylight-flow-statistics:output-action": {
                          "opendaylight-flow-statistics:output-node-connector": "1",
                          "opendaylight-flow-statistics:max-length": 0
                        }
                      }
                    ]
                  }
                }
              ]
            }
          }
        },
        {
          "flow-node-inventory:id": "99bf566e-89f3-4c6f-ae9e-e26012ceb1e4",
          "opendaylight-flow-statistics:flow-statistics": {
            "opendaylight-flow-statistics:cookie": 0,
            "opendaylight-flow-statistics:duration": {
              "opendaylight-flow-statistics:nanosecond": 296000000,
              "opendaylight-flow-statistics:second": 2710
            },
            "opendaylight-flow-statistics:hard-timeout": 0,
            "opendaylight-flow-statistics:byte-count": 1274,
            "opendaylight-flow-statistics:match": {
              "opendaylight-flow-statistics:ethernet-match": {
                "opendaylight-flow-statistics:ethernet-type": {
                  "opendaylight-flow-statistics:type": 2048
                }
              },
              "opendaylight-flow-statistics:ipv4-destination": "10.0.0.4/32"
            },
            "opendaylight-flow-statistics:priority": 1,
            "opendaylight-flow-statistics:packet-count": 13,
            "opendaylight-flow-statistics:table_id": 0,
            "opendaylight-flow-statistics:idle-timeout": 0,
            "opendaylight-flow-statistics:instructions": {
              "opendaylight-flow-statistics:instruction": [
                {
                  "opendaylight-flow-statistics:order": 0,
                  "opendaylight-flow-statistics:apply-actions": {
                    "opendaylight-flow-statistics:action": [
                      {
                        "opendaylight-flow-statistics:order": 0,
                        "opendaylight-flow-statistics:output-action": {
                          "opendaylight-flow-statistics:output-node-connector": "1",
                          "opendaylight-flow-statistics:max-length": 0
                        }
                      }
                    ]
                  }
                }
              ]
            }
          }
        },
        {
          "flow-node-inventory:id": "019dcc2e-5b4f-44f0-90cc-de490294b862",
          "opendaylight-flow-statistics:flow-statistics": {
            "opendaylight-flow-statistics:cookie": 0,
            "opendaylight-flow-statistics:duration": {
              "opendaylight-flow-statistics:nanosecond": 392000000,
              "opendaylight-flow-statistics:second": 2711
            },
            "opendaylight-flow-statistics:hard-timeout": 0,
            "opendaylight-flow-statistics:byte-count": 1470,
            "opendaylight-flow-statistics:match": {
              "opendaylight-flow-statistics:ethernet-match": {
                "opendaylight-flow-statistics:ethernet-type": {
                  "opendaylight-flow-statistics:type": 2048
                }
              },
              "opendaylight-flow-statistics:ipv4-destination": "10.0.0.5/32"
            },
            "opendaylight-flow-statistics:priority": 1,
            "opendaylight-flow-statistics:packet-count": 15,
            "opendaylight-flow-statistics:table_id": 0,
            "opendaylight-flow-statistics:idle-timeout": 0,
            "opendaylight-flow-statistics:instructions": {
              "opendaylight-flow-statistics:instruction": [
                {
                  "opendaylight-flow-statistics:order": 0,
                  "opendaylight-flow-statistics:apply-actions": {
                    "opendaylight-flow-statistics:action": [
                      {
                        "opendaylight-flow-statistics:order": 0,
                        "opendaylight-flow-statistics:output-action": {
                          "opendaylight-flow-statistics:output-node-connector": "2",
                          "opendaylight-flow-statistics:max-length": 0
                        }
                      }
                    ]
                  }
                }
              ]
            }
          }
        },
        {
          "flow-node-inventory:id": "968cf81e-3f16-42f1-8b16-d01ff719c63c",
          "opendaylight-flow-statistics:flow-statistics": {
            "opendaylight-flow-statistics:cookie": 0,
            "opendaylight-flow-statistics:duration": {
              "opendaylight-flow-statistics:nanosecond": 344000000,
              "opendaylight-flow-statistics:second": 2707
            },
            "opendaylight-flow-statistics:hard-timeout": 0,
            "opendaylight-flow-statistics:byte-count": 784,
            "opendaylight-flow-statistics:match": {
              "opendaylight-flow-statistics:ethernet-match": {
                "opendaylight-flow-statistics:ethernet-type": {
                  "opendaylight-flow-statistics:type": 2048
                }
              },
              "opendaylight-flow-statistics:ipv4-destination": "10.0.0.8/32"
            },
            "opendaylight-flow-statistics:priority": 1,
            "opendaylight-flow-statistics:packet-count": 8,
            "opendaylight-flow-statistics:table_id": 0,
            "opendaylight-flow-statistics:idle-timeout": 0,
            "opendaylight-flow-statistics:instructions": {
              "opendaylight-flow-statistics:instruction": [
                {
                  "opendaylight-flow-statistics:order": 0,
                  "opendaylight-flow-statistics:apply-actions": {
                    "opendaylight-flow-statistics:action": [
                      {
                        "opendaylight-flow-statistics:order": 0,
                        "opendaylight-flow-statistics:output-action": {
                          "opendaylight-flow-statistics:output-node-connector": "2",
                          "opendaylight-flow-statistics:max-length": 0
                        }
                      }
                    ]
                  }
                }
              ]
            }
          }
        },
        {
          "flow-node-inventory:id": "ed9deeb2-be8f-4b84-bcd8-9d12049383d6",
          "opendaylight-flow-statistics:flow-statistics": {
            "opendaylight-flow-statistics:cookie": 0,
            "opendaylight-flow-statistics:duration": {
              "opendaylight-flow-statistics:nanosecond": 577000000,
              "opendaylight-flow-statistics:second": 2706
            },
            "opendaylight-flow-statistics:hard-timeout": 0,
            "opendaylight-flow-statistics:byte-count": 784,
            "opendaylight-flow-statistics:match": {
              "opendaylight-flow-statistics:ethernet-match": {
                "opendaylight-flow-statistics:ethernet-type": {
                  "opendaylight-flow-statistics:type": 2048
                }
              },
              "opendaylight-flow-statistics:ipv4-destination": "10.0.0.7/32"
            },
            "opendaylight-flow-statistics:priority": 1,
            "opendaylight-flow-statistics:packet-count": 8,
            "opendaylight-flow-statistics:table_id": 0,
            "opendaylight-flow-statistics:idle-timeout": 0,
            "opendaylight-flow-statistics:instructions": {
              "opendaylight-flow-statistics:instruction": [
                {
                  "opendaylight-flow-statistics:order": 0,
                  "opendaylight-flow-statistics:apply-actions": {
                    "opendaylight-flow-statistics:action": [
                      {
                        "opendaylight-flow-statistics:order": 0,
                        "opendaylight-flow-statistics:output-action": {
                          "opendaylight-flow-statistics:output-node-connector": "2",
                          "opendaylight-flow-statistics:max-length": 0
                        }
                      }
                    ]
                  }
                }
              ]
            }
          }
        },
        {
          "flow-node-inventory:id": "1c14ea3c-9dcc-4434-b566-7e99033ea252",
          "opendaylight-flow-statistics:flow-statistics": {
            "opendaylight-flow-statistics:cookie": 0,
            "opendaylight-flow-statistics:duration": {
              "opendaylight-flow-statistics:nanosecond": 659000000,
              "opendaylight-flow-statistics:second": 2705
            },
            "opendaylight-flow-statistics:hard-timeout": 0,
            "opendaylight-flow-statistics:byte-count": 784,
            "opendaylight-flow-statistics:match": {
              "opendaylight-flow-statistics:ethernet-match": {
                "opendaylight-flow-statistics:ethernet-type": {
                  "opendaylight-flow-statistics:type": 2048
                }
              },
              "opendaylight-flow-statistics:ipv4-destination": "10.0.0.6/32"
            },
            "opendaylight-flow-statistics:priority": 1,
            "opendaylight-flow-statistics:packet-count": 8,
            "opendaylight-flow-statistics:table_id": 0,
            "opendaylight-flow-statistics:idle-timeout": 0,
            "opendaylight-flow-statistics:instructions": {
              "opendaylight-flow-statistics:instruction": [
                {
                  "opendaylight-flow-statistics:order": 0,
                  "opendaylight-flow-statistics:apply-actions": {
                    "opendaylight-flow-statistics:action": [
                      {
                        "opendaylight-flow-statistics:order": 0,
                        "opendaylight-flow-statistics:output-action": {
                          "opendaylight-flow-statistics:output-node-connector": "2",
                          "opendaylight-flow-statistics:max-length": 0
                        }
                      }
                    ]
                  }
                }
              ]
            }
          }
        }
      ],
      "opendaylight-flow-table-statistics:flow-table-statistics": {
        "opendaylight-flow-table-statistics:active-flows": 8,
        "opendaylight-flow-table-statistics:packets-matched": 97683,
        "opendaylight-flow-table-statistics:packets-looked-up": 101772
      }
    }
  ]
}
Discovering and testing new Flow Types

Currently, the openflowplugin has a test-provider that allows you to push various flows through the system from the OSGi command line. Once those flows have been pushed through, you can use them as examples and inspect them in the config data store to see what a particular flow looks like.

Using addMDFlow

Run the controller by executing:

cd openflowplugin/distribution/base/target/distributions-openflowplugin-base-0.0.1-SNAPSHOT-osgipackage/opendaylight
./run.sh

Point your mininet at the controller as described above.

Once you can see your node (probably openflow:1 if you've been following along) in the inventory, at the OSGi command line try running:

addMDFlow openflow:1 f#

Where # is a number between 1 and 80. This will create one of 80 possible flows. You can go confirm they were created on the switch.

Once you've done that, use RESTCONF (as described above) to see a full listing of the flows in table 2 (where they will be put). If you want to see a particular flow, look at the individual flow entry, where the flow id is 123 + the f# you used. So, for example, for f22 the flow id would be 145.

Note: You may have to trim out some of the sections that contain bitfields and binary types that are not correctly modeled.

Note: Before attempting to PUT a flow you have created via addMDFlow, please change its URL and body to, for example, use table 1 instead of table 2 or another Flow Id, so you don’t collide.

Note: There are several test command providers and the one handling flows is OpenflowpluginTestCommandProvider. Methods that can be used as commands in the OSGi console have the prefix _.

Example Flows

Examples of XML for various flow matches, instructions and actions can be found in the Example flows section below.

End to End Topology
Introduction

The purpose of this page is to walk you through how to see the Topology Manager working end to end with the openflowplugin using OpenFlow 1.3.

Basically, you will learn how to:

  1. Run the Base/Virtualization/Service provider Edition with the new openflowplugin: Running the controller with the new OpenFlow Plugin

  2. Start mininet to use OF 1.3: OpenFlow 1.3 Enabled Software Switches / Environment

  3. Use RESTCONF to see the topology information.

Restconf for Topology

The REST URL for listing all the topologies is:

http://localhost:8080/restconf/operational/network-topology:network-topology/

You will need to set the Accept header:

Accept: application/xml

You will also need to use HTTP Basic Auth with username: admin password: admin.

Alternatively, if you have a topology's id, you can address it as:

http://localhost:8080/restconf/operational/network-topology:network-topology/topology/<id>

for example

http://localhost:8080/restconf/operational/network-topology:network-topology/topology/flow:1/
How to hit RestConf with Postman

Install Postman for Chrome

In the Chrome browser bar, enter

chrome://apps/

And click on Postman.

Enter the URL. Click on the Headers button on the far right. Enter the Accept: header. Click on the Basic Auth tab at the top and set up the username and password. Send.

End to End Groups
NOTE

Groups are NOT SUPPORTED in the current (2.0.0) version of Open vSwitch. See

For testing the group feature, please use, for example, the CPqD virtual switch from the End to End Inventory section.

Instructions
Learn End to End for Inventory

End to End Inventory

Check inventory

Run CPqD with support for OF 1.3 as described in End to End Inventory

Make sure you see the openflow:1 node come up as described in End to End Inventory

Group Strategy

The current way to flush a group to the switch looks like this:

  1. Create an MD-SAL modeled group and commit it into the data store using a two-phase commit

  2. The FRM gets notified and invokes the corresponding RPC (addGroup) on a particular service provider (if a suitable provider is registered for the given node)

  3. The provider (the plugin in this case) transforms the MD-SAL modeled group into an OF-API modeled group

  4. The OF-API modeled group is then flushed into the OFLibrary

  5. The OFLibrary encodes the group into the particular version of the wire protocol and sends it to the particular switch

  6. Check on the CPqD switch whether the group is installed

Push your Group
  • With PostMan:

    • Set headers:

      • Content-Type: application/xml

      • Accept: application/xml

    • Use URL: “http://<ip-address>:8080/restconf/config/opendaylight-inventory:nodes/node/openflow:1/group/1”

    • PUT

    • Use Body:

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<group xmlns="urn:opendaylight:flow:inventory">
    <group-type>group-all</group-type>
    <buckets>
        <bucket>
            <action>
                <pop-vlan-action/>
                <order>0</order>
            </action>
            <bucket-id>12</bucket-id>
            <watch_group>14</watch_group>
            <watch_port>1234</watch_port>
        </bucket>
        <bucket>
            <action>
                <set-field>
                    <ipv4-source>100.1.1.1</ipv4-source>
                </set-field>
                <order>0</order>
            </action>
            <action>
                <set-field>
                    <ipv4-destination>200.71.9.52</ipv4-destination>
                </set-field>
                <order>1</order>
            </action>
            <bucket-id>13</bucket-id>
            <watch_group>14</watch_group>
            <watch_port>1234</watch_port>
        </bucket>
    </buckets>
    <barrier>false</barrier>
    <group-name>Foo</group-name>
    <group-id>1</group-id>
</group>
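
As with flows, the group can be pushed and read back from the command line; a hedged curl sketch, assuming the XML body above is saved as group.xml and that the same admin/admin credentials used earlier on this page apply:

# push the group (group-id 1) to node openflow:1
curl -u admin:admin -X PUT \
  -H "Content-Type: application/xml" -H "Accept: application/xml" \
  --data-binary @group.xml \
  "http://<ip-address>:8080/restconf/config/opendaylight-inventory:nodes/node/openflow:1/group/1"

# read back the configured group
curl -u admin:admin "http://<ip-address>:8080/restconf/config/opendaylight-inventory:nodes/node/openflow:1/group/1"

# read the operational group statistics
curl -u admin:admin "http://<ip-address>:8080/restconf/operational/opendaylight-inventory:nodes/node/openflow:1/group/1"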

Note

If you want to try a different group id, make sure the URL and the body stay in sync. For example, if you wanted to try group-id 20, you'd change the URL to "http://<ip-address>:8080/restconf/config/opendaylight-inventory:nodes/node/openflow:1/group/20" but you would also need to update the <group-id>20</group-id> in the body to match.

Note

<ip-address>: Provide the IP address of the machine on which the controller is running.

Check for your group on the switch
  • See your group on your CPqD switch:

COMMAND: sudo dpctl tcp:127.0.0.1:6000 stats-group

SENDING:
stat_req{type="grp", flags="0x0", group="all"}


RECEIVED:
stat_repl{type="grp", flags="0x0", stats=[
{group="1", ref_cnt="0", pkt_cnt="0", byte_cnt="0", cntrs=[{pkt_cnt="0", byte_cnt="0"}, {pkt_cnt="0", byte_cnt="0"}]}]}
Check for your group in the controller config via RESTCONF
  • See your configured group in POSTMAN with

    • URL http://<ip-address>:8080/restconf/config/opendaylight-inventory:nodes/node/openflow:1/group/1

    • GET

    • You should no longer need to set Accept

    • Note: <ip-address>: Provide the IP address of the machine on which the controller is running.

Look for your group stats in the controller operational data via RESTCONF
  • See your operational group stats in POSTMAN with

    • URL http://<ip-address>:8080/restconf/operational/opendaylight-inventory:nodes/node/openflow:1/group/1

    • GET

    • Note: <ip-address>: Provide the IP address of the machine on which the controller is running.

Discovering and testing Group Types

Currently, the openflowplugin has a test-provider that allows you to push various groups through the system from the OSGi command line. Once those groups have been pushed through, you can use them as examples and inspect them in the config data store to see what a particular group looks like.

Using addGroup

Run the controller by executing:

cd openflowplugin/distribution/base/target/distributions-openflowplugin-base-0.0.1-SNAPSHOT-osgipackage/opendaylight
./run.sh

Point your CPqD at the controller as described above.

Once you can see your node (probably openflow:1 if you've been following along) in the inventory, at the OSGi command line try running:

addGroup openflow:1

This will install a group in the switch. You can check whether the group is installed or not.

Once you've done that, use:

  • GET

  • Accept: application/xml

  • URL: “http://<ip-address>:8080/restconf/config/opendaylight-inventory:nodes/node/openflow:1/group/1”

    • Note: <ip-address>: Provide the IP address of the machine on which the controller is running.

Note

Before attempting to PUT a group you have created via addGroup, please change its URL and body to, for example, use group 1 instead of group 2 or another Group Id, so that they don’t collide.

Note

There are several test command providers and the one handling groups is OpenflowpluginGroupTestCommandProvider. Methods that can be used as commands in the OSGi console have the prefix _.

Example Group

Examples for XML for various Group Types can be found in the test-scripts bundle of the plugin code with names g1.xml, g2.xml and g3.xml.

End to End Meters
Instructions
Learn End to End for Inventory
Check inventory
Meter Strategy

The current way to flush a meter to the switch looks like this:

  1. Create an MD-SAL modeled meter and commit it into the data store using a two-phase commit

  2. The FRM gets notified and invokes the corresponding RPC (addMeter) on a particular service provider (if a suitable provider is registered for the given node)

  3. The provider (the plugin in this case) transforms the MD-SAL modeled meter into an OF-API modeled meter

  4. The OF-API modeled meter is then flushed into the OFLibrary

  5. The OFLibrary encodes the meter into the particular version of the wire protocol and sends it to the particular switch

  6. Check on the switch side whether the meter is installed

Push your Meter
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<meter xmlns="urn:opendaylight:flow:inventory">
    <container-name>abcd</container-name>
    <flags>meter-burst</flags>
    <meter-band-headers>
        <meter-band-header>
            <band-burst-size>444</band-burst-size>
            <band-id>0</band-id>
            <band-rate>234</band-rate>
            <dscp-remark-burst-size>5</dscp-remark-burst-size>
            <dscp-remark-rate>12</dscp-remark-rate>
            <prec_level>1</prec_level>
            <meter-band-types>
                <flags>ofpmbt-dscp-remark</flags>
            </meter-band-types>
        </meter-band-header>
    </meter-band-headers>
    <meter-id>1</meter-id>
    <meter-name>Foo</meter-name>
</meter>
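
This page gives only the body for the meter; by analogy with the flow and group sections (and the note below), the PUT goes to the config inventory path ending in /meter/<meter-id>. A hedged curl sketch, assuming the body above is saved as meter.xml and the admin/admin credentials used elsewhere on this page:

# push the meter (meter-id 1) to node openflow:1
curl -u admin:admin -X PUT \
  -H "Content-Type: application/xml" -H "Accept: application/xml" \
  --data-binary @meter.xml \
  "http://<ip-address>:8080/restconf/config/opendaylight-inventory:nodes/node/openflow:1/meter/1"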

Note

If you want to try a different meter id, make sure the URL and the body stay in sync. For example, if you wanted to try meter-id 20, you'd change the URL to "http://<ip-address>:8080/restconf/config/opendaylight-inventory:nodes/node/openflow:1/meter/20" but you would also need to update the <meter-id>20</meter-id> in the body to match.

Note

<ip-address>: Provide the IP address of the machine on which the controller is running.

Check for your meter on the switch
  • See your meter on your CPqD switch:

COMMAND: $ sudo dpctl tcp:127.0.0.1:6000 meter-config

SENDING:
stat_req{type="mconf", flags="0x0", meter_id="ffffffff"}


RECEIVED:
stat_repl{type="mconf", flags="0x0", stats=[{meter= c"", flags="4", bands=[{type = dscp_remark, rate="12", burst_size="5", prec_level="1"}]}]}
Check for your meter in the controller config via RESTCONF
Look for your meter stats in the controller operational data via RESTCONF
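
These two checks are not spelled out here, but by analogy with the group section above the configured meter and its operational statistics should be readable at the corresponding meter URLs; a hedged sketch (URLs assumed by analogy, not confirmed on this page):

# configured meter (what you pushed)
curl -u admin:admin "http://<ip-address>:8080/restconf/config/opendaylight-inventory:nodes/node/openflow:1/meter/1"

# operational meter statistics (what the switch reports)
curl -u admin:admin "http://<ip-address>:8080/restconf/operational/opendaylight-inventory:nodes/node/openflow:1/meter/1"
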
Discovering and testing Meter Types

Currently, the openflowplugin has a test-provider that allows you to push various meters through the system from the OSGi command line. Once those meters have been pushed through, you can use them as examples and inspect them in the config data store to see what a particular meter looks like.

Using addMeter

Run the controller by executing:

cd openflowplugin/distribution/base/target/distributions-openflowplugin-base-0.0.1-SNAPSHOT-osgipackage/opendaylight
./run.sh

Point your CPqD at the controller as described above.

Once you can see your CPqD switch connected to the controller, at the OSGi command line try running:

addMeter openflow:1

Once you've done that, use RESTCONF (as in the addGroup example above) to GET the meter from the controller configuration.

Note

Before attempting to PUT a meter you have created via addMeter, please change its URL and body to, for example, use meter 1 instead of meter 2 or another Meter Id, so you don’t collide.

Note

There are several test command providers and the one handling meters is OpenflowpluginMeterTestCommandProvider. Methods that can be used as commands in the OSGi console have the prefix _. Examples: addMeter, modifyMeter and removeMeter.

Example Meter

Examples for XML for various Meter Types can be found in the test-scripts bundle of the plugin code with names m1.xml, m2.xml and m3.xml.

Statistics
Overview

This page contains high-level detail about the statistics collection mechanism in the new OpenFlow plugin.

Statistics collection in new OpenFlow plugin

The new OpenFlow plugin collects the following statistics from an OpenFlow enabled node (switch):

  1. Individual Flow Statistics

  2. Aggregate Flow Statistics

  3. Flow Table Statistics

  4. Port Statistics

  5. Group Description

  6. Group Statistics

  7. Meter Configuration

  8. Meter Statistics

  9. Queue Statistics

  10. Node Description

  11. Flow Table Features

  12. Port Description

  13. Group Features

  14. Meter Features

At a high level, the statistics collection mechanism is divided into the following three parts:

  1. Statistics related YANG models, service APIs and notification interfaces are defined in the MD-SAL.

  2. Service APIs (RPCs) defined in the YANG models are implemented by the OpenFlow plugin. Notification interfaces are wired up by the OpenFlow plugin to the MD-SAL.

  3. Statistics Manager Module: This module uses the service APIs implemented by the OpenFlow plugin to send statistics requests to all the connected OpenFlow enabled nodes. The module also implements the notification interfaces to receive statistics responses from the nodes. Once it receives a statistics response, it augments the statistics data onto the relevant element of the node (like node-connector, flow, table, group, meter) and stores it in the MD-SAL operational data store.

Details of statistics collection
  • The current implementation collects the above-mentioned statistics (except 10-14) at a periodic interval of 15 seconds.

  • Statistics mentioned in 10 to 14 are only fetched when a node connects to the controller, because these statistics are just static details about the respective elements.

  • Whenever a new element (like a flow, group, meter, or queue) is added to a node, the Statistics Manager immediately sends a statistics request to fetch the latest statistics and stores them in the operational data store.

  • Whenever an element is deleted from the node, it immediately removes the relevant statistics from the operational data store.

  • Statistics data are augmented onto their respective elements stored in the configuration data store. For example, controller-installed flows are stored in the configuration data store. Whenever the Statistics Manager receives statistics data related to these flows, it searches for the corresponding flow in the configuration data store and augments the statistics at the corresponding location in the operational data store. A similar approach is used for the other elements of the node.

  • The Statistics Manager stores flow statistics as unaccounted flow statistics in the operational data store if no corresponding flow exists in the configuration data store. The ID format of unaccounted flow statistics is #UF$TABLE*<table-id>*<unaccounted-flow-count>, e.g. #UF$TABLE*2*1.

  • All the unaccounted flows will be cleaned up periodically after every two cycles of flow statistics collection, provided that there has been no update for these flows in the last two cycles.

  • The Statistics Manager only processes statistics responses for the requests it sent itself. Users can write their own statistics collector using the statistics service APIs and notifications defined in the YANG models; it won't affect the functioning of the Statistics Manager.

  • OpenFlow 1.0 doesn't have the concept of meters and groups, so the Statistics Manager doesn't send any group or meter related statistics requests to an OpenFlow 1.0 enabled switch.

RESTCONF Uris to access statistics of various node elements
  • Aggregate Flow Statistics & Flow Table Statistics

GET  http://<controller-ip>:8080/restconf/operational/opendaylight-inventory:nodes/node/{node-id}/table/{table-id}
  • Individual Flow Statistics from specific table

GET  http://<controller-ip>:8080/restconf/operational/opendaylight-inventory:nodes/node/{node-id}/table/{table-id}/flow/{flow-id}
  • Group Features & Meter Features Statistics

GET  http://<controller-ip>:8080/restconf/operational/opendaylight-inventory:nodes/node/{node-id}
  • Group Description & Group Statistics

GET  http://<controller-ip>:8080/restconf/operational/opendaylight-inventory:nodes/node/{node-id}/group/{group-id}
  • Meter Configuration & Meter Statistics

GET  http://<controller-ip>:8080/restconf/operational/opendaylight-inventory:nodes/node/{node-id}/meter/{meter-id}
  • Node Connector Statistics

GET  http://<controller-ip>:8080/restconf/operational/opendaylight-inventory:nodes/node/{node-id}/node-connector/{node-connector-id}
  • Queue Statistics

GET  http://<controller-ip>:8080/restconf/operational/opendaylight-inventory:nodes/node/{node-id}/node-connector/{node-connector-id}/queue/{queue-id}
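
As a concrete example of substituting the placeholders above, the following requests fetch table and flow statistics for the single-switch topology used earlier on this page (node openflow:1, table 0, flow id 1); a sketch assuming the default admin/admin credentials:

# aggregate flow statistics and flow table statistics for table 0 of openflow:1
curl -u admin:admin -H "Accept: application/xml" \
  "http://<controller-ip>:8080/restconf/operational/opendaylight-inventory:nodes/node/openflow:1/table/0"

# individual flow statistics for flow id 1 in table 0
curl -u admin:admin -H "Accept: application/xml" \
  "http://<controller-ip>:8080/restconf/operational/opendaylight-inventory:nodes/node/openflow:1/table/0/flow/1"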
Bugs

For more details and queries, please send mail to openflowplugin-dev@lists.opendaylight.org or avishnoi@in.ibm.com. If you want to report any bug in statistics collection, please use Bugzilla.

Web / Graphical Interface

In the Hydrogen and Helium releases, the current Web UI does not support the new OpenFlow 1.3 constructs such as groups, meters, new fields in the flows, multiple flow tables, etc.

Command Line Interface

The following is not exactly a CLI, just a set of test commands which can be executed on the OSGi console to test various features of the OpenFlow 1.3 spec.

Flows : Test Provider

Currently, the openflowplugin has a test-provider that allows you to push various flows through the system from the OSGi command line. Once those flows have been pushed through, you can use them as examples and inspect them in the config data store to see what a particular flow looks like.

AddFlow : addMDFlow

Run the controller by executing:

cd openflowplugin/distribution/base/target/distributions-openflowplugin-base-0.0.1-SNAPSHOT-osgipackage/opendaylight
./run.sh

Point your mininet to the controller by giving the parameters --controller=remote,ip=<controller IP>.

Once you see your node (probably openflow:1 if you’ve been following along) in the inventory, at the OSGI command line try running:

addMDFlow openflow:1 f#

Where # is a number between 1 and 80 and openflow:1 is the node id of the switch. This will create one of 80 possible flows. You can confirm that they were created on the switch.

RemoveFlow : removeMDFlow

Similar to addMDFlow, from the controller OSGi prompt, while your switch is connected to the controller, try running:

removeMDFlow openflow:1 f#

where # is a number between 1 and 80 and openflow:1 is the node id of the switch. The flow to be deleted should have the same flow id and node id as used for the flow add.

ModifyFlow : modifyMDFlow

Similar to addMDFlow, from the controller OSGi prompt, while your switch is connected to the controller, try running:

modifyMDFlow openflow:1 f#

where # is a number between 1 and 80 and openflow:1 is the node id of the switch. The flow to be modified should have the same flow id and node id as used for the flow add.

Group : Test Provider

Currently, the openflowplugin has a test-provider that allows you to push various groups through the system from the OSGi command line. Once those groups have been pushed through, you can use them as examples and inspect them in the config data store to see what a particular group looks like.

AddGroup : addGroup

Run the controller by executing:

cd openflowplugin/distribution/base/target/distributions-openflowplugin-base-0.0.1-SNAPSHOT-osgipackage/opendaylight
./run.sh

Point your mininet to the controller by giving the parameters --controller=remote,ip=<controller IP>.

Once you see your node (probably openflow:1 if you’ve been following along) in the inventory, at the OSGI command line try running:

addGroup openflow:1 a# g#

Where # is a number between 1 and 4 for the group type (g#) and between 1 and 28 for the action type (a#). You can confirm that they were created on the switch.

RemoveGroup : removeGroup

Run the controller by executing:

cd openflowplugin/distribution/base/target/distributions-openflowplugin-base-0.0.1-SNAPSHOT-osgipackage/opendaylight
./run.sh

Point your mininet at the controller as described above.

Once you see your node (probably openflow:1 if you’ve been following along) in the inventory, at the OSGI command line try running:

removeGroup openflow:1 a# g#

Where # is a number between 1 and 4 for the group type (g#) and between 1 and 28 for the action type (a#). The GroupId should be the same as that used for adding the group. You can confirm that it was removed from the switch.

ModifyGroup : modifyGroup

Run the controller by executing:

cd openflowplugin/distribution/base/target/distributions-openflowplugin-base-0.0.1-SNAPSHOT-osgipackage/opendaylight
./run.sh

Point your mininet at the controller as described above.

Once you see your node (probably openflow:1 if you’ve been following along) in the inventory, at the OSGI command line try running:

modifyGroup openflow:1 a# g#

Where # is a number between 1 and 4 for the group type (g#) and between 1 and 28 for the action type (a#). The GroupId should be the same as that used for adding the group. You can confirm that it was modified on the switch.

Meters : Test Provider

Currently, the openflowplugin has a test-provider that allows you to push various meters through the system from the OSGi command line. Once those meters have been pushed through, you can use them as examples and inspect them in the config data store to see what a particular meter looks like.

AddMeter : addMeter

Run the controller by executing:

cd openflowplugin/distribution/base/target/distributions-openflowplugin-base-0.0.1-SNAPSHOT-osgipackage/opendaylight
./run.sh

Point your mininet to the controller by giving the parameters --controller=remote,ip=<controller IP>.

Once you see your node (probably openflow:1 if you’ve been following along) in the inventory, at the OSGI command line try running:

addMeter openflow:1

You can now confirm that the meter has been created on the switch.

RemoveMeter : removeMeter

Run the controller by executing:

cd openflowplugin/distribution/base/target/distributions-openflowplugin-base-0.0.1-SNAPSHOT-osgipackage/opendaylight
./run.sh

Point your mininet to the controller by giving the parameters --controller=remote,ip=<controller IP>.

Once you see your node (probably openflow:1 if you’ve been following along) in the inventory, at the OSGI command line try running:

removeMeter openflow:1

The CLI takes care of using the same meterId and nodeId as used for meter add. You can confirm that it was removed from the switch.

ModifyMeter : modifyMeter

Run the controller by executing:

cd openflowplugin/distribution/base/target/distributions-openflowplugin-base-0.0.1-SNAPSHOT-osgipackage/opendaylight
./run.sh

Point your mininet to the controller by giving the parameters --controller=remote,ip=<controller IP>.

Once you see your node (probably openflow:1 if you’ve been following along) in the inventory, at the OSGI command line try running:

modifyMeter openflow:1

The CLI takes care of using the same meterId and nodeId as used for meter add. You can confirm that it was modified on the switch.

Topology : Notification

Currently, the openflowplugin has a test-provider that allows you to get notifications for topology related events such as Link-Discovered and Link-Removed.

Programmatic Interface

The API is documented in the model documentation under the section OpenFlow Services at:

Example flows
Overview

The flow examples on this page are tested to work with OVS.

Use, for example, POSTMAN with the following parameters:

PUT http://<ctrl-addr>:8080/restconf/config/opendaylight-inventory:nodes/node/<Node-id>/table/<Table-#>/flow/<Flow-#>

- Accept: application/xml
- Content-Type: application/xml

For example:

PUT http://localhost:8080/restconf/config/opendaylight-inventory:nodes/node/openflow:1/table/2/flow/127

Make sure that the Table-# and Flow-# in the URL and in the XML match.

The format of the flow-programming XML is determined by the grouping flow in the opendaylight-flow-types YANG model: MISSING LINK.

Match Examples

The format of the XML that describes OpenFlow matches is determined by the opendaylight-match-types YANG model.

IPv4 Dest Address
  • Flow=124, Table=2, Priority=2, Instructions={Apply_Actions={dec_nw_ttl}}, match={ipv4_destination_address=10.0.1.1/24}

  • Note that ethernet-type MUST be 2048 (0x800)

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<flow xmlns="urn:opendaylight:flow:inventory">
    <strict>false</strict>
    <instructions>
        <instruction>
            <order>0</order>
            <apply-actions>
                <action>
                    <order>0</order>
                    <dec-nw-ttl/>
                </action>
            </apply-actions>
        </instruction>
    </instructions>
    <table_id>2</table_id>
    <id>124</id>
    <cookie_mask>255</cookie_mask>
    <installHw>false</installHw>
    <match>
        <ethernet-match>
            <ethernet-type>
                <type>2048</type>
            </ethernet-type>
        </ethernet-match>
        <ipv4-destination>10.0.1.1/24</ipv4-destination>
    </match>
    <hard-timeout>12</hard-timeout>
    <cookie>1</cookie>
    <idle-timeout>34</idle-timeout>
    <flow-name>FooXf1</flow-name>
    <priority>2</priority>
    <barrier>false</barrier>
</flow>
Ethernet Src Address
  • Flow=126, Table=2, Priority=2, Instructions={Apply_Actions={drop}}, match={ethernet-source=00:00:00:00:00:01}

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<flow xmlns="urn:opendaylight:flow:inventory">
    <strict>false</strict>
    <instructions>
        <instruction>
            <order>0</order>
            <apply-actions>
                <action>
                    <order>0</order>
                    <drop-action/>
                </action>
            </apply-actions>
        </instruction>
    </instructions>
    <table_id>2</table_id>
    <id>126</id>
    <cookie_mask>255</cookie_mask>
    <installHw>false</installHw>
    <match>
        <ethernet-match>
            <ethernet-source>
                <address>00:00:00:00:00:01</address>
            </ethernet-source>
        </ethernet-match>
    </match>
    <hard-timeout>12</hard-timeout>
    <cookie>3</cookie>
    <idle-timeout>34</idle-timeout>
    <flow-name>FooXf3</flow-name>
    <priority>2</priority>
    <barrier>false</barrier>
</flow>
Ethernet Src & Dest Addresses, Ethernet Type
  • Flow=127, Table=2, Priority=2, Instructions={Apply_Actions={drop}}, match={ethernet-source=00:00:00:00:23:ae, ethernet-destination=ff:ff:ff:ff:ff:ff, ethernet-type=45}

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<flow xmlns="urn:opendaylight:flow:inventory">
    <strict>false</strict>
    <instructions>
        <instruction>
            <order>0</order>
            <apply-actions>
                <action>
                    <order>0</order>
                    <dec-mpls-ttl/>
                </action>
            </apply-actions>
        </instruction>
    </instructions>
    <table_id>2</table_id>
    <id>127</id>
    <cookie_mask>255</cookie_mask>
    <installHw>false</installHw>
    <match>
        <ethernet-match>
            <ethernet-type>
                <type>45</type>
            </ethernet-type>
            <ethernet-destination>
                <address>ff:ff:ff:ff:ff:ff</address>
            </ethernet-destination>
            <ethernet-source>
                <address>00:00:00:00:23:ae</address>
            </ethernet-source>
        </ethernet-match>
    </match>
    <hard-timeout>12</hard-timeout>
    <cookie>4</cookie>
    <idle-timeout>34</idle-timeout>
    <flow-name>FooXf4</flow-name>
    <priority>2</priority>
    <barrier>false</barrier>
</flow>
Ethernet Src & Dest Addresses, IPv4 Src & Dest Addresses, Input Port
  • Note that ethernet-type MUST be 34887 (0x8847)

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<flow xmlns="urn:opendaylight:flow:inventory">
    <strict>false</strict>
    <instructions>
        <instruction>
            <order>0</order>
            <apply-actions>
                <action>
                    <order>0</order>
                    <dec-mpls-ttl/>
                </action>
            </apply-actions>
        </instruction>
    </instructions>
    <table_id>2</table_id>
    <id>128</id>
    <cookie_mask>255</cookie_mask>
    <match>
        <ethernet-match>
            <ethernet-type>
                <type>34887</type>
            </ethernet-type>
            <ethernet-destination>
                <address>ff:ff:ff:ff:ff:ff</address>
            </ethernet-destination>
            <ethernet-source>
                <address>00:00:00:00:23:ae</address>
            </ethernet-source>
        </ethernet-match>
        <ipv4-source>10.1.2.3/24</ipv4-source>
        <ipv4-destination>20.4.5.6/16</ipv4-destination>
        <in-port>0</in-port>
    </match>
    <hard-timeout>12</hard-timeout>
    <cookie>5</cookie>
    <idle-timeout>34</idle-timeout>
    <flow-name>FooXf5</flow-name>
    <priority>2</priority>
    <barrier>false</barrier>
</flow>
Ethernet Src & Dest Addresses, IPv4 Src & Dest Addresses, IP Protocol #, IP DSCP, IP ECN, Input Port
  • Note that ethernet-type MUST be 2048 (0x800)

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<flow xmlns="urn:opendaylight:flow:inventory">
    <strict>false</strict>
    <instructions>
        <instruction>
            <order>0</order>
            <apply-actions>
                <action>
                    <order>0</order>
                    <dec-nw-ttl/>
                </action>
            </apply-actions>
        </instruction>
    </instructions>
    <table_id>2</table_id>
    <id>130</id>
    <cookie_mask>255</cookie_mask>
    <match>
        <ethernet-match>
            <ethernet-type>
                <type>2048</type>
            </ethernet-type>
            <ethernet-destination>
                <address>ff:ff:ff:ff:ff:aa</address>
            </ethernet-destination>
            <ethernet-source>
                <address>00:00:00:11:23:ae</address>
            </ethernet-source>
        </ethernet-match>
        <ipv4-source>10.1.2.3/24</ipv4-source>
        <ipv4-destination>20.4.5.6/16</ipv4-destination>
        <ip-match>
            <ip-protocol>56</ip-protocol>
            <ip-dscp>15</ip-dscp>
            <ip-ecn>1</ip-ecn>
        </ip-match>
        <in-port>0</in-port>
    </match>
    <hard-timeout>12000</hard-timeout>
    <cookie>7</cookie>
    <idle-timeout>12000</idle-timeout>
    <flow-name>FooXf7</flow-name>
    <priority>2</priority>
    <barrier>false</barrier>
</flow>
Ethernet Src & Dest Addresses, IPv4 Src & Dest Addresses, TCP Src & Dest Ports, IP DSCP, IP ECN, Input Port
  • Note that ethernet-type MUST be 2048 (0x800)

  • Note that IP Protocol Type MUST be 6

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<flow xmlns="urn:opendaylight:flow:inventory">
    <strict>false</strict>
    <instructions>
        <instruction>
            <order>0</order>
            <apply-actions>
                <action>
                    <order>0</order>
                    <dec-nw-ttl/>
                </action>
            </apply-actions>
        </instruction>
    </instructions>
    <table_id>2</table_id>
    <id>131</id>
    <cookie_mask>255</cookie_mask>
    <match>
        <ethernet-match>
            <ethernet-type>
                <type>2048</type>
            </ethernet-type>
            <ethernet-destination>
                <address>ff:ff:29:01:19:61</address>
            </ethernet-destination>
            <ethernet-source>
                <address>00:00:00:11:23:ae</address>
            </ethernet-source>
        </ethernet-match>
        <ipv4-source>17.1.2.3/8</ipv4-source>
        <ipv4-destination>172.168.5.6/16</ipv4-destination>
        <ip-match>
            <ip-protocol>6</ip-protocol>
            <ip-dscp>2</ip-dscp>
            <ip-ecn>2</ip-ecn>
        </ip-match>
        <tcp-source-port>25364</tcp-source-port>
        <tcp-destination-port>8080</tcp-destination-port>
        <in-port>0</in-port>
    </match>
    <hard-timeout>1200</hard-timeout>
    <cookie>8</cookie>
    <idle-timeout>3400</idle-timeout>
    <flow-name>FooXf8</flow-name>
    <priority>2</priority>
    <barrier>false</barrier>
</flow>
Ethernet Src & Dest Addresses, IPv4 Src & Dest Addresses, UDP Src & Dest Ports, IP DSCP, IP ECN, Input Port
  • Note that ethernet-type MUST be 2048 (0x800)

  • Note that IP Protocol Type MUST be 17

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<flow xmlns="urn:opendaylight:flow:inventory">
    <strict>false</strict>
    <instructions>
        <instruction>
            <order>0</order>
            <apply-actions>
                <action>
                    <order>0</order>
                    <dec-nw-ttl/>
                </action>
            </apply-actions>
        </instruction>
    </instructions>
    <table_id>2</table_id>
    <id>132</id>
    <cookie_mask>255</cookie_mask>
    <match>
        <ethernet-match>
            <ethernet-type>
                <type>2048</type>
            </ethernet-type>
            <ethernet-destination>
                <address>20:14:29:01:19:61</address>
            </ethernet-destination>
            <ethernet-source>
                <address>00:00:00:11:23:ae</address>
            </ethernet-source>
        </ethernet-match>
        <ipv4-source>19.1.2.3/10</ipv4-source>
        <ipv4-destination>172.168.5.6/18</ipv4-destination>
        <ip-match>
            <ip-protocol>17</ip-protocol>
            <ip-dscp>8</ip-dscp>
            <ip-ecn>3</ip-ecn>
        </ip-match>
        <udp-source-port>25364</udp-source-port>
        <udp-destination-port>8080</udp-destination-port>
        <in-port>0</in-port>
    </match>
    <hard-timeout>1200</hard-timeout>
    <cookie>9</cookie>
    <idle-timeout>3400</idle-timeout>
    <flow-name>FooXf9</flow-name>
    <priority>2</priority>
    <barrier>false</barrier>
</flow>
Ethernet Src & Dest Addresses, IPv4 Src & Dest Addresses, ICMPv4 Type & Code, IP DSCP, IP ECN, Input Port
  • Note that ethernet-type MUST be 2048 (0x800)

  • Note that IP Protocol Type MUST be 1

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<flow xmlns="urn:opendaylight:flow:inventory">
    <strict>false</strict>
    <instructions>
        <instruction>
            <order>0</order>
            <apply-actions>
                <action>
                    <order>0</order>
                    <dec-nw-ttl/>
                </action>
            </apply-actions>
        </instruction>
    </instructions>
    <table_id>2</table_id>
    <id>134</id>
    <cookie_mask>255</cookie_mask>
    <match>
        <ethernet-match>
            <ethernet-type>
                <type>2048</type>
            </ethernet-type>
            <ethernet-destination>
                <address>ff:ff:29:01:19:61</address>
            </ethernet-destination>
            <ethernet-source>
                <address>00:00:00:11:23:ae</address>
            </ethernet-source>
        </ethernet-match>
        <ipv4-source>17.1.2.3/8</ipv4-source>
        <ipv4-destination>172.168.5.6/16</ipv4-destination>
        <ip-match>
            <ip-protocol>1</ip-protocol>
            <ip-dscp>27</ip-dscp>
            <ip-ecn>3</ip-ecn>
        </ip-match>
        <icmpv4-match>
            <icmpv4-type>6</icmpv4-type>
            <icmpv4-code>3</icmpv4-code>
        </icmpv4-match>
        <in-port>0</in-port>
    </match>
    <hard-timeout>1200</hard-timeout>
    <cookie>11</cookie>
    <idle-timeout>3400</idle-timeout>
    <flow-name>FooXf11</flow-name>
    <priority>2</priority>
</flow>
Ethernet Src & Dest Addresses, ARP Operation, ARP Src & Target Transport Addresses, ARP Src & Target Hw Addresses
  • Note that ethernet-type MUST be 2054 (0x806)

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<flow xmlns="urn:opendaylight:flow:inventory">
    <strict>false</strict>
    <instructions>
        <instruction>
            <order>0</order>
            <apply-actions>
                <action>
                    <order>0</order>
                    <dec-nw-ttl/>
                </action>
                <action>
                    <order>1</order>
                    <dec-mpls-ttl/>
                </action>
            </apply-actions>
        </instruction>
    </instructions>
    <table_id>2</table_id>
    <id>137</id>
    <cookie_mask>255</cookie_mask>
    <match>
        <ethernet-match>
            <ethernet-type>
                <type>2054</type>
            </ethernet-type>
            <ethernet-destination>
                <address>ff:ff:ff:ff:FF:ff</address>
            </ethernet-destination>
            <ethernet-source>
                <address>00:00:FC:01:23:ae</address>
            </ethernet-source>
        </ethernet-match>
        <arp-op>1</arp-op>
        <arp-source-transport-address>192.168.4.1</arp-source-transport-address>
        <arp-target-transport-address>10.21.22.23</arp-target-transport-address>
        <arp-source-hardware-address>
            <address>12:34:56:78:98:AB</address>
        </arp-source-hardware-address>
        <arp-target-hardware-address>
            <address>FE:DC:BA:98:76:54</address>
        </arp-target-hardware-address>
    </match>
    <hard-timeout>12</hard-timeout>
    <cookie>14</cookie>
    <idle-timeout>34</idle-timeout>
    <flow-name>FooXf14</flow-name>
    <priority>2</priority>
    <barrier>false</barrier>
</flow>
Ethernet Src & Dest Addresses, Ethernet Type, VLAN ID, VLAN PCP
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<flow xmlns="urn:opendaylight:flow:inventory">
    <strict>false</strict>
    <instructions>
        <instruction>
            <order>0</order>
            <apply-actions>
                <action>
                    <order>0</order>
                    <dec-nw-ttl/>
                </action>
            </apply-actions>
        </instruction>
    </instructions>
    <table_id>2</table_id>
    <id>138</id>
    <cookie_mask>255</cookie_mask>
    <match>
        <ethernet-match>
            <ethernet-type>
                <type>2048</type>
            </ethernet-type>
            <ethernet-destination>
                <address>ff:ff:29:01:19:61</address>
            </ethernet-destination>
            <ethernet-source>
                <address>00:00:00:11:23:ae</address>
            </ethernet-source>
        </ethernet-match>
        <vlan-match>
            <vlan-id>
                <vlan-id>78</vlan-id>
                <vlan-id-present>true</vlan-id-present>
            </vlan-id>
            <vlan-pcp>3</vlan-pcp>
        </vlan-match>
    </match>
    <hard-timeout>1200</hard-timeout>
    <cookie>15</cookie>
    <idle-timeout>3400</idle-timeout>
    <flow-name>FooXf15</flow-name>
    <priority>2</priority>
    <barrier>false</barrier>
</flow>
Ethernet Src & Dest Addresses, MPLS Label, MPLS TC, MPLS BoS
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<flow xmlns="urn:opendaylight:flow:inventory">
    <flow-name>FooXf17</flow-name>
    <id>140</id>
    <cookie_mask>255</cookie_mask>
    <cookie>17</cookie>
    <hard-timeout>1200</hard-timeout>
    <idle-timeout>3400</idle-timeout>
    <priority>2</priority>
    <table_id>2</table_id>
    <strict>false</strict>
    <instructions>
        <instruction>
            <order>0</order>
            <apply-actions>
                <action>
                    <order>0</order>
                    <dec-nw-ttl/>
                </action>
            </apply-actions>
        </instruction>
    </instructions>
    <match>
        <ethernet-match>
            <ethernet-type>
                <type>34887</type>
            </ethernet-type>
            <ethernet-destination>
                <address>ff:ff:29:01:19:61</address>
            </ethernet-destination>
            <ethernet-source>
                <address>00:00:00:11:23:ae</address>
            </ethernet-source>
        </ethernet-match>
        <protocol-match-fields>
            <mpls-label>567</mpls-label>
            <mpls-tc>3</mpls-tc>
            <mpls-bos>1</mpls-bos>
        </protocol-match-fields>
    </match>
</flow>
IPv6 Src & Dest Addresses
  • Note that ethernet-type MUST be 34525

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<flow xmlns="urn:opendaylight:flow:inventory">
    <strict>false</strict>
    <flow-name>FooXf18</flow-name>
    <id>141</id>
    <cookie_mask>255</cookie_mask>
    <cookie>18</cookie>
    <table_id>2</table_id>
    <priority>2</priority>
    <hard-timeout>1200</hard-timeout>
    <idle-timeout>3400</idle-timeout>
    <installHw>false</installHw>
    <instructions>
        <instruction>
            <order>0</order>
            <apply-actions>
                <action>
                    <order>0</order>
                    <dec-nw-ttl/>
                </action>
            </apply-actions>
        </instruction>
    </instructions>
    <match>
        <ethernet-match>
            <ethernet-type>
                <type>34525</type>
            </ethernet-type>
        </ethernet-match>
        <ipv6-source>fe80::2acf:e9ff:fe21:6431/128</ipv6-source>
        <ipv6-destination>aabb:1234:2acf:e9ff::fe21:6431/64</ipv6-destination>
    </match>
</flow>
Metadata
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<flow xmlns="urn:opendaylight:flow:inventory">
    <strict>false</strict>
    <flow-name>FooXf19</flow-name>
    <id>142</id>
    <cookie_mask>255</cookie_mask>
    <cookie>19</cookie>
    <table_id>2</table_id>
    <priority>1</priority>
    <hard-timeout>1200</hard-timeout>
    <idle-timeout>3400</idle-timeout>
    <installHw>false</installHw>
    <instructions>
        <instruction>
            <order>0</order>
            <apply-actions>
                <action>
                    <order>0</order>
                    <dec-nw-ttl/>
                </action>
            </apply-actions>
        </instruction>
    </instructions>
    <match>
        <metadata>
            <metadata>12345</metadata>
        </metadata>
    </match>
</flow>
Metadata, Metadata Mask
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<flow xmlns="urn:opendaylight:flow:inventory">
    <strict>false</strict>
    <flow-name>FooXf20</flow-name>
    <id>143</id>
    <cookie_mask>255</cookie_mask>
    <cookie>20</cookie>
    <table_id>2</table_id>
    <priority>2</priority>
    <hard-timeout>1200</hard-timeout>
    <idle-timeout>3400</idle-timeout>
    <installHw>false</installHw>
    <instructions>
        <instruction>
            <order>0</order>
            <apply-actions>
                <action>
                    <order>0</order>
                    <dec-nw-ttl/>
                </action>
            </apply-actions>
        </instruction>
    </instructions>
    <match>
        <metadata>
            <metadata>12345</metadata>
            <metadata-mask>//FF</metadata-mask>
        </metadata>
    </match>
</flow>
IPv6 Src & Dest Addresses, Metadata, IP DSCP, IP ECN, UDP Src & Dest Ports
  • Note that ethernet-type MUST be 34525

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<flow xmlns="urn:opendaylight:flow:inventory">
    <strict>false</strict>
    <flow-name>FooXf21</flow-name>
    <id>144</id>
    <cookie_mask>255</cookie_mask>
    <cookie>21</cookie>
    <table_id>2</table_id>
    <priority>2</priority>
    <hard-timeout>1200</hard-timeout>
    <idle-timeout>3400</idle-timeout>
    <installHw>false</installHw>
    <instructions>
        <instruction>
            <order>0</order>
            <apply-actions>
                <action>
                    <order>0</order>
                    <dec-nw-ttl/>
                </action>
            </apply-actions>
        </instruction>
    </instructions>
    <match>
        <ethernet-match>
            <ethernet-type>
                <type>34525</type>
            </ethernet-type>
        </ethernet-match>
        <ipv6-source>1234:5678:9ABC:DEF0:FDCD:A987:6543:210F/76</ipv6-source>
        <ipv6-destination>fe80::2acf:e9ff:fe21:6431/128</ipv6-destination>
        <metadata>
            <metadata>12345</metadata>
        </metadata>
        <ip-match>
            <ip-protocol>17</ip-protocol>
            <ip-dscp>8</ip-dscp>
            <ip-ecn>3</ip-ecn>
        </ip-match>
        <udp-source-port>25364</udp-source-port>
        <udp-destination-port>8080</udp-destination-port>
    </match>
</flow>
IPv6 Src & Dest Addresses, Metadata, IP DSCP, IP ECN, TCP Src & Dest Ports
  • Note that ethernet-type MUST be 34525

  • Note that IP Protocol MUST be 6

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<flow xmlns="urn:opendaylight:flow:inventory">
    <strict>false</strict>
    <flow-name>FooXf22</flow-name>
    <id>145</id>
    <cookie_mask>255</cookie_mask>
    <cookie>22</cookie>
    <table_id>2</table_id>
    <priority>2</priority>
    <hard-timeout>1200</hard-timeout>
    <idle-timeout>3400</idle-timeout>
    <installHw>false</installHw>
    <instructions>
        <instruction>
            <order>0</order>
            <apply-actions>
                <action>
                    <order>0</order>
                    <dec-nw-ttl/>
                </action>
            </apply-actions>
        </instruction>
    </instructions>
    <match>
        <ethernet-match>
            <ethernet-type>
                <type>34525</type>
            </ethernet-type>
        </ethernet-match>
        <ipv6-source>1234:5678:9ABC:DEF0:FDCD:A987:6543:210F/76</ipv6-source>
        <ipv6-destination>fe80:2acf:e9ff:fe21::6431/94</ipv6-destination>
        <metadata>
            <metadata>12345</metadata>
        </metadata>
        <ip-match>
            <ip-protocol>6</ip-protocol>
            <ip-dscp>60</ip-dscp>
            <ip-ecn>3</ip-ecn>
        </ip-match>
        <tcp-source-port>183</tcp-source-port>
        <tcp-destination-port>8080</tcp-destination-port>
    </match>
</flow>
IPv6 Src & Dest Addresses, Metadata, IP DSCP, IP ECN, TCP Src & Dest Ports, IPv6 Label
  • Note that ethernet-type MUST be 34525

  • Note that IP Protocol MUST be 6

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<flow xmlns="urn:opendaylight:flow:inventory">
    <strict>false</strict>
    <flow-name>FooXf23</flow-name>
    <id>146</id>
    <cookie_mask>255</cookie_mask>
    <cookie>23</cookie>
    <table_id>2</table_id>
    <priority>2</priority>
    <hard-timeout>1200</hard-timeout>
    <idle-timeout>3400</idle-timeout>
    <installHw>false</installHw>
    <instructions>
        <instruction>
            <order>0</order>
            <apply-actions>
                <action>
                    <order>0</order>
                    <dec-nw-ttl/>
                </action>
            </apply-actions>
        </instruction>
    </instructions>
    <match>
        <ethernet-match>
            <ethernet-type>
                <type>34525</type>
            </ethernet-type>
        </ethernet-match>
        <ipv6-source>1234:5678:9ABC:DEF0:FDCD:A987:6543:210F/76</ipv6-source>
        <ipv6-destination>fe80:2acf:e9ff:fe21::6431/94</ipv6-destination>
        <metadata>
            <metadata>12345</metadata>
        </metadata>
        <ipv6-label>
            <ipv6-flabel>33</ipv6-flabel>
        </ipv6-label>
        <ip-match>
            <ip-protocol>6</ip-protocol>
            <ip-dscp>60</ip-dscp>
            <ip-ecn>3</ip-ecn>
        </ip-match>
        <tcp-source-port>183</tcp-source-port>
        <tcp-destination-port>8080</tcp-destination-port>
    </match>
</flow>
Tunnel ID
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<flow xmlns="urn:opendaylight:flow:inventory">
    <strict>false</strict>
    <flow-name>FooXf24</flow-name>
    <id>147</id>
    <cookie_mask>255</cookie_mask>
    <cookie>24</cookie>
    <table_id>2</table_id>
    <priority>2</priority>
    <hard-timeout>1200</hard-timeout>
    <idle-timeout>3400</idle-timeout>
    <installHw>false</installHw>
    <instructions>
        <instruction>
            <order>0</order>
            <apply-actions>
                <action>
                    <order>0</order>
                    <dec-nw-ttl/>
                </action>
            </apply-actions>
        </instruction>
    </instructions>
    <match>
        <tunnel>
            <tunnel-id>2591</tunnel-id>
        </tunnel>
    </match>
</flow>
IPv6 Src & Dest Addresses, Metadata, IP DSCP, IP ECN, ICMPv6 Type & Code, IPv6 Label
  • Note that ethernet-type MUST be 34525

  • Note that IP Protocol MUST be 58

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<flow xmlns="urn:opendaylight:flow:inventory">
    <strict>false</strict>
    <flow-name>FooXf25</flow-name>
    <id>148</id>
    <cookie_mask>255</cookie_mask>
    <cookie>25</cookie>
    <table_id>2</table_id>
    <priority>2</priority>
    <hard-timeout>1200</hard-timeout>
    <idle-timeout>3400</idle-timeout>
    <installHw>false</installHw>
    <instructions>
        <instruction>
            <order>0</order>
            <apply-actions>
                <action>
                    <order>0</order>
                    <dec-nw-ttl/>
                </action>
            </apply-actions>
        </instruction>
    </instructions>
    <match>
        <ethernet-match>
            <ethernet-type>
                <type>34525</type>
            </ethernet-type>
        </ethernet-match>
        <ipv6-source>1234:5678:9ABC:DEF0:FDCD:A987:6543:210F/76</ipv6-source>
        <ipv6-destination>fe80:2acf:e9ff:fe21::6431/94</ipv6-destination>
        <metadata>
            <metadata>12345</metadata>
        </metadata>
        <ipv6-label>
            <ipv6-flabel>33</ipv6-flabel>
        </ipv6-label>
        <ip-match>
            <ip-protocol>58</ip-protocol>
            <ip-dscp>60</ip-dscp>
            <ip-ecn>3</ip-ecn>
        </ip-match>
        <icmpv6-match>
            <icmpv6-type>6</icmpv6-type>
            <icmpv6-code>3</icmpv6-code>
        </icmpv6-match>
    </match>
</flow>
IPv6 Src & Dest Addresses, Metadata, IP DSCP, IP ECN, TCP Src & Dst Ports, IPv6 Label, IPv6 Ext Header
  • Note that ethernet-type MUST be 34525

  • Note that IP Protocol MUST be 58

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<flow xmlns="urn:opendaylight:flow:inventory">
    <strict>false</strict>
    <flow-name>FooXf27</flow-name>
    <id>150</id>
    <cookie_mask>255</cookie_mask>
    <cookie>27</cookie>
    <table_id>2</table_id>
    <priority>2</priority>
    <hard-timeout>1200</hard-timeout>
    <idle-timeout>3400</idle-timeout>
    <installHw>false</installHw>
    <instructions>
        <instruction>
            <order>0</order>
            <apply-actions>
                <action>
                    <order>0</order>
                    <dec-nw-ttl/>
                </action>
            </apply-actions>
        </instruction>
    </instructions>
    <match>
        <ethernet-match>
            <ethernet-type>
                <type>34525</type>
            </ethernet-type>
        </ethernet-match>
        <ipv6-source>1234:5678:9ABC:DEF0:FDCD:A987:6543:210F/76</ipv6-source>
        <ipv6-destination>fe80:2acf:e9ff:fe21::6431/94</ipv6-destination>
        <metadata>
            <metadata>12345</metadata>
        </metadata>
        <ipv6-label>
            <ipv6-flabel>33</ipv6-flabel>
        </ipv6-label>
        <ipv6-ext-header>
            <ipv6-exthdr>0</ipv6-exthdr>
        </ipv6-ext-header>
        <ip-match>
            <ip-protocol>6</ip-protocol>
            <ip-dscp>60</ip-dscp>
            <ip-ecn>3</ip-ecn>
        </ip-match>
        <tcp-source-port>183</tcp-source-port>
        <tcp-destination-port>8080</tcp-destination-port>
    </match>
</flow>
Actions

The format of the XML that describes OpenFlow actions is determined by the opendaylight-action-types YANG model.
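
Each of the flow bodies in this guide can be written to the controller's configuration datastore over RESTCONF. The following curl command is a minimal sketch, assuming the default RESTCONF port (8181), the default admin/admin credentials, an OpenFlow node named openflow:1, and a flow body saved locally as flow.xml; the node, table, and flow identifiers in the URL are illustrative and must match the <table_id> and <id> values inside the XML:

# push the flow in flow.xml as flow 256 of table 2 on node openflow:1 (identifiers are illustrative)
curl -u admin:admin -X PUT -H "Content-Type: application/xml" -d @flow.xml \
  http://<controller-ip>:8181/restconf/config/opendaylight-inventory:nodes/node/openflow:1/table/2/flow/256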

Apply Actions
Output to TABLE
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<flow xmlns="urn:opendaylight:flow:inventory">
    <strict>false</strict>
    <flow-name>FooXf101</flow-name>
    <id>256</id>
    <cookie_mask>255</cookie_mask>
    <cookie>101</cookie>
    <table_id>2</table_id>
    <priority>2</priority>
    <hard-timeout>1200</hard-timeout>
    <idle-timeout>3400</idle-timeout>
    <installHw>false</installHw>
    <instructions>
        <instruction>
            <order>0</order>
            <apply-actions>
                <action>
                    <order>0</order>
                    <output-action>
                        <output-node-connector>TABLE</output-node-connector>
                        <max-length>60</max-length>
                    </output-action>
                </action>
            </apply-actions>
        </instruction>
    </instructions>
    <match>
        <ethernet-match>
            <ethernet-type>
                <type>34525</type>
            </ethernet-type>
        </ethernet-match>
        <ipv6-source>1234:5678:9ABC:DEF0:FDCD:A987:6543:210F/76</ipv6-source>
        <ipv6-destination>fe80:2acf:e9ff:fe21::6431/94</ipv6-destination>
        <metadata>
            <metadata>12345</metadata>
        </metadata>
        <ip-match>
            <ip-protocol>6</ip-protocol>
            <ip-dscp>60</ip-dscp>
            <ip-ecn>3</ip-ecn>
        </ip-match>
        <tcp-source-port>183</tcp-source-port>
        <tcp-destination-port>8080</tcp-destination-port>
    </match>
</flow>
Output to INPORT
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<flow xmlns="urn:opendaylight:flow:inventory">
    <strict>false</strict>
    <flow-name>FooXf102</flow-name>
    <id>257</id>
    <cookie_mask>255</cookie_mask>
    <cookie>102</cookie>
    <table_id>2</table_id>
    <priority>2</priority>
    <hard-timeout>1200</hard-timeout>
    <idle-timeout>3400</idle-timeout>
    <installHw>false</installHw>
    <instructions>
        <instruction>
            <order>0</order>
            <apply-actions>
                <action>
                    <order>0</order>
                    <output-action>
                        <output-node-connector>INPORT</output-node-connector>
                        <max-length>60</max-length>
                    </output-action>
                </action>
            </apply-actions>
        </instruction>
    </instructions>
    <match>
        <ethernet-match>
            <ethernet-type>
                <type>2048</type>
            </ethernet-type>
            <ethernet-destination>
                <address>ff:ff:29:01:19:61</address>
            </ethernet-destination>
            <ethernet-source>
                <address>00:00:00:11:23:ae</address>
            </ethernet-source>
        </ethernet-match>
        <ipv4-source>17.1.2.3/8</ipv4-source>
        <ipv4-destination>172.168.5.6/16</ipv4-destination>
        <ip-match>
            <ip-protocol>6</ip-protocol>
            <ip-dscp>2</ip-dscp>
            <ip-ecn>2</ip-ecn>
        </ip-match>
        <tcp-source-port>25364</tcp-source-port>
        <tcp-destination-port>8080</tcp-destination-port>
    </match>
</flow>
Output to Physical Port
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<flow xmlns="urn:opendaylight:flow:inventory">
    <strict>false</strict>
    <flow-name>FooXf103</flow-name>
    <id>258</id>
    <cookie_mask>255</cookie_mask>
    <cookie>103</cookie>
    <table_id>2</table_id>
    <priority>2</priority>
    <hard-timeout>1200</hard-timeout>
    <idle-timeout>3400</idle-timeout>
    <installHw>false</installHw>
    <instructions>
        <instruction>
            <order>0</order>
            <apply-actions>
                <action>
                    <order>0</order>
                    <output-action>
                        <output-node-connector>1</output-node-connector>
                        <max-length>60</max-length>
                    </output-action>
                </action>
            </apply-actions>
        </instruction>
    </instructions>
    <match>
        <ethernet-match>
            <ethernet-type>
                <type>2048</type>
            </ethernet-type>
            <ethernet-destination>
                <address>ff:ff:29:01:19:61</address>
            </ethernet-destination>
            <ethernet-source>
                <address>00:00:00:11:23:ae</address>
            </ethernet-source>
        </ethernet-match>
        <ipv4-source>17.1.2.3/8</ipv4-source>
        <ipv4-destination>172.168.5.6/16</ipv4-destination>
        <ip-match>
            <ip-protocol>6</ip-protocol>
            <ip-dscp>2</ip-dscp>
            <ip-ecn>2</ip-ecn>
        </ip-match>
        <tcp-source-port>25364</tcp-source-port>
        <tcp-destination-port>8080</tcp-destination-port>
    </match>
</flow>
Output to LOCAL
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<flow xmlns="urn:opendaylight:flow:inventory">
    <strict>false</strict>
    <flow-name>FooXf104</flow-name>
    <id>259</id>
    <cookie_mask>255</cookie_mask>
    <cookie>104</cookie>
    <table_id>2</table_id>
    <priority>2</priority>
    <hard-timeout>1200</hard-timeout>
    <idle-timeout>3400</idle-timeout>
    <installHw>false</installHw>
    <instructions>
        <instruction>
            <order>0</order>
            <apply-actions>
                <action>
                    <order>0</order>
                    <output-action>
                        <output-node-connector>LOCAL</output-node-connector>
                        <max-length>60</max-length>
                    </output-action>
                </action>
            </apply-actions>
        </instruction>
    </instructions>
    <match>
        <ethernet-match>
            <ethernet-type>
                <type>34525</type>
            </ethernet-type>
        </ethernet-match>
        <ipv6-source>1234:5678:9ABC:DEF0:FDCD:A987:6543:210F/76</ipv6-source>
        <ipv6-destination>fe80:2acf:e9ff:fe21::6431/94</ipv6-destination>
        <metadata>
            <metadata>12345</metadata>
        </metadata>
        <ip-match>
            <ip-protocol>6</ip-protocol>
            <ip-dscp>60</ip-dscp>
            <ip-ecn>3</ip-ecn>
        </ip-match>
        <tcp-source-port>183</tcp-source-port>
        <tcp-destination-port>8080</tcp-destination-port>
    </match>
</flow>
Output to NORMAL
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<flow xmlns="urn:opendaylight:flow:inventory">
    <strict>false</strict>
    <flow-name>FooXf105</flow-name>
    <id>260</id>
    <cookie_mask>255</cookie_mask>
    <cookie>105</cookie>
    <table_id>2</table_id>
    <priority>2</priority>
    <hard-timeout>1200</hard-timeout>
    <idle-timeout>3400</idle-timeout>
    <installHw>false</installHw>
    <instructions>
        <instruction>
            <order>0</order>
            <apply-actions>
                <action>
                    <order>0</order>
                    <output-action>
                        <output-node-connector>NORMAL</output-node-connector>
                        <max-length>60</max-length>
                    </output-action>
                </action>
            </apply-actions>
        </instruction>
    </instructions>
    <match>
        <ethernet-match>
            <ethernet-type>
                <type>34525</type>
            </ethernet-type>
        </ethernet-match>
        <ipv6-source>1234:5678:9ABC:DEF0:FDCD:A987:6543:210F/84</ipv6-source>
        <ipv6-destination>fe80:2acf:e9ff:fe21::6431/90</ipv6-destination>
        <metadata>
            <metadata>12345</metadata>
        </metadata>
        <ip-match>
            <ip-protocol>6</ip-protocol>
            <ip-dscp>45</ip-dscp>
            <ip-ecn>2</ip-ecn>
        </ip-match>
        <tcp-source-port>20345</tcp-source-port>
        <tcp-destination-port>80</tcp-destination-port>
    </match>
</flow>
Output to FLOOD
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<flow xmlns="urn:opendaylight:flow:inventory">
    <strict>false</strict>
    <flow-name>FooXf106</flow-name>
    <id>261</id>
    <cookie_mask>255</cookie_mask>
    <cookie>106</cookie>
    <table_id>2</table_id>
    <priority>2</priority>
    <hard-timeout>1200</hard-timeout>
    <idle-timeout>3400</idle-timeout>
    <installHw>false</installHw>
    <instructions>
        <instruction>
            <order>0</order>
            <apply-actions>
                <action>
                    <order>0</order>
                    <output-action>
                        <output-node-connector>FLOOD</output-node-connector>
                        <max-length>60</max-length>
                    </output-action>
                </action>
            </apply-actions>
        </instruction>
    </instructions>
    <match>
        <ethernet-match>
            <ethernet-type>
                <type>34525</type>
            </ethernet-type>
        </ethernet-match>
        <ipv6-source>1234:5678:9ABC:DEF0:FDCD:A987:6543:210F/100</ipv6-source>
        <ipv6-destination>fe80:2acf:e9ff:fe21::6431/67</ipv6-destination>
        <metadata>
            <metadata>12345</metadata>
        </metadata>
        <ip-match>
            <ip-protocol>6</ip-protocol>
            <ip-dscp>45</ip-dscp>
            <ip-ecn>2</ip-ecn>
        </ip-match>
        <tcp-source-port>20345</tcp-source-port>
        <tcp-destination-port>80</tcp-destination-port>
    </match>
</flow>
Output to ALL
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<flow xmlns="urn:opendaylight:flow:inventory">
    <strict>false</strict>
    <flow-name>FooXf107</flow-name>
    <id>262</id>
    <cookie_mask>255</cookie_mask>
    <cookie>107</cookie>
    <table_id>2</table_id>
    <priority>2</priority>
    <hard-timeout>1200</hard-timeout>
    <idle-timeout>3400</idle-timeout>
    <installHw>false</installHw>
    <instructions>
        <instruction>
            <order>0</order>
            <apply-actions>
                <action>
                    <order>0</order>
                    <output-action>
                        <output-node-connector>ALL</output-node-connector>
                        <max-length>60</max-length>
                    </output-action>
                </action>
            </apply-actions>
        </instruction>
    </instructions>
    <match>
        <ethernet-match>
            <ethernet-type>
                <type>2048</type>
            </ethernet-type>
            <ethernet-destination>
                <address>20:14:29:01:19:61</address>
            </ethernet-destination>
            <ethernet-source>
                <address>00:00:00:11:23:ae</address>
            </ethernet-source>
        </ethernet-match>
        <ipv4-source>19.1.2.3/10</ipv4-source>
        <ipv4-destination>172.168.5.6/18</ipv4-destination>
        <ip-match>
            <ip-protocol>17</ip-protocol>
            <ip-dscp>8</ip-dscp>
            <ip-ecn>3</ip-ecn>
        </ip-match>
        <udp-source-port>25364</udp-source-port>
        <udp-destination-port>8080</udp-destination-port>
        <in-port>0</in-port>
    </match>
</flow>
Output to CONTROLLER
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<flow xmlns="urn:opendaylight:flow:inventory">
    <strict>false</strict>
    <flow-name>FooXf108</flow-name>
    <id>263</id>
    <cookie_mask>255</cookie_mask>
    <cookie>108</cookie>
    <table_id>2</table_id>
    <priority>2</priority>
    <hard-timeout>1200</hard-timeout>
    <idle-timeout>3400</idle-timeout>
    <installHw>false</installHw>
    <instructions>
        <instruction>
            <order>0</order>
            <apply-actions>
                <action>
                    <order>0</order>
                    <output-action>
                        <output-node-connector>CONTROLLER</output-node-connector>
                        <max-length>60</max-length>
                    </output-action>
                </action>
            </apply-actions>
        </instruction>
    </instructions>
    <match>
        <ethernet-match>
            <ethernet-type>
                <type>2048</type>
            </ethernet-type>
            <ethernet-destination>
                <address>20:14:29:01:19:61</address>
            </ethernet-destination>
            <ethernet-source>
                <address>00:00:00:11:23:ae</address>
            </ethernet-source>
        </ethernet-match>
        <ipv4-source>19.1.2.3/10</ipv4-source>
        <ipv4-destination>172.168.5.6/18</ipv4-destination>
        <ip-match>
            <ip-protocol>17</ip-protocol>
            <ip-dscp>8</ip-dscp>
            <ip-ecn>3</ip-ecn>
        </ip-match>
        <udp-source-port>25364</udp-source-port>
        <udp-destination-port>8080</udp-destination-port>
        <in-port>0</in-port>
    </match>
</flow>
Output to ANY
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<flow xmlns="urn:opendaylight:flow:inventory">
    <strict>false</strict>
    <flow-name>FooXf109</flow-name>
    <id>264</id>
    <cookie_mask>255</cookie_mask>
    <cookie>109</cookie>
    <table_id>2</table_id>
    <priority>2</priority>
    <hard-timeout>1200</hard-timeout>
    <idle-timeout>3400</idle-timeout>
    <installHw>false</installHw>
    <instructions>
        <instruction>
            <order>0</order>
            <apply-actions>
                <action>
                    <order>0</order>
                    <output-action>
                        <output-node-connector>ANY</output-node-connector>
                        <max-length>60</max-length>
                    </output-action>
                </action>
            </apply-actions>
        </instruction>
    </instructions>
    <match>
        <ethernet-match>
            <ethernet-type>
                <type>2048</type>
            </ethernet-type>
            <ethernet-destination>
                <address>20:14:29:01:19:61</address>
            </ethernet-destination>
            <ethernet-source>
                <address>00:00:00:11:23:ae</address>
            </ethernet-source>
        </ethernet-match>
        <ipv4-source>19.1.2.3/10</ipv4-source>
        <ipv4-destination>172.168.5.6/18</ipv4-destination>
        <ip-match>
            <ip-protocol>17</ip-protocol>
            <ip-dscp>8</ip-dscp>
            <ip-ecn>3</ip-ecn>
        </ip-match>
        <udp-source-port>25364</udp-source-port>
        <udp-destination-port>8080</udp-destination-port>
        <in-port>0</in-port>
    </match>
</flow>
Push VLAN
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<flow xmlns="urn:opendaylight:flow:inventory">
   <strict>false</strict>
   <instructions>
       <instruction>
           <order>0</order>
           <apply-actions>
              <action>
                 <push-vlan-action>
                     <ethernet-type>33024</ethernet-type>
                 </push-vlan-action>
                 <order>0</order>
              </action>
               <action>
                   <set-field>
                       <vlan-match>
                            <vlan-id>
                                <vlan-id>79</vlan-id>
                                <vlan-id-present>true</vlan-id-present>
                            </vlan-id>
                       </vlan-match>
                   </set-field>
                   <order>1</order>
               </action>
               <action>
                   <output-action>
                       <output-node-connector>5</output-node-connector>
                   </output-action>
                   <order>2</order>
               </action>
           </apply-actions>
       </instruction>
   </instructions>
   <table_id>0</table_id>
   <id>31</id>
   <match>
       <ethernet-match>
           <ethernet-type>
               <type>2048</type>
           </ethernet-type>
           <ethernet-destination>
               <address>FF:FF:29:01:19:61</address>
           </ethernet-destination>
           <ethernet-source>
               <address>00:00:00:11:23:AE</address>
           </ethernet-source>
       </ethernet-match>
     <in-port>1</in-port>
   </match>
   <flow-name>vlan_flow</flow-name>
   <priority>2</priority>
</flow>
Push MPLS
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<flow
    xmlns="urn:opendaylight:flow:inventory">
    <flow-name>push-mpls-action</flow-name>
    <instructions>
        <instruction>
            <order>3</order>
            <apply-actions>
                <action>
                    <push-mpls-action>
                        <ethernet-type>34887</ethernet-type>
                    </push-mpls-action>
                    <order>0</order>
                </action>
                <action>
                    <set-field>
                        <protocol-match-fields>
                            <mpls-label>27</mpls-label>
                        </protocol-match-fields>
                    </set-field>
                    <order>1</order>
                </action>
                <action>
                    <output-action>
                        <output-node-connector>2</output-node-connector>
                    </output-action>
                    <order>2</order>
                </action>
            </apply-actions>
        </instruction>
    </instructions>
    <strict>false</strict>
    <id>100</id>
    <match>
        <ethernet-match>
            <ethernet-type>
                <type>2048</type>
            </ethernet-type>
        </ethernet-match>
        <in-port>1</in-port>
        <ipv4-destination>10.0.0.4/32</ipv4-destination>
    </match>
    <idle-timeout>0</idle-timeout>
    <cookie_mask>255</cookie_mask>
    <cookie>401</cookie>
    <priority>8</priority>
    <hard-timeout>0</hard-timeout>
    <installHw>false</installHw>
    <table_id>0</table_id>
</flow>
Swap MPLS
  • Note that ethernet-type MUST be 34887

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<flow
    xmlns="urn:opendaylight:flow:inventory">
    <flow-name>push-mpls-action</flow-name>
    <instructions>
        <instruction>
            <order>2</order>
            <apply-actions>
                <action>
                    <set-field>
                        <protocol-match-fields>
                            <mpls-label>37</mpls-label>
                        </protocol-match-fields>
                    </set-field>
                    <order>1</order>
                </action>
                <action>
                    <output-action>
                        <output-node-connector>2</output-node-connector>
                    </output-action>
                    <order>2</order>
                </action>
            </apply-actions>
        </instruction>
    </instructions>
    <strict>false</strict>
    <id>101</id>
    <match>
        <ethernet-match>
            <ethernet-type>
                <type>34887</type>
            </ethernet-type>
        </ethernet-match>
        <in-port>1</in-port>
        <protocol-match-fields>
            <mpls-label>27</mpls-label>
        </protocol-match-fields>
    </match>
    <idle-timeout>0</idle-timeout>
    <cookie_mask>255</cookie_mask>
    <cookie>401</cookie>
    <priority>8</priority>
    <hard-timeout>0</hard-timeout>
    <installHw>false</installHw>
    <table_id>0</table_id>
</flow>
Pop MPLS
  • Note that ethernet-type MUST be 34887

  • There is a known issue with OVS 2.1; an upstream OVS fix is available

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<flow
    xmlns="urn:opendaylight:flow:inventory">
    <flow-name>FooXf10</flow-name>
    <instructions>
        <instruction>
            <order>0</order>
            <apply-actions>
                <action>
                    <pop-mpls-action>
                        <ethernet-type>2048</ethernet-type>
                    </pop-mpls-action>
                    <order>1</order>
                </action>
                <action>
                    <output-action>
                        <output-node-connector>2</output-node-connector>
                        <max-length>60</max-length>
                    </output-action>
                    <order>2</order>
                </action>
            </apply-actions>
        </instruction>
    </instructions>
    <id>11</id>
    <strict>false</strict>
    <match>
        <ethernet-match>
            <ethernet-type>
                <type>34887</type>
            </ethernet-type>
        </ethernet-match>
        <in-port>1</in-port>
        <protocol-match-fields>
            <mpls-label>37</mpls-label>
        </protocol-match-fields>
    </match>
    <idle-timeout>0</idle-timeout>
    <cookie>889</cookie>
    <cookie_mask>255</cookie_mask>
    <installHw>false</installHw>
    <hard-timeout>0</hard-timeout>
    <priority>10</priority>
    <table_id>0</table_id>
</flow>
Learn
<flow xmlns="urn:opendaylight:flow:inventory">
  <id>ICMP_Ingress258a5a5ad-08a8-4ff7-98f5-ef0b96ca3bb8</id>
  <hard-timeout>0</hard-timeout>
  <idle-timeout>0</idle-timeout>
  <match>
    <ethernet-match>
      <ethernet-type>
        <type>2048</type>
      </ethernet-type>
    </ethernet-match>
    <metadata>
      <metadata>2199023255552</metadata>
      <metadata-mask>2305841909702066176</metadata-mask>
    </metadata>
    <ip-match>
      <ip-protocol>1</ip-protocol>
    </ip-match>
  </match>
  <cookie>110100480</cookie>
  <instructions>
    <instruction>
      <order>0</order>
      <apply-actions>
        <action>
          <order>1</order>
          <nx-resubmit
            xmlns="urn:opendaylight:openflowplugin:extension:nicira:action">
            <table>220</table>
          </nx-resubmit>
        </action>
        <action>
          <order>0</order>
          <nx-learn
            xmlns="urn:opendaylight:openflowplugin:extension:nicira:action">
            <idle-timeout>60</idle-timeout>
            <fin-idle-timeout>0</fin-idle-timeout>
            <hard-timeout>60</hard-timeout>
            <flags>0</flags>
            <table-id>41</table-id>
            <priority>61010</priority>
            <fin-hard-timeout>0</fin-hard-timeout>
            <flow-mods>
              <flow-mod-add-match-from-value>
                <src-ofs>0</src-ofs>
                <value>2048</value>
                <src-field>1538</src-field>
                <flow-mod-num-bits>16</flow-mod-num-bits>
              </flow-mod-add-match-from-value>
            </flow-mods>
            <flow-mods>
              <flow-mod-add-match-from-field>
                <src-ofs>0</src-ofs>
                <dst-ofs>0</dst-ofs>
                <dst-field>4100</dst-field>
                <src-field>3588</src-field>
                <flow-mod-num-bits>32</flow-mod-num-bits>
              </flow-mod-add-match-from-field>
            </flow-mods>
            <flow-mods>
              <flow-mod-add-match-from-field>
                <src-ofs>0</src-ofs>
                <dst-ofs>0</dst-ofs>
                <dst-field>518</dst-field>
                <src-field>1030</src-field>
                <flow-mod-num-bits>48</flow-mod-num-bits>
              </flow-mod-add-match-from-field>
            </flow-mods>
            <flow-mods>
              <flow-mod-add-match-from-field>
                <src-ofs>0</src-ofs>
                <dst-ofs>0</dst-ofs>
                <dst-field>3073</dst-field>
                <src-field>3073</src-field>
                <flow-mod-num-bits>8</flow-mod-num-bits>
              </flow-mod-add-match-from-field>
            </flow-mods>
            <flow-mods>
              <flow-mod-copy-value-into-field>
                <dst-ofs>0</dst-ofs>
                <value>1</value>
                <dst-field>65540</dst-field>
                <flow-mod-num-bits>8</flow-mod-num-bits>
              </flow-mod-copy-value-into-field>
            </flow-mods>
            <cookie>110100480</cookie>
          </nx-learn>
        </action>
      </apply-actions>
    </instruction>
  </instructions>
  <installHw>true</installHw>
  <barrier>false</barrier>
  <strict>false</strict>
  <priority>61010</priority>
  <table_id>253</table_id>
  <flow-name>ACL</flow-name>
</flow>
OpenDaylight OpenFlow Plugin: Troubleshooting

This section is currently empty.

OVSDB User Guide

The OVSDB project implements the OVSDB protocol (RFC 7047), as well as plugins to support OVSDB Schemas, such as the Open_vSwitch database schema and the hardware_vtep database schema.

OVSDB Plugins
Overview and Architecture

There are currently two OVSDB Southbound plugins:

  • odl-ovsdb-southbound: Implements the OVSDB Open_vSwitch database schema.

  • odl-ovsdb-hwvtepsouthbound: Implements the OVSDB hardware_vtep database schema.

These plugins are normally installed and used automatically by higher level applications such as odl-ovsdb-openstack; however, they can also be installed separately and used via their REST APIs as is described in the following sections.

OVSDB Southbound Plugin

The OVSDB Southbound Plugin provides support for managing OVS hosts via an OVSDB model in the MD-SAL which maps to important tables and attributes present in the Open_vSwitch schema. The OVSDB Southbound Plugin is able to connect actively or passively to OVS hosts and operate as the OVSDB manager of the OVS host. Using the OVSDB protocol it is able to manage the OVS database (OVSDB) on the OVS host as defined by the Open_vSwitch schema.

OVSDB YANG Model

The OVSDB Southbound Plugin provides a YANG model which is based on the abstract network topology model.

The details of the OVSDB YANG model are defined in the ovsdb.yang file.

The OVSDB YANG model defines three augmentations:

ovsdb-node-augmentation

This augments the network-topology node and maps primarily to the Open_vSwitch table of the OVSDB schema. The ovsdb-node-augmentation is a representation of the OVS host. It contains the following attributes.

  • connection-info - holds the local and remote IP address and TCP port numbers for the OpenDaylight to OVSDB node connections

  • db-version - version of the OVSDB database

  • ovs-version - version of OVS

  • list managed-node-entry - a list of references to ovsdb-bridge-augmentation nodes, which are the OVS bridges managed by this OVSDB node

  • list datapath-type-entry - a list of the datapath types supported by the OVSDB node (e.g. system, netdev) - depends on newer OVS versions

  • list interface-type-entry - a list of the interface types supported by the OVSDB node (e.g. internal, vxlan, gre, dpdk, etc.) - depends on newer OVS versions

  • list openvswitch-external-ids - a list of the key/value pairs in the Open_vSwitch table external_ids column

  • list openvswitch-other-config - a list of the key/value pairs in the Open_vSwitch table other_config column

  • list manager-entry - list of manager information entries and connection status

  • list qos-entries - list of QoS entries present in the QoS table

  • list queues - list of queue entries present in the queue table

ovsdb-bridge-augmentation

This augments the network-topology node and maps to a specific bridge in the OVSDB bridge table of the associated OVSDB node. It contains the following attributes.

  • bridge-uuid - UUID of the OVSDB bridge

  • bridge-name - name of the OVSDB bridge

  • bridge-openflow-node-ref - a reference (instance-identifier) of the OpenFlow node associated with this bridge

  • list protocol-entry - the version of OpenFlow protocol to use with the OpenFlow controller

  • list controller-entry - a list of controller-uuid and is-connected status of the OpenFlow controllers associated with this bridge

  • datapath-id - the datapath ID associated with this bridge on the OVSDB node

  • datapath-type - the datapath type of this bridge

  • fail-mode - the OVSDB fail mode setting of this bridge

  • flow-node - a reference to the flow node corresponding to this bridge

  • managed-by - a reference to the ovsdb-node-augmentation (OVSDB node) that is managing this bridge

  • list bridge-external-ids - a list of the key/value pairs in the bridge table external_ids column for this bridge

  • list bridge-other-configs - a list of the key/value pairs in the bridge table other_config column for this bridge

ovsdb-termination-point-augmentation

This augments the topology termination point model. The OVSDB Southbound Plugin uses this model to represent both the OVSDB port and OVSDB interface for a given port/interface in the OVSDB schema. It contains the following attributes.

  • port-uuid - UUID of an OVSDB port row

  • interface-uuid - UUID of an OVSDB interface row

  • name - name of the port and interface

  • interface-type - the interface type

  • list options - a list of port options

  • ofport - the OpenFlow port number of the interface

  • ofport_request - the requested OpenFlow port number for the interface

  • vlan-tag - the VLAN tag value

  • list trunks - list of VLAN tag values for trunk mode

  • vlan-mode - the VLAN mode (e.g. access, native-tagged, native-untagged, trunk)

  • list port-external-ids - a list of the key/value pairs in the port table external_ids column for this port

  • list interface-external-ids - a list of the key/value pairs in the interface table external_ids column for this interface

  • list port-other-configs - a list of the key/value pairs in the port table other_config column for this port

  • list interface-other-configs - a list of the key/value pairs in the interface table other_config column for this interface

  • list interface-lldp - LLDP Auto Attach configuration for the interface

  • qos - UUID of the QoS entry in the QoS table assigned to this port

Getting Started

To install the OVSDB Southbound Plugin, use the following command at the Karaf console:

feature:install odl-ovsdb-southbound-impl-ui

After installing the OVSDB Southbound Plugin, and before any OVSDB topology nodes have been created, the OVSDB topology will appear as follows in the configuration and operational MD-SAL.

HTTP GET:

http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/
 or
http://<controller-ip>:8181/restconf/operational/network-topology:network-topology/topology/ovsdb:1/

Result Body:

{
  "topology": [
    {
      "topology-id": "ovsdb:1"
    }
  ]
}

Where

<controller-ip> is the IP address of the OpenDaylight controller
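
The same query can be issued with curl; for example (a sketch assuming the default admin/admin credentials):

curl -u admin:admin -H "Accept: application/json" \
  http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/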

OpenDaylight as the OVSDB Manager

An OVS host is a system which is running the OVS software and is capable of being managed by an OVSDB manager. The OVSDB Southbound Plugin is capable of connecting to an OVS host and operating as an OVSDB manager. Depending on the configuration of the OVS host, the connection of OpenDaylight to the OVS host will be active or passive.

Active Connection to OVS Hosts

An active connection is when the OVSDB Southbound Plugin initiates the connection to an OVS host. This happens when the OVS host is configured to listen for the connection (i.e. the OVSDB Southbound Plugin is active and the OVS host is passive). The OVS host is configured with the following command:

sudo ovs-vsctl set-manager ptcp:6640

This configures the OVS host to listen on TCP port 6640.
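
You can confirm the manager target on the OVS host itself with the following read-only check:

sudo ovs-vsctl get-manager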

The OVSDB Southbound Plugin can be configured via the configuration MD-SAL to actively connect to an OVS host.

HTTP PUT:

http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:%2F%2FHOST1

Body:

{
  "network-topology:node": [
    {
      "node-id": "ovsdb://HOST1",
      "connection-info": {
        "ovsdb:remote-port": "6640",
        "ovsdb:remote-ip": "<ovs-host-ip>"
      }
    }
  ]
}

Where

<ovs-host-ip> is the IP address of the OVS Host

Note that the configuration assigns a node-id of “ovsdb://HOST1” to the OVSDB node. This node-id will be used as the identifier for this OVSDB node in the MD-SAL.
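
As a curl sketch (again assuming the default admin/admin credentials, with the JSON body above saved locally as ovsdb-node.json), the PUT looks like this; note that the / characters inside the node-id remain percent-encoded as %2F in the URL:

curl -u admin:admin -X PUT -H "Content-Type: application/json" -d @ovsdb-node.json \
  http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:%2F%2FHOST1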

Query the configuration MD-SAL for the OVSDB topology.

HTTP GET:

http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/

Result Body:

{
  "topology": [
    {
      "topology-id": "ovsdb:1",
      "node": [
        {
          "node-id": "ovsdb://HOST1",
          "ovsdb:connection-info": {
            "remote-ip": "<ovs-host-ip>",
            "remote-port": 6640
          }
        }
      ]
    }
  ]
}

As a result of the OVSDB node configuration being added to the configuration MD-SAL, the OVSDB Southbound Plugin will attempt to connect with the specified OVS host. If the connection is successful, the plugin will connect to the OVS host as an OVSDB manager, query the schemas and databases supported by the OVS host, and register to monitor changes made to the OVSDB tables on the OVS host. It will also set an external id key and value in the external-ids column of the Open_vSwitch table of the OVS host which identifies the MD-SAL instance identifier of the OVSDB node. This ensures that the OVSDB node will use the same node-id in both the configuration and operational MD-SAL.

"opendaylight-iid" = "instance identifier of OVSDB node in the MD-SAL"

When the OVS host sends the OVSDB Southbound Plugin the first update message after the monitoring has been established, the plugin will populate the operational MD-SAL with the information it receives from the OVS host.

Query the operational MD-SAL for the OVSDB topology.

HTTP GET:

http://<controller-ip>:8181/restconf/operational/network-topology:network-topology/topology/ovsdb:1/

Result Body:

{
  "topology": [
    {
      "topology-id": "ovsdb:1",
      "node": [
        {
          "node-id": "ovsdb://HOST1",
          "ovsdb:openvswitch-external-ids": [
            {
              "external-id-key": "opendaylight-iid",
              "external-id-value": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='ovsdb:1']/network-topology:node[network-topology:node-id='ovsdb://HOST1']"
            }
          ],
          "ovsdb:connection-info": {
            "local-ip": "<controller-ip>",
            "remote-port": 6640,
            "remote-ip": "<ovs-host-ip>",
            "local-port": 39042
          },
          "ovsdb:ovs-version": "2.3.1-git4750c96",
          "ovsdb:manager-entry": [
            {
              "target": "ptcp:6640",
              "connected": true,
              "number_of_connections": 1
            }
          ]
        }
      ]
    }
  ]
}

To disconnect an active connection, just delete the configuration MD-SAL entry.

HTTP DELETE:

http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:%2F%2FHOST1

Note that in the above example, the / characters which are part of the node-id are URL-encoded (percent-encoded) as “%2F”.
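
A curl sketch for this delete might look like the following (credentials again assumed to be admin:admin):

curl -i -X DELETE --user admin:admin http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:%2F%2FHOST1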

Passive Connection to OVS Hosts

A passive connection is when the OVS host initiates the connection to the OVSDB Southbound Plugin. This happens when the OVS host is configured to connect to the OVSDB Southbound Plugin. The OVS host is configured with the following command:

sudo ovs-vsctl set-manager tcp:<controller-ip>:6640

The OVSDB Southbound Plugin is configured to listen for OVSDB connections on TCP port 6640. This value can be changed by editing the “./karaf/target/assembly/etc/custom.properties” file and changing the value of the “ovsdb.listenPort” attribute.
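
For example, to have the plugin listen on a different port, the relevant line in custom.properties might look like the following (the key name comes from the paragraph above; the port value 6641 is purely illustrative, and the standard Java properties key=value format is assumed):

ovsdb.listenPort=6641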

When a passive connection is made, the OVSDB node will appear first in the operational MD-SAL. If the Open_vSwitch table does not contain an external-ids value of opendaylight-iid, then the node-id of the new OVSDB node will be created in the format:

"ovsdb://uuid/<actual UUID value>"

If an opendaylight-iid value is already present in the external-ids column, then the instance identifier defined there will be used to create the node-id instead.

Query the operational MD-SAL for an OVSDB node after a passive connection.

HTTP GET:

http://<controller-ip>:8181/restconf/operational/network-topology:network-topology/topology/ovsdb:1/

Result Body:

{
  "topology": [
    {
      "topology-id": "ovsdb:1",
      "node": [
        {
          "node-id": "ovsdb://uuid/163724f4-6a70-428a-a8a0-63b2a21f12dd",
          "ovsdb:openvswitch-external-ids": [
            {
              "external-id-key": "system-id",
              "external-id-value": "ecf160af-e78c-4f6b-a005-83a6baa5c979"
            }
          ],
          "ovsdb:connection-info": {
            "local-ip": "<controller-ip>",
            "remote-port": 46731,
            "remote-ip": "<ovs-host-ip>",
            "local-port": 6640
          },
          "ovsdb:ovs-version": "2.3.1-git4750c96",
          "ovsdb:manager-entry": [
            {
              "target": "tcp:10.11.21.7:6640",
              "connected": true,
              "number_of_connections": 1
            }
          ]
        }
      ]
    }
  ]
}

Take note of the node-id that was created in this case.

Manage Bridges

The OVSDB Southbound Plugin can be used to manage bridges on an OVS host.

This example shows how to add a bridge to the OVSDB node ovsdb://HOST1.

HTTP PUT:

http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:%2F%2FHOST1%2Fbridge%2Fbrtest

Body:

{
  "network-topology:node": [
    {
      "node-id": "ovsdb://HOST1/bridge/brtest",
      "ovsdb:bridge-name": "brtest",
      "ovsdb:protocol-entry": [
        {
          "protocol": "ovsdb:ovsdb-bridge-protocol-openflow-13"
        }
      ],
      "ovsdb:managed-by": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='ovsdb:1']/network-topology:node[network-topology:node-id='ovsdb://HOST1']"
    }
  ]
}

Notice that the ovsdb:managed-by attribute is specified in the command. This indicates the association of the new bridge node with its OVSDB node.
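
As with the OVSDB node itself, this bridge configuration can be sent with a curl command similar to the following (admin:admin credentials assumed; ${JSON} stands for the body above):

curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" --data '${JSON}' -X PUT --user admin:admin http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:%2F%2FHOST1%2Fbridge%2Fbrtest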

Bridges can be updated. In the following example, OpenDaylight is configured to be the OpenFlow controller for the bridge.

HTTP PUT:

http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:%2F%2FHOST1%2Fbridge%2Fbrtest

Body:

{
  "network-topology:node": [
        {
          "node-id": "ovsdb://HOST1/bridge/brtest",
             "ovsdb:bridge-name": "brtest",
              "ovsdb:controller-entry": [
                {
                  "target": "tcp:<controller-ip>:6653"
                }
              ],
             "ovsdb:managed-by": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='ovsdb:1']/network-topology:node[network-topology:node-id='ovsdb://HOST1']"
        }
    ]
}

If the OpenDaylight OpenFlow Plugin is installed, then checking on the OVS host will show that OpenDaylight has successfully connected as the controller for the bridge.

$ sudo ovs-vsctl show
    Manager "ptcp:6640"
        is_connected: true
    Bridge brtest
        Controller "tcp:<controller-ip>:6653"
            is_connected: true
        Port brtest
            Interface brtest
                type: internal
    ovs_version: "2.3.1-git4750c96"

Query the operational MD-SAL to see how the bridge appears.

HTTP GET:

http://<controller-ip>:8181/restconf/operational/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:%2F%2FHOST1%2Fbridge%2Fbrtest/

Result Body:

{
  "node": [
    {
      "node-id": "ovsdb://HOST1/bridge/brtest",
      "ovsdb:bridge-name": "brtest",
      "ovsdb:datapath-type": "ovsdb:datapath-type-system",
      "ovsdb:datapath-id": "00:00:da:e9:0c:08:2d:45",
      "ovsdb:managed-by": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='ovsdb:1']/network-topology:node[network-topology:node-id='ovsdb://HOST1']",
      "ovsdb:bridge-external-ids": [
        {
          "bridge-external-id-key": "opendaylight-iid",
          "bridge-external-id-value": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='ovsdb:1']/network-topology:node[network-topology:node-id='ovsdb://HOST1/bridge/brtest']"
        }
      ],
      "ovsdb:protocol-entry": [
        {
          "protocol": "ovsdb:ovsdb-bridge-protocol-openflow-13"
        }
      ],
      "ovsdb:bridge-uuid": "080ce9da-101e-452d-94cd-ee8bef8a4b69",
      "ovsdb:controller-entry": [
        {
          "target": "tcp:10.11.21.7:6653",
          "is-connected": true,
          "controller-uuid": "c39b1262-0876-4613-8bfd-c67eec1a991b"
        }
      ],
      "termination-point": [
        {
          "tp-id": "brtest",
          "ovsdb:port-uuid": "c808ae8d-7af2-4323-83c1-e397696dc9c8",
          "ovsdb:ofport": 65534,
          "ovsdb:interface-type": "ovsdb:interface-type-internal",
          "ovsdb:interface-uuid": "49e9417f-4479-4ede-8faf-7c873b8c0413",
          "ovsdb:name": "brtest"
        }
      ]
    }
  ]
}

Notice that just like with the OVSDB node, an opendaylight-iid has been added to the external-ids column of the bridge since it was created via the configuration MD-SAL.

A bridge node may be deleted as well.

HTTP DELETE:

http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:%2F%2FHOST1%2Fbridge%2Fbrtest
Manage Ports

Similarly, ports may be managed by the OVSDB Southbound Plugin.

This example illustrates how a port and various attributes may be created on a bridge.

HTTP PUT:

http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:%2F%2FHOST1%2Fbridge%2Fbrtest/termination-point/testport/

Body:

{
  "network-topology:termination-point": [
    {
      "ovsdb:options": [
        {
          "ovsdb:option": "remote_ip",
          "ovsdb:value" : "10.10.14.11"
        }
      ],
      "ovsdb:name": "testport",
      "ovsdb:interface-type": "ovsdb:interface-type-vxlan",
      "tp-id": "testport",
      "vlan-tag": "1",
      "trunks": [
        {
          "trunk": "5"
        }
      ],
      "vlan-mode":"access"
    }
  ]
}
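
If you want to double-check the result directly on the OVS host, the standard ovs-vsctl list commands can be used (this is only a verification aid; the exact output depends on the OVS version):

sudo ovs-vsctl list Port testport
sudo ovs-vsctl list Interface testport

The Port record should show the configured tag, trunks and vlan_mode values, and the Interface record should show the vxlan type together with the remote_ip option.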

Ports can be updated - add another VLAN trunk.

HTTP PUT:

http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:%2F%2FHOST1%2Fbridge%2Fbrtest/termination-point/testport/

Body:

{
  "network-topology:termination-point": [
    {
      "ovsdb:name": "testport",
      "tp-id": "testport",
      "trunks": [
        {
          "trunk": "5"
        },
        {
          "trunk": "500"
        }
      ]
    }
  ]
}

Query the operational MD-SAL for the port.

HTTP GET:

http://<controller-ip>:8181/restconf/operational/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:%2F%2FHOST1%2Fbridge%2Fbrtest/termination-point/testport/

Result Body:

{
  "termination-point": [
    {
      "tp-id": "testport",
      "ovsdb:port-uuid": "b1262110-2a4f-4442-b0df-84faf145488d",
      "ovsdb:options": [
        {
          "option": "remote_ip",
          "value": "10.10.14.11"
        }
      ],
      "ovsdb:port-external-ids": [
        {
          "external-id-key": "opendaylight-iid",
          "external-id-value": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='ovsdb:1']/network-topology:node[network-topology:node-id='ovsdb://HOST1/bridge/brtest']/network-topology:termination-point[network-topology:tp-id='testport']"
        }
      ],
      "ovsdb:interface-type": "ovsdb:interface-type-vxlan",
      "ovsdb:trunks": [
        {
          "trunk": 5
        },
        {
          "trunk": 500
        }
      ],
      "ovsdb:vlan-mode": "access",
      "ovsdb:vlan-tag": 1,
      "ovsdb:interface-uuid": "7cec653b-f407-45a8-baec-7eb36b6791c9",
      "ovsdb:name": "testport",
      "ovsdb:ofport": 1
    }
  ]
}

Remember that the OVSDB YANG model includes both OVSDB port and interface table attributes in the termination-point augmentation. Both kinds of attributes can be seen in the examples above. Again, note the creation of an opendaylight-iid value in the external-ids column of the port table.

Delete a port.

HTTP DELETE:

http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:%2F%2FHOST1%2Fbridge%2Fbrtest/termination-point/testport/
Overview of QoS and Queue

The OVSDB Southbound Plugin provides the capability of managing the QoS and Queue tables on an OVS host with OpenDaylight configured as the OVSDB manager.

QoS and Queue Tables in OVSDB

The OVSDB includes QoS and Queue tables. Unlike most other tables in the OVSDB (the Open_vSwitch table being another exception), the QoS and Queue tables are “root set” tables, which means that entries, or rows, in these tables are not automatically deleted if they cannot be reached directly or indirectly from the Open_vSwitch table. This means that QoS entries can exist and be managed independently of whether or not they are referenced in a Port entry. Similarly, Queue entries can be managed independently of whether or not they are referenced by a QoS entry.

Modelling of QoS and Queue Tables in OpenDaylight MD-SAL

Since the QoS and Queue tables are “root set” tables, they are modeled in the OpenDaylight MD-SAL as lists which are part of the attributes of the OVSDB node model.

The MD-SAL QoS and Queue models have an additional identifier attribute per entry (e.g. “qos-id” or “queue-id”) which is not present in the OVSDB schema. This identifier is used by the MD-SAL as a key for referencing the entry. If the entry is created originally from the configuration MD-SAL, then the value of the identifier is whatever is specified by the configuration. If the entry is created on the OVSDB node and received by OpenDaylight in an operational update, then the id will be created in the following format.

"queue-id": "queue://<UUID>"
"qos-id": "qos://<UUID>"

The UUID in the above identifiers is the actual UUID of the entry in the OVSDB database.

When the QoS or Queue entry is created by the configuration MD-SAL, the identifier will be configured as part of the external-ids column of the entry. This will ensure that the corresponding entry that is created in the operational MD-SAL uses the same identifier.

"queues-external-ids": [
  {
    "queues-external-id-key": "opendaylight-queue-id",
    "queues-external-id-value": "QUEUE-1"
  }
]

See more in the examples that follow in this section.

The QoS schema in OVSDB currently defines two types of QoS entries.

  • linux-htb

  • linux-hfsc

These QoS types are defined in the QoS model. Additional types will need to be added to the model in order to be supported. See the examples that follow for how the QoS type is specified in the model.

QoS entries can be configured with additional attributes such as “max-rate”. These are configured via the other-config column of the QoS entry. Refer to the OVSDB schema (in the reference section below) for all of the relevant attributes that can be configured. The examples in the rest of this section demonstrate how the other-config column may be configured.

Similarly, the Queue entries may be configured with additional attributes via the other-config column.

Managing QoS and Queues via Configuration MD-SAL

This section shows some examples of how to manage QoS and Queue entries via the configuration MD-SAL. The examples are illustrated using RESTCONF (see the QoS and Queue Postman Collection).

A pre-requisite for managing QoS and Queue entries is that the OVS host must be present in the configuration MD-SAL.

For the following examples, the following OVS host is configured.

HTTP POST:

http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/

Body:

{
  "node": [
    {
      "node-id": "ovsdb:HOST1",
      "connection-info": {
        "ovsdb:remote-ip": "<ovs-host-ip>",
        "ovsdb:remote-port": "<ovs-host-ovsdb-port>"
      }
    }
  ]
}

Where

  • <controller-ip> is the IP address of the OpenDaylight controller

  • <ovs-host-ip> is the IP address of the OVS host

  • <ovs-host-ovsdb-port> is the TCP port of the OVSDB server on the OVS host (e.g. 6640)

This command creates an OVSDB node with the node-id “ovsdb:HOST1”. This OVSDB node will be used in the following examples.

QoS and Queue entries can be created and managed without a port, but ultimately, QoS entries are associated with a port in order to use them. For the following examples a test bridge and port will be created.

Create the test bridge.

HTTP PUT

http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1%2Fbridge%2Fbr-test

Body:

{
  "network-topology:node": [
    {
      "node-id": "ovsdb:HOST1/bridge/br-test",
      "ovsdb:bridge-name": "br-test",
      "ovsdb:managed-by": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='ovsdb:1']/network-topology:node[network-topology:node-id='ovsdb:HOST1']"
    }
  ]
}

Create the test port (which is modeled as a termination point in the OpenDaylight MD-SAL).

HTTP PUT:

http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1%2Fbridge%2Fbr-test/termination-point/testport/

Body:

{
  "network-topology:termination-point": [
    {
      "ovsdb:name": "testport",
      "tp-id": "testport"
    }
  ]
}

If all of the previous steps were successful, a query of the operational MD-SAL should look something like the following results. This indicates that the configuration commands have been successfully instantiated on the OVS host.

HTTP GET:

http://<controller-ip>:8181/restconf/operational/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1%2Fbridge%2Fbr-test

Result Body:

{
  "node": [
    {
      "node-id": "ovsdb:HOST1/bridge/br-test",
      "ovsdb:bridge-name": "br-test",
      "ovsdb:datapath-type": "ovsdb:datapath-type-system",
      "ovsdb:managed-by": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='ovsdb:1']/network-topology:node[network-topology:node-id='ovsdb:HOST1']",
      "ovsdb:datapath-id": "00:00:8e:5d:22:3d:09:49",
      "ovsdb:bridge-external-ids": [
        {
          "bridge-external-id-key": "opendaylight-iid",
          "bridge-external-id-value": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='ovsdb:1']/network-topology:node[network-topology:node-id='ovsdb:HOST1/bridge/br-test']"
        }
      ],
      "ovsdb:bridge-uuid": "3d225d8d-d060-4909-93ef-6f4db58ef7cc",
      "termination-point": [
        {
          "tp-id": "br=-est",
          "ovsdb:port-uuid": "f85f7aa7-4956-40e4-9c94-e6ca2d5cd254",
          "ovsdb:ofport": 65534,
          "ovsdb:interface-type": "ovsdb:interface-type-internal",
          "ovsdb:interface-uuid": "29ff3692-6ed4-4ad7-a077-1edc277ecb1a",
          "ovsdb:name": "br-test"
        },
        {
          "tp-id": "testport",
          "ovsdb:port-uuid": "aa79a8e2-147f-403a-9fa9-6ee5ec276f08",
          "ovsdb:port-external-ids": [
            {
              "external-id-key": "opendaylight-iid",
              "external-id-value": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='ovsdb:1']/network-topology:node[network-topology:node-id='ovsdb:HOST1/bridge/br-test']/network-topology:termination-point[network-topology:tp-id='testport']"
            }
          ],
          "ovsdb:interface-uuid": "e96f282e-882c-41dd-a870-80e6b29136ac",
          "ovsdb:name": "testport"
        }
      ]
    }
  ]
}
Create Queue

Create a new Queue in the configuration MD-SAL.

HTTP PUT:

http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1/ovsdb:queues/QUEUE-1/

Body:

{
  "ovsdb:queues": [
    {
      "queue-id": "QUEUE-1",
      "dscp": 25,
      "queues-other-config": [
        {
          "queue-other-config-key": "max-rate",
          "queue-other-config-value": "3600000"
        }
      ]
    }
  ]
}
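
A curl sketch for creating this Queue entry (admin:admin credentials assumed; ${JSON} stands for the body above):

curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" --data '${JSON}' -X PUT --user admin:admin http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1/ovsdb:queues/QUEUE-1/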
Query Queue

Now query the operational MD-SAL for the Queue entry.

HTTP GET:

http://<controller-ip>:8181/restconf/operational/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1/ovsdb:queues/QUEUE-1/

Result Body:

{
  "ovsdb:queues": [
    {
      "queue-id": "QUEUE-1",
      "queues-other-config": [
        {
          "queue-other-config-key": "max-rate",
          "queue-other-config-value": "3600000"
        }
      ],
      "queues-external-ids": [
        {
          "queues-external-id-key": "opendaylight-queue-id",
          "queues-external-id-value": "QUEUE-1"
        }
      ],
      "queue-uuid": "83640357-3596-4877-9527-b472aa854d69",
      "dscp": 25
    }
  ]
}
Create QoS

Create a QoS entry. Note that the UUID of the Queue entry, obtained by querying the operational MD-SAL of the Queue entry, is specified in the queue-list of the QoS entry. Queue entries may be added to the QoS entry at the creation of the QoS entry, or by a subsequent update to the QoS entry.

HTTP PUT:

http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1/ovsdb:qos-entries/QOS-1/

Body:

{
  "ovsdb:qos-entries": [
    {
      "qos-id": "QOS-1",
      "qos-type": "ovsdb:qos-type-linux-htb",
      "qos-other-config": [
        {
          "other-config-key": "max-rate",
          "other-config-value": "4400000"
        }
      ],
      "queue-list": [
        {
          "queue-number": "0",
          "queue-uuid": "83640357-3596-4877-9527-b472aa854d69"
        }
      ]
    }
  ]
}
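
The new entries can also be inspected directly on the OVS host with the standard ovs-vsctl list commands (exact output varies by OVS version); the QoS record should reference the Queue UUID in its queues column and carry the max-rate value in other_config:

sudo ovs-vsctl list QoS
sudo ovs-vsctl list Queue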
Query QoS

Query the operational MD-SAL for the QoS entry.

HTTP GET:

http://<controller-ip>:8181/restconf/operational/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1/ovsdb:qos-entries/QOS-1/

Result Body:

{
  "ovsdb:qos-entries": [
    {
      "qos-id": "QOS-1",
      "qos-other-config": [
        {
          "other-config-key": "max-rate",
          "other-config-value": "4400000"
        }
      ],
      "queue-list": [
        {
          "queue-number": 0,
          "queue-uuid": "83640357-3596-4877-9527-b472aa854d69"
        }
      ],
      "qos-type": "ovsdb:qos-type-linux-htb",
      "qos-external-ids": [
        {
          "qos-external-id-key": "opendaylight-qos-id",
          "qos-external-id-value": "QOS-1"
        }
      ],
      "qos-uuid": "90ba9c60-3aac-499d-9be7-555f19a6bb31"
    }
  ]
}
Add QoS to a Port

Update the termination point entry to include the UUID of the QoS entry, obtained by querying the operational MD-SAL, to associate a QoS entry with a port.

HTTP PUT:

http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1%2Fbridge%2Fbr-test/termination-point/testport/

Body:

{
  "network-topology:termination-point": [
    {
      "ovsdb:name": "testport",
      "tp-id": "testport",
      "qos": "90ba9c60-3aac-499d-9be7-555f19a6bb31"
    }
  ]
}
Query the Port

Query the operational MD-SAL to see how the QoS entry appears in the termination point model.

HTTP GET:

http://<controller-ip>:8181/restconf/operational/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1%2Fbridge%2Fbr-test/termination-point/testport/

Result Body:

{
  "termination-point": [
    {
      "tp-id": "testport",
      "ovsdb:port-uuid": "aa79a8e2-147f-403a-9fa9-6ee5ec276f08",
      "ovsdb:port-external-ids": [
        {
          "external-id-key": "opendaylight-iid",
          "external-id-value": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='ovsdb:1']/network-topology:node[network-topology:node-id='ovsdb:HOST1/bridge/br-test']/network-topology:termination-point[network-topology:tp-id='testport']"
        }
      ],
      "ovsdb:qos": "90ba9c60-3aac-499d-9be7-555f19a6bb31",
      "ovsdb:interface-uuid": "e96f282e-882c-41dd-a870-80e6b29136ac",
      "ovsdb:name": "testport"
    }
  ]
}
Query the OVSDB Node

Query the operational MD-SAL for the OVS host to see how the QoS and Queue entries appear as lists in the OVS node model.

HTTP GET:

http://<controller-ip>:8181/restconf/operational/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1/

Result Body (edited to only show information relevant to the QoS and Queue entries):

{
  "node": [
    {
      "node-id": "ovsdb:HOST1",
      <content edited out>
      "ovsdb:queues": [
        {
          "queue-id": "QUEUE-1",
          "queues-other-config": [
            {
              "queue-other-config-key": "max-rate",
              "queue-other-config-value": "3600000"
            }
          ],
          "queues-external-ids": [
            {
              "queues-external-id-key": "opendaylight-queue-id",
              "queues-external-id-value": "QUEUE-1"
            }
          ],
          "queue-uuid": "83640357-3596-4877-9527-b472aa854d69",
          "dscp": 25
        }
      ],
      "ovsdb:qos-entries": [
        {
          "qos-id": "QOS-1",
          "qos-other-config": [
            {
              "other-config-key": "max-rate",
              "other-config-value": "4400000"
            }
          ],
          "queue-list": [
            {
              "queue-number": 0,
              "queue-uuid": "83640357-3596-4877-9527-b472aa854d69"
            }
          ],
          "qos-type": "ovsdb:qos-type-linux-htb",
          "qos-external-ids": [
            {
              "qos-external-id-key": "opendaylight-qos-id",
              "qos-external-id-value": "QOS-1"
            }
          ],
          "qos-uuid": "90ba9c60-3aac-499d-9be7-555f19a6bb31"
        }
      ]
      <content edited out>
    }
  ]
}
Remove QoS from a Port

This example removes a QoS entry from the termination point and associated port. Note that this is a PUT command on the termination point with the QoS attribute absent. Other attributes of the termination point should be included in the body of the command so that they are not inadvertently removed.

HTTP PUT:

http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1%2Fbridge%2Fbr-test/termination-point/testport/

Body:

{
  "network-topology:termination-point": [
    {
      "ovsdb:name": "testport",
      "tp-id": "testport"
    }
  ]
}
Remove a Queue from QoS

This example removes the specific Queue entry from the queue list in the QoS entry. The queue entry is specified by the queue number, which is “0” in this example.

HTTP DELETE:

http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1/ovsdb:qos-entries/QOS-1/queue-list/0/
Remove Queue

Once all references to a specific queue entry have been removed from QoS entries, the Queue itself can be removed.

HTTP DELETE:

http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1/ovsdb:queues/QUEUE-1/
Remove QoS

The QoS entry may be removed when it is no longer referenced by any ports.

HTTP DELETE:

http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1/ovsdb:qos-entries/QOS-1/
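
Taken together, the removal steps above could be scripted with curl as follows (admin:admin credentials assumed; the order matters, as described in the preceding sections):

curl -i -X DELETE --user admin:admin http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1/ovsdb:qos-entries/QOS-1/queue-list/0/
curl -i -X DELETE --user admin:admin http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1/ovsdb:queues/QUEUE-1/
curl -i -X DELETE --user admin:admin http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1/ovsdb:qos-entries/QOS-1/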
OVSDB Hardware VTEP SouthBound Plugin
Overview

The hwvtepsouthbound plugin is used to configure a hardware VTEP which implements the hardware VTEP OVSDB schema. This page shows how to use the RESTCONF API of hwvtepsouthbound. There are two ways to connect to ODL:

  • user initiates connection

  • switch initiates connection

Both are introduced below.

User Initiates Connection
Prerequisite

Configure the hwvtep device/node to listen for the TCP connection in passive mode. In addition, the management IP and tunnel source IP must also be configured. After all this configuration is done, a physical switch is created automatically by the hwvtep node.

Connect to a hwvtep device/node

Send the RESTCONF request below if you want to initiate the connection to a hwvtep node from the controller; the listening IP and port of the hwvtep device/node are provided in the body.

REST API: POST http://odl:8181/restconf/config/network-topology:network-topology/topology/hwvtep:1/

{
 "network-topology:node": [
       {
           "node-id": "hwvtep://192.168.1.115:6640",
           "hwvtep:connection-info":
           {
               "hwvtep:remote-port": 6640,
               "hwvtep:remote-ip": "192.168.1.115"
           }
       }
   ]
}

Please replace odl in the URL with the IP address of your OpenDaylight controller and change 192.168.1.115 to your hwvtep node IP.
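
The same request can be sent with curl, for example (admin:admin credentials assumed; ${JSON} stands for the body above):

curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" --data '${JSON}' -X POST --user admin:admin http://odl:8181/restconf/config/network-topology:network-topology/topology/hwvtep:1/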

NOTE: The format of node-id is fixed. It will be one of the two:

User initiates connection from ODL:

hwvtep://ip:port

Switch initiates connection:

hwvtep://uuid/<uuid of switch>

The reason for using UUID is that we can distinguish between multiple switches if they are behind a NAT.

After this request is completed successfully, we can get the physical switch from the operational data store.

REST API: GET http://odl:8181/restconf/operational/network-topology:network-topology/topology/hwvtep:1/node/hwvtep:%2F%2F192.168.1.115:6640

There is no body in this request.

The response of the request is:

{
   "node": [
         {
           "node-id": "hwvtep://192.168.1.115:6640",
           "hwvtep:switches": [
             {
               "switch-ref": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='hwvtep:1']/network-topology:node[network-topology:node-id='hwvtep://192.168.1.115:6640/physicalswitch/br0']"
             }
           ],
           "hwvtep:connection-info": {
             "local-ip": "192.168.92.145",
             "local-port": 47802,
             "remote-port": 6640,
             "remote-ip": "192.168.1.115"
           }
         },
         {
           "node-id": "hwvtep://192.168.1.115:6640/physicalswitch/br0",
           "hwvtep:management-ips": [
             {
               "management-ips-key": "192.168.1.115"
             }
           ],
           "hwvtep:physical-switch-uuid": "37eb5abd-a6a3-4aba-9952-a4d301bdf371",
           "hwvtep:managed-by": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='hwvtep:1']/network-topology:node[network-topology:node-id='hwvtep://192.168.1.115:6640']",
           "hwvtep:hwvtep-node-description": "",
           "hwvtep:tunnel-ips": [
             {
               "tunnel-ips-key": "192.168.1.115"
             }
           ],
           "hwvtep:hwvtep-node-name": "br0"
         }
       ]
}

If there is a physical switch which has already been created by manual configuration, we can get the node-id of the physical switch from this response, where it is presented in “switch-ref”. If the switch does not exist, we need to create the physical switch. Currently, most hwvtep devices do not support running multiple switches.

Create a physical switch

REST API: POST http://odl:8181/restconf/config/network-topology:network-topology/topology/hwvtep:1/

request body:

{
 "network-topology:node": [
       {
           "node-id": "hwvtep://192.168.1.115:6640/physicalswitch/br0",
           "hwvtep-node-name": "ps0",
           "hwvtep-node-description": "",
           "management-ips": [
             {
               "management-ips-key": "192.168.1.115"
             }
           ],
           "tunnel-ips": [
             {
               "tunnel-ips-key": "192.168.1.115"
             }
           ],
           "managed-by": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='hwvtep:1']/network-topology:node[network-topology:node-id='hwvtep://192.168.1.115:6640']"
       }
   ]
}

Note: “managed-by” must be provided by the user. Its value can be obtained after the step Connect to a hwvtep device/node, since the node-id of the hwvtep device is provided by the user. “managed-by” is a reference whose type is an instance identifier. Although the instance identifier is somewhat cumbersome to write in RESTCONF, the primary user of the hwvtepsouthbound plugin will be provider-type code such as NetVirt, for which the instance identifier is much easier to construct programmatically.

Create a logical switch

Creating a logical switch is effectively creating a logical network. For VXLAN, it is a tunnel network with the same VNI.

REST API: POST http://odl:8181/restconf/config/network-topology:network-topology/topology/hwvtep:1/node/hwvtep:%2F%2F192.168.1.115:6640

request body:

{
 "logical-switches": [
       {
           "hwvtep-node-name": "ls0",
           "hwvtep-node-description": "",
           "tunnel-key": "10000"
        }
   ]
}
Create a physical locator

After the VXLAN network is ready, we will add VTEPs to it. A VTEP is described by a physical locator.

REST API: POST http://odl:8181/restconf/config/network-topology:network-topology/topology/hwvtep:1/node/hwvtep:%2F%2F192.168.1.115:6640

request body:

{
 "termination-point": [
      {
          "tp-id": "vxlan_over_ipv4:192.168.0.116",
          "encapsulation-type": "encapsulation-type-vxlan-over-ipv4",
          "dst-ip": "192.168.0.116"
          }
     ]
}

The “tp-id” of the locator is “{encapsulation-type}:{dst-ip}”.

Note: As far as we know, the OVSDB database does not allow the insertion of a new locator on its own, so no locator is inserted after this request is sent. The creation is deferred until another entity refers to the locator, such as a remote-mcast-macs entry.

Create a remote-mcast-macs entry

After adding a physical locator to a logical switch, we need to create a remote-mcast-macs entry to handle unknown traffic.

REST API: POST http://odl:8181/restconf/config/network-topology:network-topology/topology/hwvtep:1/node/hwvtep:%2F%2F192.168.1.115:6640

request body:

{
 "remote-mcast-macs": [
       {
           "mac-entry-key": "00:00:00:00:00:00",
           "logical-switch-ref": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='hwvtep:1']/network-topology:node[network-topology:node-id='hwvtep://192.168.1.115:6640']/hwvtep:logical-switches[hwvtep:hwvtep-node-name='ls0']",
           "locator-set": [
                {
                      "locator-ref": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='hwvtep:1']/network-topology:node[network-topology:node-id='hwvtep://219.141.189.115:6640']/network-topology:termination-point[network-topology:tp-id='vxlan_over_ipv4:192.168.0.116']"
                }
           ]
       }
   ]
}

The physical locator vxlan_over_ipv4:192.168.0.116 was just created in “Create a physical locator”. It should be noted that the “locator-set” list is immutable, that is, we must provide the set of “locator-ref” entries as a whole.

Note: “00:00:00:00:00:00” stands for “unknown-dst” since the type of mac-entry-key is yang:mac and does not accept “unknown-dst”.

Create a physical port

Now we add a physical port to the physical switch “hwvtep://192.168.1.115:6640/physicalswitch/br0”. The port is attached to a physical server or an L2 network, and it carries VLAN 100.

REST API: POST http://odl:8181/restconf/config/network-topology:network-topology/topology/hwvtep:1/node/hwvtep:%2F%2F192.168.1.115:6640%2Fphysicalswitch%2Fbr0

{
 "network-topology:termination-point": [
       {
           "tp-id": "port0",
           "hwvtep-node-name": "port0",
           "hwvtep-node-description": "",
           "vlan-bindings": [
               {
                 "vlan-id-key": "100",
                 "logical-switch-ref": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='hwvtep:1']/network-topology:node[network-topology:node-id='hwvtep://192.168.1.115:6640']/hwvtep:logical-switches[hwvtep:hwvtep-node-name='ls0']"
               }
         ]
       }
   ]
}

At this point, we have completed the basic configuration.

Typically, hwvtep devices learn local MAC addresses automatically. But they also support getting MAC address entries from ODL.

Create a local-mcast-macs entry

It is similar to Create a remote-mcast-macs entry.

Create a remote-ucast-macs

REST API: POST http://odl:8181/restconf/config/network-topology:network-topology/topology/hwvtep:1/node/hwvtep:%2F%2F192.168.1.115:6640

request body:
{
 "remote-ucast-macs": [
       {
           "mac-entry-key": "11:11:11:11:11:11",
           "logical-switch-ref": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='hwvtep:1']/network-topology:node[network-topology:node-id='hwvtep://192.168.1.115:6640']/hwvtep:logical-switches[hwvtep:hwvtep-node-name='ls0']",
           "ipaddr": "1.1.1.1",
           "locator-ref": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='hwvtep:1']/network-topology:node[network-topology:node-id='hwvtep://192.168.1.115:6640']/network-topology:termination-point[network-topology:tp-id='vxlan_over_ipv4:192.168.0.116']"
       }
   ]
}
Create a local-ucast-macs entry

This is similar to Create a remote-ucast-macs.

Switch Initiates Connection

We do not need to connect to a hwvtep device/node when the switch initiates the connection. After the switches connect to ODL successfully, we get the node-ids of the switches by reading the operational data store. Once the node-id of a hwvtep device is received, the remaining steps are the same as when the user initiates the connection.

P4 Plugin User Guide
Overview

P4 is a high-level language for expressing how packets are processed by the pipeline of a network forwarding element such as a switch, a network processing unit, or a software switch (bmv2). P4 itself is protocol independent but allows for the expression of forwarding plane protocols. It is based upon an abstract forwarding model called PISA (Protocol Independent Switch Architecture). In the Oxygen release, the P4 Plugin project aims to provide basic functions for P4 targets, such as channel and device management, table population, packet-in and packet-out processing, etc.

P4 Plugin User-Facing Features
  • odl-p4plugin-all

    • This feature contains all other features/bundles of P4 Plugin project. If you install it, it provides all functions that the P4 Plugin project can support.

  • odl-p4plugin-runtime

    • This feature implements a gRPC client that provides RPCs for users, such as setting and retrieving the forwarding pipeline config dynamically, completing table entry population and packet-out procedures, etc.

  • odl-p4plugin-netconf-adapter

    • This feature mainly provides functions for collecting device resources.

How To Start
Preparing for Installation
  1. Forwarding devices must support NETCONF, so that OpenDaylight can connect to them and collect resources via NETCONF.

  2. Forwarding devices must support gRPC and run a P4 program, so that OpenDaylight can set the forwarding pipeline config, complete table entry population and packet in/out procedures, etc.

Installing the Feature

Run OpenDaylight and install the P4 Plugin feature odl-p4plugin-all as shown below:

feature:install odl-p4plugin-all

For a more detailed overview of the P4 Plugin, see the P4 Plugin Developer Guide.

Service Function Chaining
OpenDaylight Service Function Chaining (SFC) Overview

OpenDaylight Service Function Chaining (SFC) provides the ability to define an ordered list of network services (e.g. firewalls, load balancers). These services are then “stitched” together in the network to create a service chain. This project provides the infrastructure (chaining logic, APIs) needed for ODL to provision a service chain in the network and an end-user application for defining such chains.

  • ACE - Access Control Entry

  • ACL - Access Control List

  • SCF - Service Classifier Function

  • SF - Service Function

  • SFC - Service Function Chain

  • SFF - Service Function Forwarder

  • SFG - Service Function Group

  • SFP - Service Function Path

  • RSP - Rendered Service Path

  • NSH - Network Service Header

SFC User Interface
Overview

The SFC User interface comes with a Command Line Interface (CLI): it provides several Karaf console commands to show the SFC model (SF, SFFs, etc.) provisioned in the datastore.

SFC Web Interface (SFC-UI)
Architecture

SFC-UI operates purely by using RESTCONF.

SFC-UI integration into ODL

SFC-UI integration into ODL

How to access
  1. Run ODL distribution (run karaf)

  2. In Karaf console execute: feature:install odl-sfc-ui

  3. Visit SFC-UI on: http://<odl_ip_address>:8181/sfc/index.html

SFC Command Line Interface (SFC-CLI)
Overview

The Karaf Container offers a complete Unix-like console that allows managing the container. This console can be extended with custom commands to manage the features deployed on it. This feature will add some basic commands to show the provisioned SFC entities.

How to use it

The SFC-CLI implements commands to show some of the provisioned SFC entities: Service Functions, Service Function Forwarders, Service Function Chains, Service Function Paths, Service Function Classifiers, Service Nodes and Service Function Types:

  • List one/all provisioned Service Functions:

    sfc:sf-list [--name <name>]
    
  • List one/all provisioned Service Function Forwarders:

    sfc:sff-list [--name <name>]
    
  • List one/all provisioned Service Function Chains:

    sfc:sfc-list [--name <name>]
    
  • List one/all provisioned Service Function Paths:

    sfc:sfp-list [--name <name>]
    
  • List one/all provisioned Service Function Classifiers:

    sfc:sc-list [--name <name>]
    
  • List one/all provisioned Service Nodes:

    sfc:sn-list [--name <name>]
    
  • List one/all provisioned Service Function Types:

    sfc:sft-list [--name <name>]
    
SFC Southbound REST Plug-in
Overview

The Southbound REST Plug-in is used to send configuration from datastore down to network devices supporting a REST API (i.e. they have a configured REST URI). It supports POST/PUT/DELETE operations, which are triggered accordingly by changes in the SFC data stores.

  • Access Control List (ACL)

  • Service Classifier Function (SCF)

  • Service Function (SF)

  • Service Function Group (SFG)

  • Service Function Schedule Type (SFST)

  • Service Function Forwarder (SFF)

  • Rendered Service Path (RSP)

Southbound REST Plug-in Architecture

From the user perspective, the REST plug-in is another SFC Southbound plug-in used to communicate with network devices.

Southbound REST Plug-in integration into ODL

Southbound REST Plug-in integration into ODL

Configuring Southbound REST Plugin
  1. Run ODL distribution (run karaf)

  2. In Karaf console execute: feature:install odl-sfc-sb-rest

  3. Configure REST URIs for SF/SFF through the SFC User Interface or RESTCONF (the required configuration steps can be found in the tutorial stated below)

Tutorial

Comprehensive tutorial on how to use the Southbound REST Plug-in and how to control network devices with it can be found on: https://wiki.opendaylight.org/view/Service_Function_Chaining:Main#SFC_103

SFC-OVS integration
Overview

SFC-OVS provides integration of SFC with Open vSwitch (OVS) devices. Integration is realized through mapping of SFC objects (like SF, SFF, Classifier, etc.) to OVS objects (like Bridge, TerminationPoint=Port/Interface). The mapping takes care of automatic instantiation (setup) of corresponding object whenever its counterpart is created. For example, when a new SFF is created, the SFC-OVS plug-in will create a new OVS bridge.

The feature is intended for SFC users willing to use Open vSwitch as an underlying network infrastructure for deploying RSPs (Rendered Service Paths).

SFC-OVS Architecture

SFC-OVS uses the OVSDB MD-SAL Southbound API for getting/writing information from/to OVS devices. From the user perspective SFC-OVS acts as a layer between SFC datastore and OVSDB.

SFC-OVS integration into ODL

SFC-OVS integration into ODL

Configuring SFC-OVS
  1. Run ODL distribution (run karaf)

  2. In Karaf console execute: feature:install odl-sfc-ovs

  3. Configure Open vSwitch to use ODL as a manager, using following command: ovs-vsctl set-manager tcp:<odl_ip_address>:6640

Tutorials
Verifying mapping from SFF to OVS
Overview

This tutorial shows the usual workflow during creation of an OVS Bridge with use of the SFC APIs.

Prerequisites
  • Open vSwitch installed (ovs-vsctl command available in shell)

  • SFC-OVS feature configured as stated above

Instructions
  1. In a shell execute: ovs-vsctl set-manager tcp:<odl_ip_address>:6640

  2. Send a POST request to the URL http://<odl_ip_address>:8181/restconf/operations/service-function-forwarder-ovs:create-ovs-bridge using Basic auth with credentials “admin”, “admin” and Content-Type: application/json. The content of the POST request should be the following:

{
    "input":
    {
        "name": "br-test",
        "ovs-node": {
            "ip": "<Open_vSwitch_ip_address>"
        }
    }
}

Open_vSwitch_ip_address is the IP address of the machine where Open vSwitch is installed.
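
For reference, the same request expressed as a curl command might look like the following (the admin/admin credentials are those mentioned above; ${JSON} stands for the request body shown above):

curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" --data '${JSON}' -X POST --user admin:admin http://<odl_ip_address>:8181/restconf/operations/service-function-forwarder-ovs:create-ovs-bridge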

Verification

In a shell execute: ovs-vsctl show. There should be a Bridge with the name br-test and one port/interface called br-test.

Also, the corresponding SFF for this OVS Bridge should be configured, which can be verified through the SFC User Interface or RESTCONF as follows.

  1. Visit the SFC User Interface: http://<odl_ip_address>:8181/sfc/index.html#/sfc/serviceforwarder

  2. Use pure RESTCONF and send a GET request to URL: http://<odl_ip_address>:8181/restconf/config/service-function-forwarder:service-function-forwarders

There should be an SFF whose name ends with br1, and the SFF should contain two data plane locators: br1 and testPort.

SFC Classifier User Guide
Overview

Description of classifier can be found in: https://datatracker.ietf.org/doc/draft-ietf-sfc-architecture/

There are two types of classifier:

  1. OpenFlow Classifier

  2. Iptables Classifier

OpenFlow Classifier

OpenFlow Classifier implements the classification criteria based on OpenFlow rules deployed into an OpenFlow switch. An Open vSwitch takes the role of a classifier and performs various encapsulations such as NSH, VLAN, MPLS, etc. In the existing implementation, the classifier supports NSH encapsulation. Matching information is based on ACL for MAC addresses, ports, protocol, IPv4 and IPv6. Supported protocols are TCP, UDP and SCTP. The action information in the OpenFlow rules is to forward the encapsulated packets with specific information related to the RSP.

Classifier Architecture

The OVSDB Southbound interface is used to create an instance of a bridge in a specific location (via IP address). This bridge contains the OpenFlow rules that perform the classification of the packets and react accordingly. The OpenFlow Southbound interface is used to translate the ACL information into OF rules within the Open vSwitch.

Note

In order to create the instance of the bridge that takes the role of a classifier, an “empty” SFF must be created.

Configuring Classifier
  1. An empty SFF must be created in order to host the ACL that contains the classification information.

  2. SFF data plane locator must be configured

  3. Classifier interface must be manually added to SFF bridge.

Administering or Managing Classifier

Classification information is based on MAC addresses, protocol, ports and IP. The ACL gathers this information and is assigned to an RSP, which corresponds to a specific path for a Service Chain.

Iptables Classifier

The classifier manages everything from starting the packet listener to the creation (and removal) of appropriate ip(6)tables rules and marking received packets accordingly. Its functionality is available only on Linux as it leverages NetfilterQueue, which provides access to packets matched by an iptables rule. The classifier requires root privileges to be able to operate.

So far it is capable of processing ACL for MAC addresses, ports, IPv4 and IPv6. Supported protocols are TCP and UDP.

Classifier Architecture

The Python code is located in the project repository at sfc-py/common/classifier.py.

Note

The classifier assumes that the Rendered Service Path (RSP) already exists in ODL when an ACL referencing it is obtained.

  1. sfc_agent receives an ACL and passes it for processing to the classifier

  2. the RSP (its SFF locator) referenced by the ACL is requested from ODL

  3. if the RSP exists in ODL, then ACL-based iptables rules for it are applied

After this process is over, every packet successfully matched to an iptables rule (i.e. successfully classified) will be NSH encapsulated and forwarded to a related SFF, which knows how to traverse the RSP.

Rules are created using the appropriate iptables command. If the Access Control Entry (ACE) rule is MAC address related, both iptables and ip6tables rules are issued. If the ACE rule is IPv4 address related, only iptables rules are issued; likewise, only ip6tables rules are issued for IPv6.

Note

The iptables raw table contains all created rules.

Configuring Classifier
The classifier does not need any configuration. Its only requirement is that the second (2) Netfilter Queue is not used by any other process and is available for the classifier.
Administering or Managing Classifier

Classifier runs alongside sfc_agent, therefore the command for starting it locally is:

sudo python3.4 sfc-py/sfc_agent.py --rest --odl-ip-port localhost:8181
--auto-sff-name --nfq-class
SFC OpenFlow Renderer User Guide
Overview

The Service Function Chaining (SFC) OpenFlow Renderer (SFC OF Renderer) implements Service Chaining on OpenFlow switches. It listens for the creation of a Rendered Service Path (RSP) in the operational data store, and once received it programs Service Function Forwarders (SFF) that are hosted on OpenFlow capable switches to forward packets through the service chain. Currently the only tested OpenFlow capable switch is OVS 2.9.

Common acronyms used in the following sections:

  • SF - Service Function

  • SFF - Service Function Forwarder

  • SFC - Service Function Chain

  • SFP - Service Function Path

  • RSP - Rendered Service Path

SFC OpenFlow Renderer Architecture

The SFC OF Renderer is invoked after a RSP is created in the operational data store using an MD-SAL listener called SfcOfRspDataListener. Upon SFC OF Renderer initialization, the SfcOfRspDataListener registers itself to listen for RSP changes. When invoked, the SfcOfRspDataListener processes the RSP and calls the SfcOfFlowProgrammerImpl to create the necessary flows in the Service Function Forwarders configured in the RSP. Refer to the following diagram for more details.

SFC OpenFlow Renderer High Level Architecture

SFC OpenFlow Renderer High Level Architecture

SFC OpenFlow Switch Flow pipeline

The SFC OpenFlow Renderer uses the following tables for its Flow pipeline:

  • Table 0, Classifier

  • Table 1, Transport Ingress

  • Table 2, Path Mapper

  • Table 3, Path Mapper ACL

  • Table 4, Next Hop

  • Table 10, Transport Egress

The OpenFlow Table Pipeline is intended to be generic to work for all of the different encapsulations supported by SFC.

All of the tables are explained in detail in the following section.

The SFFs (SFF1 and SFF2), SFs (SF1), and topology used for the flow tables in the following sections are as described in the following diagram.

SFC OpenFlow Renderer Typical Network Topology

SFC OpenFlow Renderer Typical Network Topology

Classifier Table detailed

It is possible for the SFF to also act as a classifier. This table maps subscriber traffic to RSPs, and is explained in detail in the classifier documentation.

If the SFF is not a classifier, then this table will just have a simple Goto Table 1 flow.

Transport Ingress Table detailed

The Transport Ingress table has an entry per expected tunnel transport type to be received in a particular SFF, as established in the SFC configuration.

Here are two examples on SFF1: one where the RSP ingress tunnel is MPLS (assuming VLAN is used for the SFF-SF hop), and the other where the RSP ingress tunnel is either Eth+NSH or just NSH with no Ethernet.

Priority | Match | Action
256 | EtherType==0x8847 (MPLS unicast) | Goto Table 2
256 | EtherType==0x8100 (VLAN) | Goto Table 2
250 | EtherType==0x894f (Eth+NSH) | Goto Table 2
250 | PacketType==0x894f (NSH no Eth) | Goto Table 2
5 | Match Any | Drop

Table: Table Transport Ingress

Path Mapper Table detailed

The Path Mapper table has an entry per expected tunnel transport info to be received in a particular SFF, as established in the SFC configuration. The tunnel transport info is used to determine the RSP Path ID, and is stored in the OpenFlow Metadata. This table is not used for NSH, since the RSP Path ID is stored in the NSH header.

For SF nodes that do not support NSH tunneling, the IP header DSCP field is used to store the RSP Path Id. The RSP Path Id is written to the DSCP field in the Transport Egress table for those packets sent to an SF.

Here is an example on SFF1, assuming the following details:

  • VLAN ID 1000 is used for the SFF-SF

  • The RSP Path 1 tunnel uses MPLS label 100 for ingress and 101 for egress

  • The RSP Path 2 (symmetric downlink path) uses MPLS label 101 for ingress and 100 for egress

Priority | Match | Action
256 | MPLS Label==100 | RSP Path=1, Pop MPLS, Goto Table 4
256 | MPLS Label==101 | RSP Path=2, Pop MPLS, Goto Table 4
256 | VLAN ID==1000, IP DSCP==1 | RSP Path=1, Pop VLAN, Goto Table 4
256 | VLAN ID==1000, IP DSCP==2 | RSP Path=2, Pop VLAN, Goto Table 4
5 | Match Any | Goto Table 3

Table: Table Path Mapper

Path Mapper ACL Table detailed

This table is only populated when PacketIn packets are received from the switch for TcpProxy type SFs. These flows are created with an inactivity timer of 60 seconds and will be automatically deleted upon expiration.

Next Hop Table detailed

The Next Hop table uses the RSP Path Id and appropriate packet fields to determine where to send the packet next. For NSH, only the NSP (Network Services Path, RSP ID) and NSI (Network Services Index, next hop) fields from the NSH header are needed to determine the VXLAN tunnel destination IP. For VLAN or MPLS, then the source MAC address is used to determine the destination MAC address.

Here are two examples on SFF1, assuming SFF1 is connected to SFF2. RSP Paths 1 and 2 are symmetric VLAN paths. RSP Paths 3 and 4 are symmetric NSH paths. RSP Path 1 ingress packets come from external to SFC, for which we don’t have the source MAC address (MacSrc).

Priority | Match | Action
256 | RSP Path==1, MacSrc==SF1 | MacDst=SFF2, Goto Table 10
256 | RSP Path==2, MacSrc==SF1 | Goto Table 10
256 | RSP Path==2, MacSrc==SFF2 | MacDst=SF1, Goto Table 10
246 | RSP Path==1 | MacDst=SF1, Goto Table 10
550 | dl_type=0x894f, nsh_spi=3, nsh_si=255 (NSH, SFF Ingress RSP 3, hop 1) | load:0xa000002→NXM_NX_TUN_IPV4_DST[], Goto Table 10
550 | dl_type=0x894f, nsh_spi=3, nsh_si=254 (NSH, SFF Ingress from SF, RSP 3, hop 2) | load:0xa00000a→NXM_NX_TUN_IPV4_DST[], Goto Table 10
550 | dl_type=0x894f, nsh_spi=4, nsh_si=254 (NSH, SFF1 Ingress from SFF2) | load:0xa00000a→NXM_NX_TUN_IPV4_DST[], Goto Table 10
5 | Match Any | Drop

Table: Table Next Hop

Transport Egress Table detailed

The Transport Egress table prepares egress tunnel information and sends the packets out.

Here are two examples on SFF1. RSP Paths 1 and 2 are symmetric MPLS paths that use VLAN for the SFF-SF. RSP Paths 3 and 4 are symmetric NSH paths. Since it is assumed that switches used for NSH will only have one VXLAN port, the NSH packets are just sent back where they came from.

Priority | Match | Action
256 | RSP Path==1, MacDst==SF1 | Push VLAN ID 1000, Port=SF1
256 | RSP Path==1, MacDst==SFF2 | Push MPLS Label 101, Port=SFF2
256 | RSP Path==2, MacDst==SF1 | Push VLAN ID 1000, Port=SF1
246 | RSP Path==2 | Push MPLS Label 100, Port=Ingress
256 | in_port=1, dl_type=0x894f, nsh_spi=0x3, nsh_si=255 (NSH, SFF Ingress RSP 3) | IN_PORT
256 | in_port=1, dl_type=0x894f, nsh_spi=0x3, nsh_si=254 (NSH, SFF Ingress from SF, RSP 3) | IN_PORT
256 | in_port=1, dl_type=0x894f, nsh_spi=0x4, nsh_si=254 (NSH, SFF1 Ingress from SFF2) | IN_PORT
5 | Match Any | Drop

Table: Table Transport Egress

Administering SFC OF Renderer

To use the SFC OpenFlow Renderer in Karaf, at least the following Karaf features must be installed.

  • odl-openflowplugin-nxm-extensions

  • odl-openflowplugin-flow-services

  • odl-sfc-provider

  • odl-sfc-model

  • odl-sfc-openflow-renderer

  • odl-sfc-ui (optional)

Since OpenDaylight Karaf features internally install dependent features, all of the above features can be installed by simply installing the odl-sfc-openflow-renderer feature.

The following command can be used to view all of the currently installed Karaf features:

opendaylight-user@root>feature:list -i

Or, pipe the command to a grep to see a subset of the currently installed Karaf features:

opendaylight-user@root>feature:list -i | grep sfc

To install a particular feature, use the Karaf feature:install command.
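
For example, to install the single feature mentioned above that pulls in all of the others:

opendaylight-user@root>feature:install odl-sfc-openflow-renderer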

SFC OF Renderer Tutorial
Overview

In this tutorial, the VXLAN-GPE NSH encapsulations will be shown. The following Network Topology diagram is a logical view of the SFFs and SFs involved in creating the Service Chains.

SFC OpenFlow Renderer Typical Network Topology

SFC OpenFlow Renderer Typical Network Topology

Prerequisites

To use this example, SFF OpenFlow switches must be created and connected as illustrated above. Additionally, the SFs must be created and connected.

Note that RSP symmetry depends on the Service Function Path symmetric field, if present. If not, the RSP will be symmetric if any of the SFs involved in the chain has the bidirectional field set to true.

Target Environment

The target environment is not important, but this use-case was created and tested on Linux.

Instructions

The steps to use this tutorial are as follows. The referenced configuration in the steps is listed in the following sections.

There are numerous ways to send the configuration. In the following configuration chapters, the appropriate curl command is shown for each configuration to be sent, including the URL.

Steps to configure the SFC OF Renderer tutorial:

  1. Send the SF RESTCONF configuration

  2. Send the SFF RESTCONF configuration

  3. Send the SFC RESTCONF configuration

  4. Send the SFP RESTCONF configuration

  5. The RSP will be created internally when the SFP is created.

Once the configuration has been successfully created, query the Rendered Service Paths with either the SFC UI or via RESTCONF. Notice that the RSP is symmetrical, so the following 2 RSPs will be created:

  • sfc-path1-Path-<RSP-ID>

  • sfc-path1-Path-<RSP-ID>-Reverse

At this point the Service Chains have been created, and the OpenFlow Switches are programmed to steer traffic through the Service Chain. Traffic can now be injected from a client into the Service Chain. To debug problems, the OpenFlow tables can be dumped with the following commands, assuming SFF1 is called s1 and SFF2 is called s2.

sudo ovs-ofctl -O OpenFlow13  dump-flows s1
sudo ovs-ofctl -O OpenFlow13  dump-flows s2

In all the following configuration sections, replace the ${JSON} string with the appropriate JSON configuration. Also, change the localhost destination in the URL accordingly.
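
One convenient way to handle the ${JSON} placeholder is to store each configuration in a file and let curl read it with the standard --data @file syntax; the file name sf-config.json below is only an example:

curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" --data @sf-config.json -X PUT --user admin:admin http://localhost:8181/restconf/config/service-function:service-functions/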

SFC OF Renderer NSH Tutorial

The following configuration sections show how to create the different elements using NSH encapsulation.

NSH Service Function configuration

The Service Function configuration can be sent with the following command:

curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" \
--data '${JSON}' -X PUT --user admin:admin \
http://localhost:8181/restconf/config/service-function:service-functions/

SF configuration JSON.

{
 "service-functions": {
   "service-function": [
     {
       "name": "sf1",
       "type": "http-header-enrichment",
       "ip-mgmt-address": "10.0.0.2",
       "sf-data-plane-locator": [
         {
           "name": "sf1dpl",
           "ip": "10.0.0.10",
           "port": 4789,
           "transport": "service-locator:vxlan-gpe",
           "service-function-forwarder": "sff1"
         }
       ]
     },
     {
       "name": "sf2",
       "type": "firewall",
       "ip-mgmt-address": "10.0.0.3",
       "sf-data-plane-locator": [
         {
           "name": "sf2dpl",
            "ip": "10.0.0.20",
            "port": 4789,
            "transport": "service-locator:vxlan-gpe",
           "service-function-forwarder": "sff2"
         }
       ]
     }
   ]
 }
}
NSH Service Function Forwarder configuration

The Service Function Forwarder configuration can be sent with the following command:

curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" --data '${JSON}' -X PUT --user admin:admin http://localhost:8181/restconf/config/service-function-forwarder:service-function-forwarders/

SFF configuration JSON.

{
 "service-function-forwarders": {
   "service-function-forwarder": [
     {
       "name": "sff1",
       "service-node": "openflow:2",
       "sff-data-plane-locator": [
         {
           "name": "sff1dpl",
           "data-plane-locator":
           {
               "ip": "10.0.0.1",
               "port": 4789,
               "transport": "service-locator:vxlan-gpe"
           }
         }
       ],
       "service-function-dictionary": [
         {
           "name": "sf1",
           "sff-sf-data-plane-locator":
           {
               "sf-dpl-name": "sf1dpl",
               "sff-dpl-name": "sff1dpl"
           }
         }
       ]
     },
     {
       "name": "sff2",
       "service-node": "openflow:3",
       "sff-data-plane-locator": [
         {
           "name": "sff2dpl",
           "data-plane-locator":
           {
               "ip": "10.0.0.2",
               "port": 4789,
               "transport": "service-locator:vxlan-gpe"
           }
         }
       ],
       "service-function-dictionary": [
         {
           "name": "sf2",
           "sff-sf-data-plane-locator":
           {
               "sf-dpl-name": "sf2dpl",
               "sff-dpl-name": "sff2dpl"
           }
         }
       ]
     }
   ]
 }
}
NSH Service Function Chain configuration

The Service Function Chain configuration can be sent with the following command:

curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" \
--data '${JSON}' -X PUT --user admin:admin \
http://localhost:8181/restconf/config/service-function-chain:service-function-chains/

SFC configuration JSON.

{
 "service-function-chains": {
   "service-function-chain": [
     {
       "name": "sfc-chain1",
       "sfc-service-function": [
         {
           "name": "hdr-enrich-abstract1",
           "type": "http-header-enrichment"
         },
         {
           "name": "firewall-abstract1",
           "type": "firewall"
         }
       ]
     }
   ]
 }
}
NSH Service Function Path configuration

The Service Function Path configuration can be sent with the following command:

curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" --data '${JSON}' -X PUT --user admin:admin http://localhost:8181/restconf/config/service-function-path:service-function-paths/

SFP configuration JSON.

{
  "service-function-paths": {
    "service-function-path": [
      {
        "name": "sfc-path1",
        "service-chain-name": "sfc-chain1",
        "transport-type": "service-locator:vxlan-gpe",
        "symmetric": true
      }
    ]
  }
}
NSH Rendered Service Path Query

The following command can be used to query all of the created Rendered Service Paths:

curl -H "Content-Type: application/json" -H "Cache-Control: no-cache" -X GET --user admin:admin http://localhost:8181/restconf/operational/rendered-service-path:rendered-service-paths/
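
A single Rendered Service Path can also be retrieved by appending the list entry to the URL; the RSP name below is a placeholder that must be replaced with one of the names returned by the query above (this assumes the usual container/list layout of the rendered-service-path model):

curl -H "Content-Type: application/json" -H "Cache-Control: no-cache" -X GET --user admin:admin http://localhost:8181/restconf/operational/rendered-service-path:rendered-service-paths/rendered-service-path/sfc-path1-Path-<RSP-ID>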
SFC OF Renderer MPLS Tutorial

The following configuration sections show how to create the different elements using MPLS encapsulation.

MPLS Service Function configuration

The Service Function configuration can be sent with the following command:

curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" \
--data '${JSON}' -X PUT --user admin:admin \
http://localhost:8181/restconf/config/service-function:service-functions/

SF configuration JSON.

{
 "service-functions": {
   "service-function": [
     {
       "name": "sf1",
       "type": "http-header-enrichment",
       "ip-mgmt-address": "10.0.0.2",
       "sf-data-plane-locator": [
         {
           "name": "sf1-sff1",
           "mac": "00:00:08:01:02:01",
           "vlan-id": 1000,
           "transport": "service-locator:mac",
           "service-function-forwarder": "sff1"
         }
       ]
     },
     {
       "name": "sf2",
       "type": "firewall",
       "ip-mgmt-address": "10.0.0.3",
       "sf-data-plane-locator": [
         {
           "name": "sf2-sff2",
           "mac": "00:00:08:01:03:01",
           "vlan-id": 2000,
           "transport": "service-locator:mac",
           "service-function-forwarder": "sff2"
         }
       ]
     }
   ]
 }
}
MPLS Service Function Forwarder configuration

The Service Function Forwarder configuration can be sent with the following command:

curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" --data '${JSON}' -X PUT --user admin:admin http://localhost:8181/restconf/config/service-function-forwarder:service-function-forwarders/

SFF configuration JSON.

{
 "service-function-forwarders": {
   "service-function-forwarder": [
     {
       "name": "sff1",
       "service-node": "openflow:2",
       "sff-data-plane-locator": [
         {
           "name": "ulSff1Ingress",
           "data-plane-locator":
           {
               "mpls-label": 100,
               "transport": "service-locator:mpls"
           },
           "service-function-forwarder-ofs:ofs-port":
           {
               "mac": "11:11:11:11:11:11",
               "port-id" : "1"
           }
         },
         {
           "name": "ulSff1ToSff2",
           "data-plane-locator":
           {
               "mpls-label": 101,
               "transport": "service-locator:mpls"
           },
           "service-function-forwarder-ofs:ofs-port":
           {
               "mac": "33:33:33:33:33:33",
               "port-id" : "2"
           }
         },
         {
           "name": "toSf1",
           "data-plane-locator":
           {
               "mac": "22:22:22:22:22:22",
               "vlan-id": 1000,
               "transport": "service-locator:mac",
           },
           "service-function-forwarder-ofs:ofs-port":
           {
               "mac": "33:33:33:33:33:33",
               "port-id" : "3"
           }
         }
       ],
       "service-function-dictionary": [
         {
           "name": "sf1",
           "sff-sf-data-plane-locator":
           {
               "sf-dpl-name": "sf1-sff1",
               "sff-dpl-name": "toSf1"
           }
         }
       ]
     },
     {
       "name": "sff2",
       "service-node": "openflow:3",
       "sff-data-plane-locator": [
         {
           "name": "ulSff2Ingress",
           "data-plane-locator":
           {
               "mpls-label": 101,
               "transport": "service-locator:mpls"
           },
           "service-function-forwarder-ofs:ofs-port":
           {
               "mac": "44:44:44:44:44:44",
               "port-id" : "1"
           }
         },
         {
           "name": "ulSff2Egress",
           "data-plane-locator":
           {
               "mpls-label": 102,
               "transport": "service-locator:mpls"
           },
           "service-function-forwarder-ofs:ofs-port":
           {
               "mac": "66:66:66:66:66:66",
               "port-id" : "2"
           }
         },
         {
           "name": "toSf2",
           "data-plane-locator":
           {
               "mac": "55:55:55:55:55:55",
               "vlan-id": 2000,
               "transport": "service-locator:mac"
           },
           "service-function-forwarder-ofs:ofs-port":
           {
               "port-id" : "3"
           }
         }
       ],
       "service-function-dictionary": [
         {
           "name": "sf2",
           "sff-sf-data-plane-locator":
           {
               "sf-dpl-name": "sf2-sff2",
               "sff-dpl-name": "toSf2"

           },
           "service-function-forwarder-ofs:ofs-port":
           {
               "port-id" : "3"
           }
         }
       ]
     }
   ]
 }
}
MPLS Service Function Chain configuration

The Service Function Chain configuration can be sent with the following command:

curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" \
--data '${JSON}' -X PUT --user admin:admin \
http://localhost:8181/restconf/config/service-function-chain:service-function-chains/

SFC configuration JSON.

{
 "service-function-chains": {
   "service-function-chain": [
     {
       "name": "sfc-chain1",
       "sfc-service-function": [
         {
           "name": "hdr-enrich-abstract1",
           "type": "http-header-enrichment"
         },
         {
           "name": "firewall-abstract1",
           "type": "firewall"
         }
       ]
     }
   ]
 }
}
MPLS Service Function Path configuration

The Service Function Path configuration can be sent with the following command. This will internally trigger the Rendered Service Paths to be created.

curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" \
--data '${JSON}' -X PUT --user admin:admin \
http://localhost:8181/restconf/config/service-function-path:service-function-paths/

SFP configuration JSON.

{
  "service-function-paths": {
    "service-function-path": [
      {
        "name": "sfc-path1",
        "service-chain-name": "sfc-chain1",
        "transport-type": "service-locator:mpls",
        "symmetric": true
      }
    ]
  }
}

The following command can be used to query all of the Rendered Service Paths that were created when the Service Function Path was created:

curl -H "Content-Type: application/json" -H "Cache-Control: no-cache" -X GET \
--user admin:admin http://localhost:8181/restconf/operational/rendered-service-path:rendered-service-paths/
SFC IOS XE Renderer User Guide
Overview

The early Service Function Chaining (SFC) renderer for IOS-XE devices (SFC IOS-XE renderer) implements Service Chaining functionality on IOS-XE capable switches. It listens for the creation of a Rendered Service Path (RSP) and sets up Service Function Forwarders (SFF) that are hosted on IOS-XE switches to steer traffic through the service chain.

Common acronyms used in the following sections:

  • SF - Service Function

  • SFF - Service Function Forwarder

  • SFC - Service Function Chain

  • SP - Service Path

  • SFP - Service Function Path

  • RSP - Rendered Service Path

  • LSF - Local Service Forwarder

  • RSF - Remote Service Forwarder

SFC IOS-XE Renderer Architecture

When the SFC IOS-XE renderer is initialized, all required listeners are registered to handle incoming data. It involves CSR/IOS-XE NodeListener which stores data about all configurable devices including their mountpoints (used here as databrokers), ServiceFunctionListener, ServiceForwarderListener (see mapping) and RenderedPathListener used to listen for RSP changes. When the SFC IOS-XE renderer is invoked, RenderedPathListener calls the IosXeRspProcessor which processes the RSP change and creates all necessary Service Paths and Remote Service Forwarders (if necessary) on IOS-XE devices.

Service Path details

Each Service Path is defined by an index (represented by the NSP) and contains service path entries. Each entry has an appropriate service index (NSI) and a definition of the next hop. The next hop can be a Service Function, a different Service Function Forwarder, or the end of the chain (terminate). After terminating, the packet is sent to its destination. If an SFF is defined as the next hop, it has to be present on the device in the form of a Remote Service Forwarder. RSFs are also created during RSP processing.

Example of Service Path:

service-chain service-path 200
   service-index 255 service-function firewall-1
   service-index 254 service-function dpi-1
   service-index 253 terminate
Mapping to IOS-XE SFC entities

The renderer contains mappers for SFs and SFFs. An IOS-XE capable device uses its own definition of Service Functions and Service Function Forwarders according to the appropriate .yang file. ServiceFunctionListener serves as a listener for SF changes. If an SF appears in the datastore, the listener extracts its management IP address and looks into the cached IOS-XE nodes. If one of the available nodes matches, the Service Function is mapped in IosXeServiceFunctionMapper into a form understandable by the IOS-XE device and written into the device's config. ServiceForwarderListener is used in a similar way. All SFFs with a suitable management IP address are mapped in IosXeServiceForwarderMapper. Remapped SFFs are configured as Local Service Forwarders. It is not possible to directly create a Remote Service Forwarder using the IOS-XE renderer; an RSF is created only during RSP processing.

Administering SFC IOS-XE renderer

To use the SFC IOS-XE Renderer Karaf, at least the following Karaf features must be installed:

  • odl-aaa-shiro

  • odl-sfc-model

  • odl-sfc-provider

  • odl-restconf

  • odl-netconf-topology

  • odl-sfc-ios-xe-renderer

SFC IOS-XE renderer Tutorial
Overview

This tutorial is a simple example of how to create a Service Path on an IOS-XE capable device using the IOS-XE renderer.

Preconditions

To connect to an IOS-XE device, it is necessary to use several modified YANG models and override the device's own models. All .yang files are in the Yang/netconf folder in the sfc-ios-xe-renderer module in the SFC project. These files have to be copied to the cache/schema directory before Karaf is started. After that, custom capabilities have to be sent to network-topology:

  • PUT ./config/network-topology:network-topology/topology/topology-netconf/node/<device-name>

    <node xmlns="urn:TBD:params:xml:ns:yang:network-topology">
      <node-id>device-name</node-id>
      <host xmlns="urn:opendaylight:netconf-node-topology">device-ip</host>
      <port xmlns="urn:opendaylight:netconf-node-topology">2022</port>
      <username xmlns="urn:opendaylight:netconf-node-topology">login</username>
      <password xmlns="urn:opendaylight:netconf-node-topology">password</password>
      <tcp-only xmlns="urn:opendaylight:netconf-node-topology">false</tcp-only>
      <keepalive-delay xmlns="urn:opendaylight:netconf-node-topology">0</keepalive-delay>
      <yang-module-capabilities xmlns="urn:opendaylight:netconf-node-topology">
         <override>true</override>
         <capability xmlns="urn:opendaylight:netconf-node-topology">
            urn:ietf:params:xml:ns:yang:ietf-inet-types?module=ietf-inet-types&amp;revision=2013-07-15
         </capability>
         <capability xmlns="urn:opendaylight:netconf-node-topology">
            urn:ietf:params:xml:ns:yang:ietf-yang-types?module=ietf-yang-types&amp;revision=2013-07-15
         </capability>
         <capability xmlns="urn:opendaylight:netconf-node-topology">
            urn:ios?module=ned&amp;revision=2016-03-08
         </capability>
         <capability xmlns="urn:opendaylight:netconf-node-topology">
            http://tail-f.com/yang/common?module=tailf-common&amp;revision=2015-05-22
         </capability>
         <capability xmlns="urn:opendaylight:netconf-node-topology">
            http://tail-f.com/yang/common?module=tailf-meta-extensions&amp;revision=2013-11-07
         </capability>
         <capability xmlns="urn:opendaylight:netconf-node-topology">
            http://tail-f.com/yang/common?module=tailf-cli-extensions&amp;revision=2015-03-19
         </capability>
      </yang-module-capabilities>
    </node>
    

Note

The device name in the URL and in the XML must match.
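
The node configuration above can be sent with a curl command in the same style as the rest of this guide; replace ${XML} with the XML payload shown above and device-name with the actual device name, and adjust the controller address if it is not running on localhost:

curl -i -H "Content-Type: application/xml" -H "Cache-Control: no-cache" \
--data '${XML}' -X PUT --user admin:admin \
http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/device-name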

Instructions

When the IOS-XE renderer is installed, all NETCONF nodes in topology-netconf are processed and all capable nodes with accessible mountpoints are cached. The first step is to create an LSF on the node.

Service Function Forwarder configuration

  • PUT ./config/service-function-forwarder:service-function-forwarders

    {
        "service-function-forwarders": {
            "service-function-forwarder": [
                {
                    "name": "CSR1Kv-2",
                    "ip-mgmt-address": "172.25.73.23",
                    "sff-data-plane-locator": [
                        {
                            "name": "CSR1Kv-2-dpl",
                            "data-plane-locator": {
                                "transport": "service-locator:vxlan-gpe",
                                "port": 6633,
                                "ip": "10.99.150.10"
                            }
                        }
                    ]
                }
            ]
        }
    }
    

If an IOS-XE node with the appropriate management IP exists, this configuration is mapped and an LSF is created on the device. The same approach is used for Service Functions.

  • PUT ./config/service-function:service-functions

    {
        "service-functions": {
            "service-function": [
                {
                    "name": "Firewall",
                    "ip-mgmt-address": "172.25.73.23",
                    "type": "firewall",
                    "sf-data-plane-locator": [
                        {
                            "name": "firewall-dpl",
                            "port": 6633,
                            "ip": "12.1.1.2",
                            "transport": "service-locator:gre",
                            "service-function-forwarder": "CSR1Kv-2"
                        }
                    ]
                },
                {
                    "name": "Dpi",
                    "ip-mgmt-address": "172.25.73.23",
                    "type":"dpi",
                    "sf-data-plane-locator": [
                        {
                            "name": "dpi-dpl",
                            "port": 6633,
                            "ip": "12.1.1.1",
                            "transport": "service-locator:gre",
                            "service-function-forwarder": "CSR1Kv-2"
                        }
                    ]
                },
                {
                    "name": "Qos",
                    "ip-mgmt-address": "172.25.73.23",
                    "type":"qos",
                    "sf-data-plane-locator": [
                        {
                            "name": "qos-dpl",
                            "port": 6633,
                            "ip": "12.1.1.4",
                            "transport": "service-locator:gre",
                            "service-function-forwarder": "CSR1Kv-2"
                        }
                    ]
                }
            ]
        }
    }
    

All these SFs are configured on the same device as the LSF. The next step is to prepare the Service Function Chain.

  • PUT ./config/service-function-chain:service-function-chains/

    {
        "service-function-chains": {
            "service-function-chain": [
                {
                    "name": "CSR3XSF",
                    "sfc-service-function": [
                        {
                            "name": "Firewall",
                            "type": "firewall"
                        },
                        {
                            "name": "Dpi",
                            "type": "dpi"
                        },
                        {
                            "name": "Qos",
                            "type": "qos"
                        }
                    ]
                }
            ]
        }
    }
    

Service Function Path:

  • PUT ./config/service-function-path:service-function-paths/

    {
        "service-function-paths": {
            "service-function-path": [
                {
                    "name": "CSR3XSF-Path",
                    "service-chain-name": "CSR3XSF",
                    "starting-index": 255,
                    "symmetric": "true"
                }
            ]
        }
    }
    

Without a classifier, it is possible to POST the RSP directly.

  • POST ./operations/rendered-service-path:create-rendered-path

    {
      "input": {
          "name": "CSR3XSF-Path-RSP",
          "parent-service-function-path": "CSR3XSF-Path"
      }
    }
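
As with the other configuration sections, this RPC can be sent with curl; replace ${JSON} with the payload shown above and adjust the controller address if needed:

curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" \
--data '${JSON}' -X POST --user admin:admin \
http://localhost:8181/restconf/operations/rendered-service-path:create-rendered-path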
    

The resulting configuration:

!
service-chain service-function-forwarder local
  ip address 10.99.150.10
!
service-chain service-function firewall
ip address 12.1.1.2
  encapsulation gre enhanced divert
!
service-chain service-function dpi
ip address 12.1.1.1
  encapsulation gre enhanced divert
!
service-chain service-function qos
ip address 12.1.1.4
  encapsulation gre enhanced divert
!
service-chain service-path 1
  service-index 255 service-function firewall
  service-index 254 service-function dpi
  service-index 253 service-function qos
  service-index 252 terminate
!
service-chain service-path 2
  service-index 255 service-function qos
  service-index 254 service-function dpi
  service-index 253 service-function firewall
  service-index 252 terminate
!

Service Path 1 is direct, Service Path 2 is reversed. Path numbers may vary.

Service Function Scheduling Algorithms
Overview

When creating the Rendered Service Path, the SFC controller originally chose the first available service function from a list of service function names. This may result in many issues, such as overloaded service functions and a longer service path, as SFC has no means to understand the status of service functions and network topology. The service function selection framework supports at least four algorithms (Random, Round Robin, Load Balancing and Shortest Path) to select the most appropriate service function when instantiating the Rendered Service Path. In addition, it is an extensible framework that allows 3rd party selection algorithms to be plugged in.

Architecture

The following figure illustrates the service function selection framework and algorithms.

SF Selection Architecture


A user has three different ways to select one service function selection algorithm:

  1. Integrated RESTCONF Calls. OpenStack and/or other administration system could provide plugins to call the APIs to select one scheduling algorithm.

  2. Command line tools. Command line tools such as curl or browser plugins such as POSTMAN (for Google Chrome) and RESTClient (for Mozilla Firefox) could select the scheduling algorithm by making RESTCONF calls.

  3. SFC-UI. Now the SFC-UI provides an option for choosing a selection algorithm when creating a Rendered Service Path.

The RESTCONF northbound SFC API provides GUI/RESTCONF interactions for choosing the service function selection algorithm. MD-SAL data store provides all supported service function selection algorithms, and provides APIs to enable one of the provided service function selection algorithms. Once a service function selection algorithm is enabled, the service function selection algorithm will work when creating a Rendered Service Path.

Select SFs with Scheduler

An administrator can use either of the following ways to select one of the selection algorithms when creating a Rendered Service Path.

  • Command line tools. Command line tools include Linux commands such as curl or even browser plugins such as POSTMAN (for Google Chrome) or RESTClient (for Mozilla Firefox). In this case, the following JSON content is needed at the moment: Service_function_schedule_type.json

    {
      "service-function-scheduler-types": {
        "service-function-scheduler-type": [
          {
            "name": "random",
            "type": "service-function-scheduler-type:random",
            "enabled": false
          },
          {
            "name": "roundrobin",
            "type": "service-function-scheduler-type:round-robin",
            "enabled": true
          },
          {
            "name": "loadbalance",
            "type": "service-function-scheduler-type:load-balance",
            "enabled": false
          },
          {
            "name": "shortestpath",
            "type": "service-function-scheduler-type:shortest-path",
            "enabled": false
          }
        ]
      }
    }
    

    If using the Linux curl command, it could be:

    curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" \
    --data '${Service_function_schedule_type.json}' -X PUT \
    --user admin:admin http://localhost:8181/restconf/config/service-function-scheduler-type:service-function-scheduler-types/
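
    To check which scheduling algorithm is currently enabled, the same URL can simply be read back with a GET request:

    curl -H "Content-Type: application/json" -H "Cache-Control: no-cache" -X GET \
    --user admin:admin http://localhost:8181/restconf/config/service-function-scheduler-type:service-function-scheduler-types/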
    

Here is also a snapshot for using the RESTClient plugin:

Mozilla Firefox RESTClient


  • SFC-UI. SFC-UI provides a drop-down menu for the service function selection algorithm. Here is a snapshot of the user interaction from SFC-UI when creating a Rendered Service Path.

Karaf Web UI


Note

Some service function selection algorithms in the drop-down list are not implemented yet. Only the first three algorithms are committed at the moment.

Random

Select Service Function from the name list randomly.

Overview

The Random algorithm is used to randomly select one Service Function from the name list which it gets from the Service Function Type.

Prerequisites
  • Service Function information is stored in the datastore.

  • Either no algorithm or the Random algorithm is selected.

Target Environment

The Random algorithm is used when either no algorithm type is selected or the Random algorithm is selected.

Instructions

Once the plugins are installed into Karaf successfully, a user can use his favorite method to select the Random scheduling algorithm type. There are no special instructions for using the Random algorithm.

Round Robin

Select Service Function from the name list in Round Robin manner.

Overview

The Round Robin algorithm is used to select one Service Function from the name list which it gets from the Service Function Type in a Round Robin manner, which balances the workload across all Service Functions. However, this method cannot guarantee that all Service Functions carry the same workload, because it is a flow-based Round Robin.

Prerequisites
  • Service Function information is stored in the datastore.

  • The Round Robin algorithm is selected.

Target Environment

The Round Robin algorithm is used once the Round Robin algorithm is selected.

Instructions

Once the plugins are installed into Karaf successfully, a user can use his favorite method to select the Round Robin scheduling algorithm type. There are no special instructions for using the Round Robin algorithm.

Load Balance Algorithm

Select appropriate Service Function by actual CPU utilization.

Overview

The Load Balance Algorithm is used to select the appropriate Service Function based on the actual CPU utilization of the service functions. The CPU utilization of each service function is obtained from monitoring information reported via NETCONF.

Prerequisites
  • CPU-utilization for Service Function.

  • NETCONF server.

  • NETCONF client.

  • Each VM has a NETCONF server that works well with the NETCONF client.

Instructions

Set up the VMs as Service Functions and enable the NETCONF server in the VMs. Ensure that you specify them separately. For example:

  1. Set up 4 VMs: 2 SFs of type Firewall and the others of type Napt44. Name them firewall-1, firewall-2, napt44-1 and napt44-2 as Service Functions. The four VMs can run on either the same server or different servers.

  2. Install NETCONF server on every VM and enable it. More information on NETCONF can be found on the OpenDaylight wiki here: https://wiki.opendaylight.org/view/OpenDaylight_Controller:Config:Examples:Netconf:Manual_netopeer_installation

  3. Get monitoring data from the NETCONF server. These monitoring data should be obtained from the NETCONF server running in the VMs. The following static XML data is an example:

<?xml version="1.0" encoding="UTF-8"?>
<service-function-description-monitor-report>
  <SF-description>
    <number-of-dataports>2</number-of-dataports>
    <capabilities>
      <supported-packet-rate>5</supported-packet-rate>
      <supported-bandwidth>10</supported-bandwidth>
      <supported-ACL-number>2000</supported-ACL-number>
      <RIB-size>200</RIB-size>
      <FIB-size>100</FIB-size>
      <ports-bandwidth>
        <port-bandwidth>
          <port-id>1</port-id>
          <ipaddress>10.0.0.1</ipaddress>
          <macaddress>00:1e:67:a2:5f:f4</macaddress>
          <supported-bandwidth>20</supported-bandwidth>
        </port-bandwidth>
        <port-bandwidth>
          <port-id>2</port-id>
          <ipaddress>10.0.0.2</ipaddress>
          <macaddress>01:1e:67:a2:5f:f6</macaddress>
          <supported-bandwidth>10</supported-bandwidth>
        </port-bandwidth>
      </ports-bandwidth>
    </capabilities>
  </SF-description>
  <SF-monitoring-info>
    <liveness>true</liveness>
    <resource-utilization>
        <packet-rate-utilization>10</packet-rate-utilization>
        <bandwidth-utilization>15</bandwidth-utilization>
        <CPU-utilization>12</CPU-utilization>
        <memory-utilization>17</memory-utilization>
        <available-memory>8</available-memory>
        <RIB-utilization>20</RIB-utilization>
        <FIB-utilization>25</FIB-utilization>
        <power-utilization>30</power-utilization>
        <SF-ports-bandwidth-utilization>
          <port-bandwidth-utilization>
            <port-id>1</port-id>
            <bandwidth-utilization>20</bandwidth-utilization>
          </port-bandwidth-utilization>
          <port-bandwidth-utilization>
            <port-id>2</port-id>
            <bandwidth-utilization>30</bandwidth-utilization>
          </port-bandwidth-utilization>
        </SF-ports-bandwidth-utilization>
    </resource-utilization>
  </SF-monitoring-info>
</service-function-description-monitor-report>
  1. Unzip SFC release tarball.

  2. Run SFC: ${sfc}/bin/karaf. More information on Service Function Chaining can be found on the OpenDaylight SFC’s wiki page: https://wiki.opendaylight.org/view/Service_Function_Chaining:Main

  1. Deploy the SFC2 (firewall-abstract2⇒napt44-abstract2) and click the button to create the Rendered Service Path in the SFC UI (http://localhost:8181/sfc/index.html).

  2. Verify the Rendered Service Path to ensure the CPU utilization of the selected hop is the minimum one among all the service functions of the same type. The correct RSP is firewall-1⇒napt44-2.

Shortest Path Algorithm

Select the appropriate Service Function using Dijkstra's algorithm. Dijkstra's algorithm finds the shortest paths between nodes in a graph.

Overview

The Shortest Path Algorithm is used to select the appropriate Service Function based on the actual topology.

Prerequisites
Instructions
  1. Unzip SFC release tarball.

  2. Run SFC: ${sfc}/bin/karaf.

  3. Deploy SFFs and SFs. Import the service-function-forwarders.json and service-functions.json in the UI (http://localhost:8181/sfc/index.html#/sfc/config).

service-function-forwarders.json:

{
  "service-function-forwarders": {
    "service-function-forwarder": [
      {
        "name": "SFF-br1",
        "service-node": "OVSDB-test01",
        "rest-uri": "http://localhost:5001",
        "sff-data-plane-locator": [
          {
            "name": "eth0",
            "service-function-forwarder-ovs:ovs-bridge": {
              "uuid": "4c3778e4-840d-47f4-b45e-0988e514d26c",
              "bridge-name": "br-tun"
            },
            "data-plane-locator": {
              "port": 5000,
              "ip": "192.168.1.1",
              "transport": "service-locator:vxlan-gpe"
            }
          }
        ],
        "service-function-dictionary": [
          {
            "sff-sf-data-plane-locator": {
               "sf-dpl-name": "sf1dpl",
               "sff-dpl-name": "sff1dpl"
            },
            "name": "napt44-1",
            "type": "napt44"
          },
          {
            "sff-sf-data-plane-locator": {
               "sf-dpl-name": "sf2dpl",
               "sff-dpl-name": "sff2dpl"
            },
            "name": "firewall-1",
            "type": "firewall"
          }
        ],
        "connected-sff-dictionary": [
          {
            "name": "SFF-br3"
          }
        ]
      },
      {
        "name": "SFF-br2",
        "service-node": "OVSDB-test01",
        "rest-uri": "http://localhost:5002",
        "sff-data-plane-locator": [
          {
            "name": "eth0",
            "service-function-forwarder-ovs:ovs-bridge": {
              "uuid": "fd4d849f-5140-48cd-bc60-6ad1f5fc0a1",
              "bridge-name": "br-tun"
            },
            "data-plane-locator": {
              "port": 5000,
              "ip": "192.168.1.2",
              "transport": "service-locator:vxlan-gpe"
            }
          }
        ],
        "service-function-dictionary": [
          {
            "sff-sf-data-plane-locator": {
               "sf-dpl-name": "sf1dpl",
               "sff-dpl-name": "sff1dpl"
            },
            "name": "napt44-2",
            "type": "napt44"
          },
          {
            "sff-sf-data-plane-locator": {
               "sf-dpl-name": "sf2dpl",
               "sff-dpl-name": "sff2dpl"
            },
            "name": "firewall-2",
            "type": "firewall"
          }
        ],
        "connected-sff-dictionary": [
          {
            "name": "SFF-br3"
          }
        ]
      },
      {
        "name": "SFF-br3",
        "service-node": "OVSDB-test01",
        "rest-uri": "http://localhost:5005",
        "sff-data-plane-locator": [
          {
            "name": "eth0",
            "service-function-forwarder-ovs:ovs-bridge": {
              "uuid": "fd4d849f-5140-48cd-bc60-6ad1f5fc0a4",
              "bridge-name": "br-tun"
            },
            "data-plane-locator": {
              "port": 5000,
              "ip": "192.168.1.2",
              "transport": "service-locator:vxlan-gpe"
            }
          }
        ],
        "service-function-dictionary": [
          {
            "sff-sf-data-plane-locator": {
               "sf-dpl-name": "sf1dpl",
               "sff-dpl-name": "sff1dpl"
            },
            "name": "test-server",
            "type": "dpi"
          },
          {
            "sff-sf-data-plane-locator": {
               "sf-dpl-name": "sf2dpl",
               "sff-dpl-name": "sff2dpl"
            },
            "name": "test-client",
            "type": "dpi"
          }
        ],
        "connected-sff-dictionary": [
          {
            "name": "SFF-br1"
          },
          {
            "name": "SFF-br2"
          }
        ]
      }
    ]
  }
}

service-functions.json:

{
  "service-functions": {
    "service-function": [
      {
        "rest-uri": "http://localhost:10001",
        "ip-mgmt-address": "10.3.1.103",
        "sf-data-plane-locator": [
          {
            "name": "preferred",
            "port": 10001,
            "ip": "10.3.1.103",
            "service-function-forwarder": "SFF-br1"
          }
        ],
        "name": "napt44-1",
        "type": "napt44"
      },
      {
        "rest-uri": "http://localhost:10002",
        "ip-mgmt-address": "10.3.1.103",
        "sf-data-plane-locator": [
          {
            "name": "master",
            "port": 10002,
            "ip": "10.3.1.103",
            "service-function-forwarder": "SFF-br2"
          }
        ],
        "name": "napt44-2",
        "type": "napt44"
      },
      {
        "rest-uri": "http://localhost:10003",
        "ip-mgmt-address": "10.3.1.103",
        "sf-data-plane-locator": [
          {
            "name": "1",
            "port": 10003,
            "ip": "10.3.1.102",
            "service-function-forwarder": "SFF-br1"
          }
        ],
        "name": "firewall-1",
        "type": "firewall"
      },
      {
        "rest-uri": "http://localhost:10004",
        "ip-mgmt-address": "10.3.1.103",
        "sf-data-plane-locator": [
          {
            "name": "2",
            "port": 10004,
            "ip": "10.3.1.101",
            "service-function-forwarder": "SFF-br2"
          }
        ],
        "name": "firewall-2",
        "type": "firewall"
      },
      {
        "rest-uri": "http://localhost:10005",
        "ip-mgmt-address": "10.3.1.103",
        "sf-data-plane-locator": [
          {
            "name": "3",
            "port": 10005,
            "ip": "10.3.1.104",
            "service-function-forwarder": "SFF-br3"
          }
        ],
        "name": "test-server",
        "type": "dpi"
      },
      {
        "rest-uri": "http://localhost:10006",
        "ip-mgmt-address": "10.3.1.103",
        "sf-data-plane-locator": [
          {
            "name": "4",
            "port": 10006,
            "ip": "10.3.1.102",
            "service-function-forwarder": "SFF-br3"
          }
        ],
        "name": "test-client",
        "type": "dpi"
      }
    ]
  }
}

The deployed topology looks like this:

          +----+           +----+          +----+
          |sff1|+----------|sff3|---------+|sff2|
          +----+           +----+          +----+
            |                                  |
     +--------------+                   +--------------+
     |              |                   |              |
+----------+   +--------+          +----------+   +--------+
|firewall-1|   |napt44-1|          |firewall-2|   |napt44-2|
+----------+   +--------+          +----------+   +--------+
  • Deploy the SFC2 (firewall-abstract2⇒napt44-abstract2), select “Shortest Path” as the schedule type and click the button to create the Rendered Service Path in the SFC UI (http://localhost:8181/sfc/index.html).

select schedule type


  • Verify the Rendered Service Path to ensure the selected hops are linked to one SFF. The correct RSP is firewall-1⇒napt44-1 or firewall-2⇒napt44-2. The first SF type in the Service Function Chain is Firewall, so the algorithm selects the first hop randomly among all the SFs of type Firewall. Assume the first selected SF is firewall-2. All the paths from firewall-2 to an SF of type Napt44 are listed below:

    • Path1: firewall-2 → sff2 → napt44-2

    • Path2: firewall-2 → sff2 → sff3 → sff1 → napt44-1

The shortest path is Path1, so the selected next hop is napt44-2.

rendered service path


Service Function Load Balancing User Guide
Overview

The SFC Load-Balancing feature implements load balancing of Service Functions, rather than a one-to-one mapping between Service-Function-Forwarder and Service-Function.

Load Balancing Architecture

Service Function Groups (SFG) can replace Service Functions (SF) in the Rendered Path model. A Service Path can only be defined using SFGs or SFs, but not a combination of both.

Relevant objects in the YANG model are as follows:

  1. Service-Function-Group-Algorithm:

    Service-Function-Group-Algorithms {
        Service-Function-Group-Algorithm {
            String name
            String type
        }
    }
    
    Available types: ALL, SELECT, INDIRECT, FAST_FAILURE
    
  2. Service-Function-Group:

    Service-Function-Groups {
        Service-Function-Group {
            String name
            String serviceFunctionGroupAlgorithmName
            String type
            String groupId
            Service-Function-Group-Element {
                String service-function-name
                int index
            }
        }
    }
    
  3. ServiceFunctionHop: holds a reference to a name of SFG (or SF)

Tutorials

This tutorial will explain how to create a simple SFC configuration, with an SFG instead of an SF. In this example, the SFG will include two existing SFs.

Setup SFC

For general SFC setup and scenarios, please see the SFC wiki page: https://wiki.opendaylight.org/view/Service_Function_Chaining:Main#SFC_101

Create an algorithm

POST - http://127.0.0.1:8181/restconf/config/service-function-group-algorithm:service-function-group-algorithms

{
    "service-function-group-algorithm": [
      {
        "name": "alg1",
        "type": "ALL"
      }
   ]
}

(Header “content-type”: application/json)
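
If curl is preferred over a REST client, the same request can be sent as follows; replace ${JSON} with the algorithm payload shown above (the default admin:admin credentials are assumed, as elsewhere in this guide):

curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" \
--data '${JSON}' -X POST --user admin:admin \
http://127.0.0.1:8181/restconf/config/service-function-group-algorithm:service-function-group-algorithms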

Create a group

POST - http://127.0.0.1:8181/restconf/config/service-function-group:service-function-groups

{
    "service-function-group": [
    {
        "rest-uri": "http://localhost:10002",
        "ip-mgmt-address": "10.3.1.103",
        "algorithm": "alg1",
        "name": "SFG1",
        "type": "napt44",
        "sfc-service-function": [
            {
                "name":"napt44-104"
            },
            {
                "name":"napt44-103-1"
            }
        ]
      }
    ]
}
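
The group can be sent in the same way; here ${JSON} stands for the group payload shown above:

curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" \
--data '${JSON}' -X POST --user admin:admin \
http://127.0.0.1:8181/restconf/config/service-function-group:service-function-groups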
SFC Proof of Transit User Guide
Overview

Several deployments use traffic engineering, policy routing, segment routing or service function chaining (SFC) to steer packets through a specific set of nodes. In certain cases, regulatory obligations or a compliance policy require proof that all packets that are supposed to follow a specific path are indeed being forwarded across the exact set of nodes specified. That is, if a packet flow is supposed to go through a series of service functions or network nodes, it has to be proven that all packets of the flow actually went through the service chain or collection of nodes specified by the policy. In case the packets of a flow were not appropriately processed, a proof of transit egress device would be required to identify the policy violation and take the corresponding actions defined by the policy (e.g. drop or redirect the packet, send an alert, etc.).

Service Function Chaining (SFC) Proof of Transit (SFC PoT) implements Service Chaining Proof of Transit functionality on capable network devices. Proof of Transit defines mechanisms to securely prove that traffic transited the defined path. After the creation of a Rendered Service Path (RSP), a user can enable SFC proof of transit on the selected RSP to put the proof of transit into effect.

To ensure that the data traffic follows a specified path or a function chain, meta-data is added to user traffic in the form of a header. The meta-data is based on a ‘share of a secret’ and provisioned by the SFC PoT configuration from ODL over a secure channel to each of the nodes in the SFC. This meta-data is updated at each service hop while a designated node called the verifier checks whether the collected meta-data allows the retrieval of the secret.

The following diagram shows the overview and essentially utilizes Shamir’s secret sharing algorithm, where each service is given a point on the curve and when the packet travels through each service, it collects these points (meta-data) and a verifier node tries to re-construct the curve using the collected points, thus verifying that the packet traversed through all the service functions along the chain.

SFC Proof of Transit overview


Transport options for different protocols include a new TLV in the SR header for Segment Routing, NSH Type-2 meta-data, IPv6 extension headers, IPv4 variants and VXLAN-GPE. More details are captured in the following link.

In-situ OAM: https://github.com/CiscoDevNet/iOAM

Common acronyms used in the following sections:

  • SF - Service Function

  • SFF - Service Function Forwarder

  • SFC - Service Function Chain

  • SFP - Service Function Path

  • RSP - Rendered Service Path

  • SFC PoT - Service Function Chain Proof of Transit

SFC Proof of Transit Architecture

The SFC PoT feature is implemented in two parts: a north-bound handler that augments the RSP, and a south-bound renderer that auto-generates the required parameters and passes them on to the nodes that belong to the SFC.

The north-bound feature is enabled via odl-sfc-pot feature while the south-bound renderer is enabled via the odl-sfc-pot-netconf-renderer feature. For the purposes of SFC PoT handling, both features must be installed.

RPC handlers to augment the RSP are part of SfcPotRpc while the RSP augmentation to enable or disable SFC PoT feature is done via SfcPotRspProcessor.

SFC Proof of Transit entities

In order to implement SFC Proof of Transit for a service function chain, an RSP is a pre-requisite to identify the SFC to enable SFC PoT on. SFC Proof of Transit for a particular RSP is enabled by an RPC request to the controller along with necessary parameters to control some of the aspects of the SFC Proof of Transit process.

The RPC handler identifies the RSP and adds PoT feature meta-data like enable/disable, number of PoT profiles, profiles refresh parameters etc., that directs the south-bound renderer appropriately when RSP changes are noticed via call-backs in the renderer handlers.

Administering SFC Proof of Transit

To use the SFC Proof of Transit Karaf, at least the following Karaf features must be installed:

  • odl-sfc-model

  • odl-sfc-provider

  • odl-sfc-netconf

  • odl-restconf

  • odl-netconf-topology

  • odl-netconf-connector-all

  • odl-sfc-pot

Please note that the odl-sfc-pot-netconf-renderer or other renderers in the future must be installed for the feature to take full effect. The details of the renderer features are described in other parts of this document.

SFC Proof of Transit Tutorial
Overview

This tutorial is a simple example of how to configure Service Function Chain Proof of Transit using the SFC PoT feature.

Preconditions

To enable a device to handle SFC Proof of Transit, the NETCONF node is expected to advertise the capability defined in ioam-sb-pot.yang, present under the sfc-model/src/main/yang folder. It is also expected that base NETCONF support is enabled and its capability advertised in the capabilities.

NETCONF support:urn:ietf:params:netconf:base:1.0

PoT support: (urn:cisco:params:xml:ns:yang:sfc-ioam-sb-pot?revision=2017-01-12)sfc-ioam-sb-pot

It is also expected that the devices are netconf mounted and available in the topology-netconf store.

Instructions

When SFC Proof of Transit is installed, all netconf nodes in topology-netconf are processed and all capable nodes with accessible mountpoints are cached.

The first step is to create the required RSP, as is usually done using the RSP creation steps in SFC main.

Once the RSP name is available, it is used to send a POST RPC to the controller similar to the one below:

POST - http://ODL-IP:8181/restconf/operations/sfc-ioam-nb-pot:enable-sfc-ioam-pot-rendered-path/

{
    "input":
    {
        "sfc-ioam-pot-rsp-name": "sfc-path-3sf3sff",
        "ioam-pot-enable":true,
        "ioam-pot-num-profiles":2,
        "ioam-pot-bit-mask":"bits32",
        "refresh-period-time-units":"milliseconds",
        "refresh-period-value":5000
    }
}
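
This RPC, as well as the disable RPC below, can be sent with curl in the same style as the other sections; replace ${JSON} with the payload shown and ODL-IP with the controller address:

curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" \
--data '${JSON}' -X POST --user admin:admin \
http://ODL-IP:8181/restconf/operations/sfc-ioam-nb-pot:enable-sfc-ioam-pot-rendered-path/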

The following can be used to disable the SFC Proof of Transit on an RSP which disables the PoT feature.

POST - http://ODL-IP:8181/restconf/operations/sfc-ioam-nb-pot:disable-sfc-ioam-pot-rendered-path/

{
    "input":
    {
        "sfc-ioam-pot-rsp-name": "sfc-path-3sf3sff"
    }
}
SFC PoT NETCONF Renderer User Guide
Overview

The SFC Proof of Transit (PoT) NETCONF renderer implements SFC Proof of Transit functionality on NETCONF-capable devices that have advertised support for in-situ OAM (iOAM).

It listens for updates to an existing RSP that enable or disable proof of transit support and adds the auto-generated SFC PoT configuration parameters to all the SFC hop nodes. The last node in the SFC is configured as a verifier node to allow the SFC PoT process to be completed.

Common acronyms are used as below:

  • SF - Service Function

  • SFC - Service Function Chain

  • RSP - Rendered Service Path

  • SFF - Service Function Forwarder

Mapping to SFC entities

The renderer module listens to RSP updates in SfcPotNetconfRSPListener and triggers configuration generation in SfcPotNetconfIoam class. Node arrival and leaving are managed via SfcPotNetconfNodeManager and SfcPotNetconfNodeListener. In addition there is a timer thread that runs to generate configuration periodically to refresh the profiles in the nodes that are part of the SFC.

Administering SFC PoT NETCONF Renderer

To use the SFC Proof of Transit Karaf, the following Karaf features must be installed:

  • odl-sfc-model

  • odl-sfc-provider

  • odl-sfc-netconf

  • odl-restconf-all

  • odl-netconf-topology

  • odl-netconf-connector-all

  • odl-sfc-pot

  • odl-sfc-pot-netconf-renderer

SFC PoT NETCONF Renderer Tutorial
Overview

This tutorial is a simple example of how to enable SFC PoT on NETCONF-capable devices.

Preconditions

The NETCONF-capable device will have to support the sfc-ioam-sb-pot.yang file.

It is expected that a NETCONF-capable VPP device has the Honeycomb (Hc2vpp) Java-based agent, which helps to translate between NETCONF and VPP internal APIs.

More details are here: In-situ OAM: https://github.com/CiscoDevNet/iOAM

Steps

When the SFC PoT NETCONF renderer module is installed, all NETCONF nodes in topology-netconf are processed and all sfc-ioam-sb-pot yang capable nodes with accessible mountpoints are cached.

The first step is to create RSP for the SFC as per SFC guidelines above.

Enabling SFC PoT on the RSP is done via RESTCONF to the ODL as outlined above.

Internally, the NETCONF renderer will act on the callback to a modified RSP that has PoT enabled.

In-situ OAM algorithms for auto-generation of SFC PoT parameters are generated automatically and sent to these nodes via NETCONF.

Logical Service Function Forwarder
Overview
Rationale

When the current SFC is deployed in a cloud environment, it is assumed that each switch connected to a Service Function is configured as a Service Function Forwarder and each Service Function is connected to its Service Function Forwarder depending on the Compute Node where the Virtual Machine is located.

Deploying SFC in Cloud Environments

As shown in the picture above, this solution allows the basic cloud use cases to be fulfilled, for example the ones required in OPNFV Brahmaputra. However, some advanced use cases, like the transparent migration of VMs, cannot be implemented. The Logical Service Function Forwarder enables the following advanced use cases:

  1. Service Function mobility without service disruption

  2. Service Functions load balancing and failover

As shown in the picture below, the Logical Service Function Forwarder concept extends the current SFC northbound API to provide an abstraction of the underlying Data Center infrastructure. The Data Center underlying network can be abstracted by a single SFF. This single SFF uses the logical port UUID as a data plane locator to connect SFs globally and in a location-transparent manner. SFC makes use of the Genius project to track the location of the SF’s logical ports.

Single Logical SFF concept

The SFC internally distributes the necessary flow state over the relevant switches based on the internal Data Center topology and the deployment of SFs.

Changes in data model

The Logical Service Function Forwarder concept extends the current SFC northbound API to provide an abstraction of the underlying Data Center infrastructure.

The Logical SFF simplifies the configuration of the current SFC data model by reducing the number of parameters to be configured in every SFF, since the controller will discover those parameters by interacting with the services offered by the Genius project.

The following picture shows the Logical SFF data model. The model gets simplified as most of the configuration parameters of the current SFC data model are discovered in runtime. The complete YANG model can be found here logical SFF model.

Logical SFF data model
How to configure the Logical SFF

The following are examples to configure the Logical SFF:

curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" \
--data '${JSON}' -X PUT --user admin:admin \
http://localhost:8181/restconf/config/service-function:service-functions/

Service Functions JSON.

{
"service-functions": {
    "service-function": [
        {
            "name": "firewall-1",
            "type": "firewall",
            "sf-data-plane-locator": [
                {
                    "name": "firewall-dpl",
                    "interface-name": "eccb57ae-5a2e-467f-823e-45d7bb2a6a9a",
                    "transport": "service-locator:eth-nsh",
                    "service-function-forwarder": "sfflogical1"

                }
            ]
        },
        {
            "name": "dpi-1",
            "type": "dpi",
            "sf-data-plane-locator": [
                {
                    "name": "dpi-dpl",
                    "interface-name": "df15ac52-e8ef-4e9a-8340-ae0738aba0c0",
                    "transport": "service-locator:eth-nsh",
                    "service-function-forwarder": "sfflogical1"
                }
            ]
        }
    ]
}
}
curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" \
--data '${JSON}' -X PUT --user admin:admin \
http://localhost:8181/restconf/config/service-function-forwarder:service-function-forwarders/

Service Function Forwarders JSON.

{
"service-function-forwarders": {
    "service-function-forwarder": [
       {
            "name": "sfflogical1"
        }
    ]
}
}
curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" \
--data '${JSON}' -X PUT --user admin:admin \
http://localhost:8181/restconf/config/service-function-chain:service-function-chains/

Service Function Chains JSON.

{
"service-function-chains": {
    "service-function-chain": [
        {
            "name": "SFC1",
            "sfc-service-function": [
                {
                    "name": "dpi-abstract1",
                    "type": "dpi"
                },
                {
                    "name": "firewall-abstract1",
                    "type": "firewall"
                }
            ]
        },
        {
            "name": "SFC2",
            "sfc-service-function": [
                {
                    "name": "dpi-abstract1",
                    "type": "dpi"
                }
            ]
        }
    ]
}
}
curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" \
--data '${JSON}' -X PUT --user admin:admin \
http://localhost:8181/restconf/config/service-function-path:service-function-paths/

Service Function Paths JSON.

{
"service-function-paths": {
    "service-function-path": [
        {
            "name": "SFP1",
            "service-chain-name": "SFC1",
            "starting-index": 255,
            "symmetric": "true",
            "context-metadata": "NSH1",
            "transport-type": "service-locator:vxlan-gpe"

        }
    ]
}
}

As a result of the above configuration, OpenDaylight renders the needed flows in all involved SFFs. Those flows implement:

  • Two Rendered Service Paths:

    • dpi-1 (SF1), firewall-1 (SF2)

    • firewall-1 (SF2), dpi-1 (SF1)

  • The communication between SFFs and SFs based on eth-nsh

  • The communication between SFFs based on vxlan-gpe

The following picture shows a topology and traffic flow (in green) which corresponds to the above configuration.

Logical SFF Example


The Logical SFF functionality allows OpenDaylight to find out which SFFs hold the SFs involved in a path. In this example the affected SFFs are Node3 and Node4, thus the controller renders the flows containing NSH parameters only in those SFFs.

Here are the new flows rendered in Node3 and Node4, which implement the NSH protocol. Every Rendered Service Path is represented by an NSP value. We provisioned a symmetric RSP, so we get two NSPs: 8388613 and 5. Node3 holds the first SF of NSP 8388613 and the last SF of NSP 5. Node4 holds the first SF of NSP 5 and the last SF of NSP 8388613. Both Node3 and Node4 will pop the NSH header when the received packet has gone through the last SF of its path.

Rendered Flows Node 3

cookie=0x14, duration=59.264s, table=83, n_packets=0, n_bytes=0, priority=250,nsp=5 actions=goto_table:86
cookie=0x14, duration=59.194s, table=83, n_packets=0, n_bytes=0, priority=250,nsp=8388613 actions=goto_table:86
cookie=0x14, duration=59.257s, table=86, n_packets=0, n_bytes=0, priority=550,nsi=254,nsp=5 actions=load:0x8e0a37cc9094->NXM_NX_ENCAP_ETH_SRC[],load:0x6ee006b4c51e->NXM_NX_ENCAP_ETH_DST[],goto_table:87
cookie=0x14, duration=59.189s, table=86, n_packets=0, n_bytes=0, priority=550,nsi=255,nsp=8388613 actions=load:0x8e0a37cc9094->NXM_NX_ENCAP_ETH_SRC[],load:0x6ee006b4c51e->NXM_NX_ENCAP_ETH_DST[],goto_table:87
cookie=0xba5eba1100000203, duration=59.213s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=253,nsp=5 actions=pop_nsh,set_field:6e:e0:06:b4:c5:1e->eth_src,resubmit(,17)
cookie=0xba5eba1100000201, duration=59.213s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=254,nsp=5 actions=load:0x800->NXM_NX_REG6[],resubmit(,220)
cookie=0xba5eba1100000201, duration=59.188s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=255,nsp=8388613 actions=load:0x800->NXM_NX_REG6[],resubmit(,220)
cookie=0xba5eba1100000201, duration=59.182s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=254,nsp=8388613 actions=set_field:0->tun_id,output:6

Rendered Flows Node 4

cookie=0x14, duration=69.040s, table=83, n_packets=0, n_bytes=0, priority=250,nsp=5 actions=goto_table:86
cookie=0x14, duration=69.008s, table=83, n_packets=0, n_bytes=0, priority=250,nsp=8388613 actions=goto_table:86
cookie=0x14, duration=69.040s, table=86, n_packets=0, n_bytes=0, priority=550,nsi=255,nsp=5 actions=load:0xbea93873f4fa->NXM_NX_ENCAP_ETH_SRC[],load:0x214845ea85d->NXM_NX_ENCAP_ETH_DST[],goto_table:87
cookie=0x14, duration=69.005s, table=86, n_packets=0, n_bytes=0, priority=550,nsi=254,nsp=8388613 actions=load:0xbea93873f4fa->NXM_NX_ENCAP_ETH_SRC[],load:0x214845ea85d->NXM_NX_ENCAP_ETH_DST[],goto_table:87
cookie=0xba5eba1100000201, duration=69.029s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=255,nsp=5 actions=load:0x1100->NXM_NX_REG6[],resubmit(,220)
cookie=0xba5eba1100000201, duration=69.029s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=254,nsp=5 actions=set_field:0->tun_id,output:1
cookie=0xba5eba1100000201, duration=68.999s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=254,nsp=8388613 actions=load:0x1100->NXM_NX_REG6[],resubmit(,220)
cookie=0xba5eba1100000203, duration=68.996s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=253,nsp=8388613 actions=pop_nsh,set_field:02:14:84:5e:a8:5d->eth_src,resubmit(,17)

An interesting scenario that shows the strength of the Logical SFF is the migration of an SF from one compute node to another. OpenDaylight learns the new topology by itself and then re-renders the flows on the newly affected SFFs.

Logical SFF - SF Migration Example

In our example, SF2 is moved from Node4 to Node2; OpenDaylight then removes the NSH-specific flows from Node4 and installs them in Node2. The flows below show this effect. Node3 keeps holding the first SF of NSP 8388613 and the last SF of NSP 5, but Node2 becomes the new holder of the first SF of NSP 5 and the last SF of NSP 8388613.

Rendered Flows Node 3 After Migration

cookie=0x14, duration=64.044s, table=83, n_packets=0, n_bytes=0, priority=250,nsp=5 actions=goto_table:86
cookie=0x14, duration=63.947s, table=83, n_packets=0, n_bytes=0, priority=250,nsp=8388613 actions=goto_table:86
cookie=0x14, duration=64.044s, table=86, n_packets=0, n_bytes=0, priority=550,nsi=254,nsp=5 actions=load:0x8e0a37cc9094->NXM_NX_ENCAP_ETH_SRC[],load:0x6ee006b4c51e->NXM_NX_ENCAP_ETH_DST[],goto_table:87
cookie=0x14, duration=63.947s, table=86, n_packets=0, n_bytes=0, priority=550,nsi=255,nsp=8388613 actions=load:0x8e0a37cc9094->NXM_NX_ENCAP_ETH_SRC[],load:0x6ee006b4c51e->NXM_NX_ENCAP_ETH_DST[],goto_table:87
cookie=0xba5eba1100000201, duration=64.034s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=254,nsp=5 actions=load:0x800->NXM_NX_REG6[],resubmit(,220)
cookie=0xba5eba1100000203, duration=64.034s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=253,nsp=5 actions=pop_nsh,set_field:6e:e0:06:b4:c5:1e->eth_src,resubmit(,17)
cookie=0xba5eba1100000201, duration=63.947s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=255,nsp=8388613 actions=load:0x800->NXM_NX_REG6[],resubmit(,220)
cookie=0xba5eba1100000201, duration=63.942s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=254,nsp=8388613 actions=set_field:0->tun_id,output:2

Rendered Flows Node 2 After Migration

cookie=0x14, duration=56.856s, table=83, n_packets=0, n_bytes=0, priority=250,nsp=5 actions=goto_table:86
cookie=0x14, duration=56.755s, table=83, n_packets=0, n_bytes=0, priority=250,nsp=8388613 actions=goto_table:86
cookie=0x14, duration=56.847s, table=86, n_packets=0, n_bytes=0, priority=550,nsi=255,nsp=5 actions=load:0xbea93873f4fa->NXM_NX_ENCAP_ETH_SRC[],load:0x214845ea85d->NXM_NX_ENCAP_ETH_DST[],goto_table:87
cookie=0x14, duration=56.755s, table=86, n_packets=0, n_bytes=0, priority=550,nsi=254,nsp=8388613 actions=load:0xbea93873f4fa->NXM_NX_ENCAP_ETH_SRC[],load:0x214845ea85d->NXM_NX_ENCAP_ETH_DST[],goto_table:87
cookie=0xba5eba1100000201, duration=56.823s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=255,nsp=5 actions=load:0x1100->NXM_NX_REG6[],resubmit(,220)
cookie=0xba5eba1100000201, duration=56.823s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=254,nsp=5 actions=set_field:0->tun_id,output:4
cookie=0xba5eba1100000201, duration=56.755s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=254,nsp=8388613 actions=load:0x1100->NXM_NX_REG6[],resubmit(,220)
cookie=0xba5eba1100000203, duration=56.750s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=253,nsp=8388613 actions=pop_nsh,set_field:02:14:84:5e:a8:5d->eth_src,resubmit(,17)

Rendered Flows Node 4 After Migration

-- No flows for NSH processing --
Classifier impacts

As previously mentioned in the Logical SFF rationale, the Logical SFF feature relies on Genius to get the dataplane IDs of the OpenFlow switches in order to properly steer the traffic through the chain.

Since one of the classifier’s objectives is to steer the packets into the SFC domain, the classifier has to be aware of where the first Service Function is located. If it migrates somewhere else, the classifier table has to be updated accordingly, thus enabling the seamless migration of Service Functions.

For this feature, mobility of the client VM is out of scope, and should be managed by its high-availability module, or VNF manager.

Keep in mind that classification always occurs in the compute node where the client VM (i.e. the traffic origin) is running.

How to attach the classifier to a Logical SFF

In order to leverage this functionality, the classifier has to be configured using a Logical SFF as an attachment-point, specifying within it the neutron port to classify.

The following examples show how to configure an ACL, and a classifier having a Logical SFF as an attachment-point:

Configure an ACL

The following ACL enables traffic intended for port 80 within the subnetwork 192.168.2.0/24, for RSP1 and RSP1-Reverse.

{
  "access-lists": {
    "acl": [
      {
        "acl-name": "ACL1",
        "acl-type": "ietf-access-control-list:ipv4-acl",
        "access-list-entries": {
          "ace": [
            {
              "rule-name": "ACE1",
              "actions": {
                "service-function-acl:rendered-service-path": "RSP1"
              },
              "matches": {
                "destination-ipv4-network": "192.168.2.0/24",
                "source-ipv4-network": "192.168.2.0/24",
                "protocol": "6",
                "source-port-range": {
                    "lower-port": 0
                },
                "destination-port-range": {
                    "lower-port": 80
                }
              }
            }
          ]
        }
      },
      {
        "acl-name": "ACL2",
        "acl-type": "ietf-access-control-list:ipv4-acl",
        "access-list-entries": {
          "ace": [
            {
              "rule-name": "ACE2",
              "actions": {
                "service-function-acl:rendered-service-path": "RSP1-Reverse"
              },
              "matches": {
                "destination-ipv4-network": "192.168.2.0/24",
                "source-ipv4-network": "192.168.2.0/24",
                "protocol": "6",
                "source-port-range": {
                    "lower-port": 80
                },
                "destination-port-range": {
                    "lower-port": 0
                }
              }
            }
          ]
        }
      }
    ]
  }
}
curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache"
--data '${JSON}' -X PUT --user
admin:admin http://localhost:8181/restconf/config/ietf-access-control-list:access-lists/

Configure a classifier JSON

The following JSON provisions a classifier that has a Logical SFF as an attachment point. The ‘interface’ field is where you indicate the neutron ports of the VMs you want to classify.

{
  "service-function-classifiers": {
    "service-function-classifier": [
      {
        "name": "Classifier1",
        "scl-service-function-forwarder": [
          {
            "name": "sfflogical1",
            "interface": "09a78ba3-78ba-40f5-a3ea-1ce708367f2b"
          }
        ],
        "acl": {
            "name": "ACL1",
            "type": "ietf-access-control-list:ipv4-acl"
         }
      }
    ]
  }
}
curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache"
--data '${JSON}' -X PUT --user
admin:admin http://localhost:8181/restconf/config/service-function-classifier:service-function-classifiers/
SFC pipeline impacts

After binding the SFC service with a particular interface by means of Genius, as explained in the Genius User Guide, the entry point into the SFC pipeline will be table 82 (SFC_TRANSPORT_CLASSIFIER_TABLE). From that point on, packet processing is similar to the SFC OpenFlow pipeline, just with another set of specific tables for the SFC service.
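
To verify that packets are actually entering the SFC pipeline on a given compute node, the entry table can be inspected; a minimal sketch, again assuming an integration bridge named br-int:

# Show the flows of the SFC entry point (table 82); non-zero packet counters
# indicate that traffic is being handed over to the SFC service.
sudo ovs-ofctl -O OpenFlow13 dump-flows br-int table=82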

This picture shows the SFC pipeline after service integration with Genius:

SFC Logical SFF OpenFlow pipeline

Directional data plane locators for symmetric paths
Overview

A symmetric path results from a Service Function Path with the symmetric field set, or when any of the constituent Service Functions is set as bidirectional. Such a path is defined by two Rendered Service Paths, where one of them steers the traffic through the same Service Functions as the other but in the opposite order. These two Rendered Service Paths are said to be symmetric to each other, which gives each path a sense of direction: the Rendered Service Path that corresponds to the same order of Service Functions as that defined on the Service Function Chain is tagged as the forward or up-link path, while the Rendered Service Path that corresponds to the opposite order is tagged as the reverse or down-link path.

Directional data plane locators allow the use of different interfaces or interface details between the Service Function Forwarder and the Service Function depending on the direction of the path for which they are being used. This function is relevant for Service Functions that would have no other way of discerning the direction of the traffic, such as legacy bump-in-the-wire network devices.

                    +-----------------------------------------------+
                    |                                               |
                    |                                               |
                    |                      SF                       |
                    |                                               |
                    |  sf-forward-dpl                sf-reverse-dpl |
                    +--------+-----------------------------+--------+
                             |                             |
                     ^       |      +              +       |      ^
                     |       |      |              |       |      |
                     |       |      |              |       |      |
                     +       |      +              +       |      +
                Forward Path | Reverse Path   Forward Path | Reverse Path
                     +       |      +              +       |      +
                     |       |      |              |       |      |
                     |       |      |              |       |      |
                     |       |      |              |       |      |
                     +       |      v              v       |      +
                             |                             |
                 +-----------+-----------------------------------------+
  Forward Path   |     sff-forward-dpl               sff-reverse-dpl   |   Forward Path
+--------------> |                                                     | +-------------->
                 |                                                     |
                 |                         SFF                         |
                 |                                                     |
<--------------+ |                                                     | <--------------+
  Reverse Path   |                                                     |   Reverse Path
                 +-----------------------------------------------------+

As shown in the previous figure, the forward path egress from the Service Function Forwarder towards the Service Function is defined by the sff-forward-dpl and sf-forward-dpl data plane locators. The forward path ingress from the Service Function to the Service Function Forwarder is defined by the sf-reverse-dpl and sff-reverse-dpl data plane locators. For the reverse path, it’s the opposite: the sff-reverse-dpl and sf-reverse-dpl define the egress from the Service Function Forwarder to the Service Function, and the sf-forward-dpl and sff-forward-dpl define the ingress into the Service Function Forwarder from the Service Function.

Note

Directional data plane locators are only supported in combination with the SFC OF Renderer at this time.

Configuration

Directional data plane locators are configured within the service-function-forwarder in the service-function-dictionary entity, which describes the association between a Service Function Forwarder and Service Functions:

service-function-forwarder.yang
     list service-function-dictionary {
         key "name";
         leaf name {
           type sfc-common:sf-name;
           description
               "The name of the service function.";
         }
         container sff-sf-data-plane-locator {
           description
             "SFF and SF data plane locators to use when sending
              packets from this SFF to the associated SF";
           leaf sf-dpl-name {
             type sfc-common:sf-data-plane-locator-name;
             description
               "The SF data plane locator to use when sending
                packets to the associated service function.
                Used both as forward and reverse locators for
                paths of a symmetric chain.";
           }
           leaf sff-dpl-name {
             type sfc-common:sff-data-plane-locator-name;
             description
               "The SFF data plane locator to use when sending
                packets to the associated service function.
                Used both as forward and reverse locators for
                paths of a symmetric chain.";
           }
           leaf sf-forward-dpl-name {
             type sfc-common:sf-data-plane-locator-name;
             description
               "The SF data plane locator to use when sending
                packets to the associated service function
                on the forward path of a symmetric chain";
           }
           leaf sf-reverse-dpl-name {
             type sfc-common:sf-data-plane-locator-name;
             description
               "The SF data plane locator to use when sending
                packets to the associated service function
                on the reverse path of a symmetric chain";
           }
           leaf sff-forward-dpl-name {
             type sfc-common:sff-data-plane-locator-name;
             description
               "The SFF data plane locator to use when sending
                packets to the associated service function
                on the forward path of a symmetric chain.";
           }
           leaf sff-reverse-dpl-name {
             type sfc-common:sff-data-plane-locator-name;
             description
               "The SFF data plane locator to use when sending
                packets to the associated service function
                on the reverse path of a symmetric chain.";
           }
         }
     }
Example

The following configuration example is based on the Logical SFF configuration example. Only the Service Function and Service Function Forwarder configurations change with respect to that example:

curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache"
--data '${JSON}' -X PUT --user
admin:admin http://localhost:8181/restconf/config/service-function:service-functions/

Service Functions JSON.

{
"service-functions": {
    "service-function": [
        {
            "name": "firewall-1",
            "type": "firewall",
            "sf-data-plane-locator": [
                {
                    "name": "sf-firewall-net-A-dpl",
                    "interface-name": "eccb57ae-5a2e-467f-823e-45d7bb2a6a9a",
                    "transport": "service-locator:mac",
                    "service-function-forwarder": "sfflogical1"

                },
                {
                    "name": "sf-firewall-net-B-dpl",
                    "interface-name": "7764b6f1-a5cd-46be-9201-78f917ddee1d",
                    "transport": "service-locator:mac",
                    "service-function-forwarder": "sfflogical1"

                }
            ]
        },
        {
            "name": "dpi-1",
            "type": "dpi",
            "sf-data-plane-locator": [
                {
                    "name": "sf-dpi-net-A-dpl",
                    "interface-name": "df15ac52-e8ef-4e9a-8340-ae0738aba0c0",
                    "transport": "service-locator:mac",
                    "service-function-forwarder": "sfflogical1"
                },
                {
                    "name": "sf-dpi-net-B-dpl",
                    "interface-name": "1bb09b01-422d-4ccf-8d7a-9ebf00d1a1a5",
                    "transport": "service-locator:mac",
                    "service-function-forwarder": "sfflogical1"
                }
            ]
        }
    ]
}
}
curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache"
--data '${JSON}' -X PUT --user
admin:admin http://localhost:8181/restconf/config/service-function-forwarder:service-function-forwarders/

Service Function Forwarders JSON.

{
"service-function-forwarders": {
    "service-function-forwarder": [
        {
            "name": "sfflogical1"
            "sff-data-plane-locator": [
                {
                    "name": "sff-firewall-net-A-dpl",
                    "data-plane-locator": {
                        "interface-name": "eccb57ae-5a2e-467f-823e-45d7bb2a6a9a",
                        "transport": "service-locator:mac"
                    }
                },
                {
                    "name": "sff-firewall-net-B-dpl",
                    "data-plane-locator": {
                        "interface-name": "7764b6f1-a5cd-46be-9201-78f917ddee1d",
                        "transport": "service-locator:mac"
                    }
                },
                {
                    "name": "sff-dpi-net-A-dpl",
                    "data-plane-locator": {
                        "interface-name": "df15ac52-e8ef-4e9a-8340-ae0738aba0c0",
                        "transport": "service-locator:mac"
                    }
                },
                {
                    "name": "sff-dpi-net-B-dpl",
                    "data-plane-locator": {
                        "interface-name": "1bb09b01-422d-4ccf-8d7a-9ebf00d1a1a5",
                        "transport": "service-locator:mac"
                    }
                }
            ],
            "service-function-dictionary": [
                {
                    "name": "firewall-1",
                    "sff-sf-data-plane-locator": {
                        "sf-forward-dpl-name": "sf-firewall-net-A-dpl",
                        "sf-reverse-dpl-name": "sf-firewall-net-B-dpl",
                        "sff-forward-dpl-name": "sff-firewall-net-A-dpl",
                        "sff-reverse-dpl-name": "sff-firewall-net-B-dpl",
                    }
                },
                {
                    "name": "dpi-1",
                    "sff-sf-data-plane-locator": {
                        "sf-forward-dpl-name": "sf-dpi-net-A-dpl",
                        "sf-reverse-dpl-name": "sf-dpi-net-B-dpl",
                        "sff-forward-dpl-name": "sff-dpi-net-A-dpl",
                        "sff-reverse-dpl-name": "sff-dpi-net-B-dpl",
                    }
                }
            ]
        }
    ]
}
}

In comparison with the Logical SFF example, notice that each Service Function is configured with two data plane locators instead of one, so that each can be used in a different direction of the path. To specify which locator is used in which direction, the Service Function Forwarder configuration is also more extensive than in the previous example.

When comparing this example with the Logical SFF one, note that the Service Function Forwarder is configured with data plane locators and that they hold the same interface name values as the corresponding Service Function interfaces. This is because in the Logical SFF particular case, a single logical interface fully describes an attachment of a Service Function Forwarder to a Service Function on both the Service Function and Service Function Forwarder sides. For non-Logical SFF scenarios, the data plane locators would be expected to have different values, as we have seen in other examples throughout this user guide. For example, if MAC addresses are specified in the locators, the Service Function would have a different MAC address than the Service Function Forwarder.

As a result of the overall configuration, two Rendered Service Paths are implemented. The forward path:

                      +------------+                +-------+
                      | firewall-1 |                | dpi-1 |
                      +---+---+----+                +--+--+-+
                          ^   |                        ^  |
                 net-A-dpl|   |net-B-dpl      net-A-dpl|  |net-B-dpl
                          |   |                        |  |
+----------+              |   |                        |  |             +----------+
| client A +--------------+   +------------------------+  +------------>+ server B |
+----------+                                                            +----------+

And the reverse path:

                      +------------+                +-------+
                      | firewall-1 |                | dpi-1 |
                      +---+---+----+                +--+--+-+
                          |   ^                        |  ^
                 net-A-dpl|   |net-B-dpl      net-A-dpl|  |net-B-dpl
                          |   |                        |  |
+----------+              |   |                        |  |             +----------+
| client A +<-------------+   +------------------------+  +-------------+ server B |
+----------+                                                            +----------+

Consider the following notes to put the example in context:

  • The classification function is omitted from the illustration.

  • The forward path is up-link traffic from a client in network A to a server in network B.

  • The reverse path is down-link traffic from a server in network B to a client in network A.

  • The service functions might be legacy bump-in-the-wire network devices that need to use different interfaces for each network.

SFC Statistics User Guide

Statistics can be queried for Rendered Service Paths created on OVS bridges. Future support will be added for Service Function Forwarders and Service Functions, as well as for VPP and IOS-XE devices.

To use SFC statistics, the ‘odl-sfc-statistics’ Karaf feature needs to be installed.
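
For example, from the Karaf console:

feature:install odl-sfc-statistics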

Statistics are queried by sending a RESTCONF RPC message to ODL. For RSPs, it is possible to query statistics either for one individual RSP or for all RSPs, as follows:

Querying statistics for a specific RSP:

curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache"
--data '{ "input": { "name" : "path1-Path-42" } }' -X POST --user admin:admin
http://localhost:8181/restconf/operations/sfc-statistics-operations:get-rsp-statistics

Querying statistics for all RSPs:

curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache"
--data '{ "input": { } }' -X POST --user admin:admin
http://localhost:8181/restconf/operations/sfc-statistics-operations:get-rsp-statistics

The following is the sort of output that can be expected for each RSP.

{
    "output": {
        "statistics": [
            {
                "name": "sfc-path-1sf1sff-Path-34",
                "statistic-by-timestamp": [
                    {
                        "service-statistic": {
                            "bytes-in": 0,
                            "bytes-out": 0,
                            "packets-in": 0,
                            "packets-out": 0
                        },
                        "timestamp": 1518561500480
                    }
                ]
            }
        ]
    }
}
SNMP4SDN User Guide
Overview

We propose a southbound plugin that can control off-the-shelf commodity Ethernet switches for the purpose of building an SDN using Ethernet switches. On Ethernet switches, the forwarding table, VLAN table, and ACL are where flow configuration can be installed, and the proposed plugin does this via SNMP and CLI. In addition, the plugin covers some settings required for Ethernet switches in an SDN, e.g., disabling STP and flooding.

SNMP4SDN as an OpenDaylight southbound plugin

Configuration

Just follow the steps:

Prepare the switch list database file

A sample is here, and we suggest saving it as /etc/snmp4sdn_swdb.csv so that the SNMP4SDN Plugin can automatically load this file. Note that the first line is the title and should not be removed.

Prepare the vendor-specific configuration file

A sample is here, and we suggest saving it as /etc/snmp4sdn_VendorSpecificSwitchConfig.xml so that the SNMP4SDN Plugin can automatically load this file.

Install SNMP4SDN Plugin

If using the SNMP4SDN Plugin provided in the OpenDaylight release, just do the following from the Karaf CLI:

feature:install odl-snmp4sdn-all
Troubleshooting
Installation Troubleshooting
Feature installation failure

When trying to install a feature, if the following failure occurs:

Error executing command: Could not start bundle ...
Reason: Missing Constraint: Require-Capability: osgi.ee; filter="(&(osgi.ee=JavaSE)(version=1.7))"

A workaround: exit Karaf, edit the file <karaf_directory>/etc/config.properties, remove the line ${services-${karaf.framework}} and the trailing ", " on the line above it.

Runtime Troubleshooting
Problem starting SNMP Trap Interface

It is possible to get the following exception during controller startup. (The error is not printed in the Karaf console; it can be found in <karaf_directory>/data/log/karaf.log.)

2014-01-31 15:00:44.688 CET [fileinstall-./plugins] WARN  o.o.snmp4sdn.internal.SNMPListener - Problem starting SNMP Trap Interface: {}
 java.net.BindException: Permission denied
        at java.net.PlainDatagramSocketImpl.bind0(Native Method) ~[na:1.7.0_51]
        at java.net.AbstractPlainDatagramSocketImpl.bind(AbstractPlainDatagramSocketImpl.java:95) ~[na:1.7.0_51]
        at java.net.DatagramSocket.bind(DatagramSocket.java:376) ~[na:1.7.0_51]
        at java.net.DatagramSocket.<init>(DatagramSocket.java:231) ~[na:1.7.0_51]
        at java.net.DatagramSocket.<init>(DatagramSocket.java:284) ~[na:1.7.0_51]
        at java.net.DatagramSocket.<init>(DatagramSocket.java:256) ~[na:1.7.0_51]
        at org.snmpj.SNMPTrapReceiverInterface.<init>(SNMPTrapReceiverInterface.java:126) ~[org.snmpj-1.4.3.jar:na]
        at org.snmpj.SNMPTrapReceiverInterface.<init>(SNMPTrapReceiverInterface.java:99) ~[org.snmpj-1.4.3.jar:na]
        at org.opendaylight.snmp4sdn.internal.SNMPListener.<init>(SNMPListener.java:75) ~[bundlefile:na]
        at org.opendaylight.snmp4sdn.core.internal.Controller.start(Controller.java:174) [bundlefile:na]
...

This indicates that the controller is being run as a user that does not have sufficient OS privileges to bind the SNMP trap port (162/UDP).
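
A possible remedy, sketched here for a Linux host and not part of the original guide, is to either run the controller with sufficient privileges or grant the Java executable used by Karaf the capability to bind privileged ports:

# Allow the resolved system Java binary to bind ports below 1024, such as 162/UDP.
# Note: this affects every Java process started from that binary; resolving the
# path via which/readlink is an assumption about the environment.
sudo setcap 'cap_net_bind_service=+ep' "$(readlink -f "$(which java)")"
# Restart the controller afterwards so the SNMP trap interface can bind the port.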

Switch list file missing

The SNMP4SDN Plugin needs a switch list file, which is necessary for topology discovery and should be provided by the administrator (so please prepare one before using the SNMP4SDN Plugin for the first time; here is the sample). The default file path is /etc/snmp4sdn_swdb.csv. The SNMP4SDN Plugin automatically loads this file and starts topology discovery. If the file is not there, a message like the following will appear:

2016-02-02 04:21:52,476 | INFO| Event Dispatcher | CmethUtil                        | 466 - org.opendaylight.snmp4sdn - 0.3.0.SNAPSHOT | CmethUtil.readDB() err: {}
java.io.FileNotFoundException: /etc/snmp4sdn_swdb.csv (No such file or directory)
    at java.io.FileInputStream.open0(Native Method)[:1.8.0_65]
    at java.io.FileInputStream.open(FileInputStream.java:195)[:1.8.0_65]
    at java.io.FileInputStream.<init>(FileInputStream.java:138)[:1.8.0_65]
    at java.io.FileInputStream.<init>(FileInputStream.java:93)[:1.8.0_65]
    at java.io.FileReader.<init>(FileReader.java:58)[:1.8.0_65]
    at org.opendaylight.snmp4sdn.internal.util.CmethUtil.readDB(CmethUtil.java:66)
    at org.opendaylight.snmp4sdn.internal.util.CmethUtil.<init>(CmethUtil.java:43)
...
Configuration

Just follow the steps:

1. Prepare the switch list database file

A sample is here, and we suggest saving it as /etc/snmp4sdn_swdb.csv so that the SNMP4SDN Plugin can automatically load this file.

Note

The first line is the title and should not be removed.

2. Prepare the vendor-specific configuration file

A sample is here, and we suggest saving it as /etc/snmp4sdn_VendorSpecificSwitchConfig.xml so that the SNMP4SDN Plugin can automatically load this file.

3. Install SNMP4SDN Plugin

If using the SNMP4SDN Plugin provided in the OpenDaylight release, just do the following:

Launch Karaf in Linux console:

cd <Boron_controller_directory>/bin
(for example, cd distribution-karaf-x.x.x-Boron/bin)
./karaf

Then in Karaf console, execute:

feature:install odl-snmp4sdn-all
4. Load switch list

For initialization, we need to feed the SNMP4SDN Plugin the switch list. The SNMP4SDN Plugin automatically tries to load the switch list from /etc/snmp4sdn_swdb.csv if it exists, in which case this step can be skipped. In the Karaf console, execute:

snmp4sdn:ReadDB <switch_list_path>
(For example, snmp4sdn:ReadDB /etc/snmp4sdn_swdb.csv)
(in Windows OS, For example, snmp4sdn:ReadDB D://snmp4sdn_swdb.csv)

A sample is here, and we suggest saving it as /etc/snmp4sdn_swdb.csv so that the SNMP4SDN Plugin can automatically load this file.

Note

The first line is the title and should not be removed.

5. Show switch list
snmp4sdn:PrintDB
Tutorial
Topology Service
Execute topology discovery

The SNMP4SDN Plugin automatically executes topology discovery on startup. One may use the following commands to invoke topology discovery manually. Note that you may need to wait a few seconds for it to complete.

Note

Currently, one needs to manually execute snmp4sdn:TopoDiscover first (just once); afterwards, automatic topology discovery can succeed. If the switches change (a switch is added or removed), snmp4sdn:TopoDiscover is also required. A future version will eliminate these requirements.

snmp4sdn:TopoDiscover

If one would like to discover all inventory (i.e. switches and their ports) but not edges, just execute “TopoDiscoverSwitches”:

snmp4sdn:TopoDiscoverSwitches

If one would like to discover only edges but not inventory, just execute “TopoDiscoverEdges”:

snmp4sdn:TopoDiscoverEdges

You can also trigger topology discovery via the REST API by using curl from the Linux console (or any other REST client):

curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/topology:rediscover

You can change the periodic topology discovery interval via a REST API:

curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/topology:set-discovery-interval -d "{"input":{"interval-second":'<interval_time>'}}"
For example, set the interval as 15 seconds:
curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/topology:set-discovery-interval -d "{"input":{"interval-second":'15'}}"
Show the topology

The SNMP4SDN Plugin supports showing the topology via the REST API:

  • Get topology

    curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/topology:get-edge-list
    
  • Get switch list

    curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/topology:get-node-list
    
  • Get switches’ ports list

    curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/topology:get-node-connector-list
    
  • The three commands above only retrieve the latest topology discovery result; they do not trigger the SNMP4SDN Plugin to perform topology discovery.

  • To trigger the SNMP4SDN Plugin to perform topology discovery, follow the steps described in the aforementioned Execute topology discovery section.

Flow configuration
FDB configuration

SNMP4SDN supports adding entries to the FDB table via the REST API:

  • Get FDB table

    curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/fdb:get-fdb-table -d "{input:{"node-id":<switch-mac-address-in-number>}}"
    
    For example:
    curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/fdb:get-fdb-table -d "{input:{"node-id":158969157063648}}"
    
  • Get FDB table entry

    curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/fdb:get-fdb-entry -d "{input:{"node-id":<switch-mac-address-in-number>, "vlan-id":<vlan-id-in-number>, "dest-mac-addr":<destination-mac-address-in-number>}}"
    
    For example:
    curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/fdb:get-fdb-entry -d "{input:{"node-id":158969157063648, "vlan-id":1, "dest-mac-addr":158969157063648}}"
    
  • Set FDB table entry

    (Note the invalid values: (1) a non-unicast MAC address, (2) a port not in the VLAN)

    curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/fdb:set-fdb-entry -d "{input:{"node-id":<switch-mac-address-in-number>, "vlan-id":<vlan-id-in-number>, "dest-mac-addr":<destination-mac-address-in-number>, "port":<port-in-number>, "type":'<type>'}}"
    
    For example:
    curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/fdb:set-fdb-entry -d "{input:{"node-id":158969157063648, "vlan-id":1, "dest-mac-addr":187649984473770, "port":23, "type":'MGMT'}}"
    
  • Delete FDB table entry

    curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/fdb:del-fdb-entry -d "{input:{"node-id":<switch-mac-address-in-number>, "vlan-id":<vlan-id-in-number>, "dest-mac-addr":<destination-mac-address-in-number>}}"
    
    For example:
    curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/fdb:del-fdb-entry -d "{input:{"node-id":158969157063648, "vlan-id":1, "dest-mac-addr":187649984473770}}"
    
VLAN configuration

SNMP4SDN supports adding entries to the VLAN table via the REST API:

  • Get VLAN table

    curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/vlan:get-vlan-table -d "{input:{node-id:<switch-mac-address-in-number>}}"
    
    For example:
    curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vlan:get-vlan-table -d "{input:{node-id:158969157063648}}"
    
  • Add VLAN

    curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/vlan:add-vlan -d "{"input":{"node-id":<switch-mac-address-in-number>, "vlan-id":<vlan-id-in-number>, "vlan-name":'<vlan-name>'}}"
    
    For example:
    curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vlan:add-vlan -d "{"input":{"node-id":158969157063648, "vlan-id":123, "vlan-name":'v123'}}"
    
  • Delete VLAN

    curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/vlan:delete-vlan -d "{"input":{"node-id":<switch-mac-address-in-number>, "vlan-id":<vlan-id-in-number>}}"
    
    For example:
    curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vlan:delete-vlan -d "{"input":{"node-id":158969157063648, "vlan-id":123}}"
    
  • Add VLAN and set ports

    curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/vlan:add-vlan-and-set-ports -d "{"input":{"node-id":<switch-mac-address-in-number>, "vlan-id":<vlan-id-in-number>, "vlan-name":'<vlan-name>', "tagged-port-list":'<tagged-ports-separated-by-comma>', "untagged-port-list":'<untagged-ports-separated-by-comma>'}}"
    
    For example:
    curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vlan:add-vlan-and-set-ports -d "{"input":{"node-id":158969157063648, "vlan-id":123, "vlan-name":'v123', "tagged-port-list":'1,2,3', "untagged-port-list":'4,5,6'}}"
    
  • Set VLAN ports

    curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/vlan:set-vlan-ports -d "{"input":{"node-id":<switch-mac-address-in-number>, "vlan-id":<vlan-id-in-number>, "tagged-port-list":'<tagged-ports-separated-by-comma>', "untagged-port-list":'<untagged-ports-separated-by-comma>'}}"
    
    For example:
    curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vlan:set-vlan-ports -d "{"input":{"node-id":"158969157063648", "vlan-id":"123", "tagged-port-list":'4,5', "untagged-port-list":'2,3'}}"
    
ACL configuration

SNMP4SDN supports adding flows to the ACL table via the REST API. However, this is so far only implemented for the D-Link DGS-3120 switch.

ACL configuration via CLI is vendor-specific, and SNMP4SDN will support configuration with vendor-specific CLI in a future release.

To do ACL configuration using the REST APIs, use commands like the following:

  • Clear ACL table

    curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/acl:clear-acl-table -d "{"input":{"nodeId":<switch-mac-address-in-number>}}"
    
    For example:
    curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/acl:clear-acl-table -d "{"input":{"nodeId":158969157063648}}"
    
  • Create ACL profile (IP layer)

    curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/acl:create-acl-profile -d "{input:{"nodeId":<switch-mac-address-in-number>,"profile-id":<profile_id_in_number>,"profile-name":'<profile_name>',"acl-layer":'IP',"vlan-mask":<vlan_mask_in_number>,"src-ip-mask":'<src_ip_mask>',"dst-ip-mask":"<destination_ip_mask>"}}"
    
    For example:
    curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/acl:create-acl-profile -d "{input:{"nodeId":158969157063648,"profile-id":1,"profile-name":'profile_1',"acl-layer":'IP',"vlan-mask":1,"src-ip-mask":'255.255.0.0',"dst-ip-mask":'255.255.255.255'}}"
    
  • Create ACL profile (MAC layer)

    curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/acl:create-acl-profile -d "{input:{"nodeId":<switch-mac-address-in-number>,"profile-id":<profile_id_in_number>,"profile-name":'<profile_name>',"acl-layer":'ETHERNET',"vlan-mask":<vlan_mask_in_number>}}"
    
    For example:
    curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/acl:create-acl-profile -d "{input:{"nodeId":158969157063648,"profile-id":2,"profile-name":'profile_2',"acl-layer":'ETHERNET',"vlan-mask":4095}}"
    
  • Delete ACL profile

    curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/acl:del-acl-profile -d "{input:{"nodeId":<switch-mac-address-in-number>,"profile-id":<profile_id_in_number>}}"
    
    For example:
    curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/acl:del-acl-profile -d "{input:{"nodeId":158969157063648,"profile-id":1}}"
    
    curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/acl:del-acl-profile -d "{input:{"nodeId":<switch-mac-address-in-number>,"profile-name":"<profile_name>"}}"
    
    For example:
    curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/acl:del-acl-profile -d "{input:{"nodeId":158969157063648,"profile-name":'profile_2'}}"
    
  • Set ACL rule

    curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/acl:set-acl-rule -d "{input:{"nodeId":<switch-mac-address-in-number>,"profile-id":<profile_id_in_number>,"profile-name":'<profile_name>',"rule-id":<rule_id_in_number>,"port-list":[<port_number>,<port_number>,...],"acl-layer":'<acl_layer>',"vlan-id":<vlan_id_in_number>,"src-ip":"<src_ip_address>","dst-ip":'<dst_ip_address>',"acl-action":'<acl_action>'}}"
    (<acl_layer>: IP or ETHERNET)
    (<acl_action>: PERMIT as permit, DENY as deny)
    
    For example:
    curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/acl:set-acl-rule -d "{input:{"nodeId":158969157063648,"profile-id":1,"profile-name":'profile_1',"rule-id":1,"port-list":[1,2,3],"acl-layer":'IP',"vlan-id":2,"src-ip":'1.1.1.1',"dst-ip":'2.2.2.2',"acl-action":'PERMIT'}}"
    
  • Delete ACL rule

    curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/acl:del-acl-rule -d "{input:{"nodeId":<switch-mac-address-in-number>,"profile-id":<profile_id_in_number>,"profile-name":'<profile_name>',"rule-id":<rule_id_in_number>}}"
    
    For example:
    curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/acl:del-acl-rule -d "{input:{"nodeId":158969157063648,"profile-id":1,"profile-name":'profile_1',"rule-id":1}}"
    
Special configuration

SNMP4SDN supports setting the following special configurations via the REST API:

  • Set STP port state

    curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/config:set-stp-port-state -d "{input:{"node-id":<switch-mac-address-in-number>, "port":<port_number>, enable:<true_or_false>}}"
    (true: enable, false: disable)
    
    For example:
    curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/config:set-stp-port-state -d "{input:{"node-id":158969157063648, "port":2, enable:false}}"
    
  • Get STP port state

    curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/config:get-stp-port-state -d "{input:{"node-id":<switch-mac-address-in-number>, "port":<port_number>}}"
    
    For example:
    curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/config:get-stp-port-state -d "{input:{"node-id":158969157063648, "port":2}}"
    
  • Get STP port root

    curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/config:get-stp-port-root -d "{input:{"node-id":<switch-mac-address-in-number>, "port":<port_number>}}"
    
    For example:
    curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/config:get-stp-port-root -d "{input:{"node-id":158969157063648, "port":2}}"
    
  • Enable STP

    curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/config:enable-stp -d "{input:{"node-id":<switch-mac-address-in-number>}}"
    
    For example:
    curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/config:enable-stp -d "{input:{"node-id":158969157063648}}"
    
  • Disable STP

    curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/config:disable-stp -d "{input:{"node-id":<switch-mac-address-in-number>}}"
    
    For example:
    curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/config:disable-stp -d "{input:{"node-id":158969157063648}}"
    
  • Get ARP table

    curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/config:get-arp-table -d "{input:{"node-id":<switch-mac-address-in-number>}}"
    
    For example:
    curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/config:get-arp-table -d "{input:{"node-id":158969157063648}}"
    
  • Set ARP entry

    (Note: provide the IP address with its subnet prefix)

    curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/config:set-arp-entry -d "{input:{"node-id":<switch-mac-address-in-number>, "ip-address":'<ip_address>', "mac-address":<mac_address_in_number>}}"
    
    For example:
    curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/config:set-arp-entry -d "{input:{"node-id":158969157063648, "ip-address":'10.217.9.9', "mac-address":1}}"
    
  • Get ARP entry

    curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/config:get-arp-entry -d "{input:{"node-id":<switch-mac-address-in-number>, "ip-address":'<ip_address>'}}"
    
    For example:
    curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/config:get-arp-entry -d "{input:{"node-id":158969157063648, "ip-address":'10.217.9.9'}}"
    
  • Delete ARP entry

    curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/config:delete-arp-entry -d "{input:{"node-id":<switch-mac-address-in-number>, "ip-address":'<ip_address>'}}"
    
    For example:
    curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/config:delete-arp-entry -d "{input:{"node-id":158969157063648, "ip-address":'10.217.9.9'}}"
    
Using Postman to invoke REST API

Besides using the curl tool to invoke the REST API, as in the aforementioned examples, one can also use a GUI tool such as Postman for better data display.

Example: Get VLAN table using Postman

As shown in the screenshot below, one needs to fill in the required fields.

URL:
http://<controller_ip_address>:8181/restconf/operations/vlan:get-vlan-table

Accept header:
application/json

Content-type:
application/json

Body:
{input:{"node-id":<node_id>}}
for example:
{input:{"node-id":158969157063648}}
Example: Get VLAN table using Postman

Multi-vendor support

The vendor-specific configurations supported so far are:

  • Add VLAN and set ports

  • (More functions are TBD)

The SNMP4SDN Plugin examines whether the configuration is described in the vendor-specific configuration file. If it is, that configuration description is adopted; otherwise the default configuration is used. For example, adding a VLAN and setting its ports is supported via the standard SNMP MIB. However, there are special cases: for example, a certain Accton switch requires the VLAN to be added first before the ports can be set. Such behavior can be described in the vendor-specific configuration file.

A vendor-specific configuration file sample is here, and we suggest saving it as /etc/snmp4sdn_VendorSpecificSwitchConfig.xml so that the SNMP4SDN Plugin can automatically load it.

Help
Unified Secure Channel

This document describes how to use the Unified Secure Channel (USC) feature in OpenDaylight. This document contains configuration, administration, and management sections for the feature.

Overview

In enterprise networks, more and more controller and network management systems are being deployed remotely, such as in the cloud. Additionally, enterprise networks are becoming more heterogeneous - branch, IoT, wireless (including cloud access control). Enterprise customers want a converged network controller and management system solution. This feature is intended for device and network administrators looking to use unified secure channels for their systems.

USC Channel Architecture
  • USC Agent

    • The USC Agent provides proxy and agent functionality on top of all standard protocols supported by the device. It initiates call-home with the controller, maintains live connections with the controller, acts as a demuxer/muxer for packets with the USC header, and authenticates the controller.

  • USC Plugin

    • The USC Plugin is responsible for communication between the controller and the USC agent. It responds to call-home with the controller, maintains live connections with the devices, acts as a muxer/demuxer for packets with the USC header, and provides support for TLS/DTLS.

  • USC Manager

    • The USC Manager handles configurations, high availability, security, monitoring, and clustering support for USC.

Installing USC Channel

To install USC, download OpenDaylight and use the Karaf console to install the following feature:

odl-usc-channel-ui
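
That is, from the Karaf console:

feature:install odl-usc-channel-ui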

Configuring USC Channel

This section gives details about the configuration settings for various components in USC.

The USC configuration files for the Karaf distribution are located in distribution/karaf/target/assembly/etc/usc

  • certificates

    • The certificates folder contains the client key, pem, and rootca files as is necessary for security.

  • akka.conf

    • This file contains configuration related to clustering. Potential configuration properties can be found on the akka website at http://doc.akka.io

  • usc.properties

    • This file contains configuration related to USC. Use this file to set the location of certificates, define the source of additional akka configurations, and assign default settings to the USC behavior.

Administering or Managing USC Channel

After installing the odl-usc-channel-ui feature from the Karaf console, users can administer and manage USC channels from the UI or APIDOCS explorer.

Go to http://${ipaddress}:8181/index.html, sign in, and click on the USC side menu tab. From there, users can view the state of USC channels.

Go to http://${ipaddress}:8181/apidoc/explorer/index.html, sign in, and expand the usc-channel panel. From there, users can execute various API calls to test their USC deployment such as add-channel, delete-channel, and view-channel.
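
For scripted testing, the same RPCs can also be invoked with curl; a minimal sketch, assuming the add-channel RPC is exposed under the usc-channel module (the payload matches the tutorial below):

curl -i -H "Content-Type: application/json" -X POST --user admin:admin \
  -d '{"input":{"channel":{"hostname":"127.0.0.1","port":1068,"remote":false}}}' \
  http://${ipaddress}:8181/restconf/operations/usc-channel:add-channel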

Tutorials

Below are tutorials for USC Channel

Viewing USC Channel

The purpose of this tutorial is to view USC Channel

Overview

This tutorial walks users through the process of viewing the USC Channel environment topology including established channels connecting the controllers and devices in the USC topology.

Prerequisites

For this tutorial, we assume that a device running a USC agent is already installed.

Instructions
  • Run the OpenDaylight distribution and install odl-usc-channel-ui from the Karaf console.

  • Go to http://${ipaddress}:8181/apidoc/explorer/index.html

  • Execute add-channel with the following json data:

    • {"input":{"channel":{"hostname":"127.0.0.1","port":1068,"remote":false}}}

  • Go to http://${ipaddress}:8181/index.html

  • Click on the USC side menu tab.

  • The UI should display a table including the added channel from step 3.

Developer Guide

Overview

Integrating Animal Sniffer with OpenDaylight projects

This section provides the information required to set up OpenDaylight projects with Maven's Animal Sniffer plugin for testing API compatibility with OpenJDK.

Steps to set up the Animal Sniffer plugin with your project
  1. Clone odlparent and check out the required branch. The example below uses the branch ‘origin/master/2.0.x’.

git clone https://git.opendaylight.org/gerrit/odlparent
cd odlparent
git checkout origin/master/2.0.x
  2. Modify the file odlparent/pom.xml to install the Animal Sniffer plugin as shown in the example below, or refer to the odlparent gerrit patch.

<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>animal-sniffer-maven-plugin</artifactId>
  <version>1.16</version>
  <configuration>
      <signature>
          <groupId>org.codehaus.mojo.signature</groupId>
          <artifactId>java18</artifactId>
          <version>1.0</version>
      </signature>
  </configuration>
  <executions>
      <execution>
          <id>animal-sniffer</id>
          <phase>verify</phase>
          <goals>
              <goal>check</goal>
          </goals>
      </execution>
      <execution>
          <id>check-java-version</id>
          <phase>package</phase>
          <goals>
              <goal>build</goal>
          </goals>
          <configuration>
            <signature>
              <groupId>org.codehaus.mojo.signature</groupId>
              <artifactId>java18</artifactId>
              <version>1.0</version>
            </signature>
          </configuration>
      </execution>
  </executions>
</plugin>
  3. Run mvn clean install in odlparent.

mvn clean install
  4. Clone the respective project to be tested with the plugin. As shown in the example in the yangtools gerrit patch, modify the relevant pom.xml files to reference the version of odlparent which is checked out. As shown in the example below, change the version to 2.0.6-SNAPSHOT, or to whichever 2.0.x-SNAPSHOT version of odlparent is checked out.

<parent>
    <groupId>org.opendaylight.odlparent</groupId>
    <artifactId>odlparent</artifactId>
    <version>2.0.6-SNAPSHOT</version>
    <relativePath/>
</parent>
  5. Run mvn clean install in your project.

mvn clean install
  6. Run mvn animal-sniffer:check on your project and fix any relevant issues.

mvn animal-sniffer:check

Project-specific Developer Guides

Distribution Version reporting
Overview

This section provides an overview of odl-distribution-version feature.

A remote user of OpenDaylight usually has access to the RESTCONF and NETCONF northbound interfaces, but does not have access to the system OpenDaylight is running on. OpenDaylight has released multiple versions, including Service Releases, and there are incompatible changes between them. In order to know which YANG modules to use, which bugs to expect and which workarounds to apply, such a user would need to know the exact version of at least one OpenDaylight component.

There are indirect ways to deduce such a version, but the direct way is enabled by the odl-distribution-version feature. An administrator can specify version strings, which are then available to users via NETCONF, or via RESTCONF if OpenDaylight is configured to initiate a NETCONF connection to its config subsystem northbound interface.

By default, users have write access to the config subsystem, so they can add, modify or delete any version strings present there. Administrators can only influence whether the feature is installed, and what the initial values are.

The config subsystem is local only, not cluster aware, so each member reports versions independently. That is suitable for heterogeneous clusters. On homogeneous clusters, make sure you set and check the version strings on every member.

Key APIs and Interfaces

The current implementation relies heavily on the config-parent parent POM file from the Controller project.

YANG model for config subsystem

Throughout this chapter, model denotes a YANG module, and module denotes an item in the config subsystem module list.

Version functionality relies on the config subsystem and its config YANG model. The YANG model odl-distribution-version adds an identity odl-version and augments /config:modules/module/configuration, adding a new case for the odl-version type. This case contains a single leaf, version, which holds the version string.

The config subsystem can hold multiple modules; the version string should contain the version of the OpenDaylight component corresponding to the module name. As this is pure metadata with no consequence on OpenDaylight behavior, there is no prescribed scheme for choosing config module names, but see the default configuration file for examples.

Java API

Each config module needs to come with Java classes which override customValidation() and createInstance(). Version-related modules have no impact on OpenDaylight internal behavior, so the methods return void and a dummy closeable respectively, without any side effects.
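
As a minimal sketch (not the actual generated class), the two overridden methods described above typically end up looking like this in a version config module; everything beyond the method names is an assumption for illustration:

protected void customValidation() {
    // version strings are pure metadata; nothing to validate
}

public AutoCloseable createInstance() {
    // no runtime behavior; return a dummy closeable with no side effects
    return () -> {
        // nothing to close
    };
}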

Default config file

Initial version values are set via the config file odl-version.xml, which is created in $KARAF_HOME/etc/opendaylight/karaf/ upon installation of the odl-distribution-version feature. If an administrator wants to use different content, a file with the desired content has to be created there before the feature installation happens.

By default, the config file defines two config modules, named odl-distribution-version and odl-odlparent-version.

Currently the default version values are set to Maven property strings (as opposed to valid values), as the needed new functionality did not make it into the Controller project in Boron. See Bug 6003.

Karaf Feature

The odl-distribution-version feature is currently the only feature defined in the feature repository with artifactId features-distribution, which is available (transitively) in the OpenDaylight Karaf distribution.

RESTCONF usage

The OpenDaylight config subsystem NETCONF northbound is not made available just by installing odl-distribution-version, but most other feature installations would enable it. RESTCONF interfaces are enabled by installing the odl-restconf feature, but that does not allow access to the config subsystem by itself.

On single node deployments, installation of odl-netconf-connector-ssh is recommended; it configures the controller-config device and its MD-SAL mount point. See the clustering documentation on how to create similar devices for member nodes, as the controller-config name is not unique in that context.

Assuming a single node deployment and a user located on the same system, here is an example curl command accessing the odl-odlparent-version config module:

curl 127.0.0.1:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/controller-config/yang-ext:mount/config:modules/module/odl-distribution-version:odl-version/odl-odlparent-version
Distribution features
Overview

This section provides an overview of odl-integration-compatible-with-all and odl-integration-all features.

Integration/Distribution project produces a Karaf 4 distribution which gives users access to many Karaf features provided by upstream OpenDaylight projects. Users are free to install arbitrary subset of those features, but not every feature combination is expected to work properly.

Some features are pro-active, which means that OpenDaylight, in contact with other network elements, starts driving changes in the network even without prompting by users, in order to satisfy the initial conditions their use case expects. Such activity from one feature may in turn affect the behavior of another feature.

In some cases, there exist features which offer different implementations of the same service; they may fail to initialize properly (e.g. failing to bind a port already bound by the other feature).

The Integration/Test project maintains system test (CSIT) jobs. Aside from testing scenarios with only a minimal set of features installed (-only- jobs), the scenarios are also tested with a large set of features installed (-all- jobs).

In order to define a proper set of features to test with, Integration/Distribution project defines two “aggregate” features. Note that these features are not intended for production use, so the feature repository which defines them is not enabled by default.

The content of these features is determined by upstream OpenDaylight contributions, with Integration/Test providing insight on observed compatibility relations. The Integration/Distribution team is focused only on making sure the build process is reliable.

Feature repositories
features-index

This feature repository is enabled by default. It does not refer to any new features directly, instead it refers to upstream feature repositories, enabling any feature contained therein to be available for installation.

features-test

This feature repository defines the two aggregate features. To enable this repository, change the featuresRepositories line of the org.apache.karaf.features.cfg file, by copy-pasting the features-index value and editing the name.
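
A sketch of what the edited line might look like in etc/org.apache.karaf.features.cfg; the Maven coordinates and <version> below are placeholders, so copy the exact groupId and version from the existing features-index entry in your distribution and only change the artifactId:

featuresRepositories = \
    mvn:org.opendaylight.integration/features-index/<version>/xml/features, \
    mvn:org.opendaylight.integration/features-test/<version>/xml/features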

Karaf features

The two aggregate features define sets of user-facing features, based on compatibility requirements. Note that if the compatibility relation differs between single node and cluster deployments, the single node point of view takes precedence.

odl-integration-all

This feature contains the largest set of user-facing features which may affect each other's operation, but the set does not affect the usability of the Karaf infrastructure.

Note that port binding conflicts and the “server is unhealthy” status of the config subsystem are considered to affect usability, as is a failure of RESTCONF to respond to a GET on /restconf/modules with HTTP status 200.
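
A quick way to perform the RESTCONF health check mentioned above is a curl call like the following sketch (the default credentials and port are assumptions):

curl -u admin:admin -o /dev/null -s -w "%{http_code}\n" http://127.0.0.1:8181/restconf/modules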

This feature is used in verification process for Integration/Distribution contributions.

odl-integration-compatible-with-all

This feature contains the largest set of user-facing features which are not pro-active and do not affect each other's operation.

Installing this set together with just one of the odl-integration-all features should still result in a fully operational installation, as a single pro-active feature should not lead to any conflicts. This should also hold if the single added feature is outside odl-integration-all, for example if it is one of the conflicting implementations (and no such implementation is in odl-integration-all).

This feature is used in the aforementioned -all- CSIT jobs.

NEtwork MOdeling (NEMO)
Overview

The NEMO engine provides REST APIs to express and manage intent. With this northbound API, users can query which intents have been handled successfully, and which types have been predefined.

NEMO Architecture

The NEMO project provides three developer-facing features:

  • odl-nemo-engine: the core engine that handles intent.

  • odl-nemo-openflow-renderer: a southbound renderer that translates intent into flow tables on devices supporting the OpenFlow protocol.

  • odl-nemo-cli-render: a southbound renderer that translates intent into forwarding tables on devices managed via traditional protocols.

Key APIs and Interfaces

NEMO provides four basic REST methods:

  • PUT: stores the information expressed in the NEMO model directly, without it being handled by the NEMO engine.

  • POST: the information expressed in the NEMO model is handled by the NEMO engine and translated into southbound configuration.

  • GET: obtains the data stored in the data store.

  • DELETE: deletes the data in the data store.

NEMO Intent API

NEMO provides several RPCs to handle a user’s intent. All RPCs use the POST method. A curl sketch for the first RPC follows this list.

  • http://{controller-ip}:8181/restconf/operations/nemo-intent:register-user: a REST API to register a new user. It is the first and necessary step to express intent.

  • http://{controller-ip}:8181/restconf/operations/nemo-intent:transaction-begin: a REST API to begin a transaction. Intents within the transaction are handled together.

  • http://{controller-ip}:8181/restconf/operations/nemo-intent:transaction-end: a REST API to end a transaction. Intents within the transaction are handled together.

  • http://{controller-ip}:8181/restconf/operations/nemo-intent:structure-style-nemo-update: a REST API to create, import or update intent in a structure style, that is, the user expresses the structure of the intent in the JSON body.

  • http://{controller-ip}:8181/restconf/operations/nemo-intent:structure-style-nemo-delete: a REST API to delete intent in a structure style.

  • http://{controller-ip}:8181/restconf/operations/nemo-intent:language-style-nemo-request: a REST API to create, import, update and delete intent in a language style, that is, the user expresses intent with a NEMO script. With this interface, the user can also query which intents have been handled successfully.
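
For illustration, the register-user RPC could be invoked as sketched below; the credentials, the controller IP placeholder and the contents of register-user.json are assumptions and must match your deployment and the NEMO user model:

curl -u admin:admin -X POST -H "Content-Type: application/json" \
  -d @register-user.json \
  http://{controller-ip}:8181/restconf/operations/nemo-intent:register-user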

API Reference Documentation

Go to http://${IPADDRESS}:8181/apidoc/explorer/index.html. There, users can see many useful APIs to deploy or query intent.

Neutron Service Developer Guide
Overview

This Karaf feature (odl-neutron-service) provides integration support for OpenStack Neutron via the OpenDaylight ML2 mechanism driver. The Neutron Service is only one of the components necessary for OpenStack integration. It defines YANG models for the OpenStack Neutron data models and a northbound API via REST and YANG-modeled RESTCONF.

Developers who want to add a new provider for new OpenStack Neutron extensions/services (Neutron constantly adds new extensions/services, and OpenDaylight keeps up with them) need to communicate with this Neutron Service or add models to it. If you want to add new extensions/services themselves to the Neutron Service, new YANG data models need to be added; that is out of scope of this document, because this guide is for developers who will use the feature to build something separate, not for those who will develop code for the feature itself.

Neutron Service Architecture

The Neutron Service defines YANG models for OpenStack Neutron integration. When OpenStack admins/users request changes (creation/update/deletion) of Neutron resources, e.g. a Neutron network, Neutron subnet or Neutron port, the corresponding YANG model within OpenDaylight will be modified. The OpenDaylight OpenStack provider subscribes to changes on those models and is notified of those modifications through MD-SAL when changes are made. The provider then performs the necessary tasks to realize OpenStack integration. How to realize it (or even whether to realize it) is up to each provider; the Neutron Service itself does not take care of it.

How to Write a SB Neutron Consumer

In Boron, there is only one option for SB Neutron Consumers:

  • Listening for changes via the Neutron YANG model

Until Beryllium there was another way, using the legacy I*Aware interface. As of Boron, that interface has been eliminated, so all SB Neutron Consumers have to use the Neutron YANG model.

Neutron YANG models

Neutron service defines YANG models for Neutron. The details can be found at

Basically those models are based on the OpenStack Neutron API definitions. For exact definitions, the OpenStack Neutron source code needs to be consulted, as the above documentation doesn’t always cover the necessary details. There is nothing special about utilizing those Neutron YANG models. The basic procedure is:

  1. subscribe for changes made to the model

  2. respond to the data change notification for each model

Note

Currently there is no way to refuse the requested configuration at this point. That is left to future work.

public class NeutronNetworkChangeListener implements DataChangeListener, AutoCloseable {
    // SLF4J logger used below (declaration added for completeness; imports are omitted in this example)
    private static final Logger LOG = LoggerFactory.getLogger(NeutronNetworkChangeListener.class);
    private ListenerRegistration<DataChangeListener> registration;
    private DataBroker db;

    public NeutronNetworkChangeListener(DataBroker db){
        this.db = db;
        // create identity path to register on service startup
        InstanceIdentifier<Network> path = InstanceIdentifier
                .create(Neutron.class)
                .child(Networks.class)
                .child(Network.class);
        LOG.debug("Register listener for Neutron Network model data changes");
        // register for Data Change Notification
        registration =
                this.db.registerDataChangeListener(LogicalDatastoreType.CONFIGURATION, path, this, DataChangeScope.ONE);

    }

    @Override
    public void onDataChanged(
            AsyncDataChangeEvent<InstanceIdentifier<?>, DataObject> changes) {
        LOG.trace("Data changes : {}",changes);

        // handle data change notification by dispatching to the registered subscribers;
        // createNetwork/updateNetwork/deleteNetwork are this provider's own helper methods (not shown here)
        Object[] subscribers = NeutronIAwareUtil.getInstances(INeutronNetworkAware.class, this);
        createNetwork(changes, subscribers);
        updateNetwork(changes, subscribers);
        deleteNetwork(changes, subscribers);
    }
}
Neutron configuration

Starting with Boron, there are new configuration models for OpenDaylight to tell OpenStack neutron/networking-odl its configuration and capabilities.

hostconfig

This is for OpenDaylight to tell per-node configuration to Neutron. In particular, it is used heavily by pseudo agent port binding.

The model definition can be found at

How to populate this for pseudo agent port binding is documented at

Neutron extension config

In Boron this is experimental. The model definition can be found at

Each Neutron Service provider has its own feature set. Some support the full features of OpenStack, while others support only a subset. Even with the same supported Neutron API, some functionality may or may not be supported. So there needs to be a way for OpenDaylight to tell networking-odl its capability, so that networking-odl can initialize Neutron properly based on the reported capability.

Neutron Logger

There is another small Karaf feature, odl-neutron-logger, which logs changes of the Neutron YANG models. It can be used for debugging and auditing.

Its code is also a useful example of how to listen for those changes.

Neutron Northbound
How to add new API support

OpenStack Neutron is a moving target. It continuously adds new features as new REST APIs. Here are the basic steps to add support for a new API:

In the Neutron Northbound project:

  • Add new YANG model for it under neutron/model/src/main/yang and update neutron.yang

  • Add northbound API for it, and neutron-spi

    • Implement Neutron<New API>Request.java and Neutron<New API>Northbound.java under neutron/northbound-api/src/main/java/org/opendaylight/neutron/northbound/api/

    • Implement INeutron<New API>CRUD.java and new data structure if any under neutron/neutron-spi/src/main/java/org/opendaylight/neutron/spi/

    • update neutron/neutron-spi/src/main/java/org/opendaylight/neutron/spi/NeutronCRUDInterfaces.java to wire new CRUD interface

    • Add unit tests, Neutron<New structure>JAXBTest.java under neutron/neutron-spi/src/test/java/org/opendaylight/neutron/spi/

  • update neutron/northbound-api/src/main/java/org/opendaylight/neutron/northbound/api/NeutronNorthboundRSApplication.java to wire new northbound api to RSApplication

  • Add transcriber, Neutron<New API>Interface.java under transcriber/src/main/java/org/opendaylight/neutron/transcriber/

  • update transcriber/src/main/java/org/opendaylight/neutron/transcriber/NeutronTranscriberProvider.java to wire a new transcriber

    • Add integration tests Neutron<New API>Tests.java under integration/test/src/test/java/org/opendaylight/neutron/e2etest/

    • update integration/test/src/test/java/org/opendaylight/neutron/e2etest/ITNeutronE2E.java to run the newly added tests.

In OpenStack networking-odl

  • Add new driver (or plugin) for new API with tests.

In a southbound Neutron Provider

  • Implement the actual backend to realize the new API by listening to the related YANG models.

How to write transcriber

For each Neutron data object, there is a Neutron*Interface defined within the transcriber artifact that will write that object to the MD-SAL configuration datastore.

All Neutron*Interface classes extend AbstractNeutronInterface, in which two methods are defined:

  • one takes the Neutron object as input and creates a data object from it.

  • one takes a UUID as input and creates a data object containing the UUID.

protected abstract T toMd(S neutronObject);
protected abstract T toMd(String uuid);

In addition, the AbstractNeutronInterface class provides several other helper methods (addMd, updateMd, removeMd), which handle the actual writing to the configuration datastore.
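
As a minimal sketch of the UUID-based variant described above (the Network/NetworkBuilder types and the toUuid() helper mirror the full network example later in this section):

protected Network toMd(String uuid) {
    // build a Network data object that carries only the UUID
    NetworkBuilder networkBuilder = new NetworkBuilder();
    networkBuilder.setUuid(toUuid(uuid));
    return networkBuilder.build();
}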

The semantics of the toMD() methods

Each of the Neutron YANG models defines structures containing data. Further, each YANG-modeled structure has its own builder. A particular toMD() method instantiates an instance of the correct builder, fills in the builder's properties from the corresponding values of the Neutron object, and then creates the YANG-modeled structure via the build() method.

As an example, the toMd() code for Neutron networks is presented below:

protected Network toMd(NeutronNetwork network) {
    NetworkBuilder networkBuilder = new NetworkBuilder();
    networkBuilder.setAdminStateUp(network.getAdminStateUp());
    if (network.getNetworkName() != null) {
        networkBuilder.setName(network.getNetworkName());
    }
    if (network.getShared() != null) {
        networkBuilder.setShared(network.getShared());
    }
    if (network.getStatus() != null) {
        networkBuilder.setStatus(network.getStatus());
    }
    if (network.getSubnets() != null) {
        List<Uuid> subnets = new ArrayList<Uuid>();
        for( String subnet : network.getSubnets()) {
            subnets.add(toUuid(subnet));
        }
        networkBuilder.setSubnets(subnets);
    }
    if (network.getTenantID() != null) {
        networkBuilder.setTenantId(toUuid(network.getTenantID()));
    }
    if (network.getNetworkUUID() != null) {
        networkBuilder.setUuid(toUuid(network.getNetworkUUID()));
    } else {
        logger.warn("Attempting to write neutron network without UUID");
    }
    return networkBuilder.build();
}
ODL Parent Developer Guide
Parent POMs
Overview

The ODL Parent component for OpenDaylight provides a number of Maven parent POMs which allow Maven projects to be easily integrated in the OpenDaylight ecosystem. Technically, the aim of projects in OpenDaylight is to produce Karaf features, and these parent projects provide common support for the different types of projects involved.

These parent projects are:

  • odlparent-lite — the basic parent POM for Maven modules which don’t produce artifacts (e.g. aggregator POMs)

  • odlparent — the common parent POM for Maven modules containing Java code

  • bundle-parent — the parent POM for Maven modules producing OSGi bundles

The following parent projects are deprecated, but still used in Carbon:

  • feature-parent — the parent POM for Maven modules producing Karaf 3 feature repositories

  • karaf-parent — the parent POM for Maven modules producing Karaf 3 distributions

The following parent projects are new in Carbon, for Karaf 4 support (which won’t be complete until Nitrogen):

  • single-feature-parent — the parent POM for Maven modules producing a single Karaf 4 feature

  • feature-repo-parent — the parent POM for Maven modules producing Karaf 4 feature repositories

  • karaf4-parent — the parent POM for Maven modules producing Karaf 4 distributions

odlparent-lite

This is the base parent for all OpenDaylight Maven projects and modules. It provides the following, notably to allow publishing artifacts to Maven Central:

  • license information;

  • organization information;

  • issue management information (a link to our Bugzilla);

  • continuous integration information (a link to our Jenkins setup);

  • default Maven plugins (maven-clean-plugin, maven-deploy-plugin, maven-install-plugin, maven-javadoc-plugin with HelpMojo support, maven-project-info-reports-plugin, maven-site-plugin with Asciidoc support, jdepend-maven-plugin);

  • distribution management information.

It also defines two profiles which help during development:

  • q (-Pq), the quick profile, which disables tests, code coverage, Javadoc generation, code analysis, etc. — anything which isn’t necessary to build the bundles and features (see this blog post for details);

  • addInstallRepositoryPath (-DaddInstallRepositoryPath=…/karaf/system) which can be used to drop a bundle in the appropriate Karaf location, to enable hot-reloading of bundles during development (see this blog post for details).

For modules which don’t produce any useful artifacts (e.g. aggregator POMs), you should add the following to avoid processing artifacts:

<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-deploy-plugin</artifactId>
            <configuration>
                <skip>true</skip>
            </configuration>
        </plugin>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-install-plugin</artifactId>
            <configuration>
                <skip>true</skip>
            </configuration>
        </plugin>
    </plugins>
</build>
odlparent

This inherits from odlparent-lite and mainly provides dependency and plugin management for OpenDaylight projects.

If you use any of the following libraries, you should rely on odlparent to provide the appropriate versions:

  • Akka (and Scala)

  • Apache Commons:

    • commons-codec

    • commons-fileupload

    • commons-io

    • commons-lang

    • commons-lang3

    • commons-net

  • Apache Shiro

  • Guava

  • JAX-RS with Jersey

  • JSON processing:

    • GSON

    • Jackson

  • Logging:

    • Logback

    • SLF4J

  • Netty

  • OSGi:

    • Apache Felix

    • core OSGi dependencies (core, compendium…)

  • Testing:

    • Hamcrest

    • JSON assert

    • JUnit

    • Mockito

    • Pax Exam

    • PowerMock

  • XML/XSL:

    • Xerces

    • XML APIs

Note

This list isn’t exhaustive. It’s also not cast in stone; if you’d like to add a new dependency (or migrate a dependency), please contact the mailing list.

odlparent also enforces some Checkstyle verification rules. In particular, it enforces the common license header used in all OpenDaylight code:

/*
 * Copyright © ${year} ${holder} and others.  All rights reserved.
 *
 * This program and the accompanying materials are made available under the
 * terms of the Eclipse Public License v1.0 which accompanies this distribution,
 * and is available at http://www.eclipse.org/legal/epl-v10.html
 */

where “${year}” is initially the first year of publication, then (after a year has passed) the first and latest years of publication, separated by commas (e.g. “2014, 2016”), and “${holder}” is the initial copyright holder (typically, the first author’s employer). “All rights reserved” is optional.

If you need to disable this license check, e.g. for files imported under another license (EPL-compatible of course), you can override the maven-checkstyle-plugin configuration. features-test does this for its CustomBundleUrlStreamHandlerFactory class, which is ASL-licensed:

<plugin>
    <artifactId>maven-checkstyle-plugin</artifactId>
    <executions>
        <execution>
            <id>check-license</id>
            <goals>
                <goal>check</goal>
            </goals>
            <phase>process-sources</phase>
            <configuration>
                <configLocation>check-license.xml</configLocation>
                <headerLocation>EPL-LICENSE.regexp.txt</headerLocation>
                <includeResources>false</includeResources>
                <includeTestResources>false</includeTestResources>
                <sourceDirectory>${project.build.sourceDirectory}</sourceDirectory>
                <excludes>
                    <!-- Skip Apache Licensed files -->
                    org/opendaylight/odlparent/featuretest/CustomBundleUrlStreamHandlerFactory.java
                </excludes>
                <failsOnError>false</failsOnError>
                <consoleOutput>true</consoleOutput>
            </configuration>
        </execution>
    </executions>
</plugin>
bundle-parent

This inherits from odlparent and enables functionality useful for OSGi bundles:

  • maven-javadoc-plugin is activated, to build the Javadoc JAR;

  • maven-source-plugin is activated, to build the source JAR;

  • maven-bundle-plugin is activated (including extensions), to build OSGi bundles (using the “bundle” packaging).

In addition to this, JUnit is included as a default dependency in “test” scope.

features-parent

This inherits from odlparent and enables functionality useful for Karaf features:

  • karaf-maven-plugin is activated, to build Karaf features — but for OpenDaylight, projects need to use “jar” packaging (not “feature” or “kar”);

  • features.xml files are processed from templates stored in src/main/features/features.xml;

  • Karaf features are tested after build to ensure they can be activated in a Karaf container.

The features.xml processing allows versions to be omitted from certain feature dependencies and replaced with “{{VERSION}}”. For example:

<features name="odl-mdsal-${project.version}" xmlns="http://karaf.apache.org/xmlns/features/v1.2.0"
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xsi:schemaLocation="http://karaf.apache.org/xmlns/features/v1.2.0 http://karaf.apache.org/xmlns/features/v1.2.0">

    <repository>mvn:org.opendaylight.odlparent/features-odlparent/{{VERSION}}/xml/features</repository>

    [...]
    <feature name='odl-mdsal-broker-local' version='${project.version}' description="OpenDaylight :: MDSAL :: Broker">
        <feature version='${yangtools.version}'>odl-yangtools-common</feature>
        <feature version='${mdsal.version}'>odl-mdsal-binding-dom-adapter</feature>
        <feature version='${mdsal.model.version}'>odl-mdsal-models</feature>
        <feature version='${project.version}'>odl-mdsal-common</feature>
        <feature version='${config.version}'>odl-config-startup</feature>
        <feature version='${config.version}'>odl-config-netty</feature>
        <feature version='[3.3.0,4.0.0)'>odl-lmax</feature>
        [...]
        <bundle>mvn:org.opendaylight.controller/sal-dom-broker-config/{{VERSION}}</bundle>
        <bundle start-level="40">mvn:org.opendaylight.controller/blueprint/{{VERSION}}</bundle>
        <configfile finalname="${config.configfile.directory}/${config.mdsal.configfile}">mvn:org.opendaylight.controller/md-sal-config/{{VERSION}}/xml/config</configfile>
    </feature>

As illustrated, versions can be omitted in this way for repository dependencies, bundle dependencies and configuration files. They must be specified traditionally (either hard-coded, or using Maven properties) for feature dependencies.

karaf-parent

This allows building a Karaf 3 distribution, typically for local testing purposes. Any runtime-scoped feature dependencies will be included in the distribution, and the karaf.localFeature property can be used to specify the boot feature (in addition to standard).

single-feature-parent

This inherits from odlparent and enables functionality useful for Karaf 4 features:

  • karaf-maven-plugin is activated, to build Karaf features, typically with “feature” packaging (“kar” is also supported);

  • feature.xml files are generated based on the compile-scope dependencies defined in the POM, optionally initialised from a stub in src/main/feature/feature.xml.

  • Karaf features are tested after build to ensure they can be activated in a Karaf container.

The feature.xml processing adds transitive dependencies by default, which allows features to be defined using only the most significant dependencies (those that define the feature); other requirements are determined automatically as long as they exist as Maven dependencies.

“configfiles” need to be defined both as Maven dependencies (with the appropriate type and classifier) and as <configfile> elements in the feature.xml stub.

Other features which a feature depends on need to be defined as Maven dependencies with type “xml” and classifier “features” (note the plural here).
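
For illustration, such a feature dependency might look like the following sketch in the feature's POM; the groupId and artifactId below are placeholders rather than real features, and the version is typically supplied by dependency management:

<dependency>
    <groupId>org.opendaylight.someproject</groupId>
    <artifactId>odl-some-feature</artifactId>
    <classifier>features</classifier>
    <type>xml</type>
</dependency>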

feature-repo-parent

This inherits from odlparent and enables functionality useful for Karaf 4 feature repositories. It follows the same principles as single-feature-parent, but is designed specifically for repositories and should be used only for this type of artifact.

It builds a feature repository referencing all the (feature) dependencies listed in the POM.

karaf4-parent

This allows building a Karaf 4 distribution, typically for local testing purposes. Any runtime-scoped feature dependencies will be included in the distribution, and the karaf.localFeature property can be used to specify the boot feature (in addition to standard).

Features (for Karaf 3)

The ODL Parent component for OpenDaylight provides a number of Karaf 3 features which can be used by other Karaf 3 features to use certain third-party upstream dependencies.

These features are:

  • Akka features (in the features-akka repository):

    • odl-akka-all — all Akka bundles;

    • odl-akka-scala-2.11 — Scala runtime for OpenDaylight;

    • odl-akka-system-2.4 — Akka actor framework bundles;

    • odl-akka-clustering-2.4 — Akka clustering bundles and dependencies;

    • odl-akka-leveldb-0.7 — LevelDB;

    • odl-akka-persistence-2.4 — Akka persistence;

  • general third-party features (in the features-odlparent repository):

    • odl-netty-4 — all Netty bundles;

    • odl-guava-18 — Guava 18;

    • odl-guava-21 — Guava 21 (not intended for use in Carbon);

    • odl-lmax-3 — LMAX Disruptor;

    • odl-triemap-0.2 — Concurrent Trie HashMap.

To use these, you need to declare a dependency on the appropriate repository in your features.xml file:

<repository>mvn:org.opendaylight.odlparent/features-odlparent/{{VERSION}}/xml/features</repository>

and then include the feature, e.g.:

<feature name='odl-mdsal-broker-local' version='${project.version}' description="OpenDaylight :: MDSAL :: Broker">
    [...]
    <feature version='[3.3.0,4.0.0)'>odl-lmax</feature>
    [...]
</feature>

You also need to depend on the features repository in your POM:

<dependency>
    <groupId>org.opendaylight.odlparent</groupId>
    <artifactId>features-odlparent</artifactId>
    <classifier>features</classifier>
    <type>xml</type>
</dependency>

assuming the appropriate dependency management:

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.opendaylight.odlparent</groupId>
            <artifactId>odlparent-artifacts</artifactId>
            <version>1.8.0-SNAPSHOT</version>
            <scope>import</scope>
            <type>pom</type>
        </dependency>
    </dependencies>
</dependencyManagement>

(the version number there is appropriate for Carbon). For the time being you also need to depend separately on the individual JARs as compile-time dependencies to build your dependent code; the relevant dependencies are managed in odlparent’s dependency management.

The suggested version ranges are as follows:
  • odl-netty: [4.0.37,4.1.0) or [4.0.37,5.0.0);

  • odl-guava: [18,19) (if your code is ready for it, [19,20) is also available, but the current default version of Guava in OpenDaylight is 18);

  • odl-lmax: [3.3.4,4.0.0)

Features (for Karaf 4)

There are equivalent features to all the Karaf 3 features, for Karaf 4. The repositories use “features4” instead of “features”, and the features use “odl4” instead of “odl”.

The following new features are specific to Karaf 4:

  • Karaf wrapper features (also in the features4-odlparent repository) — these can be used to pull in a Karaf feature using a Maven dependency in a POM:

    • odl-karaf-feat-feature — the Karaf feature feature;

    • odl-karaf-feat-jdbc — the Karaf jdbc feature;

    • odl-karaf-feat-jetty — the Karaf jetty feature;

    • odl-karaf-feat-war — the Karaf war feature.

To use these, all you need to do now is add the appropriate dependency in your feature POM; for example:

<dependency>
    <groupId>org.opendaylight.odlparent</groupId>
    <artifactId>odl4-guava-18</artifactId>
    <classifier>features</classifier>
    <type>xml</type>
</dependency>

assuming the appropriate dependency management:

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.opendaylight.odlparent</groupId>
            <artifactId>odlparent-artifacts</artifactId>
            <version>1.8.0-SNAPSHOT</version>
            <scope>import</scope>
            <type>pom</type>
        </dependency>
    </dependencies>
</dependencyManagement>

(the version number there is appropriate for Carbon). We no longer use version ranges; the feature dependencies all use the odlparent version (but you should rely on the artifacts POM).

OpenFlow Protocol Library Developer Guide
Introduction

The OpenFlow Protocol Library is a component in OpenDaylight that mediates communication between the OpenDaylight controller and hardware devices supporting the OpenFlow protocol. Its primary goal is to provide the user (or upper layers of OpenDaylight) with a communication channel that can be used for managing network hardware devices.

Features Overview

There are three features inside openflowjava:

  • odl-openflowjava-protocol provides all openflowjava bundles that are needed for communication with OpenFlow devices. It ensures message translation and handles network connections. It also provides the OpenFlow protocol specific model.

  • odl-openflowjava-all currently contains only the odl-openflowjava-protocol feature.

  • odl-openflowjava-stats provides a mechanism for message counting and reporting. It can be used for performance analysis.

odl-openflowjava-protocol Architecture

Basic bundles contained in this feature are openflow-protocol-api, openflow-protocol-impl, openflow-protocol-spi and util.

  • openflow-protocol-api - contains openflow model, constants and keys used for (de)serializer registration.

  • openflow-protocol-impl - contains message factories that translate binary messages into DataObjects and vice versa. The bundle also contains network connection handlers - servers, netty pipeline handlers, …

  • openflow-protocol-spi - entry point for openflowjava configuration, startup and close. Basically starts implementation.

  • util - utility classes for binary-Java conversions and to ease experimenter key creation

odl-openflowjava-stats Feature

Runs over odl-openflowjava-protocol. It counts various message types / events and reports counts in specified time periods. Statistics collection can be configured in openflowjava-config/src/main/resources/45-openflowjava-stats.xml

Key APIs and Interfaces

The basic API / SPI classes are ConnectionAdapter (RPCs/notifications) and SwitchConnectionProvider (configure, start, shutdown).

Installation

Pull the code and import the project into your IDE.

git clone ssh://<username>@git.opendaylight.org:29418/openflowjava.git
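
After cloning, a standard Maven build (sketched below; profiles such as -Pq may optionally be used to skip checks) produces the bundles before you import the project into an IDE:

cd openflowjava
mvn clean install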
Configuration

The current implementation allows you to configure:

  • listening port (mandatory)

  • transfer protocol (mandatory)

  • switch idle timeout (mandatory)

  • TLS configuration (optional)

  • thread count (optional)

You can find exemplary Openflow Protocol Library instance configuration below:

<data xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <modules xmlns="urn:opendaylight:params:xml:ns:yang:controller:config">
    <!-- default OF-switch-connection-provider (port 6633) -->
    <module>
      <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:openflow:switch:connection:provider:impl">prefix:openflow-switch-connection-provider-impl</type>
      <name>openflow-switch-connection-provider-default-impl</name>
      <port>6633</port>
<!--  Possible transport-protocol options: TCP, TLS, UDP -->
      <transport-protocol>TCP</transport-protocol>
      <switch-idle-timeout>15000</switch-idle-timeout>
<!--       Exemplary TLS configuration:
            - uncomment the <tls> tag
            - copy exemplary-switch-privkey.pem, exemplary-switch-cert.pem and exemplary-cacert.pem
              files into your virtual machine
            - set VM encryption options to use copied keys
            - start communication
           Please visit OpenflowPlugin or Openflow Protocol Library#Documentation wiki pages
           for detailed information regarding TLS -->
<!--       <tls>
             <keystore>/exemplary-ctlKeystore</keystore>
             <keystore-type>JKS</keystore-type>
             <keystore-path-type>CLASSPATH</keystore-path-type>
             <keystore-password>opendaylight</keystore-password>
             <truststore>/exemplary-ctlTrustStore</truststore>
             <truststore-type>JKS</truststore-type>
             <truststore-path-type>CLASSPATH</truststore-path-type>
             <truststore-password>opendaylight</truststore-password>
             <certificate-password>opendaylight</certificate-password>
           </tls> -->
<!--       Exemplary thread model configuration. Uncomment <threads> tag below to adjust default thread model -->
<!--       <threads>
             <boss-threads>2</boss-threads>
             <worker-threads>8</worker-threads>
           </threads> -->
    </module>
    <!-- default OF-switch-connection-provider (port 6653) -->
    <module>
      <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:openflow:switch:connection:provider:impl">prefix:openflow-switch-connection-provider-impl</type>
      <name>openflow-switch-connection-provider-legacy-impl</name>
      <port>6653</port>
<!--  Possible transport-protocol options: TCP, TLS, UDP -->
      <transport-protocol>TCP</transport-protocol>
      <switch-idle-timeout>15000</switch-idle-timeout>
<!--       Exemplary TLS configuration:
            - uncomment the <tls> tag
            - copy exemplary-switch-privkey.pem, exemplary-switch-cert.pem and exemplary-cacert.pem
              files into your virtual machine
            - set VM encryption options to use copied keys
            - start communication
           Please visit OpenflowPlugin or Openflow Protocol Library#Documentation wiki pages
           for detailed information regarding TLS -->
<!--       <tls>
             <keystore>/exemplary-ctlKeystore</keystore>
             <keystore-type>JKS</keystore-type>
             <keystore-path-type>CLASSPATH</keystore-path-type>
             <keystore-password>opendaylight</keystore-password>
             <truststore>/exemplary-ctlTrustStore</truststore>
             <truststore-type>JKS</truststore-type>
             <truststore-path-type>CLASSPATH</truststore-path-type>
             <truststore-password>opendaylight</truststore-password>
             <certificate-password>opendaylight</certificate-password>
           </tls> -->
<!--       Exemplary thread model configuration. Uncomment <threads> tag below to adjust default thread model -->
<!--       <threads>
             <boss-threads>2</boss-threads>
             <worker-threads>8</worker-threads>
           </threads> -->
    </module>
  <module>
    <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:openflow:common:config:impl">prefix:openflow-provider-impl</type>
    <name>openflow-provider-impl</name>
    <openflow-switch-connection-provider>
      <type xmlns:ofSwitch="urn:opendaylight:params:xml:ns:yang:openflow:switch:connection:provider">ofSwitch:openflow-switch-connection-provider</type>
      <name>openflow-switch-connection-provider-default</name>
    </openflow-switch-connection-provider>
    <openflow-switch-connection-provider>
      <type xmlns:ofSwitch="urn:opendaylight:params:xml:ns:yang:openflow:switch:connection:provider">ofSwitch:openflow-switch-connection-provider</type>
      <name>openflow-switch-connection-provider-legacy</name>
    </openflow-switch-connection-provider>
    <binding-aware-broker>
      <type xmlns:binding="urn:opendaylight:params:xml:ns:yang:controller:md:sal:binding">binding:binding-broker-osgi-registry</type>
      <name>binding-osgi-broker</name>
    </binding-aware-broker>
  </module>
</modules>

Possible transport-protocol options:

  • TCP

  • TLS

  • UDP

The switch idle timeout specifies the time needed to detect an idle state of the switch. When no message is received from the switch within this time, upper layers are notified of the switch's idleness. To be able to use this exemplary TLS configuration:

  • uncomment the <tls> tag

  • copy exemplary-switch-privkey.pem, exemplary-switch-cert.pem and exemplary-cacert.pem files into your virtual machine

  • set VM encryption options to use copied keys (please visit TLS support wiki page for detailed information regarding TLS)

  • start communication

Thread model configuration specifies how many threads should perform Netty’s I/O operations.

  • boss-threads specifies the number of threads that register incoming connections

  • worker-threads specifies the number of threads performing read / write (+ serialization / deserialization) operations.

Architecture
Public API (openflow-protocol-api)

A set of interfaces and builders for immutable data transfer objects representing OpenFlow Protocol structures.

Transfer objects and service APIs are inferred from several YANG models using a code generator, to reduce the verbosity of definitions and the repetitiveness of code.

The following YANG modules are defined:

  • openflow-types - defines common Openflow specific types

  • openflow-instruction - defines base Openflow instructions

  • openflow-action - defines base Openflow actions

  • openflow-augments - defines object augmentations

  • openflow-extensible-match - defines Openflow OXM match

  • openflow-protocol - defines Openflow Protocol messages

  • system-notifications - defines system notification objects

  • openflow-configuration - defines structures used in ConfigSubsystem

These modules also reuse types from the following YANG modules:

  • ietf-inet-types - IP addresses, IP prefixes, IP-protocol related types

  • ietf-yang-types - MAC addresses, etc.

The predefined types are used to make API contracts safer, more readable and better documented (e.g. using MacAddress instead of a byte array).

TCP Channel pipeline (openflow-protocol-impl)

Creates channel processing pipeline based on configuration and support.

TCP Channel pipeline.


Switch Connection Provider.

Implementation of the connection point for other projects. The library exposes its functionality through this class. The library can be configured, started and shut down here. There are also methods for custom (de)serializer registration.

Tcp Connection Initializer.

In order to initialize a TCP connection to a device (switch), the OF Plugin calls the method initiateConnection() in SwitchConnectionProvider. This method in turn initializes (Bootstrap) the server side channel towards the device.

TCP Handler.

Represents single server that is handling incoming connections over TCP / TLS protocol. TCP Handler creates a single instance of TCP Channel Initializer that will initialize channels. After that it binds to configured InetAddress and port. When a new device connects, TCP Handler registers its channel and passes control to TCP Channel Initializer.

TCP Channel Initializer.

This class is used for channel initialization / rejection and passing arguments. After a new channel has been registered it calls Switch Connection Handler’s (OF Plugin) accept method to decide if the library should keep the newly registered channel or if the channel should be closed. If the channel has been accepted, TCP Channel Initializer creates the whole pipeline with needed handlers and also with ConnectionAdapter instance. After the channel pipeline is ready, Switch Connection Handler is notified with onConnectionReady notification. OpenFlow Plugin can now start sending messages downstream.

Idle Handler.

If there are no messages received for more than time specified, this handler triggers idle state notification. The switch idle timeout is received as a parameter from ConnectionConfiguration settings. Idle State Handler is inactive while there are messages received within the switch idle timeout. If there are no messages received for more than timeout specified, handler creates SwitchIdleEvent message and sends it upstream.

TLS Handler.

It encrypts and decrypts messages over the TLS protocol. Engaging the TLS Handler into the pipeline is a matter of configuration (the <tls> tag). TLS communication is either unsupported or required. The TLS Handler is represented as Netty’s SslHandler.

OF Frame Decoder.

Parses input stream into correct length message frames for further processing. Framing is based on Openflow header length. If received message is shorter than minimal length of OpenFlow message (8 bytes), OF Frame Decoder waits for more data. After receiving at least 8 bytes the decoder checks length in OpenFlow header. If there are still some bytes missing, the decoder waits for them. Else the OF Frame Decoder sends correct length message to next handler in the channel pipeline.

OF Version Detector.

Detects version of used OpenFlow Protocol and discards unsupported version messages. If the detected version is supported, OF Version Detector creates VersionMessageWrapper object containing the detected version and byte message and sends this object upstream.

OF Decoder.

Chooses the correct deserialization factory (based on message type) and deserializes messages into generated DTOs (Data Transfer Objects). The OF Decoder receives a VersionMessageWrapper object and passes it to the DeserializationFactory, which will return the translated DTO. The DeserializationFactory creates a MessageCodeKey object with the version and type of the received message and the Class of the object that the received message will be deserialized into. This object is used as the key when searching for the appropriate decoder in the DecoderTable. The DecoderTable is basically a map storing decoders. The found decoder translates the received message into a DTO. If no decoder was found, null is returned. After the translated DTO is returned to the OF Decoder, the decoder checks whether it is null or not. When the DTO is null, the decoder logs this state and throws an Exception. Else it passes the DTO further upstream. Finally, the OF Decoder releases the ByteBuf containing the received and decoded byte message.

OF Encoder.

Chooses correct serialization factory (based on type of DTO) and serializes DTOs into byte messages. OF Encoder does the opposite than the OF Decoder using the same principle. OF Encoder receives DTO, passes it for translation and if the result is not null, it sends translated DTO downstream as a ByteBuf. Searching for appropriate encoder is done via MessageTypeKey, based on version and class of received DTO.

Delegating Inbound Handler.

Delegates received DTOs to the Connection Adapter. It also reacts to channelInactive and channelUnregistered events. When one of these events is triggered, DelegatingInboundHandler creates a DisconnectEvent message and sends it upstream, notifying upper layers about switch disconnection.

Channel Outbound Queue.

Message flushing handler. Stores outgoing messages (DTOs) and flushes them. Flush is performed based on time expired and on the number of messages enqueued.

Connection Adapter.

Provides a facade on top of pipeline, which hides netty.io specifics. Provides a set of methods to register for incoming messages and to send messages to particular channel / session. ConnectionAdapterImpl basically implements three interfaces (unified in one superinterface ConnectionFacade):

  • ConnectionAdapter

  • MessageConsumer

  • OpenflowProtocolService

ConnectionAdapter interface has methods for setting up listeners (message, system and connection ready listener), method to check if all listeners are set, checking if the channel is alive and disconnect method. Disconnect method clears responseCache and disables consuming of new messages.

The MessageConsumer interface holds only one method: consume(). The consume() method is called from DelegatingInboundHandler. This method processes received DTOs based on their type. There are three types of received objects:

  • System notifications - invoke system notifications in OpenFlow Plugin (systemListener set). In case of DisconnectEvent message, the Connection Adapter clears response cache and disables consume() method processing,

  • OpenFlow asynchronous messages (from switch) - invoke corresponding notifications in OpenFlow Plugin,

  • OpenFlow symmetric messages (replies to requests) - create RpcResponseKey with XID and DTO’s class set. This RpcResponseKey is then used to find corresponding future object in responseCache. Future object is set with success flag, received message and errors (if any occurred). In case no corresponding future was found in responseCache, Connection Adapter logs warning and discards the message. Connection Adapter also logs warning when an unknown DTO is received.

OpenflowProtocolService interface contains all rpc-methods for sending messages from upper layers (OpenFlow Plugin) downstream and responding. Request messages return Future filled with expected reply message, otherwise the expected Future is of type Void.

NOTE: MultipartRequest message is the only exception. Basically it is request - reply Message type, but it wouldn’t be able to process more following MultipartReply messages if this was implemented as rpc (only one Future). This is why MultipartReply is implemented as notification. OpenFlow Plugin takes care of correct message processing.

UDP Channel pipeline (openflow-protocol-impl)

Creates UDP channel processing pipeline based on configuration and support. Switch Connection Provider, Channel Outbound Queue and Connection Adapter fulfill the same role as in case of TCP connection / channel pipeline (please see above).

UDP Channel pipeline

UDP Handler.

Represents single server that is handling incoming connections over UDP (DTLS) protocol. UDP Handler creates a single instance of UDP Channel Initializer that will initialize channels. After that it binds to configured InetAddress and port. When a new device connects, UDP Handler registers its channel and passes control to UDP Channel Initializer.

UDP Channel Initializer.

This class is used for channel initialization and passing arguments. After a new channel has been registered (for UDP there is always only one channel) UDP Channel Initializer creates whole pipeline with needed handlers.

DTLS Handler.

This has not been implemented yet. It will take care of secure DTLS connections.

OF Datagram Packet Handler.

Combines functionality of OF Frame Decoder and OF Version Detector. Extracts messages from received datagram packets and checks if message version is supported. If there is a message received from yet unknown sender, OF Datagram Packet Handler creates Connection Adapter for this sender and stores it under sender’s address in UdpConnectionMap. This map is also used for sending the messages and for correct Connection Adapter lookup - to delegate messages from one channel to multiple sessions.

OF Datagram Packet Decoder.

Chooses the correct deserialization factory (based on message type) and deserializes messages into generated DTOs. The OF Datagram Packet Decoder receives a VersionMessageUdpWrapper object and passes it to the DeserializationFactory, which will return the translated DTO. The DeserializationFactory creates a MessageCodeKey object with the version and type of the received message and the Class of the object that the received message will be deserialized into. This object is used as the key when searching for the appropriate decoder in the DecoderTable. The DecoderTable is basically a map storing decoders. The found decoder translates the received message into a DTO (DataTransferObject). If no decoder was found, null is returned. After the translated DTO is returned to the OF Datagram Packet Decoder, the decoder checks whether it is null or not. When the DTO is null, the decoder logs this state. Else it looks up the appropriate Connection Adapter in the UdpConnectionMap and passes the DTO to the found Connection Adapter. Finally, the OF Decoder releases the ByteBuf containing the received and decoded byte message.

OF Datagram Packet Encoder.

Chooses correct serialization factory (based on type of DTO) and serializes DTOs into byte messages. OF Datagram Packet Encoder does the opposite than the OF Datagram Packet Decoder using the same principle. OF Encoder receives DTO, passes it for translation and if the result is not null, it sends translated DTO downstream as a datagram packet. Searching for appropriate encoder is done via MessageTypeKey, based on version and class of received DTO.

SPI (openflow-protocol-spi)

Defines interface for library’s connection point for other projects. Library exposes its functionality through this interface.

Integration test (openflow-protocol-it)

Testing communication with simple client.

Simple client(simple-client)

Lightweight switch simulator - programmable with desired scenarios.

Utility (util)

Contains utility classes, mainly for work with ByteBuf.

Library’s lifecycle

Steps (after the library’s bundle is started):

  • [1] Library is configured by ConfigSubsystem (address, ports, encryption, …)

  • [2] Plugin injects its SwitchConnectionHandler into the Library

  • [3] Plugin starts the Library

  • [4] Library creates configured protocol handler (e.g. TCP Handler)

  • [5] Protocol Handler creates Channel Initializer

  • [6] Channel Initializer asks plugin whether to accept incoming connection on each new switch connection

  • [7] Plugin responds:

    • true - continue building pipeline

    • false - reject connection / disconnect channel

  • [8] Library notifies Plugin with onSwitchConnected(ConnectionAdapter) notification, passing reference to ConnectionAdapter, that will handle the connection

  • [9] Plugin registers its system and message listeners

  • [10] FireConnectionReadyNotification() is triggered, announcing that pipeline handlers needed for communication have been created and Plugin can start communication

  • [11] Plugin shuts down the Library when desired

Library lifecycle

Statistics collection
Introduction

The statistics collection component collects message statistics. Currently collected statistics (DS - downstream, US - upstream):

  • DS_ENTERED_OFJAVA - all messages that entered openflowjava (picked up from openflowplugin)

  • DS_ENCODE_SUCCESS - successfully encoded messages

  • DS_ENCODE_FAIL - messages that failed during encoding (serialization) process

  • DS_FLOW_MODS_ENTERED - all flow-mod messages that entered openflowjava

  • DS_FLOW_MODS_SENT - all flow-mod messages that were successfully sent

  • US_RECEIVED_IN_OFJAVA - messages received from switch

  • US_DECODE_SUCCESS - successfully decoded messages

  • US_DECODE_FAIL - messages that failed during decoding (deserialization) process

  • US_MESSAGE_PASS - messages handed over to openflowplugin

Karaf

In order to start statistics collection, install the feature: feature:install odl-openflowjava-stats. To see the logs, use log:set DEBUG org.opendaylight.openflowjava.statistics and then log:display (you can use log:list to check whether the logging level has been set). To adjust collection settings, it is enough to modify 45-openflowjava-stats.xml.

JConsole

JConsole provides two commands for the statistics collection:

  • printing current statistics

  • resetting statistic counters

After attaching JConsole to the correct process, go to the MBeans tab, then org.opendaylight.controller → RuntimeBean → statistics-collection-service-impl → statistics-collection-service-impl → Operations to be able to use these commands.

TLS Support

Note

See the OpenFlow Plugin Developer Guide.

Extensibility
Introduction

The entry point for extensibility is SwitchConnectionProvider, which contains methods for (de)serializer registration. To register a deserializer, use .register*Deserializer(key, impl); to register a serializer, use .register*Serializer(key, impl). Registration can occur either during configuration or at runtime.

NOTE: If an experimenter message is received and no (de)serializer was registered for it, the library throws an IllegalArgumentException.

Basic Principle

In order to use extensions, you need to augment the existing model and register new (de)serializers.

Augmenting the model:

  1. Create a new augmentation

Registering (de)serializers:

  1. Create your (de)serializer

  2. Let it implement OFDeserializer<> / OFSerializer<> - in case the structure you are (de)serializing needs to be used in Multipart TableFeatures messages, let it implement HeaderDeserializer<> / HeaderSerializer<>

  3. Implement the prescribed methods

  4. Register your deserializer under the appropriate key (in our case ExperimenterActionDeserializerKey)

  5. Register your serializer under the appropriate key (in our case ExperimenterActionSerializerKey)

  6. Done, test your implementation

NOTE: If you don’t know what key should be used with your (de)serializer implementation, please visit Registration keys page.

Example

Let’s say we have vendor / experimenter action represented by this structure:

struct foo_action {
    uint16_t type;
    uint16_t length;
    uint32_t experimenter;
    uint16_t first;
    uint16_t second;
    uint8_t  pad[4];
}

First, we have to augment the existing model. We create a new module which imports “openflow-types.yang” (don’t forget to update your pom.xml with the api dependency). Now we create the foo action identity:

import openflow-types {prefix oft;}
identity foo {
    description "Foo action description";
    base oft:action-base;
}

This will be used as the type in our structure. Now we must augment the existing action structure so that we have the desired fields first and second. In order to create the new augmentation, our module has to import “openflow-action.yang”. The augment should look like this:

import openflow-action {prefix ofaction;}
augment "/ofaction:actions-container/ofaction:action" {
    ext:augment-identifier "foo-action";
    leaf first {
        type uint16;
    }
    leaf second {
        type uint16;
    }
}

We are finished with model changes. Run mvn clean compile to generate sources. After generation is done, we need to implement our (de)serializer.

Deserializer:

public class FooActionDeserializer implements OFDeserializer<Action> {
   @Override
   public Action deserialize(ByteBuf input) {
       ActionBuilder builder = new ActionBuilder();
       input.skipBytes(SIZE_OF_SHORT_IN_BYTES); // we know the type of action
       builder.setType(Foo.class);
       input.skipBytes(SIZE_OF_SHORT_IN_BYTES); // we don't need length
       // now create experimenterIdAugmentation - so that openflowplugin can
       // differentiate the correct vendor codec
       ExperimenterIdActionBuilder expIdBuilder = new ExperimenterIdActionBuilder();
       expIdBuilder.setExperimenter(new ExperimenterId(input.readUnsignedInt()));
       builder.addAugmentation(ExperimenterIdAction.class, expIdBuilder.build());
       FooActionBuilder fooBuilder = new FooActionBuilder();
       fooBuilder.setFirst(input.readUnsignedShort());
       fooBuilder.setSecond(input.readUnsignedShort());
       builder.addAugmentation(FooAction.class, fooBuilder.build());
       input.skipBytes(4); // padding
       return builder.build();
   }
}

Serializer:

public class FooActionSerializer implements OFSerializer<Action> {
   @Override
   public void serialize(Action action, ByteBuf outBuffer) {
       outBuffer.writeShort(FOO_CODE);
       outBuffer.writeShort(16);
       // we don't have to check for the ExperimenterIdAction augmentation - our
       // serializer was called based on the vendor / experimenter ID, so we
       // simply write it to the buffer
       outBuffer.writeInt(VENDOR / EXPERIMENTER ID);
       FooAction foo = action.getAugmentation(FooAction.class);
       outBuffer.writeShort(foo.getFirst());
       outBuffer.writeShort(foo.getSecond());
       outBuffer.writeZero(4); // write padding
   }
}

Register both the deserializer and the serializer:

SwitchConnectionProvider.registerDeserializer(new ExperimenterActionDeserializerKey(0x04, VENDOR / EXPERIMENTER ID), new FooActionDeserializer());
SwitchConnectionProvider.registerSerializer(new ExperimenterActionSerializerKey(0x04, VENDOR / EXPERIMENTER ID), new FooActionSerializer());

We are ready to test our implementation.

NOTE: Vendor / experimenter structures define only the vendor / experimenter ID as a common distinguisher (besides the action type). The vendor / experimenter ID is unique across all vendor messages - that’s why a vendor is able to register only one class under ExperimenterAction(De)SerializerKey, and why the vendor has to switch / choose between its subclasses / subtypes on its own.

Detailed walkthrough: Deserialization extensibility

External interface & class description.

OFGeneralDeserializer:

  • OFDeserializer<E extends DataObject>

    • deserialize(ByteBuf) - deserializes given ByteBuf

  • HeaderDeserializer<E extends DataObject>

    • deserializeHeaders(ByteBuf) - deserializes only E headers (used in Multipart TableFeatures messages)

DeserializerRegistryInjector

  • injectDeserializerRegistry(DeserializerRegistry) - injects deserializer registry into deserializer. Useful when custom deserializer needs access to other deserializers.

NOTE: DeserializerRegistryInjector is not an OFGeneralDeserializer descendant. It is a standalone interface.

MessageCodeKey and its descendants: these keys are used for deserializer lookup in the DeserializerRegistry. MessageCodeKey is used in general, while its descendants are used in more specific cases. For example, ActionDeserializerKey is used for Action deserializer lookup and (de)registration. Vendors are provided with special keys, which contain only the most necessary fields. These keys usually start with the “Experimenter” prefix (MatchEntryDeserializerKey is an exception).

MessageCodeKey has these fields:

  • short version - Openflow wire version number

  • int value - value read from byte message

  • Class<?> clazz - class of the object being created

  • [1] The scenario starts in a custom bundle which wants to extend library’s functionality. The custom bundle creates deserializers which implement exposed OFDeserializer / HeaderDeserializer interfaces (wrapped under OFGeneralDeserializer unifying super interface).

  • [2] Created deserializers are paired with corresponding ExperimenterKeys, which are used for deserializer lookup. If you don’t know what key should be used with your (de)serializer implementation, please visit Registration keys page.

  • [3] Paired deserializers are passed to the OF Library via SwitchConnectionProvider.registerCustomDeserializer(key, impl). Library registers the deserializer.

    • While registering, Library checks if the deserializer is an instance of DeserializerRegistryInjector interface. If yes, the DeserializerRegistry (which stores all deserializer references) is injected into the deserializer.

This is particularly useful when the deserializer needs access to other deserializers. For example, the InstructionsDeserializer needs access to the ActionsDeserializer in order to be able to process OFPIT_WRITE_ACTIONS/OFPIT_APPLY_ACTIONS instructions.
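
A minimal sketch of such a deserializer, in the style of the FooActionDeserializer example above (the class name is hypothetical and the deserialize() body is reduced to a stub): it implements both OFDeserializer<> and DeserializerRegistryInjector, so the library injects the DeserializerRegistry during registration and the deserializer can later delegate to other registered deserializers.

public class BarActionDeserializer implements OFDeserializer<Action>, DeserializerRegistryInjector {

   private DeserializerRegistry registry;

   @Override
   public void injectDeserializerRegistry(DeserializerRegistry deserializerRegistry) {
       // called by the library during registration, because this class
       // implements DeserializerRegistryInjector
       this.registry = deserializerRegistry;
   }

   @Override
   public Action deserialize(ByteBuf input) {
       // the injected registry can be used here (via registry.getDeserializer(key))
       // to delegate nested structures to other registered deserializers
       ActionBuilder builder = new ActionBuilder();
       return builder.build();
   }
}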

Deserialization scenario walkthrough

Detailed walkthrough: Serialization extensibility

External interface & class description.

OFGeneralSerializer:

  • OFSerializer<E extends DataObject>

    • serialize(E,ByteBuf) - serializes E into given ByteBuf

  • HeaderSerializer<E extends DataObject>

    • serializeHeaders(E,ByteBuf) - serializes E headers (used in Multipart TableFeatures messages)

SerializerRegistryInjector

  • injectSerializerRegistry(SerializerRegistry) - injects serializer registry into serializer. Useful when custom serializer needs access to other serializers.

NOTE: SerializerRegistryInjector is not an OFGeneralSerializer descendant.

MessageTypeKey and its descendants: these keys are used for serializer lookup in the SerializerRegistry. MessageTypeKey is used in general, while its descendants are used in more specific cases. For example, ActionSerializerKey is used for Action serializer lookup and (de)registration. Vendors are provided with special keys, which contain only the most necessary fields. These keys usually start with the “Experimenter” prefix (MatchEntrySerializerKey is an exception).

MessageTypeKey has these fields:

  • short version - Openflow wire version number

  • Class<E> msgType - DTO class

Scenario walkthrough

  • [1] Serialization extensibility principles are similar to the deserialization principles. The scenario starts in a custom bundle. The custom bundle creates serializers which implement the exposed OFSerializer / HeaderSerializer interfaces (wrapped under the OFGeneralSerializer unifying super interface).

  • [2] Created serializers are paired with their ExperimenterKeys, which are used for serializer lookup. If you don’t know what key should be used with your serializer implementation, please visit Registration keys page.

  • [3] Paired serializers are passed to the OF Library via SwitchConnectionProvider.registerCustomSerializer(key, impl). Library registers the serializer.

    • While registering, Library checks if the serializer is an instance of SerializerRegistryInjector interface. If yes, the SerializerRegistry (which stores all serializer references) is injected into the serializer.

This is particularly useful when the serializer needs access to other serializers. For example, the InstructionsSerializer needs access to the ActionsSerializer in order to be able to process OFPIT_WRITE_ACTIONS/OFPIT_APPLY_ACTIONS instructions.

Serialization scenario walkthrough

Internal description

SwitchConnectionProvider: SwitchConnectionProvider constructs and initializes both the deserializer and serializer registries with default (de)serializers. It also injects the DeserializerRegistry into the DeserializationFactory and the SerializerRegistry into the SerializationFactory. When a call to register a custom (de)serializer is made, SwitchConnectionProvider calls the register method on the appropriate registry.

DeserializerRegistry / SerializerRegistry: both registries contain an init() method to initialize the default (de)serializers. Registration checks that neither the key nor the (de)serializer implementation is null; if at least one of them is null, a NullPointerException is thrown. Otherwise, the (de)serializer implementation is checked to see whether it is a (De)SerializerRegistryInjector instance; if it is, the registry is injected into that (de)serializer implementation.

GetSerializer(key) or GetDeserializer(key) performs a registry lookup. Because there are two separate interfaces that might be put into the registry, the registry uses their unifying super interface. The Get(De)Serializer(key) method casts the super interface to the desired type. There is also a null check for the (de)serializer received from the registry; if the deserializer was not found, a NullPointerException with the key description is thrown.
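
The registration and lookup rules described above can be sketched with a simplified, hypothetical registry in plain Java; this is illustrative only and not the library's actual DeserializerRegistry implementation.

import java.util.HashMap;
import java.util.Map;

// Hypothetical simplified registry illustrating the checks described above;
// "GeneralDeserializer" stands in for the library's unifying super interface.
final class RegistrySketch {

    interface GeneralDeserializer { }

    interface RegistryInjector {
        void injectRegistry(RegistrySketch registry);
    }

    private final Map<Object, GeneralDeserializer> registry = new HashMap<>();

    void register(Object key, GeneralDeserializer deserializer) {
        // registration checks that neither key nor implementation is null
        if (key == null || deserializer == null) {
            throw new NullPointerException("Key and deserializer must not be null");
        }
        // if the implementation asks for it, inject the registry
        if (deserializer instanceof RegistryInjector) {
            ((RegistryInjector) deserializer).injectRegistry(this);
        }
        registry.put(key, deserializer);
    }

    @SuppressWarnings("unchecked")
    <T extends GeneralDeserializer> T getDeserializer(Object key) {
        GeneralDeserializer found = registry.get(key);
        if (found == null) {
            // lookup failure is reported together with the key description
            throw new NullPointerException("No deserializer registered for key: " + key);
        }
        return (T) found;
    }
}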

Registration keys

Deserialization.

Possible openflow extensions and their keys

There are three vendor-specific extensions in OpenFlow v1.0 and eight in OpenFlow v1.3. These extensions are registered under the registration keys shown in the table below:

Extension type | OpenFlow | Registration key | Utility class
Vendor message | 1.0 | ExperimenterIdDeserializerKey(1, experimenterId, ExperimenterMessage.class) | ExperimenterDeserializerKeyFactory
Action | 1.0 | ExperimenterActionDeserializerKey(1, experimenter ID) | .
Stats message | 1.0 | ExperimenterMultipartReplyMessageDeserializerKey(1, experimenter ID) | ExperimenterDeserializerKeyFactory
Experimenter message | 1.3 | ExperimenterIdDeserializerKey(4, experimenterId, ExperimenterMessage.class) | ExperimenterDeserializerKeyFactory
Match entry | 1.3 | MatchEntryDeserializerKey(4, (number) ${oxm_class}, (number) ${oxm_field}); key.setExperimenterId(experimenter ID); | .
Action | 1.3 | ExperimenterActionDeserializerKey(4, experimenter ID) | .
Instruction | 1.3 | ExperimenterInstructionDeserializerKey(4, experimenter ID) | .
Multipart | 1.3 | ExperimenterIdDeserializerKey(4, experimenterId, MultipartReplyMessage.class) | ExperimenterDeserializerKeyFactory
Multipart - Table features | 1.3 | ExperimenterIdDeserializerKey(4, experimenterId, TableFeatureProperties.class) | ExperimenterDeserializerKeyFactory
Error | 1.3 | ExperimenterIdDeserializerKey(4, experimenterId, ErrorMessage.class) | ExperimenterDeserializerKeyFactory
Queue property | 1.3 | ExperimenterIdDeserializerKey(4, experimenterId, QueueProperty.class) | ExperimenterDeserializerKeyFactory
Meter band type | 1.3 | ExperimenterIdDeserializerKey(4, experimenterId, MeterBandExperimenterCase.class) | ExperimenterDeserializerKeyFactory

Table: Deserialization

Serialization.

Possible openflow extensions and their keys

There are three vendor-specific extensions in OpenFlow v1.0 and seven in OpenFlow v1.3. These extensions are registered under the registration keys shown in the table below:

Extension type | OpenFlow | Registration key | Utility class
Vendor message | 1.0 | ExperimenterIdSerializerKey<>(1, experimenterId, ExperimenterInput.class) | ExperimenterSerializerKeyFactory
Action | 1.0 | ExperimenterActionSerializerKey(1, experimenterId, sub-type) | .
Stats message | 1.0 | ExperimenterMultipartRequestSerializerKey(1, experimenter ID) | ExperimenterSerializerKeyFactory
Experimenter message | 1.3 | ExperimenterIdSerializerKey<>(4, experimenterId, ExperimenterInput.class) | ExperimenterSerializerKeyFactory
Match entry | 1.3 | MatchEntrySerializerKey<>(4, (class) ${oxm_class}, (class) ${oxm_field}); key.setExperimenterId(experimenter ID) | .
Action | 1.3 | ExperimenterActionSerializerKey(4, experimenterId, sub-type) | .
Instruction | 1.3 | ExperimenterInstructionSerializerKey(4, experimenter ID) | .
Multipart | 1.3 | ExperimenterIdSerializerKey<>(4, experimenterId, MultipartRequestExperimenterCase.class) | ExperimenterSerializerKeyFactory
Multipart - Table features | 1.3 | ExperimenterIdSerializerKey<>(4, experimenterId, TableFeatureProperties.class) | ExperimenterSerializerKeyFactory
Meter band type | 1.3 | ExperimenterIdSerializerKey<>(4, experimenterId, MeterBandExperimenterCase.class) | ExperimenterSerializerKeyFactory

Table: Serialization

OpenFlow Plugin Project Developer Guide

This section covers topics which are developer specific and which have not been covered in the user guide. Please see the OpenFlow plugin user guide first.

It can be found on the OpenDaylight software download page.

Event Sequences
Session Establishment

The OpenFlow Protocol Library provides the interface SwitchConnectionHandler, which contains the method onSwitchConnected (step 1). This event is raised in the OpenFlow Protocol Library when an OpenFlow device connects to OpenDaylight and is caught in the ConnectionManagerImpl class in the OpenFlow plugin.

There the plugin creates a new instance of the ConnectionContextImpl class (step 1.1), as well as instances of HandshakeManagerImpl (which uses HandshakeListenerImpl) and ConnectionReadyListenerImpl. ConnectionReadyListenerImpl contains the method onConnectionReady(), which is called when the connection is prepared. This method starts the handshake with the OpenFlow device (switch) from the OpenFlow plugin side. The handshake can also be started from the device side; in this case the method shake() from HandshakeManagerImpl is called (steps 1.1.1 and 2).

The handshake consists of an exchange of HELLO messages in addition to an exchange of device features (steps 2.1 and 3). The handshake is completed by HandshakeManagerImpl. After receiving the device features, the HandshakeListenerImpl is notified via the onHandshakeSuccessful() method. After this, the device features, node ID and connection state are stored in a ConnectionContext and the method deviceConnected() of DeviceManagerImpl is called.

When deviceConnected() is called, it does the following:

  1. creates a new transaction chain (step 4.1)

  2. creates a new instance of DeviceContext (step 4.2.2)

  3. initializes the device context: the static context of device is populated by calling createDeviceFeaturesForOF<version>() to populate table, group, meter features and port descriptions (step 4.2.1 and 4.2.1.1)

  4. creates an instance of RequestContext for each type of feature

When the OpenFlow device responds to these requests (step 4.2.1.1) with multipart replies (step 5) they are processed and stored to MD-SAL operational datastore. The createDeviceFeaturesForOF<version>() method returns a Future which is processed in the callback (step 5.1) (part of initializeDeviceContext() in the deviceConnected() method) by calling the method onDeviceCtxLevelUp() from StatisticsManager (step 5.1.1).

The call to createDeviceFeaturesForOF<version>():

  1. creates a new instance of StatisticsContextImpl (step 5.1.1.1)

  2. calls gatherDynamicStatistics() on that instance, which returns a Future which will produce a value when done

    1. this method calls methods to get dynamic data (flows, tables, groups) from the device (steps 5.1.1.2, 5.1.1.2.1, 5.1.1.2.1.1)

    2. if everything works, this data is also stored in the MD-SAL operational datastore

If the Future is successful, it is processed (step 6.1.1) in a callback in StatisticsManagerImpl which:

  1. schedules the next time to poll the device for statistics

  2. sets the device state to synchronized (step 6.1.1.2)

  3. calls onDeviceContextLevelUp() in RpcManagerImpl

The onDeviceContextLevelUp() call:

  1. creates a new instance of RequestContextImpl

  2. registers implementation for supported services

  3. calls onDeviceContextLevelUp() in DeviceManagerImpl (step 6.1.1.2.1.2), which causes the information about the new device to be written to the MD-SAL operational datastore (step 6.1.1.2.2)

Session establishment

Handshake

The first thing that happens when an OpenFlow device connects to OpenDaylight is that the OpenFlow plugin gathers basic information about the device and establishes agreement on key facts like the version of OpenFlow which will be used. This process is called the handshake.

The handshake starts with a HELLO message, which can be sent either by the OpenFlow device or the OpenFlow plugin. After this, there are several possible scenarios:

  1. if the first HELLO message contains a version bitmap, it is possible to determine whether there is a common version of OpenFlow or not:

    1. if there is a single common version, use it and the VERSION IS SETTLED

    2. if there is more than one common version, use the highest (newest) version and the VERSION IS SETTLED

    3. if there are no common versions, the device is DISCONNECTED

  2. if the first HELLO message does not contain a version bitmap, then STEP-BY-STEP negotiation is used

  3. if a second (or subsequent) HELLO message is received, then STEP-BY-STEP negotiation is used

STEP-BY-STEP negotiation:

  • if the last version proposed by the OpenFlow plugin is the same as the version received from the OpenFlow device, then the VERSION IS SETTLED

  • if the version received in the current HELLO message from the device is the same as in the previous one, then negotiation has failed and the device is DISCONNECTED

  • if the last version from the device is greater than the last version proposed by the plugin, wait for the next HELLO message in the hope that it will advertise support for a lower version

  • if the last version from the device is less than the last version proposed by the plugin:

    • propose the highest version the plugin supports that is less than or equal to the version received from the device and wait for the next HELLO message

    • if the plugin doesn’t support a lower version, the device is DISCONNECTED

After the version is selected, we can say that the VERSION IS SETTLED and the OpenFlow plugin can ask the device for its features. At this point the handshake ends.
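
The bitmap part of the negotiation can be illustrated with a small, self-contained sketch; this is not the plugin's actual HandshakeManagerImpl code, and the method names and version constants are illustrative only.

import java.util.BitSet;
import java.util.OptionalInt;

// Illustrative only: picks the highest OpenFlow version common to both bitmaps,
// which corresponds to the "single/highest common version" rules above.
final class VersionNegotiationSketch {

    static OptionalInt highestCommonVersion(BitSet localVersions, BitSet remoteVersions) {
        BitSet common = (BitSet) localVersions.clone();
        common.and(remoteVersions);
        if (common.isEmpty()) {
            return OptionalInt.empty(); // no common version -> the device would be DISCONNECTED
        }
        return OptionalInt.of(common.length() - 1); // highest set bit = newest common version
    }

    public static void main(String[] args) {
        BitSet plugin = new BitSet();
        plugin.set(1);  // OF 1.0 (wire version 0x01)
        plugin.set(4);  // OF 1.3 (wire version 0x04)

        BitSet device = new BitSet();
        device.set(4);

        System.out.println(highestCommonVersion(plugin, device)); // OptionalInt[4]
    }
}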

Handshake process

Adding a Flow

There are two ways to add a flow in the OpenFlow plugin: adding it to the MD-SAL config datastore or calling an RPC. Both of these can be done either using the native MD-SAL interfaces or using RESTCONF. This discussion focuses on calling the RPC.

If a user sends a flow via the REST interface (step 1), invokeRpc() is called on the RpcBroker. The RpcBroker then looks for an appropriate implementation of the interface. In the case of the OpenFlow plugin, this is the addFlow() method of SalFlowServiceImpl (step 1.1). The same thing happens if the RPC is called directly from the native MD-SAL interfaces.

The addFlow() method then

  1. calls the commitEntry() method (step 2) from the OpenFlow Protocol Library which is responsible for sending the flow to the device

  2. creates a new RequestContext by calling createRequestContext() (step 3)

  3. creates a callback to handle any events that happen because of sending the flow to the device

The callback method is triggered when a barrier reply message (step 2.1) is received from the device, indicating that the flow was either installed or an appropriate error message was sent. If the flow was successfully sent to the device, the RPC result is set to success (step 5). (Inside its addFlow() method, SalFlowService contains another callback which catches the notification from the callback for the barrier message.)
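
For illustration, invoking the same RPC directly from application code might look roughly like the following sketch. It follows the style of the other snippets in this guide: salFlowService is assumed to be an already obtained SalFlowService instance, flowInput an already built AddFlowInput, and imports and exception handling are omitted.

// Illustrative only: call the add-flow RPC from code instead of RESTCONF.
Future<RpcResult<AddFlowOutput>> rpcResultFuture = salFlowService.addFlow(flowInput);

RpcResult<AddFlowOutput> rpcResult = rpcResultFuture.get();
if (rpcResult.isSuccessful()) {
    // the barrier reply confirmed the flow was handled by the device (step 5)
} else {
    // inspect rpcResult.getErrors() for details reported by the device
}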

At this point, no information pertaining to the flow has been added to the MD-SAL operational datastore. That is accomplished by the periodic gathering of statistics from OpenFlow devices.

The StatisticsContext for each given OpenFlow device periodically polls it using gatherStatistics() of StatisticsGatheringUtil, which issues an OpenFlow OFPT_MULTIPART_REQUEST - OFPMP_FLOW. The response to this request (step 7) is processed in the StatisticsGatheringUtil class, where flow data is written to the MD-SAL operational datastore via the writeToTransaction() method of DeviceContext.

Add flow

Description of OpenFlow Plugin Modules

The OpenFlow plugin project contains a variety of OpenDaylight modules, which are loaded using the configuration subsystem. This section describes the YANG files used to model each module.

General model (interfaces) - openflow-plugin-cfg.yang.

  • the provided module is defined (identity openflow-provider)

  • and target implementation is assigned (...OpenflowPluginProvider)

module openflow-provider {
   yang-version 1;
   namespace "urn:opendaylight:params:xml:ns:yang:openflow:common:config[urn:opendaylight:params:xml:ns:yang:openflow:common:config]";
   prefix "ofplugin-cfg";

   import config {prefix config; revision-date 2013-04-05; }
   description
       "openflow-plugin-custom-config";
   revision "2014-03-26" {
       description
           "Initial revision";
   }
   identity openflow-provider{
       base config:service-type;
       config:java-class "org.opendaylight.openflowplugin.openflow.md.core.sal.OpenflowPluginProvider";
   }
}

Implementation model - openflow-plugin-cfg-impl.yang

  • the implementation of module is defined (identity openflow-provider-impl)

    • class name of generated implementation is defined (ConfigurableOpenFlowProvider)

  • via augmentation the configuration of module is defined:

    • this module requires instance of binding-aware-broker (container binding-aware-broker)

    • and list of openflow-switch-connection-provider (those are provided by openflowjava, one plugin instance will orchestrate multiple openflowjava modules)

module openflow-provider-impl {
   yang-version 1;
   namespace "urn:opendaylight:params:xml:ns:yang:openflow:common:config:impl[urn:opendaylight:params:xml:ns:yang:openflow:common:config:impl]";
   prefix "ofplugin-cfg-impl";

   import config {prefix config; revision-date 2013-04-05;}
   import openflow-provider {prefix openflow-provider;}
   import openflow-switch-connection-provider {prefix openflow-switch-connection-provider;revision-date 2014-03-28;}
   import opendaylight-md-sal-binding { prefix md-sal-binding; revision-date 2013-10-28;}


   description
       "openflow-plugin-custom-config-impl";

   revision "2014-03-26" {
       description
           "Initial revision";
   }

   identity openflow-provider-impl {
       base config:module-type;
       config:provided-service openflow-provider:openflow-provider;
       config:java-name-prefix ConfigurableOpenFlowProvider;
   }

   augment "/config:modules/config:module/config:configuration" {
       case openflow-provider-impl {
           when "/config:modules/config:module/config:type = 'openflow-provider-impl'";

           container binding-aware-broker {
               uses config:service-ref {
                   refine type {
                       mandatory true;
                       config:required-identity md-sal-binding:binding-broker-osgi-registry;
                   }
               }
           }
           list openflow-switch-connection-provider {
               uses config:service-ref {
                   refine type {
                       mandatory true;
                       config:required-identity openflow-switch-connection-provider:openflow-switch-connection-provider;
                   }
               }
           }
       }
   }
}
Generating config and sal classes out of yangs

In order to involve the suitable code generators, the following is needed in the pom.xml:

<build> ...
  <plugins>
    <plugin>
      <groupId>org.opendaylight.yangtools</groupId>
      <artifactId>yang-maven-plugin</artifactId>
      <executions>
        <execution>
          <goals>
            <goal>generate-sources</goal>
          </goals>
          <configuration>
            <codeGenerators>
              <generator>
                <codeGeneratorClass>
                  org.opendaylight.controller.config.yangjmxgenerator.plugin.JMXGenerator
                </codeGeneratorClass>
                <outputBaseDir>${project.build.directory}/generated-sources/config</outputBaseDir>
                <additionalConfiguration>
                  <namespaceToPackage1>
                    urn:opendaylight:params:xml:ns:yang:controller==org.opendaylight.controller.config.yang
                  </namespaceToPackage1>
                </additionalConfiguration>
              </generator>
              <generator>
                <codeGeneratorClass>
                  org.opendaylight.yangtools.maven.sal.api.gen.plugin.CodeGeneratorImpl
                </codeGeneratorClass>
                <outputBaseDir>${project.build.directory}/generated-sources/sal</outputBaseDir>
              </generator>
              <generator>
                <codeGeneratorClass>org.opendaylight.yangtools.yang.unified.doc.generator.maven.DocumentationGeneratorImpl</codeGeneratorClass>
                <outputBaseDir>${project.build.directory}/site/models</outputBaseDir>
              </generator>
            </codeGenerators>
            <inspectDependencies>true</inspectDependencies>
          </configuration>
        </execution>
      </executions>
      <dependencies>
        <dependency>
          <groupId>org.opendaylight.controller</groupId>
          <artifactId>yang-jmx-generator-plugin</artifactId>
          <version>0.2.5-SNAPSHOT</version>
        </dependency>
        <dependency>
          <groupId>org.opendaylight.yangtools</groupId>
          <artifactId>maven-sal-api-gen-plugin</artifactId>
          <version>${yangtools.version}</version>
          <type>jar</type>
        </dependency>
      </dependencies>
    </plugin>
    ...
  • JMX generator (target/generated-sources/config)

  • sal CodeGeneratorImpl (target/generated-sources/sal)

Altering generated files

These files are generated under src/main/java in the package referenced in the YANG files (if they already exist, the generator will not overwrite them):

  • ConfigurableOpenFlowProviderModuleFactory

    here the instantiateModule methods are extended in order to capture and inject the OSGi BundleContext into the module, so it can be injected into the final implementation (OpenflowPluginProvider): module.setBundleContext(bundleContext);

  • ConfigurableOpenFlowProviderModule

    here the createInstance method is extended in order to inject the OSGi BundleContext into the module implementation: pluginProvider.setContext(bundleContext);

Configuration xml file

The configuration file contains:

  • required capabilities

    • modules definitions from openflowjava

    • modules definitions from openflowplugin

  • modules definition

    • openflow:switch:connection:provider:impl (listening on port 6633, name=openflow-switch-connection-provider-legacy-impl)

    • openflow:switch:connection:provider:impl (listening on port 6653, name=openflow-switch-connection-provider-default-impl)

    • openflow:common:config:impl (having 2 services (wrapping those 2 previous modules) and binding-broker-osgi-registry injected)

  • provided services

    • openflow-switch-connection-provider-default

    • openflow-switch-connection-provider-legacy

    • openflow-provider

<snapshot>
 <required-capabilities>
   <capability>urn:opendaylight:params:xml:ns:yang:openflow:switch:connection:provider:impl?module=openflow-switch-connection-provider-impl&amp;revision=2014-03-28</capability>
   <capability>urn:opendaylight:params:xml:ns:yang:openflow:switch:connection:provider?module=openflow-switch-connection-provider&amp;revision=2014-03-28</capability>
   <capability>urn:opendaylight:params:xml:ns:yang:openflow:common:config:impl?module=openflow-provider-impl&amp;revision=2014-03-26</capability>
   <capability>urn:opendaylight:params:xml:ns:yang:openflow:common:config?module=openflow-provider&amp;revision=2014-03-26</capability>
 </required-capabilities>

 <configuration>


     <modules xmlns="urn:opendaylight:params:xml:ns:yang:controller:config">
       <module>
         <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:openflow:switch:connection:provider:impl">prefix:openflow-switch-connection-provider-impl</type>
         <name>openflow-switch-connection-provider-default-impl</name>
         <port>6633</port>
         <switch-idle-timeout>15000</switch-idle-timeout>
       </module>
       <module>
         <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:openflow:switch:connection:provider:impl">prefix:openflow-switch-connection-provider-impl</type>
         <name>openflow-switch-connection-provider-legacy-impl</name>
         <port>6653</port>
         <switch-idle-timeout>15000</switch-idle-timeout>
       </module>


       <module>
         <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:openflow:common:config:impl">prefix:openflow-provider-impl</type>
         <name>openflow-provider-impl</name>

         <openflow-switch-connection-provider>
           <type xmlns:ofSwitch="urn:opendaylight:params:xml:ns:yang:openflow:switch:connection:provider">ofSwitch:openflow-switch-connection-provider</type>
           <name>openflow-switch-connection-provider-default</name>
         </openflow-switch-connection-provider>
         <openflow-switch-connection-provider>
           <type xmlns:ofSwitch="urn:opendaylight:params:xml:ns:yang:openflow:switch:connection:provider">ofSwitch:openflow-switch-connection-provider</type>
           <name>openflow-switch-connection-provider-legacy</name>
         </openflow-switch-connection-provider>


         <binding-aware-broker>
           <type xmlns:binding="urn:opendaylight:params:xml:ns:yang:controller:md:sal:binding">binding:binding-broker-osgi-registry</type>
           <name>binding-osgi-broker</name>
         </binding-aware-broker>
       </module>
     </modules>

     <services xmlns="urn:opendaylight:params:xml:ns:yang:controller:config">
       <service>
         <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:openflow:switch:connection:provider">prefix:openflow-switch-connection-provider</type>
         <instance>
           <name>openflow-switch-connection-provider-default</name>
           <provider>/modules/module[type='openflow-switch-connection-provider-impl'][name='openflow-switch-connection-provider-default-impl']</provider>
         </instance>
         <instance>
           <name>openflow-switch-connection-provider-legacy</name>
           <provider>/modules/module[type='openflow-switch-connection-provider-impl'][name='openflow-switch-connection-provider-legacy-impl']</provider>
         </instance>
       </service>

       <service>
         <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:openflow:common:config">prefix:openflow-provider</type>
         <instance>
           <name>openflow-provider</name>
           <provider>/modules/module[type='openflow-provider-impl'][name='openflow-provider-impl']</provider>
         </instance>
       </service>
     </services>


 </configuration>
</snapshot>
API changes

In order to provide multiple instances of modules from openflowjava, there is an API change. Previously, the OFPlugin got access to the SwitchConnectionProvider exposed by OFJava and injected a collection of configurations, so that a new TCP listening server instance was created for each configuration. Now those configurations are provided by the configSubsystem, and the configured modules (wrapping the original SwitchConnectionProvider) are injected into the OFPlugin (wrapping SwitchConnectionHandler).

Providing config file (IT, local distribution/base, integration/distributions/base)
openflowplugin-it

Here the whole configuration is contained in one file (controller.xml). The required entries needed in order to start up and wire OFPlugin + OFJava are simply added there.

OFPlugin/distribution/base

Here a new config file has been added (src/main/resources/configuration/initial/42-openflow-protocol-impl.xml) and is copied to the config/initial subfolder of the build.

integration/distributions/build

In order to push the actual config into the config/initial subfolder of distributions/base in the integration project, a new artifact was created in OFPlugin - openflowplugin-controller-config - containing only the config XML file under src/main/resources. Another change was committed into the integration project: during the build, this config XML is extracted and copied to the final folder in order to be accessible during the controller run.

Internal message statistics API

To aid in testing and diagnosis, the OpenFlow plugin provides information about the number and rate of different internal events.

The implementation does two things: it collects event counts and exposes those counts. Event counts are grouped by message type, e.g., PacketInMessage, and checkpoint, e.g., TO_SWITCH_ENQUEUED_SUCCESS. Once gathered, the results are logged as well as exposed via the OSGi command line (deprecated) and JMX.

Collect

Each message is counted as it passes through various processing checkpoints. The following checkpoints are defined as a Java enum and tracked:

/**
  * statistic groups overall in OFPlugin
  */
enum STATISTIC_GROUP {
     /** message from switch, enqueued for processing */
     FROM_SWITCH_ENQUEUED,
     /** message from switch translated successfully - source */
     FROM_SWITCH_TRANSLATE_IN_SUCCESS,
     /** message from switch translated successfully - target */
     FROM_SWITCH_TRANSLATE_OUT_SUCCESS,
     /** message from switch where translation failed - source */
     FROM_SWITCH_TRANSLATE_SRC_FAILURE,
     /** message from switch finally published into MD-SAL */
     FROM_SWITCH_PUBLISHED_SUCCESS,
     /** message from switch - publishing into MD-SAL failed */
     FROM_SWITCH_PUBLISHED_FAILURE,

     /** message from MD-SAL to switch via RPC enqueued */
     TO_SWITCH_ENQUEUED_SUCCESS,
     /** message from MD-SAL to switch via RPC NOT enqueued */
     TO_SWITCH_ENQUEUED_FAILED,
     /** message from MD-SAL to switch - sent to OFJava successfully */
     TO_SWITCH_SUBMITTED_SUCCESS,
     /** message from MD-SAL to switch - sent to OFJava but failed*/
     TO_SWITCH_SUBMITTED_FAILURE
}

When a message passes through any of those checkpoints, the counter assigned to the corresponding checkpoint and message type is incremented by 1.
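
Conceptually, the counting amounts to incrementing a per-(checkpoint, message type) counter. The following self-contained sketch illustrates the idea; it is not the plugin's actual MessageSpy implementation.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Illustrative counter keyed by "<checkpoint>:<message class>"; the real plugin
// exposes these counts via logs, the OSGi console and JMX as described below.
final class MessageCounterSketch {
    private final Map<String, AtomicLong> counters = new ConcurrentHashMap<>();

    void spyMessage(String checkpoint, Class<?> messageType) {
        String key = checkpoint + ":" + messageType.getSimpleName();
        counters.computeIfAbsent(key, k -> new AtomicLong()).incrementAndGet();
    }

    long count(String checkpoint, Class<?> messageType) {
        AtomicLong counter = counters.get(checkpoint + ":" + messageType.getSimpleName());
        return counter == null ? 0 : counter.get();
    }
}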

Expose statistics

As described above, there are three ways to access the statistics:

  • OSGi command line (this is considered deprecated)

    osgi> dumpMsgCount

  • OpenDaylight logging console (statistics are logged here every 10 seconds)

    required logback settings: <logger name="org.opendaylight.openflowplugin.openflow.md.queue.MessageSpyCounterImpl" level="DEBUG"/>

  • JMX (via JConsole)

    start OpenFlow plugin with the -jmx parameter

    start JConsole by running jconsole

    the JConsole MBeans tab should contain org.opendaylight.controller

    RuntimeBean has a msg-spy-service-impl

    Operations provides makeMsgStatistics report functionality

Example results
OFplugin Debug stats.png

DEBUG o.o.o.s.MessageSpyCounterImpl - FROM_SWITCH_ENQUEUED: MSG[PortStatusMessage] -> +0 | 1
DEBUG o.o.o.s.MessageSpyCounterImpl - FROM_SWITCH_ENQUEUED: MSG[MultipartReplyMessage] -> +24 | 81
DEBUG o.o.o.s.MessageSpyCounterImpl - FROM_SWITCH_ENQUEUED: MSG[PacketInMessage] -> +8 | 111
DEBUG o.o.o.s.MessageSpyCounterImpl - FROM_SWITCH_TRANSLATE_IN_SUCCESS: MSG[PortStatusMessage] -> +0 | 1
DEBUG o.o.o.s.MessageSpyCounterImpl - FROM_SWITCH_TRANSLATE_IN_SUCCESS: MSG[MultipartReplyMessage] -> +24 | 81
DEBUG o.o.o.s.MessageSpyCounterImpl - FROM_SWITCH_TRANSLATE_IN_SUCCESS: MSG[PacketInMessage] -> +8 | 111
DEBUG o.o.o.s.MessageSpyCounterImpl - FROM_SWITCH_TRANSLATE_OUT_SUCCESS: MSG[QueueStatisticsUpdate] -> +3 | 7
DEBUG o.o.o.s.MessageSpyCounterImpl - FROM_SWITCH_TRANSLATE_OUT_SUCCESS: MSG[NodeUpdated] -> +0 | 3
DEBUG o.o.o.s.MessageSpyCounterImpl - FROM_SWITCH_TRANSLATE_OUT_SUCCESS: MSG[NodeConnectorStatisticsUpdate] -> +3 | 7
DEBUG o.o.o.s.MessageSpyCounterImpl - FROM_SWITCH_TRANSLATE_OUT_SUCCESS: MSG[GroupDescStatsUpdated] -> +3 | 7
DEBUG o.o.o.s.MessageSpyCounterImpl - FROM_SWITCH_TRANSLATE_OUT_SUCCESS: MSG[FlowsStatisticsUpdate] -> +3 | 19
DEBUG o.o.o.s.MessageSpyCounterImpl - FROM_SWITCH_TRANSLATE_OUT_SUCCESS: MSG[PacketReceived] -> +8 | 111
DEBUG o.o.o.s.MessageSpyCounterImpl - FROM_SWITCH_TRANSLATE_OUT_SUCCESS: MSG[MeterFeaturesUpdated] -> +0 | 3
DEBUG o.o.o.s.MessageSpyCounterImpl - FROM_SWITCH_TRANSLATE_OUT_SUCCESS: MSG[GroupStatisticsUpdated] -> +3 | 7
DEBUG o.o.o.s.MessageSpyCounterImpl - FROM_SWITCH_TRANSLATE_OUT_SUCCESS: MSG[GroupFeaturesUpdated] -> +0 | 3
DEBUG o.o.o.s.MessageSpyCounterImpl - FROM_SWITCH_TRANSLATE_OUT_SUCCESS: MSG[MeterConfigStatsUpdated] -> +3 | 7
DEBUG o.o.o.s.MessageSpyCounterImpl - FROM_SWITCH_TRANSLATE_OUT_SUCCESS: MSG[MeterStatisticsUpdated] -> +3 | 7
DEBUG o.o.o.s.MessageSpyCounterImpl - FROM_SWITCH_TRANSLATE_OUT_SUCCESS: MSG[NodeConnectorUpdated] -> +0 | 12
DEBUG o.o.o.s.MessageSpyCounterImpl - FROM_SWITCH_TRANSLATE_OUT_SUCCESS: MSG[FlowTableStatisticsUpdate] -> +3 | 8
DEBUG o.o.o.s.MessageSpyCounterImpl - FROM_SWITCH_TRANSLATE_SRC_FAILURE: no activity detected
DEBUG o.o.o.s.MessageSpyCounterImpl - FROM_SWITCH_PUBLISHED_SUCCESS: MSG[QueueStatisticsUpdate] -> +3 | 7
DEBUG o.o.o.s.MessageSpyCounterImpl - FROM_SWITCH_PUBLISHED_SUCCESS: MSG[NodeUpdated] -> +0 | 3
DEBUG o.o.o.s.MessageSpyCounterImpl - FROM_SWITCH_PUBLISHED_SUCCESS: MSG[NodeConnectorStatisticsUpdate] -> +3 | 7
DEBUG o.o.o.s.MessageSpyCounterImpl - FROM_SWITCH_PUBLISHED_SUCCESS: MSG[GroupDescStatsUpdated] -> +3 | 7
DEBUG o.o.o.s.MessageSpyCounterImpl - FROM_SWITCH_PUBLISHED_SUCCESS: MSG[FlowsStatisticsUpdate] -> +3 | 19
DEBUG o.o.o.s.MessageSpyCounterImpl - FROM_SWITCH_PUBLISHED_SUCCESS: MSG[PacketReceived] -> +8 | 111
DEBUG o.o.o.s.MessageSpyCounterImpl - FROM_SWITCH_PUBLISHED_SUCCESS: MSG[MeterFeaturesUpdated] -> +0 | 3
DEBUG o.o.o.s.MessageSpyCounterImpl - FROM_SWITCH_PUBLISHED_SUCCESS: MSG[GroupStatisticsUpdated] -> +3 | 7
DEBUG o.o.o.s.MessageSpyCounterImpl - FROM_SWITCH_PUBLISHED_SUCCESS: MSG[GroupFeaturesUpdated] -> +0 | 3
DEBUG o.o.o.s.MessageSpyCounterImpl - FROM_SWITCH_PUBLISHED_SUCCESS: MSG[MeterConfigStatsUpdated] -> +3 | 7
DEBUG o.o.o.s.MessageSpyCounterImpl - FROM_SWITCH_PUBLISHED_SUCCESS: MSG[MeterStatisticsUpdated] -> +3 | 7
DEBUG o.o.o.s.MessageSpyCounterImpl - FROM_SWITCH_PUBLISHED_SUCCESS: MSG[NodeConnectorUpdated] -> +0 | 12
DEBUG o.o.o.s.MessageSpyCounterImpl - FROM_SWITCH_PUBLISHED_SUCCESS: MSG[FlowTableStatisticsUpdate] -> +3 | 8
DEBUG o.o.o.s.MessageSpyCounterImpl - FROM_SWITCH_PUBLISHED_FAILURE: no activity detected
DEBUG o.o.o.s.MessageSpyCounterImpl - TO_SWITCH_ENQUEUED_SUCCESS: MSG[AddFlowInput] -> +0 | 12
DEBUG o.o.o.s.MessageSpyCounterImpl - TO_SWITCH_ENQUEUED_FAILED: no activity detected
DEBUG o.o.o.s.MessageSpyCounterImpl - TO_SWITCH_SUBMITTED_SUCCESS: MSG[AddFlowInput] -> +0 | 12
DEBUG o.o.o.s.MessageSpyCounterImpl - TO_SWITCH_SUBMITTED_FAILURE: no activity detected
Application: Forwarding Rules Synchronizer
Basics
Description

Forwarding Rules Synchronizer (FRS) is a newer version of the Forwarding Rules Manager (FRM). It was created to solve most of the shortcomings of FRM: FRS handles errors with a retry mechanism, sends a barrier if needed, uses one service for flows, groups and meters, and sends fewer change requests to the device, since it calculates the difference and uses a compression queue.

It is located in the Java package:

package org.opendaylight.openflowplugin.applications.frsync;
Listeners
  • 1x config - FlowCapableNode

  • 1x operational - Node

System of work
  • one listener in config datastore waiting for changes

    • update cache

    • skip event if operational not present for node

    • send syncup entry to reactor for synchronization

      • node added: after part of modification and whole operational snapshot

      • node updated: after and before part of modification

      • node deleted: null and before part of modification

  • one listener in operational datastore waiting for changes

    • update cache

    • on device connected

      • register for cluster services

    • on device disconnected

      • remove from cache

      • unregister for cluster services

    • if registered for reconciliation

      • do reconciliation through syncup (only when config present)

  • reactor (provides syncup w/decorators assembled in this order)

    • Cluster decorator - skip action if not master for device

    • FutureZip decorator (FutureZip extends Future decorator)

      • Future - run delegate syncup in future - submit task to executor service

      • FutureZip - provides state compression - compress optimized config delta if waiting for execution with new one

    • Guard decorator - per device level locking

    • Retry decorator - register for reconciliation if syncup failed

    • Reactor impl - calculate diff from after/before parts of syncup entry and execute

Strategy

The old FRM uses an incremental strategy, with all changes made one by one, whereas FRS uses a flat batch system with changes made in bulk. It uses one service, SalFlatBatchService, instead of three (flow, group, meter).

Boron release

FRS is used in Boron as a separate feature and is not loaded by any other feature; it has to be installed separately:

odl-openflowplugin-app-forwardingrules-sync
FRS additions
Retry mechanism
  • is started when a change request to the device returns as failed (register for reconcile)

  • waits for the next consistent operational snapshot and does reconciliation with the actual config (not only the diff)

ZipQueue
  • only the diff (before/after) between last config changes is sent to device

  • when there are more config changes for device in a row waiting to be processed they are compressed into one entry (after is still replaced with the latest)
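
A minimal sketch of that compression step, using a hypothetical SyncupEntry holder rather than the actual FRS classes:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical illustration of the state compression: if an entry is already
// queued for a device, keep its original "before" snapshot and only replace
// "after" with the latest configuration snapshot.
final class ZipQueueSketch {

    static final class SyncupEntry {
        final Object before;
        final Object after;

        SyncupEntry(Object before, Object after) {
            this.before = before;
            this.after = after;
        }
    }

    private final Map<String, SyncupEntry> queued = new ConcurrentHashMap<>();

    void enqueue(String nodeId, SyncupEntry latest) {
        // compress with any pending entry for the same node
        queued.merge(nodeId, latest,
                (pending, incoming) -> new SyncupEntry(pending.before, incoming.after));
    }

    SyncupEntry poll(String nodeId) {
        return queued.remove(nodeId);
    }
}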

Cluster-aware
  • FRS is cluster aware using ClusteringSingletonServiceProvider from the MD-SAL

  • on mastership change reconciliation is done (register for reconcile)

SalFlatBatchService

FRS uses a service with barrier waiting logic implemented between dependent objects.

Service: SalFlatBatchService
Basics

SalFlatBatchService was created alongside the forwardingrules-sync application as the service that applications should use by default. It takes a single input containing bags of flow/group/meter objects together with their common add/update/remove action, so in practice you send only one input (of specific bags) to this service.

  • interface: org.opendaylight.yang.gen.v1.urn.opendaylight.flat.batch.service.rev160321.SalFlatBatchService

  • implementation: org.opendaylight.openflowplugin.impl.services.SalFlatBatchServiceImpl

  • method: processFlatBatch(input)

  • input: org.opendaylight.yang.gen.v1.urn.opendaylight.flat.batch.service.rev160321.ProcessFlatBatchInput

Usage benefits
  • possibility to use only one input bag with particular failure analysis preserved

  • automatic barrier decision (chain+wait)

  • less RPC routing in cluster environment (since one call encapsulates all others)

ProcessFlatBatchInput

Input for SalFlatBatchService (ProcessFlatBatchInput object) consists of:

  • node - NodeRef

  • batch steps - List<Batch> - defined action + bag of objects + order for failures analysis

    • BatchChoice - yang-modeled action choice (e.g. FlatBatchAddFlowCase) containing batch bag of objects (e.g. flows to be added)

    • BatchOrder - (integer) order of batch step (should be incremented by single action)

  • exitOnFirstError - boolean flag

Workflow
  1. prepare list of steps based on input

  2. mark barriers in steps where needed

  3. prepare particular F/G/M-batch service calls from Flat-batch steps

    • F/G/M-batch services encapsulate bulk of single service calls

    • they actually chain a barrier after processing all the single calls if the actual step is marked as barrier-needed

  4. chain futures and start executing (see the sketch after this list)

    • start all actions that can be run simultaneously (chain all on one starting point)

    • in case there is a step marked as barrier-needed

      • wait for all fired jobs up to one with barrier

      • merge rpc results (status, errors, batch failures) into single one

      • the latest job with barrier is new starting point for chaining
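
A simplified illustration of this chaining using CompletableFuture; the real implementation chains the F/G/M-batch RPC futures, and the step names below are placeholders:

import java.util.Arrays;
import java.util.List;
import java.util.concurrent.CompletableFuture;

// Illustrative only: steps up to a barrier are started from a common point and may
// run simultaneously; the barrier waits for all of them and its completion becomes
// the new starting point for chaining the next group of steps.
final class FlatBatchChainingSketch {

    static CompletableFuture<Void> runUntilBarrier(List<Runnable> stepsBeforeBarrier) {
        CompletableFuture<Void> startingPoint = CompletableFuture.completedFuture(null);

        CompletableFuture<?>[] fired = stepsBeforeBarrier.stream()
                .map(step -> startingPoint.thenRunAsync(step))
                .toArray(CompletableFuture<?>[]::new);

        // the "barrier": wait for every fired job before continuing
        return CompletableFuture.allOf(fired);
    }

    public static void main(String[] args) {
        runUntilBarrier(Arrays.asList(
                () -> System.out.println("add groups"),
                () -> System.out.println("add meters")))
            .thenRun(() -> System.out.println("barrier passed, now add flows"))
            .join();
    }
}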

Services encapsulation
  • SalFlatBatchService

    • SalFlowBatchService

      • SalFlowService

    • SalGroupBatchService

      • SalGroupService

    • SalMeterBatchService

      • SalMeterService

Barrier decision
  • the decision is based on the actual step and all previous steps since the latest barrier

  • if the condition in the table below is satisfied, the latest step before the actual one is marked as barrier-needed (a sketch of this decision follows the table)

actual step             | previous steps contain
FLOW_ADD or FLOW_UPDATE | GROUP_ADD or METER_ADD
GROUP_ADD               | GROUP_ADD or GROUP_UPDATE
GROUP_REMOVE            | FLOW_UPDATE or FLOW_REMOVE or GROUP_UPDATE or GROUP_REMOVE
METER_REMOVE            | FLOW_UPDATE or FLOW_REMOVE
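
The decision table can be restated as a small, self-contained sketch; the enum and method names are stand-ins, not the plugin's actual types:

import java.util.Set;

// Illustrative re-statement of the barrier decision table above.
final class BarrierDecisionSketch {

    enum StepType { FLOW_ADD, FLOW_UPDATE, FLOW_REMOVE, GROUP_ADD, GROUP_UPDATE, GROUP_REMOVE, METER_ADD, METER_REMOVE }

    static boolean barrierNeededBefore(StepType actual, Set<StepType> previousSinceLastBarrier) {
        switch (actual) {
            case FLOW_ADD:
            case FLOW_UPDATE:
                return containsAny(previousSinceLastBarrier, StepType.GROUP_ADD, StepType.METER_ADD);
            case GROUP_ADD:
                return containsAny(previousSinceLastBarrier, StepType.GROUP_ADD, StepType.GROUP_UPDATE);
            case GROUP_REMOVE:
                return containsAny(previousSinceLastBarrier,
                        StepType.FLOW_UPDATE, StepType.FLOW_REMOVE, StepType.GROUP_UPDATE, StepType.GROUP_REMOVE);
            case METER_REMOVE:
                return containsAny(previousSinceLastBarrier, StepType.FLOW_UPDATE, StepType.FLOW_REMOVE);
            default:
                return false;
        }
    }

    private static boolean containsAny(Set<StepType> steps, StepType... candidates) {
        for (StepType candidate : candidates) {
            if (steps.contains(candidate)) {
                return true;
            }
        }
        return false;
    }
}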

Error handling

There is a flag in ProcessFlatBatchInput to stop processing on the first error.

  • true - if a partial step is not successful, stop the whole processing

  • false (default) - try to process all steps regardless of partial results

If an error occurs in any of the partial steps, the upper FlatBatchService call will return as unsuccessful in both cases. However, every partial error is attached to the general flat batch result along with a BatchFailure (which contains the BatchOrder and BatchItemIdChoice to identify the failed step).

Cluster singleton approach in plugin
Basics
Description

The existing OpenDaylight service deployment model assumes symmetric clusters, where all services are activated on all nodes in the cluster. However, many services require that there is a single active service instance per cluster. We call such services singleton services. The Entity Ownership Service (EOS) represents the base leadership choice for one Entity instance. Every cluster singleton service type must have its own Entity, and every cluster singleton service instance must have its own Entity Candidate. Every registered Entity Candidate should be notified about its actual role. All this work is done by MD-SAL, so the OpenFlow plugin only needs to register as a service in the SingletonClusteringServiceProvider provided by MD-SAL.

Change against using EOS service listener

In this new clustering singleton approach, the plugin uses an API from the MD-SAL project: SingletonClusteringService, which comes with three methods.

instantiateServiceInstance()
closeServiceInstance()
getIdentifier()

This service has to be registered to a SingletonClusteringServiceProvider from MD-SAL, which takes care of mastership changes in the cluster environment.

The first method in SingletonClusteringService is called when the cluster node becomes MASTER. The second is called when the status changes to SLAVE or the device is disconnected from the cluster. In the last method the plugin returns the NodeId as the ServiceGroupIdentifier.

Startup after device is connected

On plugin startup, we need to first initialize four managers, one for each working area providing information and services:

  • Device manager

  • RPC manager

  • Role manager

  • Statistics manager

After the device is connected, the Device manager listener gets the event and starts creating the context for this connection.

Startup after device connection

Services are managed by the SingletonClusteringServiceProvider from the MD-SAL project, so at startup we simply create an instance of LifecycleService and register all contexts into it.

Role change

The plugin is no longer registered as an Entity Ownership Service (EOS) listener, and therefore does not need to (and cannot) respond to EOS ownership changes.

Service start

Services start asynchronously, but the start is managed by LifecycleService. If something goes wrong, LifecycleService stops starting services in the context, which speeds up the reconnect process. The services themselves have not changed, and the plugin needs to start all of the following:

  • Activating transaction chain manager

  • Initial gathering of device statistics

  • Initial submit to DS

  • Sending role MASTER to device

  • RPC services registration

  • Statistics gathering start

Service stop

If closeServiceInstance occurs, the plugin simply tries to store all unsubmitted transactions and close the transaction chain manager, stop the RPC services, stop statistics gathering, and after all that unregister the txEntity from the EOS.

Karaf feature tree
Openflow plugin karaf feature tree

Short HOWTO create such a tree.

Wiring up notifications
Introduction

We need to translate OpenFlow messages coming up from the OpenFlow Protocol Library into MD-SAL Notification objects and then publish them to the MD-SAL.

Mechanics
  1. Create a Translator class

  2. Register the Translator

  3. Register the notificationPopListener to handle your Notification Objects

Create a Translator class

You can see an example in PacketInTranslator.java.

First, simply create the class

public class PacketInTranslator implements IMDMessageTranslator<OfHeader, List<DataObject>> {

Then implement the translate function:

public class PacketInTranslator implements IMDMessageTranslator<OfHeader, List<DataObject>> {

    protected static final Logger LOG = LoggerFactory
            .getLogger(PacketInTranslator.class);
    @Override
    public PacketReceived translate(SwitchConnectionDistinguisher cookie,
            SessionContext sc, OfHeader msg) {
            ...
    }

Make sure to check that you are dealing with the expected type and cast it:

if(msg instanceof PacketInMessage) {
    PacketInMessage message = (PacketInMessage)msg;
    List<DataObject> list = new CopyOnWriteArrayList<DataObject>();

Do your translation work and return:

PacketReceived pktInEvent = pktInBuilder.build();
list.add(pktInEvent);
return list;
Register your Translator Class

Next, you need to go to MDController.java and register your Translator in init():

public void init() {
        LOG.debug("Initializing!");
        messageTranslators = new ConcurrentHashMap<>();
        popListeners = new ConcurrentHashMap<>();
        //TODO: move registration to factory
        addMessageTranslator(ErrorMessage.class, OF10, new ErrorTranslator());
        addMessageTranslator(ErrorMessage.class, OF13, new ErrorTranslator());
        addMessageTranslator(PacketInMessage.class,OF10, new PacketInTranslator());
        addMessageTranslator(PacketInMessage.class,OF13, new PacketInTranslator());

Notice that there is a separate registration for each of OpenFlow 1.0 and OpenFlow 1.3. Basically, you indicate the type of OpenFlow Protocol Library message you wish to translate for, the OpenFlow version, and an instance of your Translator.

Register your MD-SAL Message for Notification to the MD-SAL

Now, also in MDController.init() register to have the notificationPopListener handle your MD-SAL Message:

addMessagePopListener(PacketReceived.class, new NotificationPopListener<DataObject>());
You are done

That’s all there is to it. Now when a message comes up from the OpenFlow Protocol Library, it will be translated and published to the MD-SAL.

Message Order Preservation

While the Helium release of the OpenFlow plugin relied on queues to ensure messages were delivered in order, subsequent releases instead ensure that all the messages from a given device are delivered using the same thread, and thus message order is guaranteed without queues. The OpenFlow plugin allocates a number of threads equal to twice the number of processor cores on the machine it runs on, e.g., 8 threads if the machine has 4 cores.
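
As a trivial sketch of that allocation rule (the actual thread-pool wiring is internal to the plugin):

// twice the number of available processor cores, as described above
int threadCount = Runtime.getRuntime().availableProcessors() * 2;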

Note

While each device is assigned to one thread, multiple devices can be assigned to the same thread.

OVSDB Developer Guide
OVSDB Integration

The Open vSwitch database (OVSDB) Southbound Plugin component for OpenDaylight implements the OVSDB RFC 7047 management protocol that allows the southbound configuration of switches that support OVSDB. The component comprises a library and a plugin. The OVSDB protocol uses JSON-RPC calls to manipulate a physical or virtual switch that supports OVSDB. Many vendors support OVSDB on various hardware platforms. The OpenDaylight controller uses the library project to interact with an OVS instance.

Note

Read the OVSDB User Guide before you begin development.

OpenDaylight OVSDB southbound plugin architecture and design

Open vSwitch (OVS) is generally accepted as the unofficial standard for virtual switching in open, hypervisor-based solutions. Every other virtual switch implementation, proprietary or otherwise, uses OVS in some form. For information on OVS, see Open vSwitch.

In Software Defined Networking (SDN), controllers and applications interact using two channels: OpenFlow and OVSDB. OpenFlow addresses the forwarding side of the OVS functionality, while OVSDB addresses the management plane. A simple and concise overview of the Open vSwitch Database (OVSDB) is available at: http://networkstatic.net/getting-started-ovsdb/

Overview of OpenDaylight Controller architecture

The OpenDaylight controller platform is designed as a highly modular and plugin based middleware that serves various network applications in a variety of use-cases. The modularity is achieved through the Java OSGi framework. The controller consists of many Java OSGi bundles that work together to provide the required controller functionalities.

The bundles can be placed in the following broad categories:
  • Network Service Functional Modules (Examples: Topology Manager, Inventory Manager, Forwarding Rules Manager,and others)

  • NorthBound API Modules (Examples: Topology APIs, Bridge Domain APIs, Neutron APIs, Connection Manager APIs, and others)

  • Service Abstraction Layer (SAL) (Inventory Services, DataPath Services, Topology Services, Network Config, and others)

  • SouthBound Plugins (OpenFlow Plugin, OVSDB Plugin, OpenDove Plugin, and others)

  • Application Modules (Simple Forwarding, Load Balancer)

Each layer of the Controller architecture performs specified tasks, and hence aids in modularity. While the Northbound API layer addresses all the REST-Based application needs, the SAL layer takes care of abstracting the SouthBound plugin protocol specifics from the Network Service functions.

Each of the SouthBound Plugins serves a different purpose, with some overlap. For example, the OpenFlow plugin might serve the data-plane needs of an OVS element, while the OVSDB plugin can serve the management-plane needs of the same OVS element. While the OpenFlow plugin talks the OpenFlow protocol with the OVS element, the OVSDB plugin uses the OVSDB schema over JSON-RPC transport.

OVSDB southbound plugin
The Open vSwitch Database Management Protocol draft (draft-02) and the Open vSwitch Manual provide theoretical information about OVSDB. The OVSDB protocol draft is generic enough to lay the groundwork for the wire protocol and database operations, while the OVS Manual currently covers 13 tables, leaving space for future OVS expansion and for vendor expansions on proprietary implementations. The OVSDB protocol is a database records transport protocol using JSON-RPC 1.0. For information on the protocol structure, see Getting Started with OVSDB. The OpenDaylight OVSDB southbound plugin consists of one or more OSGi bundles addressing the following services or functionalities:
  • Connection Service - Based on Netty

  • Network Configuration Service

  • Bidirectional JSON-RPC Library

  • OVSDB Schema definitions and Object mappers

  • Overlay Tunnel management

  • OVSDB to OpenFlow plugin mapping service

  • Inventory Service

Connection service
One of the primary services that most southbound plugins provide in OpenDaylight is a Connection Service. The service provides protocol-specific connectivity to network elements, and supports the connectivity management services as specified by the OpenDaylight Connection Manager. The connectivity services include:
  • Connection to a specified element given IP-address, L4-port, and other connectivity options (such as authentication,…)

  • Disconnection from an element

  • Handling Cluster Mode change notifications to support the OpenDaylight Clustering/High-Availability feature

Network Configuration Service
The goal of the OpenDaylight Network Configuration services is to provide the complete management-plane solutions needed to successfully install, configure, and deploy the various SDN-based network services. These are generic services which can be implemented in part or in full by any southbound protocol plugin. The southbound plugins can be either of the following:
  • The new network virtualization protocol plugins such as OVSDB JSON-RPC

  • The traditional management protocols such as SNMP, or any others in between.

The above definition, and more information on Network Configuration Services, is available at: https://wiki.opendaylight.org/view/OpenDaylight_Controller:NetworkConfigurationServices

Bidirectional JSON-RPC library

The OVSDB plugin implements a bidirectional JSON-RPC library. The library is designed as a module that manages the Netty connection towards the element.

The main responsibilities of this Library are:
  • Demarshal and marshal JSON Strings to JSON objects

  • Demarshal and marshal JSON Strings from and to the Network Element.

OVSDB Schema definitions and Object mappers

The OVSDB Schema definitions and Object Mapping layer sits above the JSON-RPC library. It maps the generic JSON objects to OVSDB schema POJOs (Plain Old Java Objects) and vice versa. This layer mostly provides the Java object definitions for the corresponding OVSDB schemas (13 of them) and also provides friendlier API abstractions on top of these object data. This helps in hiding the JSON semantics from functional modules such as the Configuration Service and Tunnel management.
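
As a generic illustration of the JSON-to-POJO mapping idea (not the plugin's actual mapper; the Bridge class below is hypothetical), a Jackson-based sketch could look like this:

import com.fasterxml.jackson.databind.ObjectMapper;

public class JsonToPojoExample {
    // Hypothetical POJO standing in for an OVSDB schema object such as a bridge row.
    public static class Bridge {
        public String name;
        public String datapath_type;
    }

    public static void main(String[] args) throws Exception {
        String json = "{\"name\": \"br-int\", \"datapath_type\": \"system\"}";

        // Jackson converts the generic JSON object into a typed Java object,
        // hiding the JSON semantics from the modules that consume it.
        ObjectMapper mapper = new ObjectMapper();
        Bridge bridge = mapper.readValue(json, Bridge.class);
        System.out.println(bridge.name + " / " + bridge.datapath_type);
    }
}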

On the demarshaling side, the mapping logic differentiates the Request and Response messages as follows:
  • Request messages are mapped by their "method"

  • Response messages are mapped by their IDs, which were originally populated by the Request message.

The JSON semantics of these OVSDB schemas is quite complex. The following figures summarize two of the end-to-end scenarios (a small correlation sketch follows the figures):
End-to-end handling of a Create Bridge request

End-to-end handling of a monitor response
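
As a purely illustrative sketch of the ID-based response correlation described above (not the library's actual code; the class and method names here are made up):

import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

public class RpcCorrelator {
    // Outstanding requests, keyed by the JSON-RPC "id" field.
    private final Map<String, CompletableFuture<String>> pending = new ConcurrentHashMap<>();

    // Called when a request is sent: remember its id so the response can be matched later.
    public CompletableFuture<String> register(String requestId) {
        CompletableFuture<String> future = new CompletableFuture<>();
        pending.put(requestId, future);
        return future;
    }

    // Called when a response arrives: look up the id populated by the original request.
    public void onResponse(String responseId, String jsonBody) {
        CompletableFuture<String> future = pending.remove(responseId);
        if (future != null) {
            future.complete(jsonBody);
        }
    }
}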

Overlay tunnel management

Network virtualization using OVS is achieved through overlay tunnels. The actual type of the tunnel may be GRE, VXLAN, or STT; the differences in encapsulation and configuration determine the tunnel type. Establishing a tunnel using the configuration service requires just sending OVSDB messages towards the ovsdb-server. However, the scaling issues that arise from state management at the data plane (using OpenFlow) can be challenging. This module can also assist in various optimizations in the presence of gateways, and can help in providing service guarantees for the VMs using these overlays with the help of underlay orchestration.

OVSDB to OpenFlow plugin mapping service
The connect() method of the ConnectionService results in a Node that represents an ovsdb-server. The CreateBridgeDomain() configuration on that Node results in creating an OVS bridge. This OVS bridge is an OpenFlow agent for the OpenDaylight OpenFlow plugin, with its own Node represented as (for example) OF|xxxx.yyyy.zzzz. Without any help from the OVSDB plugin, the Node Mapping Service of the controller platform would not be able to map the following:
{OVSDB_NODE + BRIDGE_IDENTIFIER} <---> {OF_NODE}.

Without such mapping, it would be extremely difficult for applications to manage and maintain such nodes. This mapping service provided by the OVSDB plugin essentially helps in providing more value-added services to the orchestration layers that sit atop the Northbound APIs (such as OpenStack).

OVSDB: New features
Schema independent library

The OVS connection is a node which can have multiple databases, and each database is represented by a schema. A single connection can have multiple schemas, and OVSDB supports multiple schemas. Currently, there are two schemas available in OVSDB, but there is no restriction on the number of schemas. Owing to the Northbound v3 API, no code changes in ODL are needed to support additional schemas.

Schemas:
OVSDB Library Developer Guide
Overview

The OVSDB library manages the Netty connections to network nodes and handles bidirectional JSON-RPC messages. It not only provides OVSDB protocol functionality to the OpenDaylight OVSDB plugin, but can also be used as a standalone Java library for the OVSDB protocol.

The main responsibilities of OVSDB library include:

  • Manage connections to peers

  • Marshal and unmarshal JSON Strings to JSON objects.

  • Marshal and unmarshal JSON Strings from and to the Network Element.

Connection Service

The OVSDB library provides connection management through the OvsdbConnection interface. The OvsdbConnection interface provides OVSDB connection management APIs which include both active and passive connections. From the library perspective, active OVSDB connections are initiated from the controller to OVS nodes, while passive OVSDB connections are initiated from OVS nodes to the controller. In the active connection scenario, an application needs to provide the IP address and listening port of the OVS nodes to the library management API. In the passive connection scenario, the library management API only requires the controller's listening port.

For a passive connection scenario, the library also provides a connection event listener through the OvsdbConnectionListener interface. The listener interface has connected() and disconnected() methods to notify an application when a new passive connection is established or an existing connection is terminated.
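
A minimal sketch of such a listener, assuming the OvsdbConnectionListener and OvsdbClient interfaces described above (the import paths and the getConnectionInfo() call are assumptions):

import org.opendaylight.ovsdb.lib.OvsdbClient;
import org.opendaylight.ovsdb.lib.OvsdbConnectionListener;

// Logs passive connections as OVS nodes connect to and disconnect from the controller.
public class LoggingConnectionListener implements OvsdbConnectionListener {

    @Override
    public void connected(OvsdbClient client) {
        // A new passive connection has been established by an OVS node.
        System.out.println("OVSDB node connected: " + client.getConnectionInfo());
    }

    @Override
    public void disconnected(OvsdbClient client) {
        // An existing connection has been terminated.
        System.out.println("OVSDB node disconnected: " + client.getConnectionInfo());
    }
}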

SSL Connection

In addition to a regular TCP connection, the OvsdbConnection interface also provides a connection management API for an SSL connection. To start an OVSDB connection with SSL, an application will need to provide a Java SSLContext object to the management API. There are different ways to create a Java SSLContext, but in most cases a Java KeyStore with a certificate and private key provided by the application is required. Detailed steps about how to create a Java SSLContext are out of the scope of this document and can be found in the Java documentation for the SSLContext class.
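
For reference, a minimal sketch of building an SSLContext from a Java KeyStore using standard JSSE APIs (the file name and password are placeholders):

import java.io.FileInputStream;
import java.security.KeyStore;
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;

public class SslContextExample {
    public static SSLContext buildSslContext() throws Exception {
        char[] password = "changeit".toCharArray();  // placeholder password

        // Key store holding the controller certificate and private key.
        KeyStore keyStore = KeyStore.getInstance("JKS");
        try (FileInputStream in = new FileInputStream("controller-keystore.jks")) {
            keyStore.load(in, password);
        }

        KeyManagerFactory kmf = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
        kmf.init(keyStore, password);

        // Trust store holding the certificates of trusted OVS nodes (needed for two-way authentication).
        TrustManagerFactory tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(keyStore);

        SSLContext sslContext = SSLContext.getInstance("TLS");
        sslContext.init(kmf.getKeyManagers(), tmf.getTrustManagers(), null);
        return sslContext;
    }
}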

In the active connection scenario, the library uses the given SSLContext to create a Java SSLEngine and configures the SSL engine with the client mode for SSL handshaking. Normally clients are not required to authenticate themselves.

In the passive connection scenario, the library uses the given SSLContext to create a Java SSLEngine which will operate in server mode for SSL handshaking. For security reasons, the SSLv3 protocol and some cipher suites are disabled. Currently the OVSDB server only supports the TLS_RSA_WITH_AES_128_CBC_SHA cipher suite and the following protocols: SSLv2Hello, TLSv1, TLSv1.1, TLSv1.2.

The SSL engine is also configured to operate in two-way authentication mode for passive connection scenarios, i.e., the OVSDB server (controller) will authenticate clients (OVS nodes) and clients (OVS nodes) are also required to authenticate the server (controller). In the two-way authentication mode, an application should keep a trust manager to store the certificates of trusted clients and initialize a Java SSLContext with this trust manager. Thus, during the SSL handshaking process the OVSDB server (controller) can use the trust manager to verify clients and only accept connection requests from trusted clients. On the other hand, users should also configure OVS nodes to authenticate the controller. Open vSwitch already supports this functionality in the ovsdb-server command with the options --ca-cert=cacert.pem and --bootstrap-ca-cert=cacert.pem. On the OVS node, a user can use the option --ca-cert=cacert.pem to specify a controller certificate directly, and the node will only allow connections to the controller with the specified certificate. If the OVS node runs ovsdb-server with the option --bootstrap-ca-cert=cacert.pem, it will authenticate the controller with the specified certificate cacert.pem. If the certificate file doesn’t exist, it will attempt to obtain a certificate from the peer (controller) on its first SSL connection and save it to the named PEM file cacert.pem. Here is an example of ovsdb-server with the --bootstrap-ca-cert=cacert.pem option:

ovsdb-server --pidfile --detach --log-file --remote punix:/var/run/openvswitch/db.sock --remote=db:hardware_vtep,Global,managers --private-key=/etc/openvswitch/ovsclient-privkey.pem --certificate=/etc/openvswitch/ovsclient-cert.pem --bootstrap-ca-cert=/etc/openvswitch/vswitchd.cacert

OVSDB protocol transactions

The OVSDB protocol defines its RPC transaction methods in RFC 7047. The following RPC methods are supported in the OVSDB protocol:

  • List databases

  • Get schema

  • Transact

  • Cancel

  • Monitor

  • Update notification

  • Monitor cancellation

  • Lock operations

  • Locked notification

  • Stolen notification

  • Echo

According to RFC 7047, an OVSDB server must implement all methods, while an OVSDB client is only required to implement the “Echo” method and is otherwise free to implement whichever methods suit its needs. However, the OVSDB library currently doesn’t support all RPC methods. For the “Echo” method, the library can handle “Echo” messages from a peer and send a JSON response message back, but the library doesn’t support actively sending an “Echo” JSON request to a peer. Other unsupported RPC methods are listed below:

  • Cancel

  • Lock operations

  • Locked notification

  • Stolen notification

In the OVSDB library the RPC methods are defined in the Java interface OvsdbRPC. The library also provides a high-level interface OvsdbClient as the main interface to interact with peers through the OVSDB protocol. In the passive connection scenario, each connection will have a corresponding OvsdbClient object, and the application can obtain the OvsdbClient object through connection listener callback methods. In other words, if the application implements the OvsdbConnectionListener interface, it will get notifications of connection status changes with the corresponding OvsdbClient object of that connection.

OVSDB database operations

RFC 7047 also defines database operations, such as insert, delete, and update, to be performed as part of a “transact” RPC request. The OVSDB library defines the data operations in Operations.java and provides the TransactionBuilder class to help build “transact” RPC requests. To build a JSON-RPC transact request message, the application can obtain the TransactionBuilder object through a transactBuilder() method in the OvsdbClient interface.

The TransactionBuilder class provides the following methods to help build transactions:

  • getOperations(): Get the list of operations in this transaction.

  • add(): Add data operation to this transaction.

  • build(): Return the list of operations in this transaction. This is the same as the getOperations() method.

  • execute(): Send the JSON RPC transaction to peer.

  • getDatabaseSchema(): Get the database schema of this transaction.

If the application wants to build and send a “transact” RPC request to modify OVSDB tables on a peer, it can take the following steps (a consolidated sketch follows the list):

  1. Statically import parameter “op” in Operations.java

    import static org.opendaylight.ovsdb.lib.operations.Operations.op;

  2. Obtain a transaction builder through the transactBuilder() method in OvsdbClient:

    TransactionBuilder transactionBuilder = ovsdbClient.transactionBuilder(dbSchema);

  3. Add operations to transaction builder:

    transactionBuilder.add(op.insert(schema, row));

  4. Send transaction to peer and get JSON RPC response:

    operationResults = transactionBuilder.execute().get();

    Note

    Although the “select” operation is supported in the OVSDB library, the library implementation is a little different from RFC 7047. In RFC 7047, section 5.2.2 describes the “select” operation as follows:

    “The “rows” member of the result is an array of objects. Each object corresponds to a matching row, with each column specified in “columns” as a member, the column’s name as the member name, and its value as the member value. If “columns” is not specified, all the table’s columns are included (including the internally generated “_uuid” and “_version” columns).”

    The OVSDB library implementation always requires the column’s name in the “columns” field of a JSON message. If the “columns” field is not specified, none of the table’s columns are included. If the application wants to get the table entry with all columns, it needs to specify all the columns’ names in the “columns” field.
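
Putting these steps together, a minimal sketch of building and executing a transaction (assuming an already-connected ovsdbClient, a DatabaseSchema named dbSchema, and a table schema and row obtained elsewhere; the import paths and types shown are assumptions and error handling is omitted):

import static org.opendaylight.ovsdb.lib.operations.Operations.op;

import java.util.List;
import org.opendaylight.ovsdb.lib.OvsdbClient;
import org.opendaylight.ovsdb.lib.notation.Row;
import org.opendaylight.ovsdb.lib.operations.OperationResult;
import org.opendaylight.ovsdb.lib.operations.TransactionBuilder;
import org.opendaylight.ovsdb.lib.schema.DatabaseSchema;
import org.opendaylight.ovsdb.lib.schema.GenericTableSchema;

public class TransactExample {

    public List<OperationResult> insertRow(OvsdbClient ovsdbClient,
                                           DatabaseSchema dbSchema,
                                           GenericTableSchema tableSchema,
                                           Row<GenericTableSchema> row) throws Exception {
        // Step 2: obtain a transaction builder for the target database schema.
        TransactionBuilder transactionBuilder = ovsdbClient.transactionBuilder(dbSchema);

        // Step 3: add an insert operation to the transaction.
        transactionBuilder.add(op.insert(tableSchema, row));

        // Step 4: send the JSON-RPC "transact" request to the peer and wait for the results.
        return transactionBuilder.execute().get();
    }
}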

Reference Documentation

RFC 7047, The Open vSwitch Database Management Protocol: https://tools.ietf.org/html/rfc7047

OVSDB MD-SAL Southbound Plugin Developer Guide
Overview

The Open vSwitch Database (OVSDB) Model Driven Service Abstraction Layer (MD-SAL) Southbound Plugin provides an MD-SAL based interface to Open vSwitch systems. This is done by augmenting the MD-SAL topology node with a YANG model which replicates some (but not all) of the Open vSwitch schema.

OVSDB MD-SAL Southbound Plugin Architecture and Operation

The architecture and operation of the OVSDB MD-SAL Southbound plugin is illustrated in the following set of diagrams.

Connecting to an OVSDB Node

An OVSDB node is a system which is running the OVS software and is capable of being managed by an OVSDB manager. The OVSDB MD-SAL Southbound plugin in OpenDaylight is capable of operating as an OVSDB manager. Depending on the configuration of the OVSDB node, the connection of the OVSDB manager can be active or passive.

Active OVSDB Node Manager Workflow

An active OVSDB node manager connection is made when OpenDaylight initiates the connection to the OVSDB node. In order for this to work, you must configure the OVSDB node to listen on a TCP port for the connection (i.e. OpenDaylight is active and the OVSDB node is passive). This option can be configured on the OVSDB node using the following command:

ovs-vsctl set-manager ptcp:6640

The following diagram illustrates the sequence of events which occur when OpenDaylight initiates an active OVSDB manager connection to an OVSDB node.

Active OVSDB Manager Connection

Step 1

Create an OVSDB node by using RESTCONF or an OpenDaylight plugin. The OVSDB node is listed under the OVSDB topology node.

Step 2

Add the OVSDB node to the OVSDB MD-SAL southbound configuration datastore. The OVSDB southbound provider is registered to listen for data change events on the portion of the MD-SAL topology data store which contains the OVSDB southbound topology node augmentations. The addition of an OVSDB node causes an event which is received by the OVSDB Southbound provider.

Step 3

The OVSDB Southbound provider initiates a connection to the OVSDB node using the connection information provided in the OVSDB node configuration (i.e. IP address and TCP port number).

Step 4

The OVSDB Southbound provider adds the OVSDB node to the OVSDB MD-SAL operational data store. The operational data store contains OVSDB node objects which represent active connections to OVSDB nodes.

Step 5

The OVSDB Southbound provider requests the schema and databases which are supported by the OVSDB node.

Step 6

The OVSDB Southbound provider uses the database and schema information to construct a monitor request which causes the OVSDB node to send the controller any updates made to the OVSDB databases on the OVSDB node.

Passive OVSDB Node Manager Workflow

A passive OVSDB node connection to OpenDaylight is made when the OVSDB node initiates the connection to OpenDaylight. In order for this to work, you must configure the OVSDB node to connect to the IP address and OVSDB port on which OpenDaylight is listening. This option can be configured on the OVSDB node using the following command:

ovs-vsctl set-manager tcp:<IP address>:6640

The following diagram illustrates the sequence of events which occur when an OVSDB node connects to OpenDaylight.

Passive OVSDB Manager Connection

Step 1

The OVSDB node initiates a connection to OpenDaylight.

Step 2

The OVSDB Southbound provider adds the OVSDB node to the OVSDB MD-SAL operational data store. The operational data store contains OVSDB node objects which represent active connections to OVSDB nodes.

Step 3

The OVSDB Southbound provider requests the schema and databases which are supported by the OVSDB node.

Step 4

The OVSDB Southbound provider uses the database and schema information to construct a monitor request which causes the OVSDB node to send back any updates which have been made to the OVSDB databases on the OVSDB node.

OVSDB Node ID in the Southbound Operational MD-SAL

When OpenDaylight initiates an active connection to an OVSDB node, it writes an external-id to the Open_vSwitch table on the OVSDB node. The external-id is an OpenDaylight instance identifier which identifies the OVSDB topology node which has just been created. Here is an example showing the value of the opendaylight-iid entry in the external-ids column of the Open_vSwitch table where the node-id of the OVSDB node is ovsdb:HOST1.

$ ovs-vsctl list open_vswitch
...
external_ids        : {opendaylight-iid="/network-topology:network-topology/network-topology:topology[network-topology:topology-id='ovsdb:1']/network-topology:node[network-topology:node-id='ovsdb:HOST1']"}
...

The opendaylight-iid entry in the external-ids column of the Open_vSwitch table causes the OVSDB node to have the same node-id in the operational MD-SAL datastore as in the configuration MD-SAL datastore. This holds true even if the OVSDB node manager settings are subsequently changed so that a passive OVSDB manager connection is made.

If there is no opendaylight-iid entry in the external-ids column and a passive OVSDB manager connection is made, then the node-id of the OVSDB node in the operational MD-SAL datastore will be constructed using the UUID of the Open_vSwitch table as follows.

"node-id": "ovsdb://uuid/b8dc0bfb-d22b-4938-a2e8-b0084d7bd8c1"

The opendaylight-iid entry can be removed from the Open_vSwitch table using the following command.

$ sudo ovs-vsctl remove open_vswitch . external-id "opendaylight-iid"

OVSDB Changes by using OVSDB Southbound Config MD-SAL

After the connection has been made to an OVSDB node, you can make changes to the OVSDB node by using the OVSDB Southbound Config MD-SAL. You can make CRUD operations by using the RESTCONF interface or by a plugin using the MD-SAL APIs. The following diagram illustrates the high-level flow of events.

OVSDB Changes by using the Southbound Config MD-SAL

Step 1

A change to the OVSDB Southbound Config MD-SAL is made. Changes include adding or deleting bridges and ports, or setting attributes of OVSDB nodes, bridges or ports.

Step 2

The OVSDB Southbound provider receives notification of the changes made to the OVSDB Southbound Config MD-SAL data store.

Step 3

As appropriate, OVSDB transactions are constructed and transmitted to the OVSDB node to update the OVSDB database on the OVSDB node.

Step 4

The OVSDB node sends update messages to the OVSDB Southbound provider to indicate the changes made to the OVSDB node's databases.

Step 5

The OVSDB Southbound provider maps the changes received from the OVSDB node into corresponding changes made to the OVSDB Southbound Operational MD-SAL data store.

Detecting changes in OVSDB coming from outside OpenDaylight

Changes to the OVSDB node's databases may also occur independently of OpenDaylight. OpenDaylight also receives notifications for these events and updates the Southbound operational MD-SAL. The following diagram illustrates the sequence of events.

OVSDB Changes made directly on the OVSDB node

Step 1

Changes are made to the OVSDB node outside of OpenDaylight (e.g. ovs-vsctl).

Step 2

The OVSDB node constructs update messages to inform OpenDaylight of the changes made to its databases.

Step 3

The OVSDB Southbound provider maps the OVSDB database changes to corresponding changes in the OVSDB Southbound operational MD-SAL data store.

OVSDB Model

The OVSDB Southbound MD-SAL operates using a YANG model which is based on the abstract topology node model found in the network topology model.

The augmentations for the OVSDB Southbound MD-SAL are defined in the ovsdb.yang file.

There are three augmentations:

ovsdb-node-augmentation

This augments the topology node and maps primarily to the Open_vSwitch table of the OVSDB schema. It contains the following attributes.

  • connection-info - holds the local and remote IP address and TCP port numbers for the OpenDaylight to OVSDB node connections

  • db-version - version of the OVSDB database

  • ovs-version - version of OVS

  • list managed-node-entry - a list of references to ovsdb-bridge-augmentation nodes, which are the OVS bridges managed by this OVSDB node

  • list datapath-type-entry - a list of the datapath types supported by the OVSDB node (e.g. system, netdev) - depends on newer OVS versions

  • list interface-type-entry - a list of the interface types supported by the OVSDB node (e.g. internal, vxlan, gre, dpdk, etc.) - depends on newer OVS versions

  • list openvswitch-external-ids - a list of the key/value pairs in the Open_vSwitch table external_ids column

  • list openvswitch-other-config - a list of the key/value pairs in the Open_vSwitch table other_config column

ovsdb-bridge-augmentation

This augments the topology node and maps to a specific bridge in the OVSDB bridge table of the associated OVSDB node. It contains the following attributes.

  • bridge-uuid - UUID of the OVSDB bridge

  • bridge-name - name of the OVSDB bridge

  • bridge-openflow-node-ref - a reference (instance-identifier) of the OpenFlow node associated with this bridge

  • list protocol-entry - the version of OpenFlow protocol to use with the OpenFlow controller

  • list controller-entry - a list of controller-uuid and is-connected status of the OpenFlow controllers associated with this bridge

  • datapath-id - the datapath ID associated with this bridge on the OVSDB node

  • datapath-type - the datapath type of this bridge

  • fail-mode - the OVSDB fail mode setting of this bridge

  • flow-node - a reference to the flow node corresponding to this bridge

  • managed-by - a reference to the ovsdb-node-augmentation (OVSDB node) that is managing this bridge

  • list bridge-external-ids - a list of the key/value pairs in the bridge table external_ids column for this bridge

  • list bridge-other-configs - a list of the key/value pairs in the bridge table other_config column for this bridge

ovsdb-termination-point-augmentation

This augments the topology termination point model. The OVSDB Southbound MD-SAL uses this model to represent both the OVSDB port and OVSDB interface for a given port/interface in the OVSDB schema. It contains the following attributes.

  • port-uuid - UUID of an OVSDB port row

  • interface-uuid - UUID of an OVSDB interface row

  • name - name of the port

  • interface-type - the interface type

  • list options - a list of port options

  • ofport - the OpenFlow port number of the interface

  • ofport_request - the requested OpenFlow port number for the interface

  • vlan-tag - the VLAN tag value

  • list trunks - list of VLAN tag values for trunk mode

  • vlan-mode - the VLAN mode (e.g. access, native-tagged, native-untagged, trunk)

  • list port-external-ids - a list of the key/value pairs in the port table external_ids column for this port

  • list interface-external-ids - a list of the key/value pairs in the interface table external_ids column for this interface

  • list port-other-configs - a list of the key/value pairs in the port table other_config column for this port

  • list interface-other-configs - a list of the key/value pairs in the interface table other_config column for this interface

Examples of OVSDB Southbound MD-SAL API
Connect to an OVSDB Node

This example RESTCONF command adds an OVSDB node object to the OVSDB Southbound configuration data store and attempts to connect to the OVSDB host located at the IP address 10.11.12.1 on TCP port 6640.

POST http://<host>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/
Content-Type: application/json
{
  "node": [
     {
       "node-id": "ovsdb:HOST1",
       "connection-info": {
         "ovsdb:remote-ip": "10.11.12.1",
         "ovsdb:remote-port": 6640
       }
     }
  ]
}
Query the OVSDB Southbound Configuration MD-SAL

Following on from the previous example, if the OVSDB Southbound configuration MD-SAL is queried, the RESTCONF command and the resulting reply are similar to the following example.

GET http://<host>:8080/restconf/config/network-topology:network-topology/topology/ovsdb:1/
Application/json data in the reply
{
  "topology": [
    {
      "topology-id": "ovsdb:1",
      "node": [
        {
          "node-id": "ovsdb:HOST1",
          "ovsdb:connection-info": {
            "remote-port": 6640,
            "remote-ip": "10.11.12.1"
          }
        }
      ]
    }
  ]
}
Reference Documentation

Openvswitch schema

OVSDB Hardware VTEP Developer Guide
Overview

TBD

OVSDB Hardware VTEP Architecture

TBD

P4 Plugin Developer Guide
P4 Plugin Architecture
  • Netconf-Adapter

    • Responsible for device connection, interface resource collection, and providing gRPC server information to P4Runtime client.

  • Runtime

    • Implements a gRPC client, more precisely a P4Runtime client, that provides several RPCs for users at runtime.

    • Supports setting and retrieving forwarding pipeline configuration dynamically; adding or removing multiple devices; setting up a controller cluster; adding, modifying, or deleting table entries; adding, modifying, or deleting action profile members; adding, modifying, or deleting action profile groups; and setting packet-in/packet-out.

APIs in P4 Plugin

The sections below give details about the configuration settings for the components that can be configured.

Netconf Adapter
API Description
  • p4plugin/adapter/netconf-adapter/api/src/main/yang/p4plugin-netconf-adapter-api.yang

    • write-inventory

      • Write the collected interface resources to the inventory data store.

    • read-inventory

      • Acquire the interface resources from the inventory data store.

Runtime
API Description
  • p4plugin/runtime/api/src/main/yang/p4plugin-device.yang

    • add-device

      • Add a P4 device. Users need to provide node ID, device ID, gRPC server address, configuration file path, and runtime file path as input.

        Users must catch and handle exceptions in the following scenarios: the node ID or P4 target address (device ID and gRPC server address) already exists, or parsing the configuration file and runtime file causes an exception, such as an IOException.

    • remove-device

      • Remove a P4 device from the local list.

    • query-devices

      • Query how many devices there are currently, and return a list of their node IDs.

    • connect-to-device

      • Open the stream channel, which is used for packet-in and packet-out, and send a master arbitration update message right after the stream channel is created. The returned value is the connection state.

    • set-pipeline-config

      • Set the forwarding pipeline configuration for a specific device through the gRPC channel; the input is the node ID associated with the device.

    • get-pipeline-config

      • Get the forwarding pipeline configuration; the input is the associated node ID, and the return value is a string containing the content of the runtime file.

  • p4plugin/core/api/src/main/yang/p4plugin-runtime.yang

    • add-table-entry

      • Add entry to a specific device. Users must provide parameters such as table name; action name and action parameters; match field name and match field value; and so on. The node ID must also be provided.

    • modify-table-entry

      • Modify an existing entry on a specific device. The parameters are the same as for the add-table-entry method.

    • delete-table-entry

      • Delete an existing entry from a specific device. When deleting entries, users only need to provide table name and match field information; no action information is required.

    • add-action-profile-member

      • Add a member to a profile. Users must provide the member ID.

    • modify-action-profile-member

      • Modify a member that already exists in a profile.

    • delete-action-profile-member

      • Delete a member that already exists in a profile.

    • add-action-profile-group

      • Add a group to a profile.

    • modify-action-profile-group

      • Modify a group that already exists in a profile.

    • delete-action-profile-group

      • Delete a group that already exists in a profile.

    • read-table-entry

      • Read an entry from a specific device; the inputs are the node ID and table name, and the output is a JSON string. The returned value is Base64 encoded.

    • read-action-profile-member

      • Read the members of an action profile; the inputs are the node ID and action profile name, and the output is a JSON string. The returned value is Base64 encoded.

    • read-action-profile-group

      • Read the action profile groups of an action profile; the inputs are the node ID and action profile name, and the output is a JSON string. The returned value is Base64 encoded.

  • p4plugin/core/api/src/main/yang/p4plugin-packet.yang

    • p4-transmit-packet

      • Transmit a packet to a specific P4 device.

    • p4-packet-received

      • Receive a packet from a P4 device.

  • p4plugin/core/api/src/main/yang/p4plugin-cluster.yang

Sample Configurations
1. Write Inventory

REST API : POST /restconf/operations/p4plugin-netconf-adapter-api:write-inventory

Sample JSON Data

{
    "input": {
    }
}
2. Add device

REST API : POST /restconf/operations/p4plugin-device:add-device

Sample JSON Data

{
    "input": {
        "nid": "node0",
         "config-file-path": "/home/opendaylight/p4lang/behavioral-model/mininet/simple_router.json",
         "runtime-file-path": "/home/opendaylight/p4lang/behavioral-model/mininet/simple_router.proto.txt",
         "did": "0",
         "ip": "10.42.94.144",
         "port": "50051"
    }
}
3. Connect to device

REST API : POST /restconf/operations/p4plugin-device:connect-to-device

Sample JSON Data

{
    "input": {
         "nid": "node0"
     }
}
4. Set pipeline config

REST API : POST /restconf/operations/p4plugin-device:set-pipeline-config

Sample JSON Data

{
    "input": {
        "nid": "node0"
    }
}
5. Add table entry

REST API : POST /restconf/operations/p4plugin-runtime:add-table-entry

Sample JSON Data

{
    "input": {
        "action-name": "set_nhop",
        "action-param": [
            {
                "param-name": "nhop_ipv4",
                "param-value": "10.0.0.10"
            },
            {
                "param-name": "port",
                "param-value": "1"
            }
        ],
        "priority": "0",
        "controller-metadata": "0",
        "table-name": "ipv4_lpm",
        "field": [
            {
                "field-name": "ipv4.dstAddr",
                "lpm-value": "10.0.0.0",
                "prefix-len": "24"
            }
        ],
        "nid": "node0"
    }
}
6. Read table entry

REST API : POST /restconf/operations/p4plugin-runtime:read-table-entry

Sample JSON Data

{
    "input": {
        "table-name": "ipv4_lpm",
         "nid": "node0"
    }
}
Service Function Chaining
OpenDaylight Service Function Chaining (SFC) Overview

OpenDaylight Service Function Chaining (SFC) provides the ability to define an ordered list of network services (e.g. firewalls, load balancers). These services are then “stitched” together in the network to create a service chain. This project provides the infrastructure (chaining logic, APIs) needed for ODL to provision a service chain in the network, and an end-user application for defining such chains.

  • ACE - Access Control Entry

  • ACL - Access Control List

  • SCF - Service Classifier Function

  • SF - Service Function

  • SFC - Service Function Chain

  • SFF - Service Function Forwarder

  • SFG - Service Function Group

  • SFP - Service Function Path

  • RSP - Rendered Service Path

  • NSH - Network Service Header

SFC Classifier Control and Data Plane Developer Guide
Overview

Description of classifier can be found in: https://datatracker.ietf.org/doc/draft-ietf-sfc-architecture/

Classifier manages everything from starting the packet listener to creation (and removal) of appropriate ip(6)tables rules and marking received packets accordingly. Its functionality is available only on Linux as it leverages NetfilterQueue, which provides access to packets matched by an iptables rule. Classifier requires root privileges to be able to operate.

So far it is capable of processing ACL for MAC addresses, ports, IPv4 and IPv6. Supported protocols are TCP and UDP.

Classifier Architecture

The Python code is located in the project repository at sfc-py/common/classifier.py.

Note

The classifier assumes that the Rendered Service Path (RSP) already exists in ODL when an ACL referencing it is obtained.

  1. sfc_agent receives an ACL and passes it for processing to the classifier

  2. the RSP (its SFF locator) referenced by ACL is requested from ODL

  3. if the RSP exists in the ODL then ACL based iptables rules for it are applied

After this process is over, every packet successfully matched to an iptables rule (i.e. successfully classified) will be NSH encapsulated and forwarded to a related SFF, which knows how to traverse the RSP.

Rules are created using the appropriate iptables command. If the Access Control Entry (ACE) rule is MAC-address related, both iptables and ip6tables rules are issued. If the ACE rule is IPv4-address related, only iptables rules are issued; likewise, if it is IPv6-address related, only ip6tables rules are issued.

Note

iptables raw table contains all created rules

Information regarding already registered RSP(s) is stored in an internal data-store, which is represented as a dictionary:

{rsp_id: {'name': <rsp_name>,
          'chains': {'chain_name': (<ipv>,),
                     ...
                     },
          'sff': {'ip': <ip>,
                  'port': <port>,
                  'starting-index': <starting-index>,
                  'transport-type': <transport-type>
                  },
          },
...
}
  • name: name of the RSP

  • chains: dictionary of iptables chains related to the RSP with information about IP version for which the chain exists

  • SFF: SFF forwarding parameters

    • ip: SFF IP address

    • port: SFF port

    • starting-index: index given to packet at first RSP hop

    • transport-type: encapsulation protocol

Key APIs and Interfaces

This feature exposes an API to configure the classifier (corresponding to service-function-classifier.yang).

API Reference Documentation

See: sfc-model/src/main/yang/service-function-classifier.yang

SFC-OVS Plug-in
Overview

SFC-OVS provides integration of SFC with Open vSwitch (OVS) devices. Integration is realized through mapping of SFC objects (like SF, SFF, Classifier, etc.) to OVS objects (like Bridge, TerminationPoint=Port/Interface). The mapping takes care of automatic instantiation (setup) of the corresponding object whenever its counterpart is created. For example, when a new SFF is created, the SFC-OVS plug-in will create a new OVS bridge, and when a new OVS bridge is created, the SFC-OVS plug-in will create a new SFF.

SFC-OVS Architecture

SFC-OVS uses the OVSDB MD-SAL Southbound API for getting/writing information from/to OVS devices. The core functionality consists of two types of mapping:

  1. mapping from OVS to SFC

    • OVS Bridge is mapped to SFF

    • OVS TerminationPoints are mapped to SFF DataPlane locators

  2. mapping from SFC to OVS

    • SFF is mapped to OVS Bridge

    • SFF DataPlane locators are mapped to OVS TerminationPoints

SFC <--> OVS mapping flow diagram

Key APIs and Interfaces
  • SFF to OVS mapping API (methods to convert SFF object to OVS Bridge and OVS TerminationPoints)

  • OVS to SFF mapping API (methods to convert OVS Bridge and OVS TerminationPoints to SFF object)

SFC Southbound REST Plug-in
Overview

The Southbound REST Plug-in is used to send configuration from the datastore down to network devices supporting a REST API (i.e. they have a configured REST URI). It supports POST/PUT/DELETE operations, which are triggered accordingly by changes in the following SFC data stores:

  • Access Control List (ACL)

  • Service Classifier Function (SCF)

  • Service Function (SF)

  • Service Function Group (SFG)

  • Service Function Schedule Type (SFST)

  • Service Function Forwarder (SFF)

  • Rendered Service Path (RSP)

Southbound REST Plug-in Architecture
  1. listeners - used to listen on changes in the SFC data stores

  2. JSON exporters - used to export JSON-encoded data from binding-aware data store objects

  3. tasks - used to collect REST URIs of network devices and to send JSON-encoded data down to these devices

Southbound REST Plug-in Architecture diagram

Key APIs and Interfaces

The plug-in provides a Southbound REST API to listening REST devices. It supports POST/PUT/DELETE operations. The operation (with the corresponding JSON-encoded data) is sent to a unique REST URL belonging to a certain data type.

  • Access Control List (ACL): http://<host>:<port>/config/ietf-acl:access-lists/access-list/

  • Service Function (SF): http://<host>:<port>/config/service-function:service-functions/service-function/

  • Service Function Group (SFG): http://<host>:<port>/config/service-function:service-function-groups/service-function-group/

  • Service Function Schedule Type (SFST): http://<host>:<port>/config/service-function-scheduler-type:service-function-scheduler-types/service-function-scheduler-type/

  • Service Function Forwarder (SFF): http://<host>:<port>/config/service-function-forwarder:service-function-forwarders/service-function-forwarder/

  • Rendered Service Path (RSP): http://<host>:<port>/operational/rendered-service-path:rendered-service-paths/rendered-service-path/

Therefore, network devices willing to receive REST messages must listen on these REST URLs.
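
As a purely illustrative sketch of the kind of request a listening device should expect (this is not the plug-in's actual code; the device URL and payload are placeholders following the SF URL pattern above):

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class SbRestPutExample {
    public static void main(String[] args) throws Exception {
        // Placeholder device URL following the Service Function (SF) pattern listed above.
        URL url = new URL("http://device.example:5000/config/service-function:service-functions/service-function/");
        String json = "{\"service-function\": [{\"name\": \"firewall-1\"}]}";  // illustrative payload

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("PUT");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(json.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("Device responded with HTTP " + conn.getResponseCode());
    }
}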

Note

Service Classifier Function (SCF) URL does not exist, because SCF is considered as one of the network devices willing to receive REST messages. However, there is a listener hooked on the SCF data store, which is triggering POST/PUT/DELETE operations of ACL object, because ACL is referenced in service-function-classifier.yang

Service Function Load Balancing Developer Guide
Overview

The SFC Load-Balancing feature implements load balancing of Service Functions, rather than a one-to-one mapping between a Service Function Forwarder and a Service Function.

Load Balancing Architecture

Service Function Groups (SFG) can replace Service Functions (SF) in the Rendered Path model. A Service Path can only be defined using SFGs or SFs, but not a combination of both.

Relevant objects in the YANG model are as follows:

  1. Service-Function-Group-Algorithm:

    Service-Function-Group-Algorithms {
        Service-Function-Group-Algorithm {
            String name
            String type
        }
    }
    
    Available types: ALL, SELECT, INDIRECT, FAST_FAILURE
    
  2. Service-Function-Group:

    Service-Function-Groups {
        Service-Function-Group {
            String name
            String serviceFunctionGroupAlgorithmName
            String type
            String groupId
            Service-Function-Group-Element {
                String service-function-name
                int index
            }
        }
    }
    
  3. ServiceFunctionHop: holds a reference to a name of SFG (or SF)

Key APIs and Interfaces

This feature enhances the existing SFC API.

REST API commands include:

  • For Service Function Group (SFG): read existing SFG, write new SFG, delete existing SFG, add Service Function (SF) to SFG, and delete SF from SFG

  • For Service Function Group Algorithm (SFG-Alg): read, write, delete

  • Bundle providing the REST API: sfc-sb-rest

  • Service Function Groups and Algorithms are defined in: sfc-sfg and sfc-sfg-alg

  • Relevant Java API: SfcProviderServiceFunctionGroupAPI, SfcProviderServiceFunctionGroupAlgAPI

Service Function Scheduling Algorithms
Overview

When creating the Rendered Service Path (RSP), the earlier release of SFC chose the first available service function from a list of service function names. Now a new API is introduced to allow developers to develop their own scheduling algorithms when creating the RSP. Four scheduling algorithms (Random, Round Robin, Load Balance and Shortest Path) are provided as examples for the API definition. This guide gives a simple introduction of how to develop service function scheduling algorithms based on the current extensible framework.

Architecture

The following figure illustrates the service function selection framework and algorithms.

SF Scheduling Algorithm framework Architecture

The YANG Model defines the Service Function Scheduling Algorithm type identities and how they are stored in the MD-SAL data store for the scheduling algorithms.

The MD-SAL data store stores all information for the scheduling algorithms, including their types, names, and status.

The API provides some basic APIs to manage the information stored in the MD-SAL data store, like putting new items into it, getting all scheduling algorithms, etc.

The RESTCONF API provides APIs to manage the information stored in the MD-SAL data store through RESTful calls.

The Service Function Chain Renderer gets the enabled scheduling algorithm type, and schedules the service functions with the scheduling algorithm implementation.

Key APIs and Interfaces

While developing a new Service Function Scheduling Algorithm, a new class should be added that extends the base scheduler class SfcServiceFunctionSchedulerAPI and implements the abstract function (a sketch follows the parameter list below):

public List<String> scheduleServiceFuntions(ServiceFunctionChain chain, int serviceIndex).

  • ``ServiceFunctionChain chain``: the chain which will be rendered

  • ``int serviceIndex``: the initial service index for this rendered service path

  • ``List<String>``: a list of service function names which are scheduled by the Service Function Scheduling Algorithm.
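
A minimal sketch of a custom scheduler, assuming the base class and the abstract method signature exactly as shown above (imports for the SFC classes are omitted, and the hard-coded names are hypothetical; a real scheduler would select SF instances from the data store):

import java.util.ArrayList;
import java.util.List;

// Hypothetical custom scheduler extending the SFC base scheduler class described above.
public class MyExampleScheduler extends SfcServiceFunctionSchedulerAPI {

    @Override
    public List<String> scheduleServiceFuntions(ServiceFunctionChain chain, int serviceIndex) {
        List<String> scheduled = new ArrayList<>();
        // Example policy only: return one concrete SF instance name per hop of the chain.
        // A real implementation would inspect the chain's SF types and pick instances,
        // e.g. using round-robin or load-based selection.
        scheduled.add("firewall-1");
        scheduled.add("napt44-1");
        return scheduled;
    }
}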

API Reference Documentation

Please refer the API docs generated in the mdsal-apidocs.

SFC Proof of Transit Developer Guide
Overview

SFC Proof of Transit implements the in-situ OAM (iOAM) Proof of Transit verification for SFCs and other paths. The implementation is broadly divided into the North-bound (NB) and the South-bound (SB) side of the application. The NB side is primarily charged with augmenting the RSP with user inputs for enabling PoT on the RSP, while the SB side is dedicated to auto-generating the SFC PoT parameters, periodically refreshing these parameters, and delivering them to the NETCONF- and iOAM-capable nodes (e.g. VPP instances).

Architecture

The following diagram gives the high level overview of the different parts.

SFC Proof of Transit Internal Architecture

The Proof of Transit feature is enabled by two sub-features:

  1. ODL SFC PoT: feature:install odl-sfc-pot

  2. ODL SFC PoT NETCONF Renderer: feature:install odl-sfc-pot-netconf-renderer

Details

The following classes and handlers are involved.

  1. The class (SfcPotRpc) sets up RPC handlers for enabling the feature.

  2. There are new RPC handlers for two new RPCs (EnableSfcIoamPotRenderedPath and DisableSfcIoamPotRenderedPath), effected via the SfcPotRspProcessor class.

  3. When a user configures via a POST RPC call to enable Proof of Transit on a particular SFC (via the Rendered Service Path), the configuration drives the creation of necessary augmentations to the RSP (to modify the RSP) to effect the Proof of Transit configurations.

  4. The augmentation meta-data added to the RSP are defined in the sfc-ioam-nb-pot.yang file.

    Note

    There are no auto generated configuration parameters added to the RSP to avoid RSP bloat.

  5. Adding SFC Proof of Transit meta-data to the RSP is done in the SfcPotRspProcessor class.

  6. Once the RSP is updated, the RSP data listeners in the SB renderer modules (odl-sfc-pot-netconf-renderer) will listen to the RSP changes and send out configurations to the necessary network nodes that are part of the SFC.

  7. The configurations are handled mainly in the SfcPotAPI, SfcPotConfigGenerator, SfcPotPolyAPI, SfcPotPolyClass and SfcPotPolyClassAPI classes.

  8. There is a sfc-ioam-sb-pot.yang file that shows the format of the iOAM PoT configuration data sent to each node of the SFC.

  9. A timer is started based on the “ioam-pot-refresh-period” value in the SB renderer module that handles configuration refresh periodically.

  10. The SB and timer handling are done in the odl-sfc-pot-netconf-renderer module. Note: This is NOT done in the NB odl-sfc-pot module to avoid periodic updates to the RSP itself.

  11. ODL creates a new profile of a set of keys and secrets at a constant rate and updates an internal data store with the configuration. The controller labels the configurations per RSP as “even” or “odd”, and the controller cycles between “even” and “odd” labeled profiles. The rate at which these profiles are communicated to the nodes is configurable and, in the future, could be automatic based on profile usage. Once the profile has been successfully communicated to all nodes (all NETCONF transactions completed), the controller sends an “enable pot-profile” request to the ingress node.

  12. The nodes maintain two profiles (an even and an odd pot-profile): one profile is currently active and in use, and the other is about to be used. A flag in the packet indicates whether the odd or even pot-profile is to be used by a node. This ensures that the service is not disrupted during a profile change. For example, if the “odd” profile is active, the controller can communicate the “even” profile to all nodes, and only when all the nodes have received it will the controller tell the ingress node to switch to the “even” profile. Given that the indicator travels within the packet, all nodes will switch to the “even” profile. The “even” profile then becomes active on all nodes, and the nodes are ready to receive a new “odd” profile.

  13. A HashedTimerWheel implementation is used to support the periodic configuration refresh. The default refresh period is 5 seconds to start with.

  14. Depending on the last updated profile, the odd or the even profile is updated on the next timer pop and the configurations are sent down appropriately.

  15. SfcPotTimerQueue, SfcPotTimerWheel, SfcPotTimerTask, SfcPotTimerData and SfcPotTimerThread are the classes that handle the Proof of Transit protocol profile refresh implementation.

  16. The RSP data store is NOT changed periodically; the timer and configuration refresh modules are present in the SB renderer module handler, and hence there are no scale or RSP churn issues affecting the design.

The following diagram gives the overall sequence diagram of the interactions between the different classes.

SFC Proof of Transit Sequence Diagram

Logical Service Function Forwarder
Overview
Rationale

When the current SFC is deployed in a cloud environment, it is assumed that each switch connected to a Service Function is configured as a Service Function Forwarder, and that each Service Function is connected to its Service Function Forwarder depending on the compute node where the Virtual Machine is located. This solution allows the basic cloud use cases to be fulfilled (for example, the ones required in OPNFV Brahmaputra); however, some advanced use cases, like the transparent migration of VMs, cannot be implemented. The Logical Service Function Forwarder enables the following advanced use cases:

  1. Service Function mobility without service disruption

  2. Service Functions load balancing and failover

As shown in the picture below, the Logical Service Function Forwarder concept extends the current SFC northbound API to provide an abstraction of the underlying Data Center infrastructure. The Data Center underlying network can be abstracted by a single SFF. This single SFF uses the logical port UUID as data plane locator to connect SFs globally and in a location-transparent manner. SFC makes use of the Genius project to track the location of the SF’s logical ports.

Single Logical SFF concept

The SFC internally distributes the necessary flow state over the relevant switches based on the internal Data Center topology and the deployment of SFs.

Changes in data model

The Logical Service Function Forwarder concept extends the current SFC northbound API to provide an abstraction of the underlying Data Center infrastructure.

The Logical SFF simplifies the configuration of the current SFC data model by reducing the number of parameters to be configured in every SFF, since the controller will discover those parameters by interacting with the services offered by the Genius project.

The following picture shows the Logical SFF data model. The model gets simplified, as most of the configuration parameters of the current SFC data model are discovered at runtime. The complete YANG model can be found here: logical SFF model.

Logical SFF data model

There are other minor changes in the data model; the SFC encapsulation type has been added or moved in the following files:

Interaction with Genius

Feature sfc-genius functionally enables SFC integration with Genius. This allows configuring a Logical SFF and SFs attached to this Logical SFF via logical interfaces (i.e. neutron ports) that are registered with Genius.

As shown in the following picture, SFC will interact with Genius project’s services to provide the Logical SFF functionality.

SFC and Genius

The following are the main Genius services used by SFC:

  1. Interaction with Interface Tunnel Manager (ITM)

  2. Interaction with the Interface Manager

  3. Interaction with Resource Manager

SFC Service registration with Genius

Genius handles the coexistence of different network services. As such, the SFC service is registered with Genius by performing the following actions:

SFC Service Binding

As soon as a Service Function associated to the Logical SFF is involved in a Rendered Service Path, SFC service is bound to its logical interface via Genius Interface Manager. This has the effect of forwarding every incoming packet from the Service Function to the SFC pipeline of the attached switch, as long as it is not consumed by a different bound service with higher priority.

SFC Service Terminating Action

As soon as SFC service is bound to the interface of a Service Function for the first time on a specific switch, a terminating service action is configured on that switch via Genius Interface Tunnel Manager. This has the effect of forwarding every incoming packet from a different switch to the SFC pipeline as long as the traffic is VXLAN encapsulated on VNI 0.

The following sequence diagrams depict how the overall process takes place:

sfc-genius at RSP render

SFC genius module interaction with Genius at RSP creation.

sfc-genius at RSP removal

SFC genius module interaction with Genius at RSP removal.

For more information on how Genius allows different services to coexist, see the Genius User Guide.

Path Rendering

During path rendering, Genius is queried to obtain needed information, such as:

  • Location of a logical interface on the data-plane.

  • Tunnel interface for a specific pair of source and destination switches.

  • Egress OpenFlow actions to output packets to a specific interface.

See RSP Rendering section for more information.

VM migration

Upon VM migration, its logical interface is first unregistered and then registered with Genius, possibly at a new physical location. sfc-genius reacts to this by re-rendering all the RSPs in which the associated SF participates, if any.

The following picture illustrates the process:

sfc-genius at VM migration

SFC genius module at VM migration.

RSP Rendering changes for paths using the Logical SFF
  1. Construction of the auxiliary rendering graph

    When starting the rendering of a RSP, the SFC renderer builds an auxiliary graph with information about the required hops for traffic traversing the path. RSP processing is achieved by iteratively evaluating each of the entries in the graph, writing the required flows in the proper switch for each hop.

    It is important to note that the graph includes both traffic ingress (i.e. traffic entering into the first SF) and traffic egress (i.e. traffic leaving the chain from the last SF) as hops. Therefore, the number of entries in the graph equals the number of SFs in the chain plus one.

    _images/sfc-genius-example-auxiliary-graph.png

    The process of rendering a chain when the switches involved are part of the Logical SFF also starts with the construction of the hop graph. The difference is that when the SFs used in the chain are using a logical interface, the SFC renderer will also retrieve from Genius the DPIDs for the switches, storing them in the graph. In this context, those switches are the ones in the compute nodes each SF is hosted on at the time the chain is rendered.

    _images/sfc-genius-example-auxiliary-graph-logical-sff.png
  2. New transport processor

    Transport processors are classes which calculate and write the correct flows for a chain. Each transport processor specializes on writing the flows for a given combination of transport type and SFC encapsulation.

    A specific transport processor has been created for paths using a Logical SFF. A particularity of this transport processor is that its use is not only determined by the transport / SFC encapsulation combination, but also by whether the chain is using a Logical SFF. The actual condition evaluated for selecting the Logical SFF transport processor is that the SFs in the chain are using logical interface locators, and that the DPIDs for those locators can be successfully retrieved from Genius.

    _images/transport_processors_class_diagram.png

    The main differences between the Logical SFF transport processor and other processors are the following:

    • Instead of srcSff, dstSff fields in the hops graph (which are all equal in a path using a Logical SFF), the Logical SFF transport processor uses previously stored srcDpnId, dstDpnId fields in order to know whether an actual hop between compute nodes must be performed or not (it is possible that two consecutive SFs are collocated in the same compute node).

    • When a hop between switches actually has to be performed, the processor relies on Genius to obtain the actions that perform that hop. The retrieval of those actions involves two steps:

      • First, Genius’ Overlay Tunnel Manager module is used in order to retrieve the target interface for a jump between the source and the destination DPIDs.

      • Then, egress instructions for that interface are retrieved from Genius’s Interface Manager.

    • There are no next hop rules between compute nodes, only egress instructions (the transport zone tunnels have all the required routing information).

    • Next hop information towards SFs uses MAC addresses which are also retrieved from the Genius datastore.

    • The Logical SFF transport processor performs NSH decapsulation in the last switch of the chain.

  3. Post-rendering update of the operational data model

    When the rendering of a chain finishes successfully, the Logical SFF Transport Processor performs two operational datastore modifications in order to provide some relevant runtime information about the chain. The exposed information is the following:

    • Rendered Service Path state: when the chain uses a Logical SFF, the DPIDs of the switches in the compute nodes hosting the SFs participating in the chain are added to the hop information.

    • SFF state: a new list of all RSPs that use each DPID has been added. It is updated on each RSP addition / deletion.

Classifier impacts

This section explains the changes made to the SFC classifier, enabling it to be attached to Logical SFFs.

Refer to the following image to better understand the concept, and the required steps to implement the feature.

Classifier integration with Genius

SFC classifier integration with Genius.

As stated in the SFC User Guide, the classifier needs to be provisioned using logical interfaces as attachment points.

When that happens, MDSAL will trigger an event in the odl-sfc-scf-openflow feature (i.e. the sfc-classifier), which is responsible for installing the classifier flows in the classifier switches.

The first step of the process is to bind the interfaces to classify in Genius, in order for the desired traffic (originating from the VMs having the provisioned attachment-points) to enter the SFC pipeline. This will make traffic reach table 82 (the SFC classifier table), coming from table 0 (the table managed by Genius, shared by all applications).

The next step is deciding which flows to install in the SFC classifier table. A table-miss flow is installed with a MatchAny clause whose action is to jump to Genius's egress dispatcher table. This enables traffic intended for other applications to still be processed.

The flow that allows the SFC pipeline to continue is added next, with a higher match priority than the table-miss flow. This flow has two responsibilities:

  1. Push the NSH header, along with its metadata (required within the SFC pipeline)

    It features the specified ACL matches as match criteria, and pushes the NSH header along with its metadata into the Action list.

  2. Advance the SFC pipeline

    Forward the traffic to the first Service Function in the RSP. This steers packets into the SFC domain, and how it is done depends on whether the classifier is co-located with the first service function in the specified RSP.

    Should the classifier be co-located (i.e. in the same compute node), a new instruction is appended to the flow, telling all matches to jump to the transport ingress table.

    If not, Genius’s tunnel manager service is queried to get the tunnel interface connecting the classifier node with the compute node where the first Service Function is located, and finally, Genius’s interface manager service is queried asking for instructions on how to reach that tunnel interface.

    These actions are then appended to the Action list already containing push NSH and push NSH metadata Actions, and written in an Apply-Actions Instruction into the datastore.

SNMP4SDN Developer Guide
Overview

We propose a southbound plugin that can control off-the-shelf commodity Ethernet switches for the purpose of building SDN with Ethernet switches. On Ethernet switches, the forwarding table, VLAN table, and ACL are where flow configuration can be installed, and in the proposed plugin this is done via SNMP and CLI. In addition, some settings required for Ethernet switches in SDN, e.g., disabling STP and flooding, are also addressed.

SNMP4SDN as an OpenDaylight southbound plugin

Architecture

The modules in the plugin are depicted in the following figure.

Modules in the SNMP4SDN Plugin

  • AclService: add/remove ACL profile and rule on the switches.

  • FdbService: add/modify/remove FDB table entry on the switches.

  • VlanService: add/modify/remove VLAN table entry on the switches.

  • TopologyService: query and acquire the subnet topology.

  • InventoryService: acquire the switches and their ports.

  • DiscoveryService: probe and resolve the underlying switches as well as the port pairs connecting the switches. The probing is realized by SNMP queries. The updates from discovery will also be reflected to the TopologyService.

  • MiscConfigService: perform various kinds of settings on switches

    • Supported settings include STP and ARP operations, such as enabling/disabling STP, getting a port's STP state, getting the ARP table, setting an ARP entry, and others

  • VendorSpecificHandler: assists the flow configuration services in calling the switch-talking modules with the correct parameter values and order.

  • Switch-talking modules

    • When the services above need to read or configure the underlying switches via SNMP or CLI, the queries are handled by the SNMPHandler and CLIHandler modules, which talk directly with the switches. The SNMPListener listens for SNMP traps such as link up/down or switch on/off events.

Design

The SNMP4SDN Plugin's features include flow configuration, topology discovery, and multi-vendor support. For their architectures, please refer to the Wiki (Developer Guide - Design).

Installation and Configuration Guide
Tutorial
Programmatic Interface(s)

SNMP4SDN Plugin exposes APIs via MD-SAL with a YANG model. The methods (RPC calls) and their data structures are listed below.

TopologyService
  • RPC call

    • get-edge-list

    • get-node-list

    • get-node-connector-list

    • set-discovery-interval (given interval time in seconds)

    • rediscover

  • Data structure

    • node: composed of node-id, node-type

    • node-connector: composed of node-connector-id, node-connector-type, node

    • topo-edge: composed of head-node-connector-id, head-node-connector-type, head-node-id, head-node-type, tail-node-connector-id, tail-node-connector-type, tail-node-id, tail-node-type

VlanService
  • RPC call

    • add-vlan (given node ID, VLAN ID, VLAN name)

    • add-vlan-and-set-ports (given node ID, VLAN ID, VLAN name, tagged ports, untagged ports)

    • set-vlan-ports (given node ID, VLAN ID, tagged ports, untagged ports)

    • delete-vlan (given node ID, VLAN ID)

    • get-vlan-table (given node ID)

AclService
  • RPC call

    • create-acl-profile (given node ID, acl-profile-index, acl-profile)

    • del-acl-profile (given node ID, acl-profile-index)

    • set-acl-rule (given node ID, acl-index, acl-rule)

    • del-acl-rule (given node ID, acl-index)

    • clear-acl-table (given node ID)

  • Data structure

    • acl-profile-index: composed of profile-id, profile name

    • acl-profile: composed of acl-layer, vlan-mask, src-ip-mask, dst-ip-mask

    • acl-layer: IP or ETHERNET

    • acl-index: composed of acl-profile-index, acl-rule-index

    • acl-rule-index: composed of rule-id, rule-name

    • acl-rule: composed of port-list, acl-layer, acl-field, acl-action

    • acl-field: composed of vlan-id, src-ip, dst-ip

    • acl-action: PERMIT or DENY

FdbService
  • RPC call

    • set-fdb-entry (given fdb-entry)

    • del-fdb-entry (given node-id, vlan-id, dest-mac-addr)

    • get-fdb-entry (given node-id, vlan-id, dest-mac-addr)

    • get-fdb-table (given node-id)

  • Data structure

    • fdb-entry: composed of node-id, vlan-id, dest-mac-addr, port, fdb-entry-type

    • fdb-entry-type: OTHER/INVALID/LEARNED/SELF/MGMT

MiscConfigService
  • RPC call

    • set-stp-port-state (given node-id, port, is_enable)

    • get-stp-port-state (given node-id, port)

    • get-stp-port-root (given node-id, port)

    • enable-stp (given node-id)

    • disable-stp (given node-id)

    • delete-arp-entry (given node-id, ip-address)

    • set-arp-entry (given node-id, arp-entry)

    • get-arp-entry (given node-id, ip-address)

    • get-arp-table (given node-id)

  • Data structure

    • stp-port-state: DISABLE/BLOCKING/LISTENING/LEARNING/FORWARDING/BROKEN

    • arp-entry: composed of ip-address and mac-address

SwitchDbService
  • RPC call

    • reload-db (the implementation of the following 4 RPCs is TBD)

    • add-switch-entry

    • delete-switch-entry

    • clear-db

    • update-db

  • Data structure

    • switch-info: composed of node-ip, node-mac, community, cli-user-name, cli-password, model
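
The RPCs listed above are invoked through RESTCONF like any other MD-SAL RPC. The sketch below is only illustrative: the RPC path (module and RPC names), the input element names, and the credentials are placeholders that must be taken from the plugin's YANG model and your deployment.

// Placeholder RPC path; the actual module and RPC names come from the SNMP4SDN YANG model.
URL url = new URL("http://localhost:8181/restconf/operations/vlan:get-vlan-table");

HttpURLConnection conn = (HttpURLConnection) url.openConnection();
conn.setRequestMethod("POST");
conn.setRequestProperty("Content-Type", "application/json");
conn.setRequestProperty("Authorization", "Basic "
        + Base64.getEncoder().encodeToString("admin:admin".getBytes(StandardCharsets.UTF_8)));
conn.setDoOutput(true);

// Placeholder input; check the YANG model for the actual RPC input structure.
String body = "{\"input\": {\"node-id\": 1}}";
try (OutputStream os = conn.getOutputStream()) {
    os.write(body.getBytes(StandardCharsets.UTF_8));
}

System.out.println("HTTP status: " + conn.getResponseCode());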

Unified Secure Channel
Overview

The Unified Secure Channel (USC) feature provides a REST API, a manager, and a plugin for unified secure channels. The REST API provides a northbound API. The manager monitors, maintains, and provides channel-related services. The plugin handles the lifecycle of channels.

USC Channel Architecture
  • USC Agent

    • The USC Agent provides proxy and agent functionality on top of all standard protocols supported by the device. It initiates call-home with the controller, maintains live connections with the controller, acts as a demuxer/muxer for packets with the USC header, and authenticates the controller.

  • USC Plugin

    • The USC Plugin is responsible for communication between the controller and the USC agent. It responds to call-home with the controller, maintains live connections with the devices, acts as a muxer/demuxer for packets with the USC header, and provides support for TLS/DTLS.

  • USC Manager

    • The USC Manager handles configurations, high availability, security, monitoring, and clustering support for USC.

USC Channel APIs and Interfaces

This section describes the APIs for interacting with the unified secure channels.

USC Channel Topology API

The USC project maintains a topology that is YANG-based in MD-SAL. These models are available via RESTCONF.

API Reference Documentation

Go to http://localhost:8181/apidoc/explorer/index.html, sign in, and expand the usc-channel panel. From there, users can execute various API calls to test their USC deployment.

YANG Tools Developer Guide
Overview

YANG Tools is a set of libraries and tooling providing support for using YANG in Java (or other JVM-based language) projects and applications.

YANG Tools provides the following features in OpenDaylight:

  • parsing of YANG sources and semantic inference of relationships across YANG models as defined in RFC6020

  • representation of YANG-modeled data in Java

    • Normalized Node representation - a DOM-like tree model which uses a conceptual meta-model more tailored to YANG and OpenDaylight use cases than a standard XML DOM model allows for.

  • serialization / deserialization of YANG-modeled data driven by YANG models

Architecture

The YANG Tools project consists of the following logical subsystems:

  • Commons - a set of general-purpose code which is not specific to YANG, but is also useful outside the YANG Tools implementation.

  • YANG Model and Parser - the YANG semantic model and a lexical and semantic parser of YANG models, which creates an in-memory cross-referenced representation of YANG models that is used by other components to determine their behaviour based on the model.

  • YANG Data - Definition of Normalized Node APIs and Data Tree APIs, reference implementation of these APIs and implementation of XML and JSON codecs for Normalized Nodes.

  • YANG Maven Plugin - a Maven plugin which integrates the YANG parser into the Maven build lifecycle and provides a code-generation framework for components that want to generate code or other artefacts based on YANG models.

Concepts

The project defines base concepts and helper classes which are project-agnostic and can be used outside of the YANG Tools project scope.

Components
  • yang-common

  • yang-data-api

  • yang-data-codec-gson

  • yang-data-codec-xml

  • yang-data-impl

  • yang-data-jaxen

  • yang-data-transform

  • yang-data-util

  • yang-maven-plugin

  • yang-maven-plugin-it

  • yang-maven-plugin-spi

  • yang-model-api

  • yang-model-export

  • yang-model-util

  • yang-parser-api

  • yang-parser-impl

YANG Model API

Class diagram of yang model API

_images/yang-model-api.png

YANG Model API

YANG Parser

The YANG Statement Parser works on the idea of statement concepts as defined in RFC6020, section 6.3. It builds on a basic ModelStatement and StatementDefinition, following the RFC6020 idea of a sequence of statements, where every statement contains a keyword and zero or one argument. ModelStatement is extended by DeclaredStatement (as it comes from the source, e.g. a YANG source) and EffectiveStatement, which contains other substatements and represents the result of semantic processing of other statements (uses, augment for YANG). IdentifierNamespace represents the common superclass for YANG model namespaces.

The input of the YANG Statement Parser is a collection of StatementStreamSource objects. The StatementStreamSource interface is used for inference of the effective model and is required to emit its statements using the supplied StatementWriter. Each source (e.g. a YANG source) has to be processed in three steps in order to emit different statements for each step. The package also provides support for various namespaces used across the statement parser in order to map relations during the declaration phase.

Currently, there are two implementations of StatementStreamSource in Yangtools:

  • YangStatementSourceImpl - intended for yang sources

  • YinStatementSourceImpl - intended for yin sources

YANG Data API

Class diagram of yang data API

_images/yang-data-api.png

YANG Data API

YANG Data Codecs

Codecs which enable serialization of NormalizedNodes into YANG-modeled data in XML or JSON format and deserialization of YANG-modeled data in XML or JSON format into NormalizedNodes.

YANG Maven Plugin

A Maven plugin which integrates the YANG parser into the Maven build lifecycle and provides a code-generation framework for components that want to generate code or other artefacts based on YANG models.

How to / Tutorials
Working with YANG Model

The first thing you need to do if you want to work with YANG models is to instantiate a SchemaContext object. This object type describes one or more parsed YANG modules.

In order to create it, you need to utilize the YANG statement parser, which takes one or more StatementStreamSource objects as input and then produces the SchemaContext object.

StatementStreamSource object contains the source file information. It has two implementations, one for YANG sources - YangStatementSourceImpl, and one for YIN sources - YinStatementSourceImpl.

Here is an example of creating StatementStreamSource objects for YANG files, providing them to the YANG statement parser and building the SchemaContext:

StatementStreamSource yangModuleSource = new YangStatementSourceImpl("/example.yang", false);
StatementStreamSource yangModuleSource2 = new YangStatementSourceImpl("/example2.yang", false);

CrossSourceStatementReactor.BuildAction reactor = YangInferencePipeline.RFC6020_REACTOR.newBuild();
reactor.addSources(yangModuleSource, yangModuleSource2);

SchemaContext schemaContext = reactor.buildEffective();

First, StatementStreamSource objects with two constructor arguments should be instantiated: path to the yang source file (which is a regular String object) and a boolean which determines if the path is absolute or relative.

Next comes the initiation of a new YANG parsing cycle, which is represented by a CrossSourceStatementReactor.BuildAction object. You can get it by calling the method newBuild() on the CrossSourceStatementReactor object (RFC6020_REACTOR) in the YangInferencePipeline class.

Then you should feed YANG sources to it by calling the method addSources(), which takes one or more StatementStreamSource objects as arguments.

Finally, you call the method buildEffective() on the reactor object, which returns an EffectiveSchemaContext (a concrete implementation of SchemaContext). Now you are ready to work with the contents of the added YANG sources.

Let us explain how to work with models contained in the newly created SchemaContext. If you want to get all the modules in the schemaContext, you have to call the method getModules(), which returns a Set of modules. If you want to get all the data definitions in the schemaContext, you need to call the method getDataDefinitions(), etc.

Set<Module> modules = schemaContext.getModules();
Set<DataSchemaNode> dataSchemaNodes = schemaContext.getDataDefinitions();

Usually you want to access specific modules. Getting a concrete module from SchemaContext is a matter of calling one of these methods:

  • findModuleByName(),

  • findModuleByNamespace(),

  • findModuleByNamespaceAndRevision().

In the first case, you need to provide the module name as it is defined in the YANG source file and the module revision date if it is specified in the YANG source file (if it is not defined, you can just pass a null value). In order to provide the revision date in the proper format, you can use a utility class named SimpleDateFormatUtil.

Module exampleModule = schemaContext.findModuleByName("example-module", null);
// or
Date revisionDate = SimpleDateFormatUtil.getRevisionFormat().parse("2015-09-02");
Module exampleModule = schemaContext.findModuleByName("example-module", revisionDate);

In the second case, you have to provide the module namespace in the form of a URI object.

Module exampleModule = schema.findModuleByNamespace(new URI("opendaylight.org/example-module"));

In the third case, you provide both module namespace and revision date as arguments.
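
For illustration, a minimal sketch of the third variant, reusing the namespace and revision date from the previous examples and assuming the method listed above accepts the namespace URI and revision date:

// Look up the module by both its namespace and its revision date.
Date revisionDate = SimpleDateFormatUtil.getRevisionFormat().parse("2015-09-02");
Module exampleModule = schemaContext.findModuleByNamespaceAndRevision(
        new URI("opendaylight.org/example-module"), revisionDate);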

Once you have a Module object, you can access its contents as they are defined in the YANG Model API. One way to do this is to use methods like getIdentities() or getRpcs(), which will give you a Set of objects. Alternatively, you can access a DataSchemaNode directly via the method getDataChildByName(), which takes a QName object as its only argument. Here are a few examples.

Set<AugmentationSchema> augmentationSchemas = exampleModule.getAugmentations();
Set<ModuleImport> moduleImports = exampleModule.getImports();

ChoiceSchemaNode choiceSchemaNode = (ChoiceSchemaNode) exampleModule.getDataChildByName(QName.create(exampleModule.getQNameModule(), "example-choice"));

ContainerSchemaNode containerSchemaNode = (ContainerSchemaNode) exampleModule.getDataChildByName(QName.create(exampleModule.getQNameModule(), "example-container"));

The YANG statement parser can work in three modes:

  • default mode

  • mode with active resolution of if-feature statements

  • mode with active semantic version processing

The default mode is active when you initialize the parsing cycle as usual by calling the method newBuild() without passing any arguments to it. The second and third modes can be activated by invoking newBuild() with a special argument. You can activate just one of them or both by passing the proper arguments. Let us explain how these modes work.

Mode with active resolution of if-features makes yang statements containing an if-feature statement conditional based on the supported features. These features are provided in the form of a QName-based java.util.Set object. In the example below, only two features are supported: example-feature-1 and example-feature-2. The Set which contains this information is passed to the method newBuild() and the mode is activated.

Set<QName> supportedFeatures = ImmutableSet.of(
    QName.create("example-namespace", "2016-08-31", "example-feature-1"),
    QName.create("example-namespace", "2016-08-31", "example-feature-2"));

CrossSourceStatementReactor.BuildAction reactor = YangInferencePipeline.RFC6020_REACTOR.newBuild(supportedFeatures);

In case when no features should be supported, you should provide an empty Set<QName> object.

Set<QName> supportedFeatures = ImmutableSet.of();

CrossSourceStatementReactor.BuildAction reactor = YangInferencePipeline.RFC6020_REACTOR.newBuild(supportedFeatures);

When this mode is not activated, all features in the processed YANG sources are supported.

Mode with active semantic version processing changes the way YANG import statements work - each module import is processed based on the specified semantic version statement and the revision-date statement is ignored. In order to activate this mode, you have to provide the StatementParserMode.SEMVER_MODE enum constant as an argument to the method newBuild().

CrossSourceStatementReactor.BuildAction reactor = YangInferencePipeline.RFC6020_REACTOR.newBuild(StatementParserMode.SEMVER_MODE);

Before you use a semantic version statement in a YANG module, you need to define an extension for it so that the YANG statement parser can recognize it.

module semantic-version {
    namespace "urn:opendaylight:yang:extension:semantic-version";
    prefix sv;
    yang-version 1;

    revision 2016-02-02 {
        description "Initial version";
    }
    sv:semantic-version "0.0.1";

    extension semantic-version {
        argument "semantic-version" {
            yin-element false;
        }
    }
}

In the example above, you see a YANG module which defines semantic version as an extension. This extension can be imported to other modules in which we want to utilize the semantic versioning concept.

Below is a simple example of the semantic versioning usage. With semantic version processing mode being active, the foo module imports the bar module based on its semantic version. Notice how both modules import the module with the semantic-version extension.

module foo {
    namespace foo;
    prefix foo;
    yang-version 1;

    import semantic-version { prefix sv; revision-date 2016-02-02; sv:semantic-version "0.0.1"; }
    import bar { prefix bar; sv:semantic-version "0.1.2";}

    revision "2016-02-01" {
        description "Initial version";
    }
    sv:semantic-version "0.1.1";

    ...
}
module bar {
    namespace bar;
    prefix bar;
    yang-version 1;

    import semantic-version { prefix sv; revision-date 2016-02-02; sv:semantic-version "0.0.1"; }

    revision "2016-01-01" {
        description "Initial version";
    }
    sv:semantic-version "0.1.2";

    ...
}

Every semantic version must have the following form: x.y.z. The x corresponds to a major version, the y corresponds to a minor version and the z corresponds to a patch version. If no semantic version is specified in a module or an import statement, then the default one is used - 0.0.0.

A major version number of 0 indicates that the model is still in development and is subject to change.

Following a release of major version 1, all modules will increment major version number when backwards incompatible changes to the model are made.

The minor version is changed when features are added to the model that do not impact current clients use of the model.

The patch version is incremented when non-feature changes (such as bugfixes or clarifications of human-readable descriptions that do not impact model functionality) are made that maintain backwards compatibility.

When importing a module with activated semantic version processing mode, only the module with the newest (highest) compatible semantic version is imported. Two semantic versions are compatible when all of the following conditions are met:

  • the major version in the import statement and major version in the imported module are equal. For instance, 1.5.3 is compatible with 1.5.3, 1.5.4, 1.7.2, etc., but it is not compatible with 0.5.2 or 2.4.8, etc.

  • the combination of minor version and patch version in the import statement is not higher than the one in the imported module. For instance, 1.5.2 is compatible with 1.5.2, 1.5.4, 1.6.8 etc. In fact, 1.5.2 is also compatible with versions like 1.5.1, 1.4.9 or 1.3.7 as they have equal major version. However, they will not be imported because their minor and patch version are lower (older).

If the import statement does not specify a semantic version, then the default one is chosen - 0.0.0. Thus, the module is imported only if it has a semantic version compatible with the default one, for example 0.0.0, 0.1.3, 0.3.5 and so on.
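
The two conditions above can be expressed as a short check. The following sketch is purely illustrative (it is not part of the YANG Tools API) and assumes the versions have already been split into numeric major, minor and patch components:

// Illustrative only: decides whether a module with version modMajor.modMinor.modPatch
// satisfies an import that requested impMajor.impMinor.impPatch.
static boolean satisfiesImport(int impMajor, int impMinor, int impPatch,
                               int modMajor, int modMinor, int modPatch) {
    // Condition 1: major versions must be equal.
    if (impMajor != modMajor) {
        return false;
    }
    // Condition 2: the module's minor.patch must not be lower than the requested one.
    if (modMinor != impMinor) {
        return modMinor > impMinor;
    }
    return modPatch >= impPatch;
}

For example, satisfiesImport(1, 5, 2, 1, 6, 8) returns true, while satisfiesImport(1, 5, 2, 1, 4, 9) returns false, matching the 1.5.2 examples above.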

Working with YANG Data

If you want to work with YANG Data you are going to need NormalizedNode objects that are specified in the YANG Data API. NormalizedNode is an interface at the top of the YANG Data hierarchy. It is extended through sub-interfaces which define the behaviour of specific NormalizedNode types like AnyXmlNode, ChoiceNode, LeafNode, ContainerNode, etc. Concrete implementations of these interfaces are defined in the yang-data-impl module. Once you have one or more NormalizedNode instances, you can perform CRUD operations on the YANG data tree, which is an in-memory database designed to store normalized nodes in a tree-like structure.

In some cases it is clear which NormalizedNode type belongs to which yang statement (e.g. AnyXmlNode, ChoiceNode, LeafNode). However, there are some normalized nodes which are named differently from their yang counterparts. They are listed below:

  • LeafSetNode - leaf-list

  • OrderedLeafSetNode - leaf-list that is ordered-by user

  • LeafSetEntryNode - concrete entry in a leaf-list

  • MapNode - keyed list

  • OrderedMapNode - keyed list that is ordered-by user

  • MapEntryNode - concrete entry in a keyed list

  • UnkeyedListNode - unkeyed list

  • UnkeyedListEntryNode - concrete entry in an unkeyed list

In order to create a concrete NormalizedNode object you can use the utility class Builders or ImmutableNodes. These classes can be found in yang-data-impl module and they provide methods for building each type of normalized node. Here is a simple example of building a normalized node:

// example 1
ContainerNode containerNode = Builders.containerBuilder().withNodeIdentifier(new YangInstanceIdentifier.NodeIdentifier(QName.create(moduleQName, "example-container"))).build();

// example 2
ContainerNode containerNode2 = Builders.containerBuilder(containerSchemaNode).build();

Both examples produce the same result. NodeIdentifier is one of the four types of YangInstanceIdentifier (these types are described in the javadoc of YangInstanceIdentifier). The purpose of YangInstanceIdentifier is to uniquely identify a particular node in the data tree. In the first example, you have to add NodeIdentifier before building the resulting node. In the second example it is also added using the provided ContainerSchemaNode object.
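
For reference, here is a minimal sketch of the path argument types commonly used with YangInstanceIdentifier; the QName constants are assumed to exist, and the YangInstanceIdentifier javadoc remains the authoritative list:

// Plain container, leaf or list identifier.
YangInstanceIdentifier.NodeIdentifier containerId =
        new YangInstanceIdentifier.NodeIdentifier(containerQName);

// Entry of a keyed list, identified by the key leaf and its value.
YangInstanceIdentifier.NodeIdentifierWithPredicates listEntryId =
        new YangInstanceIdentifier.NodeIdentifierWithPredicates(listQName, keyLeafQName, "key-value");

// Entry of a leaf-list, identified by its value.
YangInstanceIdentifier.NodeWithValue leafListEntryId =
        new YangInstanceIdentifier.NodeWithValue(leafListQName, "entry-value");

// Augmentation node, identified by the set of QNames of its children.
YangInstanceIdentifier.AugmentationIdentifier augmentationId =
        new YangInstanceIdentifier.AugmentationIdentifier(ImmutableSet.of(augmentedLeafQName));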

ImmutableNodes class offers similar builder methods and also adds an overloaded method called fromInstanceId() which allows you to create a NormalizedNode object based on YangInstanceIdentifier and SchemaContext. Below is an example which shows the use of this method.

YangInstanceIdentifier.NodeIdentifier contId = new YangInstanceIdentifier.NodeIdentifier(QName.create(moduleQName, "example-container"));

NormalizedNode<?, ?> contNode = ImmutableNodes.fromInstanceId(schemaContext, YangInstanceIdentifier.create(contId));

Let us show a more complex example of creating a NormalizedNode. First, consider the following YANG module:

module example-module {
    namespace "opendaylight.org/example-module";
    prefix "example";

    container parent-container {
        container child-container {
            list parent-ordered-list {
                ordered-by user;

                key "parent-key-leaf";

                leaf parent-key-leaf {
                    type string;
                }

                leaf parent-ordinary-leaf {
                    type string;
                }

                list child-ordered-list {
                    ordered-by user;

                    key "child-key-leaf";

                    leaf child-key-leaf {
                        type string;
                    }

                    leaf child-ordinary-leaf {
                        type string;
                    }
                }
            }
        }
    }
}

In the following example, two normalized nodes based on the module above are written to and read from the data tree.

TipProducingDataTree inMemoryDataTree = InMemoryDataTreeFactory.getInstance().create(TreeType.OPERATIONAL);
inMemoryDataTree.setSchemaContext(schemaContext);

// first data tree modification
MapEntryNode parentOrderedListEntryNode = Builders.mapEntryBuilder().withNodeIdentifier(
    new YangInstanceIdentifier.NodeIdentifierWithPredicates(
        parentOrderedListQName, parentKeyLeafQName, "pkval1"))
    .withChild(Builders.leafBuilder().withNodeIdentifier(
        new YangInstanceIdentifier.NodeIdentifier(parentOrdinaryLeafQName))
        .withValue("plfval1").build()).build();

OrderedMapNode parentOrderedListNode = Builders.orderedMapBuilder().withNodeIdentifier(
    new YangInstanceIdentifier.NodeIdentifier(parentOrderedListQName))
    .withChild(parentOrderedListEntryNode).build();

ContainerNode parentContainerNode = Builders.containerBuilder().withNodeIdentifier(
    new YangInstanceIdentifier.NodeIdentifier(parentContainerQName))
    .withChild(Builders.containerBuilder().withNodeIdentifier(
        new NodeIdentifier(childContainerQName)).withChild(parentOrderedListNode).build()).build();

YangInstanceIdentifier path1 = YangInstanceIdentifier.of(parentContainerQName);

DataTreeModification treeModification = inMemoryDataTree.takeSnapshot().newModification();
treeModification.write(path1, parentContainerNode);

// second data tree modification
MapEntryNode childOrderedListEntryNode = Builders.mapEntryBuilder().withNodeIdentifier(
    new YangInstanceIdentifier.NodeIdentifierWithPredicates(
        childOrderedListQName, childKeyLeafQName, "chkval1"))
    .withChild(Builders.leafBuilder().withNodeIdentifier(
        new YangInstanceIdentifier.NodeIdentifier(childOrdinaryLeafQName))
        .withValue("chlfval1").build()).build();

OrderedMapNode childOrderedListNode = Builders.orderedMapBuilder().withNodeIdentifier(
    new YangInstanceIdentifier.NodeIdentifier(childOrderedListQName))
    .withChild(childOrderedListEntryNode).build();

ImmutableMap.Builder<QName, Object> builder = ImmutableMap.builder();
ImmutableMap<QName, Object> keys = builder.put(parentKeyLeafQName, "pkval1").build();

YangInstanceIdentifier path2 = YangInstanceIdentifier.of(parentContainerQName).node(childContainerQName)
    .node(parentOrderedListQName).node(new NodeIdentifierWithPredicates(parentOrderedListQName, keys))
    .node(childOrderedListQName);

treeModification.write(path2, childOrderedListNode);
treeModification.ready();
inMemoryDataTree.validate(treeModification);
inMemoryDataTree.commit(inMemoryDataTree.prepare(treeModification));

DataTreeSnapshot snapshotAfterCommits = inMemoryDataTree.takeSnapshot();
Optional<NormalizedNode<?, ?>> readNode = snapshotAfterCommits.readNode(path1);
Optional<NormalizedNode<?, ?>> readNode2 = snapshotAfterCommits.readNode(path2);

First comes the creation of in-memory data tree instance. The schema context (containing the model mentioned above) of this tree is set. After that, two normalized nodes are built. The first one consists of a parent container, a child container and a parent ordered list which contains a key leaf and an ordinary leaf. The second normalized node is a child ordered list that also contains a key leaf and an ordinary leaf.

In order to add a child node to a node, method withChild() is used. It takes a NormalizedNode as argument. When creating a list entry, YangInstanceIdentifier.NodeIdentifierWithPredicates should be used as its identifier. Its arguments are the QName of the list, QName of the list key and the value of the key. Method withValue() specifies a value for the ordinary leaf in the list.

Before writing a node to the data tree, a path (YangInstanceIdentifier) which determines its place in the data tree needs to be defined. The path of the first normalized node starts at the parent container. The path of the second normalized node points to the child ordered list contained in the parent ordered list entry specified by the key value “pkval1”.

The write operation is performed with both normalized nodes mentioned earlier. It consists of several steps. The first step is to instantiate a DataTreeModification object based on a DataTreeSnapshot. A DataTreeSnapshot gives you the current state of the data tree. Then comes the write operation, which writes a normalized node at the provided path in the data tree. After doing both write operations, the method ready() has to be called, marking the modification as ready for application to the data tree. No further operations within the modification are allowed. The modification is then validated - checked whether it can be applied to the data tree. Finally, we commit it to the data tree.

Now you can access the written nodes. In order to do this, you have to create a new DataTreeSnapshot instance and call the method readNode() with path argument pointing to a particular node in the tree.
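
Removing previously written data follows the same pattern. A minimal sketch, assuming the same in-memory data tree and the path2 identifier from the example above:

// Take a fresh snapshot and create a new modification.
DataTreeModification deleteModification = inMemoryDataTree.takeSnapshot().newModification();

// Delete the subtree previously written under path2.
deleteModification.delete(path2);
deleteModification.ready();

inMemoryDataTree.validate(deleteModification);
inMemoryDataTree.commit(inMemoryDataTree.prepare(deleteModification));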

Serialization / deserialization of YANG Data

If you want to deserialize YANG-modeled data which have the form of an XML document, you can use the XML parser found in the module yang-data-codec-xml. The parser walks through the XML document containing YANG-modeled data based on the provided SchemaContext and emits node events into a NormalizedNodeStreamWriter. The parser disallows multiple instances of the same element except for leaf-list and list entries. The parser also expects that the YANG-modeled data in the XML source are wrapped in a root element. Otherwise it will not work correctly.

Here is an example of using the XML parser.

// InputStream with the XML document to parse (the path is illustrative);
// the data in the document is modeled by example-module.
InputStream resourceAsStream = ExampleClass.class.getResourceAsStream("/example-module-data.xml");

XMLInputFactory factory = XMLInputFactory.newInstance();
XMLStreamReader reader = factory.createXMLStreamReader(resourceAsStream);

NormalizedNodeResult result = new NormalizedNodeResult();
NormalizedNodeStreamWriter streamWriter = ImmutableNormalizedNodeStreamWriter.from(result);

XmlParserStream xmlParser = XmlParserStream.create(streamWriter, schemaContext);
xmlParser.parse(reader);

NormalizedNode<?, ?> transformedInput = result.getResult();

The XML parser utilizes the javax.xml.stream.XMLStreamReader for parsing an XML document. First, you should create an instance of this reader using XMLInputFactory and then load an XML document (in the form of an InputStream object) into it.

In order to emit node events while parsing the data you need to instantiate a NormalizedNodeStreamWriter. This writer is actually an interface and therefore you need to use a concrete implementation of it. In this example it is the ImmutableNormalizedNodeStreamWriter, which constructs immutable instances of NormalizedNodes.

There are two ways to create an instance of this writer, using the static overloaded method from(). One version of this method takes a NormalizedNodeResult as argument. This object type is a result holder in which the resulting NormalizedNode will be stored. The other version takes a NormalizedNodeContainerBuilder as argument. All created nodes will be written into this builder.

The next step is to create an instance of the XML parser. The parser itself is represented by a class named XmlParserStream. You can use one of two versions of the static overloaded method create() to construct this object. One version accepts a NormalizedNodeStreamWriter and a SchemaContext as arguments, the other version takes the same arguments plus a SchemaNode. Node events are emitted to the writer. The SchemaContext is used to check whether the YANG data in the XML source comply with the provided YANG model(s). The last argument, a SchemaNode object, describes the node that is the parent of the nodes defined in the XML data. If you do not provide this argument, the parser sets the SchemaContext as the parent node.

The parser is now ready to walk through the XML. Parsing is initiated by calling the method parse() on the XmlParserStream object with XMLStreamReader as its argument.

Finally, you can access the result of parsing - a tree of NormalizedNodes containing the data as they are defined in the parsed XML document - by calling the method getResult() on the NormalizedNodeResult object.
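
Serialization works in the opposite direction. A minimal sketch, assuming the transformedInput node and schemaContext from the example above; the exact factory method names may differ between YANG Tools versions:

StringWriter output = new StringWriter();
XMLStreamWriter xmlStreamWriter = XMLOutputFactory.newInstance().createXMLStreamWriter(output);

// Stream writer that translates NormalizedNode events into XML events.
NormalizedNodeStreamWriter xmlNodeWriter =
        XMLStreamNormalizedNodeStreamWriter.create(xmlStreamWriter, schemaContext);

// Walks the NormalizedNode tree and emits it to the stream writer.
NormalizedNodeWriter normalizedNodeWriter = NormalizedNodeWriter.forStreamWriter(xmlNodeWriter);
normalizedNodeWriter.write(transformedInput);
normalizedNodeWriter.flush();

String serializedXml = output.toString();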

Introducing schema source repositories
Writing YANG driven generators
Introducing specific extension support for YANG parser
Diagnostics