Welcome to OpenDaylight Documentation

The OpenDaylight documentation site acts as a central clearinghouse for OpenDaylight project and release documentation. If you would like to contribute to documentation, refer to the Documentation Guide.

Getting Started with OpenDaylight

OpenDaylight Downloads

Supported Releases

Sodium-SR4

(Current Release)

Announcement

Sodium Release

Original Release Date

September 24, 2019

Service Release Date

August 28, 2020

Downloads
Documentation
Neon-SR3
Announcement

Neon Release: Most Pervasive Open Source SDN Controller

Original Release Date

March 26, 2019

Service Release Date

December 20, 2019

Downloads
Documentation

Release Notes

Execution

OpenDaylight includes Karaf containers, OSGi (Open Service Gateway Initiative) bundles, and Java class files, which are portable and can run on any Java 8-compliant JVM (Java virtual machine). Any add-on project or feature of a specific project may have additional requirements.

Development

OpenDaylight is written in Java and uses Maven as its build tool. Therefore, the only requirements for developing OpenDaylight projects are a Java 8-compliant JDK and Maven.

If an application or tool is built on top of OpenDaylight’s REST APIs, it has no special requirements beyond what is necessary to run that application or tool and make REST calls.

In some instances, OpenDaylight uses the Xtend language. Although Maven downloads all the tools needed to build applications, additional plugins may be required for IDE support.

Projects with additional requirements for execution typically have similar or additional requirements for development. See the platform release notes for details.
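As a quick sanity check of the requirements above, the following commands (a minimal sketch; the exact output depends on your installation) confirm that a suitable JDK and Maven are available:

java -version    # expect a Java 8-compliant JDK
mvn -version     # Maven is OpenDaylight's build tool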

Platform Release Notes
Sodium Platform Upgrade

This document describes the steps to help users upgrade to the Sodium platform. Refer to the Managed Release Integrated (MRI) project for more information.

Preparation
Version Bump

Before performing a platform upgrade, do the following to bump the odlparent versions (for example, using bump-odl-version):

  1. Update the odlparent version from 4.0.9 to 5.0.4. There should not be any reference to org.opendaylight.odlparent other than 5.0.4, including in the custom feature.xml template (src/main/feature/feature.xml). The version range there should be “[5,6)” instead of “[4,5]”, “[4.0.5,5]” or any other variation. (A parent-declaration sketch follows these steps.)

bump-odl-version: bump-odl-version odlparent 4.0.9 5.0.4
  2. Update the direct yangtools version references from 2.1.8 to 3.0.7. There should not be any reference to org.opendaylight.yangtools other than 3.0.7, including in the custom feature.xml templates (src/main/feature/feature.xml). The version range there should be “[3,4)” instead of “[2.1,3)”.

  3. Update the MD-SAL version from 3.0.6 to 4.0.8. There should not be any reference to org.opendaylight.mdsal other than 4.0.8.

rpl -R 3.0.6 4.0.8
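As a concrete illustration of step 1, a project that inherits directly from odlparent would end up with a parent declaration like the following. This is only a sketch; many projects inherit from other org.opendaylight.odlparent artifacts (for example bundle-parent or single-feature-parent) instead, in which case only the version element changes.

<parent>
   <groupId>org.opendaylight.odlparent</groupId>
   <artifactId>odlparent</artifactId>
   <version>5.0.4</version>
   <relativePath/>
 </parent>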
Dependent Projects

Before performing a platform upgrade, users must also install any dependent projects. To locally install a dependent project, pull and install the respective sodium-mri changes for that project. At a minimum, pull and install controller, AAA and NETCONF.
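For reference, one way to pull such a change locally is to fetch it directly from Gerrit before building. The change ref below is hypothetical; substitute the actual ref shown in the Gerrit UI for the sodium-mri change in question.

git fetch https://git.opendaylight.org/gerrit/controller refs/changes/NN/NNNNN/N   # hypothetical change ref
git checkout FETCH_HEAD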

Perform the following steps to save time when locally installing any dependent project:

  • For quick install:

mvn -Pq clean install
  • If previously installed, go offline and/or use the no-snapshot-update option.

mvn -Pq -o -nsu clean install
Upgrade the ODL Parent

The following sub-section describes how to upgrade to the ODL Parent version 5. Refer to the ODL Parent Release Notes for more information.

Features

Update feature definitions as follows:

  • Change any version range referencing version 4 of ODL Parent to “[5,6)” for ODL Parent 5, for example:

<feature name="odl-infrautils-caches">
     <feature version="[5,6)">odl-guava</feature>
 </feature>
JSR305 (javax.annotation.Nullable and friends)

JSR305 annotations are no longer pulled into a project by default. Users have the option of migrating annotations to JDT (@Nullable et al), Checker Framework (@GuardedBy), SpotBugs (@CheckReturnValue), or simply pulling the JSR305 dependency into a project by adding the following to each pom.xml that uses these annotations:

<dependency>
   <groupId>com.google.code.findbugs</groupId>
   <artifactId>jsr305</artifactId>
   <optional>true</optional>
 </dependency>
FindBugs

The findbugs-maven-plugin is no longer supported by odlparent, so upgrade to spotbugs-maven-plugin by changing the following:

<groupId>org.codehaus.mojo</groupId>
<artifactId>findbugs-maven-plugin</artifactId>

To:

<groupId>com.github.spotbugs</groupId>
<artifactId>spotbugs-maven-plugin</artifactId>
JUnit 4.11 and Hamcrest 2.1

Before declaring dependencies on Hamcrest, make sure to update the order of the JUnit and Hamcrest references to match the required order (see http://hamcrest.org/JavaHamcrest/distributables#maven-upgrade-example). Alternatively, remove the declarations completely, since odlparent provides them by default (at scope=test).
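A minimal sketch of the resulting ordering, assuming the versions are managed by odlparent (so no version elements are needed); the Hamcrest artifact is declared before JUnit so that the newer Hamcrest classes take precedence:

<dependency>
   <groupId>org.hamcrest</groupId>
   <artifactId>hamcrest</artifactId>
   <scope>test</scope>
 </dependency>
 <dependency>
   <groupId>junit</groupId>
   <artifactId>junit</artifactId>
   <scope>test</scope>
 </dependency>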

Powermockito

An unfortunate interaction exists between powermock-2.0.0 and mockito-2.25.0 where the latter requires a newer byte-buddy library. This leads to an odd exception when powermock tests are run. For example:

13:15:50 Underlying exception : java.lang.IllegalArgumentException: Could not create type
13:15:50     at org.opendaylight.genius.itm.tests.ItmTestModule.configureBindings(ItmTestModule.java:97)
13:15:50     at org.opendaylight.infrautils.inject.guice.testutils.AbstractGuiceJsr250Module.checkedConfigure(AbstractGuiceJsr250Module.java:23)
13:15:50     at org.opendaylight.infrautils.inject.guice.testutils.AbstractCheckedModule.configure(AbstractCheckedModule.java:35)
13:15:50     ... 27 more
13:15:50 Caused by: java.lang.IllegalArgumentException: Could not create type
13:15:50     at net.bytebuddy.TypeCache.findOrInsert(TypeCache.java:154)
13:15:50     at net.bytebuddy.TypeCache$WithInlineExpunction.findOrInsert(TypeCache.java:365)
13:15:50     at net.bytebuddy.TypeCache.findOrInsert(TypeCache.java:174)
13:15:50     at net.bytebuddy.TypeCache$WithInlineExpunction.findOrInsert(TypeCache.java:376)
13:15:50     at org.mockito.internal.creation.bytebuddy.TypeCachingBytecodeGenerator.mockClass(TypeCachingBytecodeGenerator.java:32)
13:15:50     at org.mockito.internal.creation.bytebuddy.SubclassByteBuddyMockMaker.createMockType(SubclassByteBuddyMockMaker.java:71)
13:15:50     at org.mockito.internal.creation.bytebuddy.SubclassByteBuddyMockMaker.createMock(SubclassByteBuddyMockMaker.java:42)
13:15:50     at org.mockito.internal.creation.bytebuddy.ByteBuddyMockMaker.createMock(ByteBuddyMockMaker.java:25)
13:15:50     at org.powermock.api.mockito.mockmaker.PowerMockMaker.createMock(PowerMockMaker.java:41)
13:15:50     at org.mockito.internal.util.MockUtil.createMock(MockUtil.java:35)
13:15:50     at org.mockito.internal.MockitoCore.mock(MockitoCore.java:62)
13:15:50     at org.mockito.Mockito.mock(Mockito.java:1907)
13:15:50     at org.mockito.Mockito.mock(Mockito.java:1816)
13:15:50     ... 30 more
13:15:50 Caused by: java.lang.NoSuchMethodError: net.bytebuddy.dynamic.loading.MultipleParentClassLoader$Builder.appendMostSpecific(Ljava/util/Collection;)Lnet/bytebuddy/dynamic/loading/MultipleParentClassLoader$Builder;
13:15:50     at org.mockito.internal.creation.bytebuddy.SubclassBytecodeGenerator.mockClass(SubclassBytecodeGenerator.java:83)
13:15:50     at org.mockito.internal.creation.bytebuddy.TypeCachingBytecodeGenerator$1.call(TypeCachingBytecodeGenerator.java:37)
13:15:50     at org.mockito.internal.creation.bytebuddy.TypeCachingBytecodeGenerator$1.call(TypeCachingBytecodeGenerator.java:34)
13:15:50     at net.bytebuddy.TypeCache.findOrInsert(TypeCache.java:152)
13:15:50     ... 42 more

The solution is to declare a dependency on mockito-core before the powermock dependency. For example:

<dependency>
   <groupId>org.mockito</groupId>
   <artifactId>mockito-core</artifactId>
   <scope>test</scope>
 </dependency>
 <dependency>
   <groupId>org.powermock</groupId>
   <artifactId>powermock-api-mockito2</artifactId>
   <scope>test</scope>
 </dependency>
 <dependency>
   <groupId>org.powermock</groupId>
   <artifactId>powermock-module-junit4</artifactId>
   <scope>test</scope>
 </dependency>
 <dependency>
   <groupId>org.powermock</groupId>
   <artifactId>powermock-reflect</artifactId>
   <scope>test</scope>
 </dependency>
 <dependency>
   <groupId>org.powermock</groupId>
   <artifactId>powermock-core</artifactId>
   <scope>test</scope>
 </dependency>
Blueprint-maven-plugin

The default configuration of blueprint-maven-plugin was tightened to only consider classes within ${project.groupId}. For classes outside of the project's assigned namespace, such as netconf's use of org.opendaylight.restconf (instead of org.opendaylight.netconf), users must override this configuration:

<plugin>
     <groupId>org.apache.aries.blueprint</groupId>
     <artifactId>blueprint-maven-plugin</artifactId>
     <configuration>
       <scanPaths>
         <scanPath>org.opendaylight.restconf</scanPath>
       </scanPaths>
     </configuration>
   </plugin>
javadoc-maven-plugin

The default configuration of javadoc-maven-plugin was updated: javadoc generation now defaults to HTML5 when built with JDK9+. This can result in javadoc failures, for example:

/w/workspace/autorelease-release-sodium-mvn35-openjdk11/openflowplugin/extension/openflowplugin-extension-api/src/main/java/org/opendaylight/openflowplugin/extension/api/GroupingLooseResolver.java:71: error: tag not supported in the generated HTML version: tt
 * @param data expected to match <T extends Augmentable<T>>

To fix this, there are the following two options:

  • Fix the Javadoc. This is preferred, since it is simple to do (see the sketch after this list).

  • Add an override for an artifact by creating (and committing to git) an empty file named “odl-javadoc-html5-optout” in an artifact’s root directory (that is, where its pom.xml is located).
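For the first option, the fix is usually a one-line Javadoc change. A minimal sketch of the failing example above, rewritten with HTML5-compatible markup ({@code} instead of the unsupported tt tag, which also escapes the angle brackets):

/**
 * @param data expected to match {@code <T extends Augmentable<T>>}
 */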

YANG Tools Impacts
YANG Parser

To comply with RFC 7950, the default YANG parser configuration validates constructs such as the following: the leafref path is not treated as an arbitrary XPath, and its prefixes must be validly imported.

leaf foo {
    type leafref {
        path "/foo:bar";
    }
}
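For the prefix in the path to be validly imported, the module that defines the referenced node must be imported under that prefix. A minimal sketch of a surrounding module, assuming a hypothetical foo-module that defines the bar leaf:

module example {
    namespace "urn:example";
    prefix ex;

    import foo-module { prefix foo; }  // makes the "foo" prefix in the leafref path valid

    leaf foo {
        type leafref {
            path "/foo:bar";
        }
    }
}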
Other Changes

Aside from the above issue, the following bugs, enhancements and features were delivered in the Sodium Simultaneous Release.

MD-SAL Impacts
Empty-type Mapping

The Java mapping for the “type empty” construct was changed. For a leaf such as:

leaf foo {
    type empty;
}

the generated accessor changed from:

java.lang.Boolean isFoo();

to:

org.opendaylight.yangtools.yang.common.Empty getFoo();

In addition, code interacting with these models must be updated accordingly (see, for example, ProtocolUtil).
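A minimal sketch of what such a caller update might look like; Foo here stands for a hypothetical generated binding interface containing the leaf above:

// Before (Boolean mapping): presence was signalled by a Boolean
// if (Boolean.TRUE.equals(foo.isFoo())) { ... }

// After (Empty mapping): presence is signalled by a non-null Empty value
if (foo.getFoo() != null) {
    // the empty leaf is present
}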

DataContainer.getImplementedInterface() Renamed

The DataContainer.getImplementedInterface() method was renamed to just implementedInterface(). In addition, it is now correctly type-narrowed in generated interfaces, which also provide a default implementation. When implementing a type registry, update the references to point to the new implementedInterface() method.

For hand-crafting interfaces or providing mock implementations, provide a proper implementedInterface() implementation such as this one.

DataContainer.implementedInterface() is type-narrowed in DataObjects

The replacement for getImplementedInterface(), implementedInterface(), is narrowed in generated intermediate interfaces. This allows groupings to provide a default implementation in container-like interfaces. For example:

public interface Grp
     extends
     DataObject
  {
     @Override
     Class<? extends Grp> implementedInterface();
  }

Users of the grouping then look like this:

public interface Cont
     extends
     ChildOf<Mdsal437Data>,
     Augmentable<Cont>,
     Grp
 {
     @Override
     default Class<Cont> implementedInterface() {
         return Cont.class;
     }
 }

The preceding construct works, but was unfortunately seen to trigger a javac bug (or something forbidden by the JLS; the available information is neither complete nor digestible), where the following construct involving two unrelated groupings fails to compile:

<T extends Grp1 & Grp2> void doSomething(Builder<T>);

The intent is to say “require a Builder of a type T, which extends both Grp1 and Grp2”. It seems javac (tested with JDK8 and JDK11) internally performs the equivalent of the following, which fails to compile (with the same error javac reports in the <T ...> case), since T would have to do the equivalent of what Cont does: narrow implementedInterface() to resolve the ambiguity. That is hardly a reason to disallow the construct; Eclipse (that is, the JDT compiler), for example, accepts it without any issues.

interface T extends Grp1, Grp2 {
  }
MD-SAL PingPongDataBroker Is No Longer Separate

Both the binding and DOM definitions of DataBroker were updated to include a createMergingTransactionChain() method, which integrates the functionality formerly provided by the odl:type=”pingpong” data broker instance. Downstream users will need to switch to the default instance and create the appropriate transaction chain manually. Note that this impacts only the org.opendaylight.mdsal interfaces, not the org.opendaylight.controller ones.

An example of the required changes can be found in AppPeerBenchmark and bgp-app-peer. Note that the same broker can be used both ways; thus, it is the createTransactionChain() call in the proper place that must be updated.
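A minimal sketch of the manual replacement, assuming an injected binding DataBroker and an existing TransactionChainListener (the variable names are illustrative, not taken from the projects above):

// Previously an odl:type="pingpong" broker was injected; now the default broker is used
// and the merging behaviour is requested explicitly when the chain is created.
TransactionChain chain = dataBroker.createMergingTransactionChain(chainListener);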

Project Release Notes
AAA
Overview

AAA (Authentication, Authorization, and Accounting) is a set of services that help improve the security posture of an OpenDaylight deployment. By default, the majority of OpenDaylight’s northbound APIs (and all RESTCONF APIs) are protected by AAA after installing the odl-restconf feature. When an API is not protected by AAA, it will be noted in the release notes.
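For illustration, a RESTCONF request against a default installation must therefore carry credentials. A minimal sketch using the default admin/admin account (change it for production use), the default RESTCONF port 8181, and the modules listing endpoint of the draft-bierman02 RESTCONF implementation:

curl -u admin:admin http://localhost:8181/restconf/modules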

Major Features
odl-aaa-shiro
  • Feature URL: ODL Shiro

  • Feature Description: ODL Shiro-based AAA implementation

  • Top Level: Yes

  • User Facing: Yes

  • Experimental: Yes

  • CSIT Test: CSIT

odl-aaa-cert
  • Feature URL: ODL Cert

  • Feature Description: MD-SAL based encrypted certificate management

  • Top Level: Yes

  • User Facing: Yes

  • Experimental: Yes

  • CSIT Test: CSIT

odl-aaa-cli
  • Feature URL: ODL CLI

  • Feature Description: Basic karaf CLI commands for interacting with AAA

  • Top Level: Yes

  • User Facing: Yes

  • Experimental: Yes

  • CSIT Test: CSIT

Security Considerations
  • Do you have any external interfaces other than RESTCONF?

    • No

  • Other security issues?

    • No

Quality Assurance
Migration
  • Is it possible to migrate from the previous release? If so, how?

    • Yes, no specific steps needed.

Compatibility
  • Is this release compatible with the previous release?

    • Yes

  • Any API changes?

    • No

  • Any configuration changes?

    • No

Behavior Changes

  • AAA-173: Eliminate the OAuth2 Provider implementation that was based on Apache Oltu.

Bug Fixes
Known Issues
End-of-life
  • List of features/APIs that were EOLed, deprecated, and/or removed from this release.

    • None

Standards
  • List of standards implemented and to what extent.

    • N/A

Release Mechanics
  • N/A

BGP LS PCEP
BGP Plugin

The OpenDaylight controller provides an implementation of BGP (Border Gateway Protocol), based on RFC 4271, as a south-bound protocol plugin. The implementation provides all basic BGP speaker capabilities, including:

  • inter/intra-AS peering

  • routes advertising

  • routes originating

  • routes storage

The plugin’s north-bound API (REST/Java) provides the user with:

  • fully dynamic runtime standardized BGP configuration

  • read-only access to all RIBs

  • read-write programmable RIBs

  • read-only reachability/linkstate topology view

PCEP Plugin

The OpenDaylight Path Computation Element Communication Protocol (PCEP) plugin provides all the basic service units necessary to build up a PCE-based controller. Defined by RFC 8231, PCEP offers LSP management functionality for Active Stateful PCE, which is the cornerstone for the majority of PCE-enabled SDN solutions. It consists of the following components:

  • Protocol library

  • PCEP session handling

  • Stateful PCE LSP-DB

  • Active Stateful PCE LSP Operations

Major Features
odl-bgpcep-bgp
  • Feature URL: BGPCEP BGP

  • Feature Description: OpenDaylight Border Gateway Protocol (BGP) plugin.

  • Top Level: Yes

  • User Facing: Yes

  • Experimental: No

  • CSIT Test: CSIT

odl-bgpcep-bmp
  • Feature URL: BGPCEP BMP

  • Feature Description: OpenDaylight BGP Monitoring Protocol (BMP) plugin.

  • Top Level: Yes

  • User Facing: Yes

  • Experimental: No

  • CSIT Test: CSIT

odl-bgpcep-pcep
  • Feature URL: BGPCEP PCEP

  • Feature Description: OpenDaylight Path Computation Element Configuration Protocol (PCEP) plugin.

  • Top Level: Yes

  • User Facing: Yes

  • Experimental: No

  • CSIT Test: CSIT

Documentation
User Guide(s):
  • N/A

Developer Guide(s):
  • N/A

Security Considerations
  • None known. All protocols implement the TCP Authentication Option (TCP MD5).

Quality Assurance

The BGP extensions were tested manually with a vendor’s BGP router implementation or other software implementations (exaBGP, bagpipeBGP). Also, they are covered by the unit tests and automated system tests.

Migration

No additional migration steps needed.

Compatibility
  • Is this release compatible with the previous release?

    • Yes

  • Any API changes?

    • No

  • Any configuration changes?

    • BGP, BMP, and PCEP CSS configuration is no longer supported.

New and Modified Features

This release provides the following new and modified features:

  • BGPCEP-871: RPC to provide PCEP session statistics

  • BGPCEP-868: Support for draft-ietf-idr-ext-opt-param

Bug Fixes
Known Issues
End-of-life
  • BGP CSS Configuration.

  • PCEP CSS Configuration.

  • BMP CSS Configuration.

Standards
  • N/A

Release Mechanics
Data Export/Import
Overview

The Data Export/Import (Daexim) feature allows OpenDaylight administrators to export the current system state to the file system or to import the state from the file system.

Major Features

This release provides the following features:

Documentation
Security Considerations
  • Do you have any external interfaces other than RESTCONF?

    • No

  • Other security issues?

    • None

Quality Assurance
  • Sonar Report

  • Code coverage is 78.8%

  • There are extensive unit-tests in the code.

Migration
  • Is it possible to migrate from the previous release? If so, how?

    • Migration should work across all releases.

Compatibility
  • Is this release compatible with the previous release?

    • Yes

  • Any API changes?

    • No

  • Any configuration changes?

    • No

Bug Fixes

The following issues were resolved in this release.

  • General commit: Address Sonar warnings found in the code. No behavior changes.

Known Issues
End-of-life
  • List of features/APIs that were EOLed, deprecated, and/or removed from this release

    • None

Standards
  • List of standards implemented.

    • None

Release Mechanics
  • Describe any major shifts in release schedule from the release plan.

    • None

Distribution
Overview

The Distribution project is the placeholder for the ODL Karaf distribution. The project currently generates two artifacts (a short usage sketch follows this list):

  • The Managed distribution (e.g. karaf-<version>.tar.gz): This includes the Managed projects in OpenDaylight (See Managed Release).

  • The Common distribution (e.g. opendaylight-<version>.tar.gz): This includes Managed and Self-Managed projects (See Managed Release).
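A minimal sketch of using either archive (the actual archive name depends on the release version):

tar -xzf karaf-<version>.tar.gz    # or opendaylight-<version>.tar.gz for the Common distribution
cd karaf-<version>
./bin/karaf                        # starts the Karaf container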

The distribution project is also the placeholder for the distribution scripts. Example of these scripts:

Major Features
Managed Distribution Archive
  • Gitweb URL: Managed Archive

  • Description: Zip or tar.gz; when extracted, a self-consistent ODL installation with Managed projects is created.

  • Top Level: Yes.

  • User Facing: Yes.

  • Experimental: No.

  • CSIT Test: CSIT

Full Distribution Archive
  • Gitweb URL: Distribution Archive

  • Description: Zip or tar.gz; when extracted, a self-consistent ODL installation with all projects is created.

  • Top Level: Yes.

  • User Facing: Yes.

  • Experimental: No.

  • CSIT Test: CSIT

Documentation
Security Considerations
  • CSIT job

  • No additional manual testing was needed.

Migration

Every distribution major release comes with new and deprecated project features, as well as a new Karaf version. Because of this, it is recommended to perform a fresh ODL installation.

Compatibility

Test features change every release, but these are only intended for distribution testing.

Bugs Fixed

No issues were resolved in this release.

Known Issues
  • ODLPARENT-110

    Successive feature installation from the Karaf 4 console causes a bundle refresh.

    Workaround:

    • Use the --no-auto-refresh option in the Karaf feature:install command.

      feature:install --no-auto-refresh odl-netconf-topology
      
    • List all the features you need in the karaf config boot file.

    • Install all features at once in console, for example:

      feature:install odl-restconf odl-netconf-mdsal odl-mdsal-apidocs odl-clustering-test-app odl-netconf-topology
      
  • ODLPARENT-113

    The ssh-dss method is used by the Karaf SSH console, but it is no longer supported by clients such as OpenSSH.

    Workaround:

    • Use the bin/client script, which uses karaf:karaf as the default credentials.

    • Use this ssh option:

      ssh -oHostKeyAlgorithms=+ssh-dss -p 8101 karaf@localhost
      

    After restart, Karaf is unable to re-use the generated host.key file.

    Workaround: Delete the etc/host.key file before starting Karaf again.

Standards

No standard implemented directly (see upstream projects).

Release Mechanics
Genius
Overview

The Genius project provides generic network interfaces, utilities and services. Any ODL application can use these to achieve interference-free co-existence with other applications using Genius. OpenDaylight Genius provides the following modules:

  • Interface (logical port) Manager: Allows binding/registration of multiple services to logical ports/interfaces.

  • Overlay Tunnel Manager: Creates and maintains overlay tunnels between configured tunnel endpoints.

  • Aliveness Monitor: Provides tunnel/nexthop aliveness monitoring services.

  • ID Manager: Generates cluster-wide persistent unique integer IDs.

  • MD-SAL Utils: Provides common generic APIs for interaction with MD-SAL.

  • Resource Manager: Provides a resource sharing framework for applications sharing common resources, e.g. table-ids, group-ids, etc.

  • FCAPS Application: Generates various alarms and counters for the different Genius modules.

  • FCAPS Framework: Collectively fetches all data generated by the FCAPS application. Any underlying infrastructure can subscribe to its events to get a generic overview of the various alarms and counters.

Major Features
odl-genius-api
  • Feature URL: ODL API

  • Feature Description: This feature includes API for all the functionalities provided by Genius.

  • Top Level: Yes

  • User Facing: Yes

  • Experimental: No

  • CSIT Tests:

odl-genius
  • Feature URL: ODL

  • Feature Description: This feature provides all the functionality offered by the Genius modules, including the interface manager, tunnel manager, resource manager, ID manager and MD-SAL utils. It includes the Genius APIs and their implementation.

  • Top Level: Yes

  • User Facing: Yes

  • Experimental: No

  • CSIT Tests:

In addition, the feature is well tested by the netvirt CSIT suites.

odl-genius-rest
  • Feature URL: REST

  • Feature Description: This feature includes RESTCONF with ‘odl-genius’ feature.

  • Top Level: Yes

  • User Facing: Yes

  • Experimental: No

  • CSIT Test:

odl-genius-fcaps-application
  • Feature URL: FCAPS Application

  • Feature Description: includes genius FCAPS application.

  • Top Level: Yes

  • User Facing: Yes

  • Experimental: Yes

  • CSIT Test: None

odl-genius-fcaps-framework
  • Feature URL: FCAPS Framework

  • Feature Description: Includes genius FCAPS framework.

  • Top Level: Yes

  • User Facing: Yes

  • Experimental: Yes

  • CSIT Test: None

Documentation
Security Considerations
  • Do you have any external interfaces other than RESTCONF?

    • No

  • Other security issues?

    • N/A

Quality Assurance
Migration
  • Is it possible to migrate from the previous release? If so, how?

    • Yes, a normal upgrade of the software should work.

Compatibility
  • Is this release compatible with the previous release?

    • Yes

  • Any API changes?

    • No

  • Any configuration changes?

    • No

Bug Fixes
Known Issues
End-of-life
  • List of features/APIs that were EOLed, deprecated, and/or removed from this release.

    • None

Standards
  • List of standards implemented.

    • N/A

Release Mechanics
Infrautils

The Infrautils project provides low-level utilities for use by other OpenDaylight projects, including:

  • @Inject DI

  • Utils incl. org.opendaylight.infrautils.utils.concurrent

  • Test Utilities

  • Job Coordinator

  • Ready Service

  • Integration Test Utilities (itestutils)

  • Caches

  • Diagstatus

  • Metrics

Major Features
odl-infrautils-all
  • Feature URL: All features

  • Feature Description: This feature exposes all infrautils framework features.

  • Top Level: Yes

  • User Facing: No

  • Experimental: Yes

  • CSIT Test:

odl-infrautils-jobcoordinator
  • Feature URL: Jobcoordinator

  • Feature Description: This feature provides technical utilities and infrastructures for other projects to use.

  • Top Level: Yes

  • User Facing: No

  • Experimental: No

  • CSIT Test: Covered by Netvirt and Genius CSITs

odl-infrautils-metrics
  • Feature URL: Metrics

  • Feature Description: This feature exposes the new infrautils.metrics API with labels, and a first implementation based on Dropwizard, including a thread watcher.

  • Top Level: Yes

  • User Facing: No

  • Experimental: Yes

  • CSIT Test: Covered by Netvirt and Genius CSITs.

odl-infrautils-ready
  • Feature URL: Ready

  • Feature Description: This feature exposes the system readiness framework.

  • Top Level: Yes

  • User Facing: Yes

  • Experimental: No

  • CSIT Test: Covered by Netvirt and Genius CSITs.

odl-infrautils-caches
  • Feature URL: Cache

  • Feature Description: This feature exposes the new infrautils.caches API, CLI commands for monitoring, and a first implementation based on Guava.

  • Top Level: Yes

  • User Facing: Yes

  • Experimental: Yes

  • CSIT Test: Covered by Netvirt and Genius CSITs.

odl-infrautils-diagstatus
  • Feature URL: Diagstatus

  • Feature Description: This feature exposes the status and diagnostics framework.

  • Top Level: Yes

  • User Facing: Yes

  • Experimental: No

  • CSIT Test: Covered by Netvirt and Genius CSITs.

odl-infrautils-metrics-prometheus
  • Feature URL: Prometheus

  • Feature Description: This feature exposes metrics by HTTP on /metrics/prometheus from the local ODL to an external Prometheus setup.

  • Top Level: Yes

  • User Facing: No

  • Experimental: Yes

  • CSIT Test: None

Documentation
Security Considerations
  • Do you have any external interfaces other than RESTCONF?

    • No

  • Other security issues?

    • N/A

Quality Assurance
Migration
  • Is it possible to migrate from the previous release? If so, how?

    • Yes, a normal upgrade of the software should work.

Compatibility
  • Is this release compatible with the previous release?

    • Yes

  • Any API changes?

    • No

  • Any configuration changes?

    • No

Bugs Fixed
  • There were no significant bugs fixed since the previous release.

Known Issues
End-of-life
  • List of features/APIs that were EOLed, deprecated, and/or removed from this release.

    • Counters infrastructure (replaced by metrics).

Standards
  • List of standards implemented and to what extent.

    • N/A

Release Mechanics
LISP Flow Mapping
Overview

LISP (Locator ID Separation Protocol) Flow Mapping service provides mapping services, including LISP Map-Server and LISP Map-Resolver services that store and serve mapping data to dataplane nodes and to OpenDaylight applications. Mapping data can include mappings of virtual addresses to the physical network addresses where the virtual nodes are reachable or hosted. Mapping data can also include a variety of routing policies, including traffic engineering and load balancing. To leverage this service, OpenDaylight applications and services can use the northbound REST API to define the mappings and policies in the LISP Mapping Service. Dataplane devices capable of the LISP control protocol can leverage this service through a southbound LISP plugin. LISP-enabled devices must be configured to use this OpenDaylight service as their Map-Server and/or Map-Resolver.

The southbound LISP plugin supports the LISP control protocol (that is, Map-Register, Map-Request and Map-Reply messages). It can also be used to register mappings in the OpenDaylight mapping service.

Major Features
odl-lispflowmapping-msmr
  • Feature URL: MSMR

  • Feature Description: This is the core feature that provides the Mapping Services and includes the LISP southbound plugin feature as well.

  • Top Level: Yes

  • User Facing: Yes

  • Experimental: No

  • CSIT Test: CSIT

odl-lispflowmapping-neutron
  • Feature URL: Neutron

  • Feature Description: This feature provides Neutron integration.

  • Top Level: Yes

  • User Facing: Yes

  • Experimental: Yes

Documentation
Security Considerations
  • Do you have any external interfaces other than RESTCONF?

    • Yes, the southbound plugin.

  • If so, how are they secured?

    • LISP southbound plugin follows LISP RFC6833 security guidelines.

  • What port numbers do they use?

    • Port used: 4342

  • Other security issues?

    • None

Quality Assurance
  • Sonar Report (59.6%)

  • CSIT Jobs

  • All modules have been unit tested. Integration tests have been performed for all major features. System tests have been performed on most major features.

  • Registering and retrieval of basic mappings have been tested more thoroughly. More complicated mapping policies have gone through less testing.

Migration
  • Is it possible to migrate from the previous release? If so, how?

    • LISP Flow Mapping service will auto-populate the data structures from existing MD-SAL data upon service start if the data has already been migrated separately. No automated way for transferring the data is provided in this release.

Compatibility
  • Is this release compatible with the previous release?

    • Yes

  • Any API changes?

    • No

  • Any configuration changes?

    • No

Bugs Fixed
  • None

Known Issues
End-of-life
  • List of features/APIs that were EOLed, deprecated, and/or removed from this release.

    • N/A

Standards
  • The LISP implementation module and southbound plugin conforms to the IETF RFC6830 and RFC6833, with the following exceptions:

    • In the Map-Request message, the M bit (a Map-Reply record exists in the Map-Request) is processed, but any mapping data at the bottom of a Map-Request is discarded.

    • LISP LCAFs are limited to only up to one level of recursion, as described in the IETF LISP YANG draft.

    • No standards exist for the LISP Mapping System northbound API as of this date.

Release Mechanics
NETCONF
Major Features


odl-netconf-topology
  • Feature URL: NETCONF Topology

  • Feature Description: NETCONF southbound plugin single-node, configuration through MD-SAL.

  • Top Level: Yes

  • User Facing: Yes

  • Experimental: No

  • CSIT Test: NETCONF CSIT

odl-netconf-clustered-topology
  • Feature URL: Clustered Topology

  • Feature Description: NETCONF southbound plugin clustered, configuration through MD-SAL.

  • Top Level: Yes

  • User Facing: Yes

  • Experimental: Yes

  • CSIT Test: Cluster CSIT

odl-netconf-console
  • Feature URL: Console

  • Feature Description: NETCONF southbound configuration with Karaf CLI.

  • Top Level: Yes

  • User Facing: Yes

  • Experimental: Yes

odl-netconf-mdsal
  • Feature URL: MD-SAL

  • Feature Description: NETCONF server for MD-SAL.

  • Top Level: Yes

  • User Facing: Yes

  • Experimental: No

  • CSIT Test: MD-SAL CSIT

odl-restconf
  • Feature URL: RESTCONF

  • Feature Description: RESTCONF

  • Top Level: Yes

  • User Facing: Yes

  • Experimental: No

  • CSIT Test: Tested by any suite that uses RESTCONF.

odl-mdsal-apidocs
  • Feature URL: API Docs

  • Feature Description: MD-SAL - apidocs

  • Top Level: Yes

  • User Facing: Yes

  • Experimental: No

odl-yanglib
  • Feature URL: YANG Lib

  • Feature Description: Yanglib server.

  • Top Level: Yes

  • User Facing: Yes

  • Experimental: No

odl-netconf-callhome-ssh
  • Feature URL: Call Home SSH

  • Feature Description: NETCONF Call Home.

  • Top Level: Yes

  • User Facing: Yes

  • Experimental: No

  • CSIT Test: Call Home CSIT.

New and Modified Features

The following are the new and modified features introduced in this release:

  • An option was provided in YANG Tools that preserves the ordering of requests as defined in the YANG file when formulating the NETCONF payload. This helps devices that are strict about the ordering of elements. To enable it, the Java system property “org.opendaylight.yangtools.yang.data.impl.schema.builder.retain-child-order” needs to be set to true before starting Karaf (see the sketch after this list).

  • NETCONF-608: NETCONF keepalives are no longer sent while a large payload reply is outstanding; the keepalive RPC is not sent to the device while ODL is waiting for, or processing, the device’s response.

  • An option was added to optionally skip issuing lock/unlock for NETCONF edit-config operations. This is only for devices that can handle multiple requests through a queue. Please contact the vendor before enabling this option, since all transaction semantics are turned off if this option is set for a device. This option can be set by issuing a PUT RESTCONF call. For example:

    /restconf/config/netconf-node-optional:netconf-node-fields-optional/topology/topology-netconf/node/{node-id}/datastore-lock
    
    {
      "netconf-node-optional:datastore-lock"  : {
      "datastore-lock-allowed" : false
      }
    }
    
  • An option was added at device mount time to control whether the datastore is locked/unlocked before issuing an edit-config command. The default value is true; if set to false, no lock/unlock is issued before edit-config.

  • The get-config RPC functionality of the ietf-netconf.yang file is available for mounted NETCONF devices. This functionality enables users to work around features not supported in RESTCONF, such as NETCONF filtering. Using this method, users can construct any custom NETCONF request.

  • A flexible mount point naming strategy was added, so that users can now configure mount point names to contain either the IP address and port (the default) or just the IP address. This feature was added for the NETCONF call-home feature.
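For the child-order option mentioned in the first item above, one way to set the system property before starting Karaf is sketched below; using EXTRA_JAVA_OPTS is an assumption about the deployment (the property can equally be added to etc/system.properties):

export EXTRA_JAVA_OPTS="-Dorg.opendaylight.yangtools.yang.data.impl.schema.builder.retain-child-order=true"
./bin/karaf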

Documentation
Security Considerations
  • Do you have any external interfaces other than RESTCONF?

    • Yes, we have MD-SAL and CSS NETCONF servers. Also, a server for NETCONF Call Home.

  • If so, how are they secured?

    • NETCONF over SSH

  • What port numbers do they use?

    • Refer to Ports. NETCONF Call Home uses TCP port 6666.

  • Other security issues?

    • None

Quality Assurance
Migration
  • Is it possible to migrate from the previous release? If so, how?

    • Yes. No additional steps required.

Compatibility
  • Is this release compatible with the previous release?

    • Yes

  • Any API changes?

    • No

  • Any configuration changes?

    • No

Bugs Fixed

  • NETCONF-24: There is an assumption that a RESTCONF URL behaves just as an HTTP URL does by squashing multiple slashes into one. However, an error is still thrown when there is an empty element in this case.

  • NETCONF-320: The query parameter field does not work when there is more than one nested field.

  • NETCONF-366: An output-less RPC must either return an output element or status code 204. Currently, this does not occur.

  • NETCONF-448: Support for a YANG 1.1 action should be added to MD-SAL.

  • NETCONF-527: Currently, netconf-testtool uses the /tmp directory to save a temporary key file. However, writing temporary data to the file system must be avoided, because it makes some test tool deployments difficult.

  • NETCONF-528: The netconf-testtool configuration should accept Set<YangModuleInfo> as a model list. Currently, this does not occur.

  • NETCONF-608: Currently, NETCONF keepalives are sent during large payload replies. This should not occur.

  • NETCONF-609: In corner cases, there is a security issue when logging passwords in plain text.

  • NETCONF-611: In some cases, an attempt is made by NETCONF to remount regardless of the error-type.

  • NETCONF-612: In corner cases, a NETCONF mount failed in the master.

  • NETCONF-613: In rare cases, adding a device configuration using POST failed in Sodium.

  • NETCONF-614: The NETCONF callhome server does not display the disconnect cause.

  • NETCONF-615: Callhome will throw NPEs in DTCL.

  • NETCONF-616: Yangtools does not process the output of the get-config RPC in the ietf-netconf YANG model.

  • NETCONF-619: Implementing code changed for YANG 1.1 action at the RESTCONF layer.

  • NETCONF-620: An action contained in an augment-prepare of a request failed.

  • NETCONF-622: Starting Karaf in the latest distribution failed with an exception.

  • NETCONF-623: Currently, it is not possible to receive notifications through the RESTCONF RFC 8040 implementation.

  • NETCONF-624: In corner cases, the NETCONF testtool did not connect to OpenDaylight.

  • NETCONF-629: Currently, there is no support for disabling the lock/unlock feature for NETCONF requests.

  • NETCONF-630: The acceptance/E2E test needs to be added to the testtool.

  • NETCONF-633: Updates are required to the user guide with information on how to use custom RPCs with the test-tool.

  • NETCONF-637: In some cases, RESTCONF does not initialize when the used models have deviations.

Known Issues

  • NETCONF-644: In some cases, the standard edit-config failed when the module augmenting base NETCONF was retrieved from a device.

End-of-life
  • List of features/APIs that were EOLed, deprecated, and/or removed from this release:

    • N/A

Standards
Release Mechanics
NetVirt
Major Features
Feature Name
  • Feature Name: odl-netvirt-openstack

  • Feature URL: odl-netvirt-openstack

  • Feature Description: NetVirt is a network virtualization solution that includes the following components:

    • Open vSwitch based virtualization for software switches.

    • Hardware VTEP for hardware switches.

    • Service Function Chaining support within a virtualized environment.

    • Support for OVS and DPDK-accelerated OVS data paths, L3VPN (BGPVPN), EVPN, ELAN, distributed L2 and L3, NAT and Floating IPs, IPv6, Security Groups, and MAC and IP learning.

  • Top Level: Yes

  • User Facing: Yes

  • Experimental: No

  • CSIT Test: NetVirt CSIT

Documentation
Security Considerations
  • No known issues.

Quality Assurance
Migration
  • Nothing beyond general migration requirements.

Compatibility
  • Nothing beyond general compatibility requirements.

Bugs Fixed
Known Issues
End-of-life

Both SFC Netvirt and COE Netvirt Integration are reaching an EOL due to lack of support from their respective projects. COE Netvirt CSIT jobs are already disabled, and SFC is deprecated for Sodium and will be removed for Magnesium if support does not come from the SFC project.

Standards
  • N/A

Release Mechanics
OpenFlow Plugin
Overview

The OpenFlow Plugin project provides the following functionality:

  • OpenFlow 1.0/1.3 Implementation: The project provides the implementation of the OpenFlow 1.0 and OpenFlow 1.3 specifications.

  • ONF Approved Extensions: The project provides the implementation of the following ONF OpenFlow 1.4 feature, which is approved as an extension to the OpenFlow 1.3 specification:

    • OpenFlow 1.4 Bundle Feature

  • Nicira Extensions: The project provides the implementation of the Nicira extensions. Some of the important extensions implemented are the Connection Tracking extension and the Group Add-Mod extension.

  • OpenFlow-Based Applications: The project provides the following applications that users can leverage out-of-the-box when developing their own applications, or consume directly:

    • Forwarding Rules Manager: Provides functionality to add/remove/update flows/groups/meters.

    • LLDP Speaker: Sends a periodic LLDP packet out on each OpenFlow switch port for link discovery.

    • Topology LLDP Discovery: Intercepts the LLDP packets and discovers the link information.

    • Topology Manager: Receives the discovered link information from the Topology LLDP Discovery application and stores it in the topology YANG model datastore.

    • Reconciliation Framework: A framework that exposes APIs that consumer (in-controller) applications can leverage to participate in the switch reconciliation process in the event of switch connection/reconnection.

    • Arbitrator Reconciliation: Exposes APIs that consumer applications or direct users can leverage to trigger device configuration reconciliation.

  • OpenFlow Java Library: The project provides the OpenFlow Java library that converts data based on the OpenFlow plugin data models to the OpenFlow Java models before sending it down the wire to the device.

New and Modified Features

This release provides the following new and modified features:

  • Feature: OVS-based NA Responder for the IPv6 default gateway.

  • Feature Description: Implements an OVS-based service that responds to Neighbor Advertisement requests for the IPv6 default gateway.

odl-openflowjava-protocol
  • Feature URL: JAVA Protocol

  • Feature Description: OpenFlow protocol implementation.

  • Top Level: Yes

  • User Facing: No

  • Experimental: No

  • CSIT Test: JAVA CSIT

odl-openflowplugin-app-config-pusher
  • Feature URL: Config Pusher

  • Feature Description: Pushes node configuration changes to OpenFlow device.

  • Top Level: Yes

  • User Facing: No

  • Experimental: No

  • CSIT Test: Pusher CSIT

odl-openflowplugin-app-forwardingrules-manager
  • Feature URL: Forwarding Rules Manager

  • Feature Description: Sends changes in config datastore to OpenFlow device incrementally. forwardingrules-manager can be replaced with forwardingrules-sync and vice versa.

  • Top Level: Yes

  • User Facing: No

  • Experimental: No

  • CSIT Test: FR Manager CSIT

odl-openflowplugin-app-forwardingrules-sync
  • Feature URL: Forwarding Rules Sync

  • Feature Description: Sends changes in config datastore to OpenFlow devices taking previous state in account and doing diffs between previous and new state. forwardingrules-sync can be replaced with forwardingrules-manager and vice versa.

  • Top Level: Yes

  • User Facing: No

  • Experimental: Yes

  • CSIT Test: FR Sync CSIT

odl-openflowplugin-app-table-miss-enforcer
  • Feature URL: Miss Enforcer

  • Feature Description: Sends table miss flows to OpenFlow device when it connects.

  • Top Level: Yes

  • User Facing: No

  • Experimental: No

  • CSIT Test: Enforcer CSIT

odl-openflowplugin-app-topology
  • Feature URL: App Topology

  • Feature Description: Discovers the topology of connected OpenFlow devices. It is a wrapper feature that loads the following features:

    • odl-openflowplugin-app-lldp-speaker

    • odl-openflowplugin-app-topology-lldp-discovery

    • odl-openflowplugin-app-topology-manager

  • Top Level: Yes

  • User Facing: No

  • Experimental: No

  • CSIT Test: App Topology CSIT

odl-openflowplugin-app-lldp-speaker
  • Feature URL: LLDP Speaker

  • Feature Description: Sends periodic LLDP packets on all the ports of all the connected OpenFlow devices.

  • Top Level: Yes

  • User Facing: No

  • Experimental: No

  • CSIT Test: LLDP Speaker CSIT

odl-openflowplugin-app-topology-lldp-discovery
  • Feature URL: LLDP Discovery

  • Feature Description: Receives the LLDP packets sent by the LLDP Speaker service, generates the link information, and publishes it to downstream services looking for link notifications.

  • Top Level: Yes

  • User Facing: No

  • Experimental: No

  • CSIT Test: LLDP Discovery CSIT

odl-openflowplugin-app-topology-manager
  • Feature URL: Topology Manager

  • Feature Description: Listens for link added/removed notifications and node connect/disconnect notifications, and updates the link information in the OpenFlow topology.

  • Top Level: Yes

  • User Facing: No

  • Experimental: No

  • CSIT Test: Topology Manager CSIT

odl-openflowplugin-nxm-extensions
  • Feature URL: NXM Extensions

  • Feature Description: Support for OpenFlow Nicira Extensions.

  • Top Level: Yes

  • User Facing: Yes

  • Experimental: No

  • CSIT Test: NXM Extensions CSIT

odl-openflowplugin-onf-extensions
  • Feature URL: ONF Extensions

  • Feature Description: Support for Open Networking Foundation Extensions.

  • Top Level: Yes

  • User Facing: Yes

  • Experimental: Yes

  • CSIT Test: No

odl-openflowplugin-flow-services
  • Feature URL: Flow Services

  • Feature Description: Wrapper feature for standard applications.

  • Top Level: Yes

  • User Facing: Yes

  • Experimental: No

  • CSIT Test: Flow Services CSIT

odl-openflowplugin-flow-services-rest
odl-openflowplugin-flow-services-ui
  • Feature URL: Services UI

  • Feature Description: Wrapper + REST interface + UI.

  • Top Level: Yes

  • User Facing: Yes

  • Experimental: No

  • CSIT Test: Flow Services UI CSIT

odl-openflowplugin-nsf-model
  • Feature URL: NSF Model

  • Feature Description: OpenFlowPlugin YANG models.

  • Top Level: Yes

  • User Facing: No

  • Experimental: No

  • CSIT Test: NSF CSIT

odl-openflowplugin-southbound
  • Feature URL: Southbound

  • Feature Description: Southbound API implementation.

  • Top Level: Yes

  • User Facing: No

  • Experimental: No

  • CSIT Test: Southbound CSIT

Documentation
Security Considerations
  • Do you have any external interfaces other than RESTCONF?

    • Yes, OpenFlow devices

  • Other security issues?

    N/A

Quality Assurance
Migration
  • Is it possible to migrate from the previous release? If so, how?

    • Yes, APIs from the previous release are supported in the Sodium release.

Compatibility
  • Is this release compatible with the previous release? Yes

Bugs Fixed

List of bugs fixed since the previous release.

Known Issues
  • List key known issues with workarounds:

    • OPNFLWPLUG-1075: Group tx-chain closed by port event thread.

    • OPNFLWPLUG-1074: Table stats not available after a switch flap.

End-of-life
  • List of features/APIs that were EOLed, deprecated, and/or removed from this release.

    • None

Standards

OpenFlow versions:

Release Mechanics
OVSDB Project
Overview

The OVSDB Project provides the following functionality:

  • OVSDB Southbound Plugin handles OVS devices that support the OVSDB schema and use the OVSDB protocol. This feature provides the implementation of the defined YANG models. Developers building in-controller applications that want to leverage OVSDB for device configuration can use this functionality.

  • HWvTep Southbound Plugin handles OVS devices that support the OVSDB Hardware vTEP schema and use the OVSDB protocol. This feature provides the implementation of the project-defined YANG models. Developers building in-controller applications that want to leverage the OVSDB Hardware vTEP plugin for device configuration can use this functionality.

Major Features
odl-ovsdb-southbound-api
  • Feature URL: Southbound API

  • Feature Description: This feature provides the YANG models for northbound users to configure the OVSDB device. These YANG models are designed based on the OVSDB schema. This feature does not provide the implementation of the YANG models. If users/developers prefer to write their own implementation, they can use this feature to load the YANG models in the controller.

  • Top Level: Yes

  • User Facing: Yes

  • Experimental: No

  • CSIT Test:

odl-ovsdb-southbound-impl
  • Feature URL: Southbound IMPL

  • Feature Description: This feature is the main feature of the OVSDB Southbound plugin. This plugin handles the OVS device that supports the OVSDB schema and uses the OVSDB protocol. This feature provides the implementation of the defined YANG models. Developers developing the in-controller application that want to leverage OVSDB for device configuration can add a dependency on this feature and all the required modules will be loaded.

  • Top Level: Yes

  • User Facing: No

  • Experimental: No

  • CSIT Test:

odl-ovsdb-southbound-impl-rest
  • Feature URL: Southbound IMPL Rest

  • Feature Description: This feature is the wrapper feature that installs the odl-ovsdb-southbound-api & odl-ovsdb-southbound-impl feature with other required features for restconf access to provide a functional OVSDB southbound plugin. Users who want to develop applications that manage the OVSDB supported devices but want to run the application outside of the OpenDaylight controller must install this feature.

  • Top Level: Yes

  • User Facing: Yes

  • Experimental: No

  • CSIT Test:

odl-ovsdb-hwvtepsouthbound-api
  • Feature URL: HWVT Southbound API

  • Feature Description: This feature provides the YANG models for northbound users to configure devices that support the OVSDB Hardware vTEP schema. These YANG models are designed based on the OVSDB Hardware vTEP schema. This feature does not provide the implementation of the YANG models. If users/developers prefer to write their own implementation of the defined YANG models, they can use this feature to install the YANG models in the controller.

  • Top Level: Yes

  • User Facing: Yes

  • Experimental: Yes

  • CSIT Test: A minimal set of CSIT tests is already in place. More work is in progress and will provide better functional coverage in a future release: CSIT

odl-ovsdb-hwvtepsouthbound
  • Feature URL: HWVTEP Southbound

  • Feature Description: This feature is the main feature of the OVSDB Hardware vTep Southbound plugin. This plugin handles the OVS device that supports the OVSDB Hardware vTEP schema and uses the OVSDB protocol. This feature provides the implementation of the defined YANG models. Developers developing the in-controller application that want to leverage OVSDB Hardware vTEP plugin for device configuration can add a dependency on this feature, and all the required modules will be loaded.

  • Top Level: Yes

  • User Facing: No

  • Experimental: Yes

  • CSIT Test: A minimal set of CSIT tests is already in place. More work is in progress and will provide better functional coverage in a future release. CSIT

odl-ovsdb-hwvtepsouthbound-rest
  • Feature URL: HWVTEP Southbound Rest

  • Feature Description: This feature is the wrapper feature that installs the odl-ovsdb-hwvtepsouthbound-api & odl-ovsdb-hwvtepsouthbound features with other required features for restconf access to provide a functional OVSDB Hardware vTEP plugin. Users who want to develop applications that manage the Hardware vTEP supported devices but want to run the applications outside of the OpenDaylight controller must install this feature.

  • Top Level: Yes

  • User Facing: Yes

  • Experimental: Yes

  • CSIT Test: A minimal set of CSIT tests is already in place. More work is in progress and will provide better functional coverage in a future release. CSIT

odl-ovsdb-library
  • Feature URL: Library

  • Feature Description: Encode/decoder library for OVSDB and Hardware vTEP schema.

  • Top Level: Yes

  • User Facing: No

  • Experimental: No

  • CSIT Test:

Documentation
  • N/A

Security Considerations
  • Do you have any external interfaces other than RESTCONF?

    • Yes, Southbound Connection to OVSDB/Hardware vTEP devices.

  • Other security issues?

    • The plugin’s connection to the device is unsecured by default. Users need to explicitly enable TLS support through the OVSDB library configuration file. Refer to the wiki page for instructions.

Quality Assurance
  • Sonar Report (57%)

  • CSIT Jobs

  • OVSDB southbound plugin is extensively tested through Unit Tests, IT test and system tests. OVSDB southbound plugin is tested in both a single-node and three-node cluster setup. Hardware vTEP plugin is currently tested through:

    • Unit testing

    • CSIT testing

    • NetVirt project L2 Gateway features CSIT tests

    • Manual testing

Migration
  • Is it possible to migrate from the previous release? If so, how?

    • Yes. User facing features and interfaces are not changed, only enhancements are done.

Compatibility
  • Is this release compatible with the previous release?

    • Yes

  • Any API changes?

    • No changes in the YANG models from previous release.

  • Any configuration changes?

    • No

Bugs Fixed
  • There were no significant issues resolved in the sodium release.

Known Issues
End-of-life
  • List of features/APIs that were EOLed, deprecated, and/or removed from this release.

    • N/A

Release Mechanics
SERVICEUTILS

The ServiceUtils infrastructure project provides the utilities that assist in the operation and maintenance of different services that are provided by OpenDaylight. A service is a functionality provided by the ODL controller. These services can be categorized as Networking services (that is, L2, L3/VPN, NAT, etc.) and Infra services (that is, Openflow). These services are provided by different ODL projects, such as Netvirt, Genius and the Openflow plugin. They are comprised of a set of Java Karaf bundles and associated MD-SAL datastores.

Major Features
odl-serviceutils-srm
  • Feature URL: SRM

  • Feature Description: This feature provides service recovery functionality for ODL services.

  • Top Level: Yes

  • User Facing: Yes

  • Experimental: No

  • CSIT Test:

odl-serviceutils-tools
  • Feature URL: Tools

  • Feature Description: This feature currently has utilities for datatree listeners, as well as Upgrade support.

  • Top Level: Yes

  • User Facing: Yes

  • Experimental: No

  • CSIT Test: Does not have CSIT on its own, but heavily tested by Genius and Netvirt CSITs.

Documentation
Security Considerations
  • Do you have any external interfaces other than RESTCONF?

    • No

  • Other security issues?

    • N/A

Quality Assurance
Migration
  • Is it possible to migrate from the previous release? If so, how?

    • Yes, a normal upgrade of the software should work.

Compatibility
  • Is this release compatible with the previous release?

    • Yes

  • Any API changes?

    • No

  • Any configuration changes?

    • No

Bugs Fixed
  • There were no significant issues resolved in the sodium release.

Known Issues
  • There were no significant issues known in the sodium release.

End-of-life
  • List of features/APIs that were EOLed, deprecated, and/or removed from this release.

    • None

Standards
  • List of standards implemented.

    • N/A

Release Mechanics
Transport PCE
Major Features

  • Service Handler: Translates WDM optical service creation requests so they can be treated by the different modules; northbound API based on OpenROADM service models.

  • Topology Management: Provides topology management.

  • Path Calculation Engine (PCE): Has a different meaning than in the BGPCEP project, since it is not based on (G)MPLS.

  • Renderer: Responsible for the path configuration through optical equipment, based on the NETCONF protocol and OpenROADM specifications; southbound plugin.

  • Optical Line Management (OLM): Provides optical fiber line monitoring and management.

Documentation
Security Considerations
  • There are no security issues found.

Quality Assurance
  • Sonar Report

  • CSIT Jobs

  • Functional tests: look at the Jenkins releng tox job, or download the sources and launch tox from the root folder.

Improvements
  • Supports the OpenROADM device version 2.2.1 (this support was experimental in Neon)

  • OpenROADM and TransportPCE are now based on the IETF RFC 8345 standard network models (contrary to Fluorine, which relied on the IETF I2RS draft).

  • Discrepancies between the topology DB and the portmapping have been fixed in this release.

  • TransportPCE has used flexmap since Neon. The Sodium release fixes a bug in the map formula used in Neon. https://git.opendaylight.org/gerrit/c/transportpce/+/84197

  • TransportPCE now relies on the new ODL databroker implementation instead of the deprecated controller one: 83996

  • Other deprecated functions related to transaction services have also been migrated; refer to 83839

Documentation
  • N/A

Security Considerations
  • Do you have any external interfaces other than RESTCONF?

    • No

  • Other security issues?

    • N/A

Quality Assurance
  • N/A

Migration
  • Is it possible to migrate from the previous release? If so, how?

    • Yes, a normal upgrade of the software should work.

Compatibility
  • Is this release compatible with the previous release?

    • Yes

  • Any API changes?

    • No

  • Any configuration changes?

    • No

Bugs Fixed
  • N/A

Known Issues
  • N/A

End-of-life
  • List of features/APIs that were EOLed, deprecated, and/or removed from this release

    • N/A

Standards
  • List of standards implemented.

    • N/A

Release Mechanics
  • N/A

Service Release Notes
Sodium-SR1 Release Notes

This page details changes and bug fixes between the Sodium Release and the Sodium Stability Release 1 (Sodium-SR1) of OpenDaylight.

Projects with No Noteworthy Changes
aaa
bgpcep
controller
coe
daexim
  • 2e68793 : Update docs header to Sodium in stable/sodium

  • fa7b403 : Bump mdsal to 4.0.6

  • 686cd3f : Bump odlparent to 5.0.2

genius
infrautils
integration/distribution
  • 7935dc0 : Update common dist version after Sodium GA

  • da75b04 : Bump MRI versions

  • bb4a10c : Enable TPCE and JSON-RPC in sodium distribution

lispflowmapping
netconf
netvirt
neutron
openflowplugin
ovsdb
serviceutils
sfc
Sodium-SR2 Release Notes

This page details changes and bug fixes between the Sodium Stability Release 1 (Sodium-SR1) and the Sodium Stability Release 2 (Sodium-SR2) of OpenDaylight.

Projects with No Noteworthy Changes
aaa
  • ad7885e2 : Bump to odlparent-5.0.5/yangtools-3.0.9/mdsal-4.0.11

  • 46427ae1 AAA-193 : Catch missing arguments in python3

  • 5b45485a : Drop dependencies on commons-text

  • e2bf56b7 : Bump odlparent/yangtools/mdsal to 5.0.4/3.0.7/4.0.8

  • f5320468 : Updated the user guide after testing

  • 9d954789 : Remove comons-beanutils overrides

  • ef4c856e AAA-114 : Fix idmtool.py for handling errors

  • aed88fc4 : Bump to odlparent-5.0.3/yangtools-3.0.6/mdsal-4.0.7

  • 6790a800 : Remove install/deploy plugin configuration

  • 653a7430 : Fixup aaa-cert-mdsal pyang warnings

  • 3df33ea7 : Update docs header to Sodium in stable/sodium

bgpcep
controller
coe
  • 50aa22b : Bump to odlparent-5.0.5/yangtools-3.0.9/mdsal-4.0.11

  • 2f98aaf : Bump odlparent/yangtools/mdsal to 5.0.4/3.0.7/4.0.8

  • 803497e : Bump to odlparent-5.0.3/yangtools-3.0.6/mdsal-4.0.7

  • 2643b77 : Update docs header to Sodium in stable/sodium

daexim
  • e7eb029 : Bump to odlparent-5.0.5/yangtools-3.0.9/mdsal-4.0.11

  • ede78ed : Bump odlparent/yangtools/mdsal to 5.0.4/3.0.7/4.0.8

  • 916cf30 DAEXIM-15 : On daexim boot import, check models only if models file is present

  • beae3f8 : Bump to odlparent-5.0.3/yangtools-3.0.6/mdsal-4.0.7

genius
  • 089f256f : Bump to odlparent-5.0.5/yangtools-3.0.9/mdsal-4.0.11

  • 65901167 : Bump odlparent/yangtools/mdsal to 5.0.4/3.0.7/4.0.8

  • a247a697 MDSAL-389 : Expose TypedReadTransaction.exists(InstanceIdentifier)

  • 9b3bd610 : Bump to odlparent-5.0.3/yangtools-3.0.6/mdsal-4.0.7

infrautils
integration/distribution
  • d575a48 : Bump odlparent-5.0.5/yangtools-3.0.9/mdsal-4.0.11

  • 366f17f : Bump odlparent/yangtools/mdsal to 5.0.4/3.0.7/4.0.8

  • 822fc17 : Update version after sodium SR1

  • 15acae6 : Add missing packaging pom

  • f5f03af INTDIST-106 : Add Sodium ONAP distribution

  • def120f : Re-add TPCE to sodium

  • 527ca66 : Bump to odlparent-5.0.3/yangtools-3.0.6/mdsal-4.0.7

  • 29f7c07 : Fixup platform versions

lispflowmapping
  • f4f2fab8 : Bump to odlparent-5.0.5/yangtools-3.0.9/mdsal-4.0.11

  • aef02e81 : Bump odlparent/yangtools/mdsal to 5.0.4/3.0.7/4.0.8

  • 66bffbec : Fix junit-addons scope

  • d844b607 : Bump to odlparent-5.0.3/yangtools-3.0.6/mdsal-4.0.7

netconf
  • fc011b75e : Fixed wrong exception types

  • dde16f406 : Bump to odlparent-5.0.5/yangtools-3.0.9/mdsal-4.0.11

  • 4500c9cbb NETCONF-652 : Add namespace to action request XML

  • ad3308e23 : Remove jsr173-ri from dependencies

  • 75908d20b : Remove websocket-server override

  • 42366fd3b : Bump odlparent/yangtools/mdsal to 5.0.4/3.0.7/4.0.8

  • 60da4823e : Bump to odlparent-5.0.3/yangtools-3.0.6/mdsal-4.0.7

  • 9d3a276b7 : Update for sshd-2.3.0 changes

  • 8f20fa402 : Correctly close NormalizedNodeStreamWriters

  • f4cee0dda : Properly close stream writer

  • 189d139d9 : Do not use toString() in looging messages

  • 2442f207c : Fix config/oper reconciliation for leaf-lists

  • 98620c855 : Lower visibility to package

  • bbaf1cca0 : Acquire RFC8528 mount point map

  • 27887ec99 : Apply modernizations

  • 349af093f : Untangle NetconfDevice setup

  • 6fad3d14d : Convert to using requireNonNull()

netvirt
neutron
  • d2d845ff : Bump to odlparent-5.0.5/yangtools-3.0.9/mdsal-4.0.11

  • ccee8dd8 : Bump odlparent/yangtools/mdsal to 5.0.4/3.0.7/4.0.8

  • bc91bd81 : Bump to odlparent-5.0.3/yangtools-3.0.6/mdsal-4.0.7

openflowplugin
  • e10c2f298 : Bump to odlparent-5.0.5/yangtools-3.0.9/mdsal-4.0.11

  • 226e45a26 : Bump odlparent/yangtools/mdsal to 5.0.4/3.0.7/4.0.8

  • 2fe595fdd : Failed to cancel service reconciliation, When controller become slave.

  • f50ff6361 OPNFLWPLUG-1078 : OPNFLWPLUG-1078: Notify device TLS authentication failure messages

  • 48475e2dc OPNFLWPLUG-1075 : OPNFLWPLUG-1075: Making Device Oper transactions atomic

  • bb626f8e7 : Read action throwing NPE

  • 0a7f87bd5 : Use String(byte[], Charset)

  • 0690fb0ce : Bump to odlparent-5.0.3/yangtools-3.0.6/mdsal-4.0.7

  • 2c10245e2 : Fix meter-id overlap

ovsdb
  • e71e31449 OVSDB-454 : Get rid of useless (Hwvtep)SouthboundProvider thread

  • 75ca1ad0c OVSDB-454 : Migrate OvsdbDataTreeChangeListenerTest

  • 90961ba06 OVSDB-454 : Eliminate server startup threads

  • 9b597af70 OVSDB-331 : Add support for using epoll Netty transport

  • 85b6d1a08 OVSDB-411 : Add NettyBootstrapFactory to hold OVSDB network threads

  • fd925bf08 OVSDB-428 : Eliminate TransactionInvokerImpl.successfulTransactionQueue

  • 20012c21f OVSDB-428 : Speed up inputQueue interaction

  • 8310eabe7 : Bump to odlparent-5.0.5/yangtools-3.0.9/mdsal-4.0.11

  • a0f2e7018 : Bump odlparent/yangtools/mdsal to 5.0.4/3.0.7/4.0.8

  • 9930827c4 : Rework TypedRowInvocationHandler invocation path

  • c6f7bc7bc : Migrate TyperUtils.getTableSchema() users

  • dfb657b23 : Simplify exception instantiation

  • 9af87d9b0 : Migrate TyperUtils methods to TypedDatabaseSchemaImpl

  • 5ee9ed22e : Make OvsdbClient return TypedDatabaseSchemas

  • c1c79b70c : Extract TypedRowInvocationHandler

  • 7a6fe0e5c : Eliminate OvsdbClientImpl duplication

  • 82723d831 : De-confuse InvocationHandler and target methods

  • e57992121 : Hide TyperUtils.extractRowUpdates()

  • 8a8f8cfdf : Add TypedReflections

  • d97430282 : Add @NonNull annotation to OvsdbConnectionListener.connected()

  • 9f030b429 : Add TypedDatabaseSchema

  • 8115ecf71 : Turn DatabaseSchema into an interface

  • 562d45084 : Make TableSchema/DatabaseSchema immutable

  • 32d9f1ad9 : Split out BaseTypeFactories

  • 11f8540ae : Use singleton BaseType instances for simple definitions

  • 91b242822 : Split out BaseTypes

  • db4b48270 : Do not use reflection in TransactCommandAggregator

  • f9ba04906 : Reuse StringEncoders for all connections

  • 4424150e6 : Reuse MappingJsonFactory across all sessions

  • 2e9ba8f8b : Cleanup HwvtepConnectionManager.getHwvtepGlobalTableEntry()

  • eb330aac7 : Do not allow DatabaseSchema name/version to be mutated

  • 88adf2528 : Do not allow TableSchema columns to be directly set

  • 0ff47ed78 : Refactor ColumnType

  • aac8875db : Cleanup ColumnSchema

  • cb6c0ea4e : Add generated serialVersionUUID to exceptions

  • 1ee2e4bfe : Make GenericTableSchema.fromJson() a factory method

  • d306338b5 : Move ObjectMapper to JsonRpcEndpoint

  • 2c95ccc22 : Improve schemas population

  • 16ff45fde : Turn JsonRpcEndpoint into a proper OvsdbRPC implementation

  • e8adc8639 : Reuse ObjectMapper across all connections

  • 12a1c60ae : Use a constant ObjectMapper in UpdateNotificationDeser

  • 4650cff9a : Use proper constant in JsonUtils

  • de91d31e7 : Do not reconfigure ObjectMapper in FutureTransformUtils

  • 1c06606a7 : Bump to odlparent-5.0.3/yangtools-3.0.6/mdsal-4.0.7

  • c2919d47d : Do not use Foo.toString() when logging

serviceutils
  • ecc8fbb : Bump to odlparent-5.0.5/yangtools-3.0.9/mdsal-4.0.11

  • 68d2bec : Expose reference implementations downstream

  • a70a6c1 : Add tools-testutils declaration

  • 195bcbd : Bump odlparent/yangtools/mdsal to 5.0.4/3.0.7/4.0.8

  • fa66fb6 : Bump to odlparent-5.0.3/yangtools-3.0.6/mdsal-4.0.7

sfc
  • 47c49529 : Bump to odlparent-5.0.5/yangtools-3.0.9/mdsal-4.0.11

  • 267a08f6 : Bump odlparent/yangtools/mdsal to 5.0.4/3.0.7/4.0.8

  • c294cbae : Bump to odlparent-5.0.3/yangtools-3.0.6/mdsal-4.0.7

Sodium-SR3 Release Notes

This page details changes and bug fixes between the Sodium Stability Release 2 (Sodium-SR2) and the Sodium Stability Release 3 (Sodium-SR3) of OpenDaylight.

Projects with No Noteworthy Changes
aaa
  • 701c04d9 : Bump odlparent/yangtools/mdsal to 5.0.7/3.0.11/4.0.14

  • 28c6a5ff AAA-194 : AAA-194 Fix for Pattern Matching in Shiro

  • 1bd4f300 : Remove jetty-servlet-tester references

  • 44a4cc40 : Migrate OSGi compendium references

  • 092b77c9 : Bump odlparent/yangtools/mdsal to 5.0.6/3.0.10/4.0.13

  • cbb4ae35 AAA-191 : Fix NPE when loading certificate

  • 5b35f181 AAA-180 : AAA-180: Fix Dynamic authorization

  • 2dfd1182 : Fix variable name s/newUser/new_user/

bgpcep
  • ae2e14242 : Bump odlparent/yangtools/mdsal to 5.0.7/3.0.11/4.0.14

  • 97eafeff8 : Upgrade compendium dependency

  • 246fb0e27 BGPCEP-900 : Handle race-conditions in BGP shutdown code

  • 99fa6030b : Remove use of projectinfo property

  • 7abbf30ff : Bump odlparent/yangtools/mdsal to 5.0.6/3.0.10/4.0.13

  • 5893a9396 : Use HashMap.computIfAbsent() in getNode()

controller
coe
  • 90751d0 : Bump odlparent/yangtools/mdsal to 5.0.7/3.0.11/4.0.14

  • 5110ad4 : Bump odlparent/yangtools/mdsal to 5.0.6/3.0.10/4.0.13

daexim
  • 847e7f0 : Bump odlparent/yangtools/mdsal to 5.0.7/3.0.11/4.0.14

  • d947c5d : Bump odlparent/yangtools/mdsal to 5.0.6/3.0.10/4.0.13

genius
  • e257e206 : Bump odlparent/yangtools/mdsal to 5.0.7/3.0.11/4.0.14

  • 1987bd1c : Bump odlparent/yangtools/mdsal to 5.0.6/3.0.10/4.0.13

infrautils
integration/distribution
  • 6b3fe87 : Enable SM projects for Sodium SR3

  • 06130ca : Bump odlparent/yangtools/mdsal to 5.0.7/3.0.11/4.0.14

  • 4567e52 : Add cluster scripts to ONAP distribution

  • 15fcd55 : Update common versions for Sodium SR3

  • da082b6 : Bump odlparent/yangtools/mdsal to 5.0.6/3.0.10/4.0.13

  • 484309c : Update platform versions

  • d6bcd4e : Add dlux for Sodium SR2

  • c09fd58 : Bump TPCE project

lispflowmapping
  • 97929021 : Bump odlparent/yangtools/mdsal to 5.0.7/3.0.11/4.0.14

  • 2571933f : Bump odlparent/yangtools/mdsal to 5.0.6/3.0.10/4.0.13

netconf
netvirt
  • d1764df56 : Bump odlparent/yangtools/mdsal to 5.0.7/3.0.11/4.0.14

  • 18e3f6383 : Bump odlparent/yangtools/mdsal to 5.0.6/3.0.10/4.0.13

neutron
  • 3a8fe6ea : Bump odlparent/yangtools/mdsal to 5.0.7/3.0.11/4.0.14

  • 738c668e : Bump odlparent/yangtools/mdsal to 5.0.6/3.0.10/4.0.13

openflowplugin
  • a63030b7c : Bump odlparent/yangtools/mdsal to 5.0.7/3.0.11/4.0.14

  • c63e6d659 : Bump odlparent/yangtools/mdsal to 5.0.6/3.0.10/4.0.13

  • 4e6394c5e OPNFLWPLUG-1086 : OPNFLWPLUG-1086: Reconciliation framework failure when starting cbench tool for the first time

  • 79477e580 OPNFLWPLUG-1084 : OPNFLWPLUG-1084 Device operational is not getting created if device reconciliation is not enabled

  • 2d5f53916 OPNFLWPLUG-1074 : OPNFLWPLUG-1074: table stats not available after a switch flap

  • b21d86660 OPNFLWPLUG-1083 : OPNFLWPLUG-1083: Stats frozen after applying 2 sec delay in OF channel

ovsdb
  • 0ef966a47 : Bump odlparent/yangtools/mdsal to 5.0.7/3.0.11/4.0.14

  • 967cab664 : Bump odlparent/yangtools/mdsal to 5.0.6/3.0.10/4.0.13

serviceutils
  • fd579bd : Bump odlparent/yangtools/mdsal to 5.0.7/3.0.11/4.0.14

  • 0e05eb3 : Bump odlparent/yangtools/mdsal to 5.0.6/3.0.10/4.0.13

sfc
  • 6eea8b3f : Bump odlparent/yangtools/mdsal to 5.0.7/3.0.11/4.0.14

  • 663dded6 : Bump odlparent/yangtools/mdsal to 5.0.6/3.0.10/4.0.13

Sodium-SR4 Release Notes

This page details changes and bug fixes between the Sodium Stability Release 3 (Sodium-SR3) and the Sodium Stability Release 4 (Sodium-SR4) of OpenDaylight.

Projects with No Noteworthy Changes
aaa
  • 1a864373 : Do not fail on warnings for docs-linkcheck

  • 4757c947 : Bump odlparent/yangtools/mdsal to 5.0.11/3.0.16/4.0.17

  • b745c1a7 : Update dependency-check

bgpcep
controller
coe
  • 9f2ff6e : Bump odlparent/yangtools/mdsal to 5.0.11/3.0.16/4.0.17

daexim
  • 89b4c3e : Bump odlparent/yangtools/mdsal to 5.0.11/3.0.16/4.0.17

genius
  • 7b8d136f : Bump odlparent/yangtools/mdsal to 5.0.11/3.0.16/4.0.17

infrautils
integration/distribution
  • e2e22ce : Bump odlparent/yangtools/mdsal

  • d24dc17 : Post-Sodium SR3 documentation update

  • 52d6c34 : Do not fail on warnings for docs-linkcheck

  • b9bcf1e : Remove dlux from Sodium SR3 distribution

lispflowmapping
  • d2a07707 : Bump odlparent/yangtools/mdsal to 5.0.11/3.0.16/4.0.17

  • bdf2528d : Do not fail on warnings for docs-linkcheck

netconf
netvirt
  • 1e7b37576 : Fix json code blocks with valid json

  • 48c716ee3 : Bump odlparent/yangtools/mdsal

neutron
  • e298b06d : Bump odlparent/yangtools/mdsal to 5.0.11/3.0.16/4.0.17

openflowplugin
  • a80c69e79 : Bump odlparent/yangtools/mdsal to 5.0.11/3.0.16/4.0.17

ovsdb
  • 87618d1a3 : Bump odlparent/yangtools/mdsal to 5.0.11/3.0.16/4.0.17

serviceutils
  • c409b7d : Bump odlparent/yangtools/mdsal to 5.0.11/3.0.16/4.0.17

sfc
  • a4e359f9 : Bump odlparent/yangtools/mdsal to 5.0.11/3.0.16/4.0.17

About this Document

This document is a verbatim copy of the Project Lifecycle & Releases Document <http://www.opendaylight.org/project-lifecycle-releases#MatureReleaseProcess> section, which has information about the release review document.

Both the release plan and release review document are intended to be short and simple. They are both posted publicly on the ODL wiki to assist in project coordination.

Important

When copying, remove the entire “About this Document” section and fill out the next sections. In addition, do not remove any other section. Also, use short sentences rather than “n/a” or “none,” since it is confusing to the reader whether that means there are no issues or you did not address the issue.

Project Name
Overview

The overview section is for users to identify and describe the features that will be used by end-users (remove this paragraph).

Behavior Changes

This release introduces the following behavior changes:

New and Modified Features

This release provides the following new and modified features:

Deprecated Features

This release removed the following features:

Resolved Issues

The following table lists the resolved issues fixed in this release.

Key

Summary

<bug ID>

Known Issues

The following table lists the known issues that exist in this release.

Key

Summary

<bug ID>

Getting Started Guide

Introduction

The OpenDaylight project is an open source platform for Software Defined Networking (SDN) that uses open protocols to provide centralized, programmatic control and network device monitoring.

Much as your operating system provides an interface for the devices that comprise your computer, OpenDaylight provides an interface that allows you to control and manage network devices.

What’s different about OpenDaylight

Major distinctions of OpenDaylight’s SDN compared to other SDN options are the following:

  • A microservices architecture, in which a “microservice” is a particular protocol or service that a user wants to enable within their installation of the OpenDaylight controller, for example:

    • A plugin that provides connectivity to devices via the OpenFlow protocols (openflowplugin).

    • A platform service such as Authentication, Authorization, and Accounting (AAA).

    • A network service providing VM connectivity for OpenStack (netvirt).

  • Support for a wide and growing range of network protocols: OpenFlow, P4, BGP, PCEP, LISP, NETCONF, OVSDB, SNMP, and more.

  • Model Driven Service Abstraction Layer (MD-SAL). Yang models play a key role in OpenDaylight and are used for:

    • Creating datastore schemas (tree based structure).

    • Generating application REST API (RESTCONF).

    • Automatic code generation (Java interfaces and Data Transfer Objects).

OpenDaylight concepts and tools

In this section we discuss some of the concepts and tools you encounter with basic use of OpenDaylight. The guide walks you through the installation process in a subsequent section, but for now familiarize yourself with the information below.

  • To date, OpenDaylight developers have formed more than 50 projects to address ways to extend network functionality. The projects are a formal structure for developers from the community to meet, document release plans, code, and release the functionality they create in an OpenDaylight release.

    The typical OpenDaylight user will not join a project team, but you should know what projects are as we refer to their activities and the functionality they create. The Karaf features to install that functionality often share the project team’s name.

  • Apache Karaf provides a lightweight runtime to install the Karaf features you want to implement and is included in the OpenDaylight platform software. By default, OpenDaylight has no pre-installed features.

    Features and feature repositories can be managed in the Karaf configuration file etc/org.apache.karaf.features.cfg using the featuresRepositories and featuresBoot variables (a short sketch of these two entries follows this list).

  • Model-Driven Service Abstraction Layer (MD-SAL) is the OpenDaylight framework that allows developers to create new Karaf features in the form of services and protocol drivers and connects them to one another. You can think of the MD-SAL as having the following two components:

    1. A shared datastore that maintains the following tree-based structures:

      1. The Config Datastore, which maintains a representation of the desired network state.

      2. The Operational Datastore, which is a representation of the actual network state based on data from the managed network elements.

    2. A message bus that provides a way for the various services and protocol drivers to notify and communicate with one another.

  • If you’re interacting with OpenDaylight through the REST APIs while using the OpenDaylight interfaces, the microservices architecture allows you to select available services, protocols, and REST APIs.
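
As mentioned in the Apache Karaf bullet above, features and feature repositories are configured in etc/org.apache.karaf.features.cfg. The fragment below is only a sketch: the repository coordinates and the boot feature shown are illustrative and must match the artifacts shipped with your distribution.

# etc/org.apache.karaf.features.cfg (fragment; values are illustrative)
featuresRepositories = mvn:org.opendaylight.integration/features-index/0.11.0/xml/features
featuresBoot = odl-restconf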

Installing OpenDaylight

You complete the following steps to install your networking environment, with specific instructions provided in the subsections below.

Before detailing the instructions for these, we address the following:

  • Java Runtime Environment (JRE) and operating system information

  • Target environment

  • Known issues and limitations

Install OpenDaylight
Downloading and installing OpenDaylight

The default distribution can be found on the OpenDaylight software download page: https://docs.opendaylight.org/en/latest/downloads.html

The Karaf distribution has no features enabled by default. However, all of the features are available to be installed.

Note

For compatibility reasons, you cannot enable all the features simultaneously. We try to document known incompatibilities in the Install the Karaf features section below.

Running the Karaf distribution

To run the Karaf distribution:

  1. Unzip the zip file.

  2. Navigate to the directory.

  3. Run ./bin/karaf.

For Example:

$ ls karaf-0.8.x-Oxygen.zip
karaf-0.8.x-Oxygen.zip
$ unzip karaf-0.8.x-Oxygen.zip
Archive:  karaf-0.8.x-Oxygen.zip
   creating: karaf-0.8.x-Oxygen/
   creating: karaf-0.8.x-Oxygen/configuration/
   creating: karaf-0.8.x-Oxygen/data/
   creating: karaf-0.8.x-Oxygen/data/tmp/
   creating: karaf-0.8.x-Oxygen/deploy/
   creating: karaf-0.8.x-Oxygen/etc/
   creating: karaf-0.8.x-Oxygen/externalapps/
   ...
   inflating: karaf-0.8.x-Oxygen/bin/start.bat
   inflating: karaf-0.8.x-Oxygen/bin/status.bat
   inflating: karaf-0.8.x-Oxygen/bin/stop.bat
$ cd karaf-0.8.x-Oxygen
$ ./bin/karaf

    ________                       ________                .__  .__       .__     __
    \_____  \ ______   ____   ____ \______ \ _____  ___.__.|  | |__| ____ |  |___/  |_
     /   |   \\____ \_/ __ \ /    \ |    |  \\__  \<   |  ||  | |  |/ ___\|  |  \   __\
    /    |    \  |_> >  ___/|   |  \|    `   \/ __ \\___  ||  |_|  / /_/  >   Y  \  |
    \_______  /   __/ \___  >___|  /_______  (____  / ____||____/__\___  /|___|  /__|
            \/|__|        \/     \/        \/     \/\/            /_____/      \/
  • Press tab for a list of available commands.

  • Typing [cmd] --help will show help for a specific command.

  • Press ctrl-d or type system:shutdown or logout to shut down OpenDaylight.

Note

Please take a look at the Deployment Recommendations and following sections under Security Considerations if you’re planning on running OpenDaylight outside of an isolated test lab environment.

Install the Karaf features

To install a feature, use the following command, where feature1 is the feature name listed in the table below:

feature:install <feature1>

You can install multiple features using the following command:

feature:install <feature1> <feature2> ... <featureN>
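
For example, to install RESTCONF together with the MD-SAL API documentation explorer (assuming both features are available in your distribution), you could run:

feature:install odl-restconf odl-mdsal-apidocs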

Note

For compatibility reasons, you cannot enable all Karaf features simultaneously. The table below documents feature installation names and known incompatibilities. Compatibility values indicate the following:

  • all - the feature can be run with other features.

  • self+all - the feature can be installed with other features with a value of all, but may interact badly with other features that have a value of self+all. Not every combination has been tested.

Uninstalling features

To uninstall a feature, you must shut down OpenDaylight, delete the data directory, and start OpenDaylight up again.
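
A minimal shell sketch of that procedure, run from the unpacked distribution directory (the stop script and the data directory follow the layout shown in the unzip listing above):

./bin/stop        # shut down OpenDaylight (or run system:shutdown from the Karaf console)
rm -rf data       # remove the data directory, which holds the installed feature state
./bin/karaf       # start OpenDaylight again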

Important

Uninstalling a feature using the Karaf feature:uninstall command is not supported and can cause unexpected and undesirable behavior.

Listing available features

To find the complete list of Karaf features, run the following command:

feature:list

To list the installed Karaf features, run the following command:

feature:list -i
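
The Karaf console also supports piping output through its built-in grep command, which can help narrow the list; for example, to check whether any RESTCONF-related feature is installed:

feature:list -i | grep restconf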

The description of these features is in the Project-specific Release Notes section.

Karaf running on Windows 10

Windows 10 cannot be identified by Karaf (Equinox). The issue occurs during installation of Karaf features, for example:

opendaylight-user@root>feature:install odl-restconf
Error executing command: Can't install feature odl-restconf/0.0.0:
Could not start bundle mvn:org.fusesource.leveldbjni/leveldbjni-all/1.8-odl in feature(s) odl-akka-leveldb-0.7: The bundle "org.fusesource.leveldbjni.leveldbjni-all_1.8.0 [300]" could not be resolved. Reason: No match found for native code: META-INF/native/windows32/leveldbjni.dll; processor=x86; osname=Win32, META-INF/native/windows64/leveldbjni.dll; processor=x86-64; osname=Win32, META-INF/native/osx/libleveldbjni.jnilib; processor=x86; osname=macosx, META-INF/native/osx/libleveldbjni.jnilib; processor=x86-64; osname=macosx, META-INF/native/linux32/libleveldbjni.so; processor=x86; osname=Linux, META-INF/native/linux64/libleveldbjni.so; processor=x86-64; osname=Linux, META-INF/native/sunos64/amd64/libleveldbjni.so; processor=x86-64; osname=SunOS, META-INF/native/sunos64/sparcv9/libleveldbjni.so; processor=sparcv9; osname=SunOS

The workaround is to add

org.osgi.framework.os.name = Win32

to the Karaf file

etc/system.properties

The workaround and further info are in this thread: https://stackoverflow.com/questions/35679852/karaf-exception-is-thrown-while-installing-org-fusesource-leveldbjni

Setting Up Clustering

Clustering Overview

Clustering is a mechanism that enables multiple processes and programs to work together as one entity. For example, when you search for something on google.com, it may seem like your search request is processed by only one web server. In reality, your search request is processed by many web servers connected in a cluster. Similarly, you can have multiple instances of OpenDaylight working together as one entity.

Advantages of clustering are:

  • Scaling: If you have multiple instances of OpenDaylight running, you can potentially do more work and store more data than you could with only one instance. You can also break up your data into smaller chunks (shards) and either distribute that data across the cluster or perform certain operations on certain members of the cluster.

  • High Availability: If you have multiple instances of OpenDaylight running and one of them crashes, you will still have the other instances working and available.

  • Data Persistence: You will not lose any data stored in OpenDaylight after a manual restart or a crash.

The following sections describe how to set up clustering on both individual and multiple OpenDaylight instances.

Multiple Node Clustering

The following sections describe how to set up multiple node clusters in OpenDaylight.

Deployment Considerations

To implement clustering, the deployment considerations are as follows:

  • To set up a cluster with multiple nodes, we recommend that you use a minimum of three machines. You can set up a cluster with just two nodes. However, if one of the two nodes fails, the cluster will not be operational.

    Note

    This is because clustering in OpenDaylight requires a majority of the nodes to be up and one node cannot be a majority of two nodes.

  • Every device that belongs to a cluster needs to have an identifier. OpenDaylight uses the node’s role for this purpose. After you define the first node’s role as member-1 in the akka.conf file, OpenDaylight uses member-1 to identify that node.

  • Data shards are used to contain all or a certain segment of OpenDaylight’s MD-SAL datastore. For example, one shard can contain all the inventory data while another shard contains all of the topology data.

    If you do not specify a module in the modules.conf file and do not specify a shard in the module-shards.conf file, then (by default) all the data is placed in the default shard (which must also be defined in the module-shards.conf file). Each shard has replicas configured. You can specify the details of where the replicas reside in the module-shards.conf file (a sketch of a modules.conf entry follows this list).

  • If you have a three node cluster and would like to be able to tolerate any single node crashing, a replica of every defined data shard must be running on all three cluster nodes.

    Note

    This is because OpenDaylight’s clustering implementation requires a majority of the defined shard replicas to be running in order to function. If you define data shard replicas on two of the cluster nodes and one of those nodes goes down, the corresponding data shards will not function.

  • If you have a three node cluster and have defined replicas for a data shard on each of those nodes, that shard will still function even if only two of the cluster nodes are running. Note that if one of those remaining two nodes goes down, the shard will not be operational.

  • It is recommended that you have multiple seed nodes configured. After a cluster member is started, it sends a message to all of its seed nodes. The cluster member then sends a join command to the first seed node that responds. If none of its seed nodes reply, the cluster member repeats this process until it successfully establishes a connection or it is shut down.

  • After a node becomes unreachable, it remains down for a configurable period of time (10 seconds, by default). Once a node goes down, you need to restart it so that it can rejoin the cluster. Once a restarted node joins a cluster, it will synchronize with the lead node automatically.
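
As a reference for the sharding configuration mentioned above, the following is a minimal sketch of a modules.conf entry (the module name and namespace are illustrative and must match the YANG module being sharded; the corresponding shard still has to be declared in module-shards.conf):

modules = [
    {
        name = "inventory"
        namespace = "urn:opendaylight:inventory"
        shard-strategy = "module"
    }
]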

Clustering Scripts

OpenDaylight includes some scripts to help with the clustering configuration.

Note

Scripts are stored in the OpenDaylight distribution/bin folder, and maintained in the distribution project repository in the folder distribution-karaf/src/main/assembly/bin/.

Configure Cluster Script

This script is used to configure the cluster parameters (e.g. akka.conf, module-shards.conf) on a member of the controller cluster. The user should restart the node to apply the changes.

Note

The script can be used at any time, even before the controller is started for the first time.

Usage:

bin/configure_cluster.sh <index> <seed_nodes_list>
  • index: Integer within 1..N, where N is the number of seed nodes. This indicates which controller node (1..N) is configured by the script.

  • seed_nodes_list: List of seed nodes (IP address), separated by comma or space.

The IP address at the provided index should belong to the member executing the script. When running this script on multiple seed nodes, keep the seed_nodes_list the same, and vary the index from 1 through N.

Optionally, shards can be configured in a more granular way by modifying the file “custom_shard_configs.txt” in the same folder as this tool. Please see that file for more details.

Example:

bin/configure_cluster.sh 2 192.168.0.1 192.168.0.2 192.168.0.3

The above command will configure member 2 (IP address 192.168.0.2) of a cluster made of 192.168.0.1, 192.168.0.2, and 192.168.0.3.

Setting Up a Multiple Node Cluster

To run OpenDaylight in a three node cluster, perform the following:

First, determine the three machines that will make up the cluster. After that, do the following on each machine:

  1. Copy the OpenDaylight distribution zip file to the machine.

  2. Unzip the distribution.

  3. Open the following .conf files:

    • configuration/initial/akka.conf

    • configuration/initial/module-shards.conf

  4. In each configuration file, make the following changes:

    Find every instance of the following lines and replace 127.0.0.1 with the hostname or IP address of the machine on which this file resides and OpenDaylight will run:

    netty.tcp {
      hostname = "127.0.0.1"
    

    Note

    The value you need to specify will be different for each node in the cluster.

  5. Find the following lines and replace 127.0.0.1 with the hostname or IP address of any of the machines that will be part of the cluster:

    cluster {
      seed-nodes = ["akka.tcp://opendaylight-cluster-data@${IP_OF_MEMBER1}:2550",
                    <url-to-cluster-member-2>,
                    <url-to-cluster-member-3>]
    
  6. Find the following section and specify the role for each member node. Here we assign the first node with the member-1 role, the second node with the member-2 role, and the third node with the member-3 role:

    roles = [
      "member-1"
    ]
    

    Note

    This step should use a different role on each node.

  7. Open the configuration/initial/module-shards.conf file and update the replicas so that each shard is replicated to all three nodes:

    replicas = [
        "member-1",
        "member-2",
        "member-3"
    ]
    

    For reference, view the sample config files below.

  8. Move into the <karaf-distribution-directory>/bin directory.

  9. Run the following command:

    JAVA_MAX_MEM=4G JAVA_MAX_PERM_MEM=512m ./karaf
    
  10. Enable clustering by running the following command at the Karaf command line:

    feature:install odl-mdsal-clustering
    

OpenDaylight should now be running in a three node cluster. You can use any of the three member nodes to access the data residing in the datastore.

Sample Config Files

Sample akka.conf file:

odl-cluster-data {
  bounded-mailbox {
    mailbox-type = "org.opendaylight.controller.cluster.common.actor.MeteredBoundedMailbox"
    mailbox-capacity = 1000
    mailbox-push-timeout-time = 100ms
  }

  metric-capture-enabled = true

  akka {
    loglevel = "DEBUG"
    loggers = ["akka.event.slf4j.Slf4jLogger"]

    actor {

      provider = "akka.cluster.ClusterActorRefProvider"
      serializers {
                java = "akka.serialization.JavaSerializer"
                proto = "akka.remote.serialization.ProtobufSerializer"
              }

              serialization-bindings {
                  "com.google.protobuf.Message" = proto

              }
    }
    remote {
      log-remote-lifecycle-events = off
      netty.tcp {
        hostname = "10.194.189.96"
        port = 2550
        maximum-frame-size = 419430400
        send-buffer-size = 52428800
        receive-buffer-size = 52428800
      }
    }

    cluster {
      seed-nodes = ["akka.tcp://opendaylight-cluster-data@10.194.189.96:2550",
                    "akka.tcp://opendaylight-cluster-data@10.194.189.98:2550",
                    "akka.tcp://opendaylight-cluster-data@10.194.189.101:2550"]

      auto-down-unreachable-after = 10s

      roles = [
        "member-2"
      ]

    }
  }
}

odl-cluster-rpc {
  bounded-mailbox {
    mailbox-type = "org.opendaylight.controller.cluster.common.actor.MeteredBoundedMailbox"
    mailbox-capacity = 1000
    mailbox-push-timeout-time = 100ms
  }

  metric-capture-enabled = true

  akka {
    loglevel = "INFO"
    loggers = ["akka.event.slf4j.Slf4jLogger"]

    actor {
      provider = "akka.cluster.ClusterActorRefProvider"

    }
    remote {
      log-remote-lifecycle-events = off
      netty.tcp {
        hostname = "10.194.189.96"
        port = 2551
      }
    }

    cluster {
      seed-nodes = ["akka.tcp://opendaylight-cluster-rpc@10.194.189.96:2551"]

      auto-down-unreachable-after = 10s
    }
  }
}

Sample module-shards.conf file:

module-shards = [
    {
        name = "default"
        shards = [
            {
                name="default"
                replicas = [
                    "member-1",
                    "member-2",
                    "member-3"
                ]
            }
        ]
    },
    {
        name = "topology"
        shards = [
            {
                name="topology"
                replicas = [
                    "member-1",
                    "member-2",
                    "member-3"
                ]
            }
        ]
    },
    {
        name = "inventory"
        shards = [
            {
                name="inventory"
                replicas = [
                    "member-1",
                    "member-2",
                    "member-3"
                ]
            }
        ]
    },
    {
         name = "toaster"
         shards = [
             {
                 name="toaster"
                 replicas = [
                    "member-1",
                    "member-2",
                    "member-3"
                 ]
             }
         ]
    }
]
Cluster Monitoring

OpenDaylight exposes shard information via MBeans, which can be explored with JConsole, VisualVM, or other JMX clients, or exposed via a REST API using Jolokia, provided by the odl-jolokia Karaf feature. This is convenient, due to a significant focus on REST in OpenDaylight.
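
If Jolokia support is not already present, the feature named above can be installed from the Karaf console:

feature:install odl-jolokia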

The basic URI, which lists a schema of all available MBeans (but not their content), is:

GET  /jolokia/list

To read the information about the shards local to the queried OpenDaylight instance use the following REST calls. For the config datastore:

GET  /jolokia/read/org.opendaylight.controller:type=DistributedConfigDatastore,Category=ShardManager,name=shard-manager-config

For the operational datastore:

GET  /jolokia/read/org.opendaylight.controller:type=DistributedOperationalDatastore,Category=ShardManager,name=shard-manager-operational
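
Either query can be issued with a plain HTTP client; for example, the operational datastore call above might look like this with curl (the port 8181 and the admin credentials are assumptions to adapt to your deployment):

curl -u admin:admin http://localhost:8181/jolokia/read/org.opendaylight.controller:type=DistributedOperationalDatastore,Category=ShardManager,name=shard-manager-operational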

The output contains information on shards present on the node:

{
  "request": {
    "mbean": "org.opendaylight.controller:Category=ShardManager,name=shard-manager-operational,type=DistributedOperationalDatastore",
    "type": "read"
  },
  "value": {
    "LocalShards": [
      "member-1-shard-default-operational",
      "member-1-shard-entity-ownership-operational",
      "member-1-shard-topology-operational",
      "member-1-shard-inventory-operational",
      "member-1-shard-toaster-operational"
    ],
    "SyncStatus": true,
    "MemberName": "member-1"
  },
  "timestamp": 1483738005,
  "status": 200
}

The exact names from the “LocalShards” list are needed for further exploration, as they are used as part of the URI to look up detailed information on a particular shard.
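
For example, the member-1-shard-default-operational shard can be read with a request of the following form (the mbean coordinates mirror those returned in the response below):

GET  /jolokia/read/org.opendaylight.controller:Category=Shards,name=member-1-shard-default-operational,type=DistributedOperationalDatastore

An example output for member-1-shard-default-operational looks like this: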

{
  "request": {
    "mbean": "org.opendaylight.controller:Category=Shards,name=member-1-shard-default-operational,type=DistributedOperationalDatastore",
    "type": "read"
  },
  "value": {
    "ReadWriteTransactionCount": 0,
    "SnapshotIndex": 4,
    "InMemoryJournalLogSize": 1,
    "ReplicatedToAllIndex": 4,
    "Leader": "member-1-shard-default-operational",
    "LastIndex": 5,
    "RaftState": "Leader",
    "LastCommittedTransactionTime": "2017-01-06 13:19:00.135",
    "LastApplied": 5,
    "LastLeadershipChangeTime": "2017-01-06 13:18:37.605",
    "LastLogIndex": 5,
    "PeerAddresses": "member-3-shard-default-operational: akka.tcp://opendaylight-cluster-data@192.168.16.3:2550/user/shardmanager-operational/member-3-shard-default-operational, member-2-shard-default-operational: akka.tcp://opendaylight-cluster-data@192.168.16.2:2550/user/shardmanager-operational/member-2-shard-default-operational",
    "WriteOnlyTransactionCount": 0,
    "FollowerInitialSyncStatus": false,
    "FollowerInfo": [
      {
        "timeSinceLastActivity": "00:00:00.320",
        "active": true,
        "matchIndex": 5,
        "voting": true,
        "id": "member-3-shard-default-operational",
        "nextIndex": 6
      },
      {
        "timeSinceLastActivity": "00:00:00.320",
        "active": true,
        "matchIndex": 5,
        "voting": true,
        "id": "member-2-shard-default-operational",
        "nextIndex": 6
      }
    ],
    "FailedReadTransactionsCount": 0,
    "StatRetrievalTime": "810.5 μs",
    "Voting": true,
    "CurrentTerm": 1,
    "LastTerm": 1,
    "FailedTransactionsCount": 0,
    "PendingTxCommitQueueSize": 0,
    "VotedFor": "member-1-shard-default-operational",
    "SnapshotCaptureInitiated": false,
    "CommittedTransactionsCount": 6,
    "TxCohortCacheSize": 0,
    "PeerVotingStates": "member-3-shard-default-operational: true, member-2-shard-default-operational: true",
    "LastLogTerm": 1,
    "StatRetrievalError": null,
    "CommitIndex": 5,
    "SnapshotTerm": 1,
    "AbortTransactionsCount": 0,
    "ReadOnlyTransactionCount": 0,
    "ShardName": "member-1-shard-default-operational",
    "LeadershipChangeCount": 1,
    "InMemoryJournalDataSize": 450
  },
  "timestamp": 1483740350,
  "status": 200
}

The output helps identify the shard state (leader/follower, voting/non-voting), peers, follower details if the shard is a leader, and other statistics/counters.

The ODLTools team maintains a Python-based tool that takes advantage of the above MBeans exposed via Jolokia.

Geo-distributed Active/Backup Setup

An OpenDaylight cluster works best when the latency between the nodes is very small, which practically means they should be in the same datacenter. However, it is desirable to be able to fail over to a different datacenter in case all nodes become unreachable. To achieve that, the cluster can be expanded with nodes in a different datacenter, but in a way that does not affect the latency of the primary nodes. To do that, shards on the backup nodes must be in “non-voting” state.

The API to manipulate voting states on shards is defined as RPCs in the cluster-admin.yang file in the controller project, which is well documented. A summary is provided below.

Note

Unless otherwise indicated, the below POST requests are to be sent to any single cluster node.

To create an active/backup setup with a 6 node cluster (3 active and 3 backup nodes in two locations) there is an RPC to set voting states of all shards on a list of nodes to a given state:

POST  /restconf/operations/cluster-admin:change-member-voting-states-for-all-shards

This RPC needs the list of nodes and the desired voting state as input. For creating the backup nodes, this example input can be used:

{
  "input": {
    "member-voting-state": [
      {
        "member-name": "member-4",
        "voting": false
      },
      {
        "member-name": "member-5",
        "voting": false
      },
      {
        "member-name": "member-6",
        "voting": false
      }
    ]
  }
}
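
One possible way to send this request is an ordinary curl invocation (the port 8181, the admin credentials, and the voting.json file holding the input above are assumptions to adapt to your deployment):

curl -u admin:admin -H "Content-Type: application/json" -X POST -d @voting.json http://localhost:8181/restconf/operations/cluster-admin:change-member-voting-states-for-all-shards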

When an active/backup deployment already exists, with shards on the backup nodes in non-voting state, all that is needed for a fail-over from the active “sub-cluster” to backup “sub-cluster” is to flip the voting state of each shard (on each node, active AND backup). That can be easily achieved with the following RPC call (no parameters needed):

POST  /restconf/operations/cluster-admin:flip-member-voting-states-for-all-shards

If it’s an unplanned outage where the primary voting nodes are down, the “flip” RPC must be sent to a backup non-voting node. In this case there are no shard leaders to carry out the voting changes. However there is a special case whereby if the node that receives the RPC is non-voting and is to be changed to voting and there’s no leader, it will apply the voting changes locally and attempt to become the leader. If successful, it persists the voting changes and replicates them to the remaining nodes.

When the primary site is fixed and you want to fail back to it, care must be taken when bringing the site back up. Because it was down when the voting states were flipped on the secondary, its persisted database won’t contain those changes. If brought back up in that state, the nodes will think they are still voting. If the nodes have connectivity to the secondary site, they should follow the leader in the secondary site and sync with it. However, if this does not happen, the primary site may elect its own leader, thereby partitioning the two clusters, which can lead to undesirable results. Therefore, it is recommended to either clean the databases (i.e., the journal and snapshots directories) on the primary nodes before bringing them back up, or to restore them from a recent backup of the secondary site (see the section Backing Up and Restoring the Datastore).

It is also possible to gracefully remove a node from a cluster, with the following RPC:

POST  /restconf/operations/cluster-admin:remove-all-shard-replicas

and example input:

{
  "input": {
    "member-name": "member-1"
  }
}

or just one particular shard:

POST  /restconf/operations/cluster-admin:remove-shard-replica

with example input:

{
  "input": {
    "shard-name": "default",
    "member-name": "member-2",
    "data-store-type": "config"
  }
}

Now that the (potentially dead/unrecoverable) node has been removed, another one can be added at runtime without changing the configuration files of the healthy nodes (which would require a reboot):

POST  /restconf/operations/cluster-admin:add-replicas-for-all-shards

No input is required, but this RPC needs to be sent to the new node to instruct it to replicate all shards from the cluster.

Note

While the cluster admin API allows adding and removing shards dynamically, the module-shards.conf and modules.conf files are still used on startup to define the initial configuration of shards. Modifications from the use of the API are not stored in those static files, but in the journal.

Extra Configuration Options

  • max-shard-data-change-executor-queue-size (uint32, 1..max; default: 1000)
    The maximum queue size for each shard’s data store data change notification executor.

  • max-shard-data-change-executor-pool-size (uint32, 1..max; default: 20)
    The maximum thread pool size for each shard’s data store data change notification executor.

  • max-shard-data-change-listener-queue-size (uint32, 1..max; default: 1000)
    The maximum queue size for each shard’s data store data change listener.

  • max-shard-data-store-executor-queue-size (uint32, 1..max; default: 5000)
    The maximum queue size for each shard’s data store executor.

  • shard-transaction-idle-timeout-in-minutes (uint32, 1..max; default: 10)
    The maximum amount of time a shard transaction can be idle without receiving any messages before it self-destructs.

  • shard-snapshot-batch-count (uint32, 1..max; default: 20000)
    The minimum number of entries to be present in the in-memory journal log before a snapshot is to be taken.

  • shard-snapshot-data-threshold-percentage (uint8, 1..100; default: 12)
    The percentage of Runtime.totalMemory() used by the in-memory journal log before a snapshot is to be taken.

  • shard-hearbeat-interval-in-millis (uint16, 100..max; default: 500)
    The interval at which a shard will send a heartbeat message to its remote shard.

  • operation-timeout-in-seconds (uint16, 5..max; default: 5)
    The maximum amount of time for akka operations (remote or local) to complete before failing.

  • shard-journal-recovery-log-batch-size (uint32, 1..max; default: 5000)
    The maximum number of journal log entries to batch on recovery for a shard before committing to the data store.

  • shard-transaction-commit-timeout-in-seconds (uint32, 1..max; default: 30)
    The maximum amount of time a shard transaction three-phase commit can be idle without receiving the next messages before it aborts the transaction.

  • shard-transaction-commit-queue-capacity (uint32, 1..max; default: 20000)
    The maximum allowed capacity for each shard’s transaction commit queue.

  • shard-initialization-timeout-in-seconds (uint32, 1..max; default: 300)
    The maximum amount of time to wait for a shard to initialize from persistence on startup before failing an operation (e.g. transaction create and change listener registration).

  • shard-leader-election-timeout-in-seconds (uint32, 1..max; default: 30)
    The maximum amount of time to wait for a shard to elect a leader before failing an operation (e.g. transaction create).

  • enable-metric-capture (boolean; default: false)
    Enable or disable metric capture.

  • bounded-mailbox-capacity (uint32, 1..max; default: 1000)
    The maximum queue size that an actor’s mailbox can reach.

  • persistent (boolean; default: true)
    Enable or disable data persistence.

  • shard-isolated-leader-check-interval-in-millis (uint32, 1..max; default: 5000)
    The interval at which the leader of the shard will check if its majority followers are active and term itself as isolated.

These configuration options are included in the etc/org.opendaylight.controller.cluster.datastore.cfg configuration file.
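
For example, a fragment of that file overriding a couple of the defaults listed above might look like the following (the key names are taken from the table; the values are illustrative only):

# etc/org.opendaylight.controller.cluster.datastore.cfg (fragment)
enable-metric-capture=true
bounded-mailbox-capacity=1000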

Persistence and Backup

Set Persistence Script

This script is used to enable or disable the config datastore persistence. The default state is enabled, but there are cases where persistence may not be required or even desired. The user should restart the node to apply the changes.

Note

The script can be used at any time, even before the controller is started for the first time.

Usage:

bin/set_persistence.sh <on/off>

Example:

bin/set_persistence.sh off

The above command will disable the config datastore persistence.

Backing Up and Restoring the Datastore

The same cluster-admin API described in the cluster guide for managing shard voting states has an RPC allowing backup of the datastore in a single node, taking only the file name as a parameter:

POST  /restconf/operations/cluster-admin:backup-datastore

RPC input JSON:

{
  "input": {
    "file-path": "/tmp/datastore_backup"
  }
}
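
As with the other cluster-admin RPCs, this can be invoked with an ordinary REST client; a sketch using curl (the port 8181 and the admin credentials are assumptions to adapt to your deployment):

curl -u admin:admin -H "Content-Type: application/json" -X POST -d '{"input": {"file-path": "/tmp/datastore_backup"}}' http://localhost:8181/restconf/operations/cluster-admin:backup-datastore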

Note

This backup can only be restored if the YANG models of the backed-up data are identical in the OpenDaylight instance that produced the backup and in the restore target instance.

To restore the backup on the target node the file needs to be placed into the $KARAF_HOME/clustered-datastore-restore directory, and then the node restarted. If the directory does not exist (which is quite likely if this is a first-time restore) it needs to be created. On startup, ODL checks if the journal and snapshots directories in $KARAF_HOME are empty, and only then tries to read the contents of the clustered-datastore-restore directory, if it exists. So for a successful restore, those two directories should be empty. The backup file name itself does not matter, and the startup process will delete it after a successful restore.

The backup is node independent, so when restoring a 3 node cluster, it is best to restore it on each node for consistency. For example, if restoring on one node only, it can happen that the other two empty nodes form a majority and the cluster comes up with no data.

Security Considerations

This document discusses the various security issues that might affect OpenDaylight. The document also lists specific recommendations to mitigate security risks.

This document also contains information about the corrective steps you can take if you discover a security issue with OpenDaylight, and if necessary, contact the Security Response Team, which is tasked with identifying and resolving security threats.

Overview of OpenDaylight Security

There are many different kinds of security vulnerabilities that could affect an OpenDaylight deployment, but this guide focuses on those where (a) the servers, virtual machines or other devices running OpenDaylight have been properly physically (or virtually in the case of VMs) secured against untrusted individuals and (b) individuals who have access, either via remote logins or physically, will not attempt to attack or subvert the deployment intentionally or otherwise.

While those attack vectors are real, they are out of the scope of this document.

What remains in scope is attacks launched from a server, virtual machine, or device other than the one running OpenDaylight where the attack does not have valid credentials to access the OpenDaylight deployment.

The rest of this document gives specific recommendations for deploying OpenDaylight in a secure manner, but first we highlight some high-level security advantages of OpenDaylight.

  • Separating the control and management planes from the data plane (both logically and, in many cases, physically) allows possible security threats to be forced into a smaller attack surface.

  • Having centralized information and network control gives network administrators more visibility and control over the entire network, enabling them to make better decisions faster. At the same time, centralization of network control can be an advantage only if access to that control is secure.

    Note

    While both previous advantages improve security, they also make an OpenDaylight deployment an attractive target for attack, making an understanding of these security considerations even more important.

  • The ability to more rapidly evolve southbound protocols and how they are used provides more and faster mechanisms to enact appropriate security mitigations and remediations.

  • OpenDaylight is built from OSGi bundles and the Karaf Java container. Both Karaf and OSGi provide some level of isolation with explicit code boundaries, package imports, package exports, and other security-related features.

  • OpenDaylight has a history of rapidly addressing known vulnerabilities and a well-defined process for reporting and dealing with them.

OpenDaylight Security Resources
Deployment Recommendations

We recommend that you follow the deployment guidelines in setting up OpenDaylight to minimize security threats.

  • The default credentials should be changed before deploying OpenDaylight.

  • OpenDaylight should be deployed in a private network that cannot be accessed from the internet.

  • Separate the data network (that connects devices using the network) from the management network (that connects the network devices to OpenDaylight).

    Note

    Deploying OpenDaylight on a separate, private management network does not eliminate threats, but only mitigates them. By construction, some messages must flow from the data network to the management network, e.g., OpenFlow packet_in messages, and these create an attack surface even if it is a small one.

  • Implement an authentication policy for devices that connect to both the data and management network. These are the devices that bridge likely untrusted traffic from the data network to the management network.

Securing OSGi bundles

OSGi is a Java-specific framework that improves the way that Java classes interact within a single JVM. It provides an enhanced version of the java.lang.SecurityManager (ConditionalPermissionAdmin) in terms of security.

Java provides a security framework that allows a security policy to grant permissions, such as reading a file or opening a network connection, to specific code. The code may be classes from a jarfile loaded from a specific URL, or a class signed by a specific key. OSGi builds on the standard Java security model to add the following features:

  • A set of OSGi-specific permission types, such as one that grants the right to register an OSGi service or get an OSGi service from the service registry.

  • The ability to dynamically modify permissions at runtime. This includes the ability to specify permissions by using code rather than a text configuration file.

  • A flexible predicate-based approach to determining which rules are applicable to which ProtectionDomain. This approach is much more powerful than the standard Java security policy which can only grant rights based on a jarfile URL or class signature. A few standard predicates are provided, including selecting rules based upon bundle symbolic-name.

  • Support for bundle local permissions policies with optional further constraints such as DENY operations. Most of this functionality is accessed by using the OSGi ConditionalPermissionAdmin service which is part of the OSGi core and can be obtained from the OSGi service registry. The ConditionalPermissionAdmin API replaces the earlier PermissionAdmin API.

For more information, refer to https://www.osgi.org

Securing the Karaf container

Apache Karaf is an OSGi-based runtime platform which provides a lightweight container for OpenDaylight and applications. Apache Karaf uses either the Apache Felix Framework or the Eclipse Equinox OSGi framework, and provides additional features on top of the framework.

Apache Karaf provides a security framework based on the Java Authentication and Authorization Service (JAAS), in compliance with OSGi recommendations, while providing an RBAC (Role-Based Access Control) mechanism for the console and Java Management Extensions (JMX).

The Apache Karaf security framework is used internally to control the access to the following components:

  • OSGi services

  • console commands

  • JMX layer

  • WebConsole

The remote management capabilities are present in Apache Karaf by default; however, they can be disabled through various configuration alterations. These configuration options may be applied to the OpenDaylight Karaf distribution.

Note

Refer to the following list of publications for more information on implementing security for the Karaf container.

Disabling the remote shutdown port

You can lock down your deployment post installation. Set karaf.shutdown.port=-1 in etc/custom.properties or etc/config.properties to disable the remote shutdown port.
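
The property looks like this when added to one of those files (a one-line fragment; the rest of the file is left untouched):

# etc/custom.properties (or etc/config.properties)
karaf.shutdown.port=-1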

Securing Southbound Plugins

Many individual southbound plugins provide mechanisms to secure their communication with network devices. For example, the OpenFlow plugin supports TLS connections with bi-directional authentication and the NETCONF plugin supports connecting over SSH. Meanwhile, the Unified Secure Channel plugin provides a way to form secure, remote connections for supported devices.

When deploying OpenDaylight, you should carefully investigate the secure mechanisms to connect to devices using the relevant plugins.

Securing OpenDaylight using AAA

AAA stands for Authentication, Authorization, and Accounting. All three of these services can help improve the security posture of an OpenDaylight deployment.

The vast majority of OpenDaylight’s northbound APIs (and all RESTCONF APIs) are protected by AAA by default when installing the odl-restconf feature. In the cases where APIs are not protected by AAA, this is noted in the per-project release notes.

By default, OpenDaylight has only one user account, whose username and password are both admin. This should be changed before deploying OpenDaylight.

Securing RESTCONF using HTTPS

To secure Jetty RESTful services, including RESTCONF, you must configure the Jetty server to utilize SSL by performing the following steps.

  1. Issue the following command sequence to create a self-signed certificate for use by the ODL deployment.

    keytool -keystore .keystore -alias jetty -genkey -keyalg RSA
     Enter keystore password:  123456
    What is your first and last name?
      [Unknown]:  odl
    What is the name of your organizational unit?
      [Unknown]:  odl
    What is the name of your organization?
      [Unknown]:  odl
    What is the name of your City or Locality?
      [Unknown]:
    What is the name of your State or Province?
      [Unknown]:
    What is the two-letter country code for this unit?
      [Unknown]:
    Is CN=odl, OU=odl, O=odl,
    L=Unknown, ST=Unknown, C=Unknown correct?
      [no]:  yes
    
  2. After the key has been obtained, make the following changes to the etc/custom.properties file to set a few default properties.

    org.osgi.service.http.secure.enabled=true
    org.osgi.service.http.port.secure=8443
    org.ops4j.pax.web.ssl.keystore=./etc/.keystore
    org.ops4j.pax.web.ssl.password=123456
    org.ops4j.pax.web.ssl.keypassword=123456
    
  3. Then edit the etc/jetty.xml file with the appropriate HTTP connectors.

    For example:

    <?xml version="1.0"?>
    <!--
     Licensed to the Apache Software Foundation (ASF) under one
     or more contributor license agreements.  See the NOTICE file
     distributed with this work for additional information
     regarding copyright ownership.  The ASF licenses this file
     to you under the Apache License, Version 2.0 (the
     "License"); you may not use this file except in compliance
     with the License.  You may obtain a copy of the License at
    
       http://www.apache.org/licenses/LICENSE-2.0
    
    Unless required by applicable law or agreed to in writing,
    software distributed under the License is distributed on an
    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
     KIND, either express or implied.  See the License for the
     specific language governing permissions and limitations
     under the License.
    -->
    <!DOCTYPE Configure PUBLIC "-//Mort Bay Consulting//
    DTD Configure//EN" "http://jetty.mortbay.org/configure.dtd">
    
    <Configure id="Server" class="org.eclipse.jetty.server.Server">
    
        <!-- Use this connector for many frequently idle connections and for
            threadless continuations. -->
        <New id="http-default" class="org.eclipse.jetty.server.HttpConfiguration">
            <Set name="secureScheme">https</Set>
            <Set name="securePort">
                <Property name="jetty.secure.port" default="8443" />
            </Set>
            <Set name="outputBufferSize">32768</Set>
            <Set name="requestHeaderSize">8192</Set>
            <Set name="responseHeaderSize">8192</Set>
    
            <!-- Default security setting: do not leak our version -->
            <Set name="sendServerVersion">false</Set>
    
            <Set name="sendDateHeader">false</Set>
            <Set name="headerCacheSize">512</Set>
        </New>
    
        <Call name="addConnector">
            <Arg>
                <New class="org.eclipse.jetty.server.ServerConnector">
                    <Arg name="server">
                        <Ref refid="Server" />
                    </Arg>
                    <Arg name="factories">
                        <Array type="org.eclipse.jetty.server.ConnectionFactory">
                            <Item>
                                <New class="org.eclipse.jetty.server.HttpConnectionFactory">
                                    <Arg name="config">
                                        <Ref refid="http-default"/>
                                    </Arg>
                                </New>
                            </Item>
                        </Array>
                    </Arg>
                    <Set name="host">
                        <Property name="jetty.host"/>
                    </Set>
                    <Set name="port">
                        <Property name="jetty.port" default="8181"/>
                    </Set>
                    <Set name="idleTimeout">
                        <Property name="http.timeout" default="300000"/>
                    </Set>
                    <Set name="name">jetty-default</Set>
                </New>
            </Arg>
        </Call>
    
        <!-- =========================================================== -->
        <!-- Configure Authentication Realms -->
        <!-- Realms may be configured for the entire server here, or -->
        <!-- they can be configured for a specific web app in a context -->
        <!-- configuration (see $(jetty.home)/contexts/test.xml for an -->
        <!-- example). -->
        <!-- =========================================================== -->
        <Call name="addBean">
            <Arg>
                <New class="org.eclipse.jetty.jaas.JAASLoginService">
                    <Set name="name">karaf</Set>
                    <Set name="loginModuleName">karaf</Set>
                    <Set name="roleClassNames">
                        <Array type="java.lang.String">
                            <Item>org.apache.karaf.jaas.boot.principal.RolePrincipal</Item>
                        </Array>
                    </Set>
                </New>
            </Arg>
        </Call>
        <Call name="addBean">
            <Arg>
               <New class="org.eclipse.jetty.jaas.JAASLoginService">
                    <Set name="name">default</Set>
                    <Set name="loginModuleName">karaf</Set>
                    <Set name="roleClassNames">
                        <Array type="java.lang.String">
                            <Item>org.apache.karaf.jaas.boot.principal.RolePrincipal</Item>
                        </Array>
                    </Set>
                </New>
            </Arg>
        </Call>
    </Configure>
    

The configuration snippet above adds a connector that is protected by SSL on port 8443. You can test that the changes have succeeded by restarting Karaf, issuing the following curl command, and ensuring that a 2XX HTTP status code appears in the response.

curl -u admin:admin -v -k https://localhost:8443/restconf/modules
Security Considerations for Clustering

While OpenDaylight clustering provides many benefits including high availability, scale-out performance, and data durability, it also opens a new attack surface in the form of the messages exchanged between the various instances of OpenDaylight in the cluster. In the current OpenDaylight release, these messages are neither encrypted nor authenticated meaning that anyone with access to the management network where OpenDaylight exchanges these clustering messages can forge and/or read the messages. This means that if clustering is enabled, it is even more important that the management network be kept secure from any untrusted entities.

What to Do with OpenDaylight

OpenDaylight (ODL) is a modular open platform for customizing and automating networks of any size and scale.

The following section provides links to documentation with examples of OpenDaylight deployment use cases.

Note

If you are an OpenDaylight contributor, we encourage you to add links to documentation with examples of interesting OpenDaylight deployment use cases in this section.

How to Get Help

Users and developers can get support from the OpenDaylight community through the mailing lists, IRC and forums.

  1. Post your question on Server Fault or Stack Overflow with the tag #opendaylight.

    Note

    It is important to tag questions correctly to ensure that the questions reach individuals subscribed to the tag.

  2. Mail discuss@lists.opendaylight.org or dev@lists.opendaylight.org.

  3. Directly mail the PTL as indicated on the specific project’s page.

  4. IRC: Connect to the #opendaylight or #opendaylight-meeting channel on freenode. The Linux Foundation’s IRC guide may be helpful. You will need an IRC client, or you can use the freenode webchat; you may also like IRCCloud.

  5. For infrastructure and release engineering queries, mail helpdesk@opendaylight.org. IRC: Connect to #lf-releng channel on freenode.

Developing Apps on the OpenDaylight controller

This section provides information that is required to develop apps on the OpenDaylight controller.

You can either develop apps within the controller using the model-driven SAL (MD-SAL) archetype or develop external apps and use the RESTCONF to communicate with the controller.

Overview

This section enables you to get started with app development within the OpenDaylight controller. In this example, you perform the following steps to develop an app.

  1. Create a local repository for the code using a simple build process.

  2. Start the OpenDaylight controller.

  3. Test a simple remote procedure call (RPC) which you have created based on the principle of hello world.

Prerequisites

This example requires the following.

  • A development environment with the following set up and working correctly from the shell:

    • Maven 3.5.2 or later

    • Java 8-compliant JDK

    • An appropriate Maven settings.xml file. A simple way to get the default OpenDaylight settings.xml file is:

      cp -n ~/.m2/settings.xml{,.orig} ; wget -q -O - https://raw.githubusercontent.com/opendaylight/odlparent/master/settings.xml > ~/.m2/settings.xml
      

Note

If you are using Linux or Mac OS X as your development OS, your local repository is ~/.m2/repository. For other platforms the local repository location will vary.
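
If you want to confirm that artifacts are being installed into the local repository after a build (a quick check assuming the default ~/.m2/repository location mentioned above):

# List the OpenDaylight artifacts installed in the local Maven repository
ls ~/.m2/repository/org/opendaylight/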

Building an example module

To develop an app perform the following steps.

  1. Create an Example project using Maven and an archetype called opendaylight-startup-archetype. If you are downloading this project for the first time, it will take some time to pull all the code from the remote repository.

    mvn archetype:generate -DarchetypeGroupId=org.opendaylight.archetypes -DarchetypeArtifactId=opendaylight-startup-archetype \
    -DarchetypeCatalog=remote -DarchetypeVersion=<VERSION>
    

    The correct VERSION depends on the desired Simultaneous Release:

    Archetype versions

    OpenDaylight Simultaneous Release    opendaylight-startup-archetype version
    ---------------------------------    --------------------------------------
    Sodium                               1.2.0
    Sodium SR1                           1.2.1
    Sodium SR2                           1.2.2
    Sodium SR3 Development               1.2.3-SNAPSHOT
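
    For example, to generate a project for the Sodium release using archetype version 1.2.0 from the table above (a sketch; substitute the archetype version matching your target release):

    mvn archetype:generate -DarchetypeGroupId=org.opendaylight.archetypes \
    -DarchetypeArtifactId=opendaylight-startup-archetype \
    -DarchetypeCatalog=remote -DarchetypeVersion=1.2.0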

  2. Update the properties values as follows. Ensure that the values for the groupId and the artifactId are in lower case.

Define value for property 'groupId': : org.opendaylight.example
Define value for property 'artifactId': : example
Define value for property 'version':  1.0-SNAPSHOT: : 1.0.0-SNAPSHOT
Define value for property 'package':  org.opendaylight.example: :
Define value for property 'classPrefix':  ${artifactId.substring(0,1).toUpperCase()}${artifactId.substring(1)}
Define value for property 'copyright': : Copyright (c) 2015 Yoyodyne, Inc.
  3. Accept the default value of classPrefix, that is, ${artifactId.substring(0,1).toUpperCase()}${artifactId.substring(1)}. The classPrefix creates a Java class prefix by capitalizing the first character of the artifactId.

    Note

    In this scenario, the classPrefix used is “Example”. Create a top-level directory for the archetype.

    ${artifactId}/
    example/
    cd example/
    api/
    artifacts/
    features/
    impl/
    karaf/
    pom.xml
    
  4. Build the example project.

    Note

    Depending on your development machine’s specification this might take a little while. Ensure that you are in the project’s root directory, example/, and then issue the build command, shown below.

    mvn clean install
    
  5. Start the example project for the first time.

    cd karaf/target/assembly/bin
    ls
    ./karaf
    
  6. Wait for the Karaf CLI prompt, which appears as shown below. Then wait for OpenDaylight to fully load all of its components; this can take a minute or two after the prompt appears. Check the CPU usage of the Java process on your development machine to see when it calms down.

    opendaylight-user@root>
    
  7. Verify that the “example” module is built by searching the log for an entry that includes ExampleProvider Session Initiated.

    log:display | grep Example
    
  8. Shut down OpenDaylight from the console by using the following command.

    shutdown -f
    

Defining a Simple Hello World RPC

  1. Build a hello example from the Maven archetype opendaylight-startup-archetype, same as above.
  2. Now view the entry point to understand where the log line came from. The entry point is in the impl project:

    impl/src/main/java/org/opendaylight/hello/impl/HelloProvider.java
    
  3. Add any initialization logic that your implementation needs to the HelloProvider.init method. It is analogous to an Activator.

    /**
    * Method called when the blueprint container is created.
    */
    public void init() {
        LOG.info("HelloProvider Session Initiated");
    }
    

Add a simple HelloWorld RPC API

  1. Navigate to the file.

    api/src/main/yang/hello.yang
    
  2. Edit this file as follows. In the following example, we add code to the YANG module to define the hello-world RPC:

    module hello {
        yang-version 1;
        namespace "urn:opendaylight:params:xml:ns:yang:hello";
        prefix "hello";
        revision "2019-11-27" {
            description "Initial revision of hello model";
        }
        rpc hello-world {
            input {
                leaf name {
                    type string;
                }
            }
            output {
                leaf greeting {
                    type string;
                }
            }
        }
    }
    
  3. Return to the hello/api directory and build your API as follows.

    cd ../../../
    mvn clean install
    

Implement the HelloWorld RPC API

  1. Define the HelloService, which is invoked through the hello-world API.

    cd ../impl/src/main/java/org/opendaylight/hello/impl/
    
  2. Create a new file called HelloWorldImpl.java and add in the code below.

    package org.opendaylight.hello.impl;

    import com.google.common.util.concurrent.ListenableFuture;
    import org.opendaylight.yang.gen.v1.urn.opendaylight.params.xml.ns.yang.hello.rev191127.HelloService;
    import org.opendaylight.yang.gen.v1.urn.opendaylight.params.xml.ns.yang.hello.rev191127.HelloWorldInput;
    import org.opendaylight.yang.gen.v1.urn.opendaylight.params.xml.ns.yang.hello.rev191127.HelloWorldOutput;
    import org.opendaylight.yang.gen.v1.urn.opendaylight.params.xml.ns.yang.hello.rev191127.HelloWorldOutputBuilder;
    import org.opendaylight.yangtools.yang.common.RpcResult;
    import org.opendaylight.yangtools.yang.common.RpcResultBuilder;

    /**
     * Implementation of the hello-world RPC defined in hello.yang.
     */
    public class HelloWorldImpl implements HelloService {
        @Override
        public ListenableFuture<RpcResult<HelloWorldOutput>> helloWorld(HelloWorldInput input) {
            // Build the RPC output by greeting the caller by name.
            HelloWorldOutputBuilder helloBuilder = new HelloWorldOutputBuilder();
            helloBuilder.setGreeting("Hello " + input.getName());
            return RpcResultBuilder.success(helloBuilder.build()).buildFuture();
        }
    }
    
  3. The HelloProvider.java file is in the current directory. Register the RPC that you created in the hello.yang file in the HelloProvider.java file. You can either edit HelloProvider.java to match what is below or you can simply replace it with the code below.

    /*
     * Copyright(c) Yoyodyne, Inc. and others.  All rights reserved.
     *
     * This program and the accompanying materials are made available under the
     * terms of the Eclipse Public License v1.0 which accompanies this distribution,
     * and is available at http://www.eclipse.org/legal/epl-v10.html
     */
    package org.opendaylight.hello.impl;

    import org.opendaylight.mdsal.binding.api.DataBroker;
    import org.opendaylight.mdsal.binding.api.RpcProviderService;
    import org.opendaylight.yang.gen.v1.urn.opendaylight.params.xml.ns.yang.hello.rev191127.HelloService;
    import org.opendaylight.yangtools.concepts.ObjectRegistration;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class HelloProvider {

        private static final Logger LOG = LoggerFactory.getLogger(HelloProvider.class);

        private final DataBroker dataBroker;
        private ObjectRegistration<HelloService> helloService;
        private RpcProviderService rpcProviderService;

        public HelloProvider(final DataBroker dataBroker, final RpcProviderService rpcProviderService) {
            this.dataBroker = dataBroker;
            this.rpcProviderService = rpcProviderService;
        }

        /**
         * Method called when the blueprint container is created.
         */
        public void init() {
            LOG.info("HelloProvider Session Initiated");
            // Register the hello-world RPC implementation with MD-SAL.
            helloService = rpcProviderService.registerRpcImplementation(HelloService.class, new HelloWorldImpl());
        }

        /**
         * Method called when the blueprint container is destroyed.
         */
        public void close() {
            LOG.info("HelloProvider Closed");
            if (helloService != null) {
                helloService.close();
            }
        }
    }
    
  4. Optionally, you can also build the Java classes which will register the new RPC. This is useful to test the edits you have made to HelloProvider.java and HelloWorldImpl.java.

    cd ../../../../../../../
    mvn clean install
    
  5. Return to the top level directory

    cd ../
    
  6. Build the entire hello project again, which will pick up the changes you have made and build them into your project:

    mvn clean install
    

Execute the hello project for the first time

  1. Run Karaf

    cd ../karaf/target/assembly/bin
    ./karaf
    
  2. Wait for the project to load completely. Then view the log to see the loaded Hello Module:

    log:display | grep Hello
    

Test the hello-world RPC via REST

There are many ways to test your RPC. The following are some examples.

  1. Using the API Explorer through HTTP

  2. Using a browser REST client

Using the API Explorer through HTTP
  1. Navigate to the apidoc UI with your web browser.
    Note: In the apidoc URL, change localhost to the IP address or host name of your development machine.
  2. Select

    hello(2019-11-27)
    
  3. Select

    POST /operations/hello:hello-world
    
  4. Provide the required value.

    {"hello:input": { "name":"Your Name"}}
    
  5. Click the button to send the request.

  6. Enter the username and password. By default, the credentials are admin/admin.

  7. In the response body, you should see:

    {
      "output": {
        "greeting": "Hello Your Name"
      }
    }
    
Using a browser REST client
For example, use the following information in the Firefox RESTClient plugin (https://github.com/chao/RESTClient):
POST: http://localhost:8181/restconf/operations/hello:hello-world

Header:

Accept: application/json
Content-Type: application/json
Authorization: Basic admin admin

Body:

{"input": {
    "name": "Andrew"
  }
}

In the response body you should see:

{
  "output": {
    "greeting": "Hello Your Name"
  }
}
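
Equivalently, you can invoke the RPC from the command line with curl (a sketch assuming the default admin/admin credentials and the default RESTCONF port 8181):

curl -u admin:admin \
     -H "Content-Type: application/json" -H "Accept: application/json" \
     -d '{"input": {"name": "Andrew"}}' \
     http://localhost:8181/restconf/operations/hello:hello-world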

Troubleshooting

If you get a response code 501 while attempting to POST /operations/hello:hello-world, check the HelloProvider.java file and make sure the helloService member is being set. If rpcProviderService.registerRpcImplementation() is never invoked, the REST API will be unable to map the /operations/hello:hello-world URL to HelloWorldImpl.
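
A quick way to check this (a sketch, assuming you run it from the project’s root directory) is to confirm that the registration call is present in HelloProvider.java:

# Verify that init() registers the RPC implementation
grep -n "registerRpcImplementation" impl/src/main/java/org/opendaylight/hello/impl/HelloProvider.java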

OpenDaylight Contributor Guides

Documentation Guide

This guide provides details on how to contribute to the OpenDaylight documentation. OpenDaylight currently uses reStructuredText for documentation and Sphinx to build it. These documentation tools are widely used in open source communities to produce both HTML and PDF documentation and can be easily versioned alongside the code. reStructuredText also offers syntax similar to Markdown, which is familiar to many developers.

Style Guide

This section serves two purposes:

  1. A guide for those writing documentation.

  2. A guide for those reviewing documentation.

Note

When reviewing content, assuming that the content is usable, the documentation team is biased toward merging the content rather than blocking it due to relatively minor editorial issues.

Formatting Preferences

In general, when reviewing content, the documentation team ensures that it is comprehensible but tries not to be overly pedantic. Along those lines, while it is preferred that the following formatting preferences are followed, they are generally not an exclusive reason to give a “-1” reply to a patch in Gerrit:

  • No trailing whitespace

  • Line wrapping at something reasonable, that is, 72–100 characters

Key terms
  • Functionality: something useful a project provides abstractly

  • Feature: a Karaf feature that somebody could install

  • Project: a project within OpenDaylight; projects ship features to provide functionality

  • OpenDaylight: this refers to the software we release; use this in place of “OpenDaylight controller” or “the OpenDaylight controller”; do not use ODL or ODC

    • Because there is a controller project within OpenDaylight, using other terms is confusing.

Common writing style mistakes
  • In per-project user documentation, you should never say git clone, but should assume people have downloaded and installed the controller per the getting started guide and start with feature:install <something>

  • Avoid statements which are true about part of OpenDaylight, but not generally true.

    • For example: “OpenDaylight is a NETCONF controller.” It is, but that is not all it is.

  • In general, developer documentation should target developers external to your project, so it should describe the APIs you provide and how they can be used. It should not document how to contribute to your project.

Grammar Preferences
  • Avoid contractions: Use “cannot” instead of “can’t”, “it is” instead of “it’s”, and so on.

Word Choice

Note

The following word choice guidelines apply when using these terms in text. If these terms are used as part of a URL, class name, or any instance where modifying the case would create issues, use the exact capitalization and spacing associated with the URL or class name.

  • ACL: not Acl or acl

  • API: not api

  • ARP: not Arp or arp

  • datastore: not data store, Data Store, or DataStore (unless it is a class/object name)

  • IPsec, not IPSEC or ipsec

  • IPv4 or IPv6: not Ipv4, Ipv6, ipv4, ipv6, IPV4, or IPV6

  • Karaf: not karaf

  • Linux: not LINUX or linux

  • NETCONF: not Netconf or netconf

  • Neutron: not neutron

  • OSGi: not osgi or OSGI

  • Open vSwitch: not OpenvSwitch, OpenVSwitch, or Open V Switch.

  • OpenDaylight: not Opendaylight, Open Daylight, or OpenDayLight.

    Note

    Also, avoid Opendaylight abbreviations like ODL and ODC.

  • OpenFlow: not Openflow, Open Flow, or openflow.

  • OpenStack: not Open Stack or Openstack

  • QoS: not Qos, QOS, or qos

  • RESTCONF: not Restconf or restconf

  • RPC not Rpc or rpc

  • URL not Url or url

  • VM: not Vm or vm

  • YANG: not Yang or yang

reStructuredText-based Documentation

When using reStructuredText, follow the Python documentation style guidelines. See: https://devguide.python.org/documenting/

One of the best references for reStructuredText syntax is the Sphinx Primer on reStructuredText.

To build and review the reStructuredText documentation locally, you must have the following packages installed locally:

  • python

  • python-tox

Note

Both packages should be available in most distribution package managers.

Then simply run tox and open the HTML produced by using your favorite web browser as follows:

git clone https://git.opendaylight.org/gerrit/docs
cd docs
git submodule update --init
tox
firefox docs/_build/html/index.html
Directory Structure

The directory structure for the reStructuredText documentation is rooted in the docs directory inside the docs git repository.

Note

There are guides hosted directly in the docs git repository and there are guides hosted in remote git repositories. Documentation hosted in remote git repositories are generally provided for project-specific information.

For example, here is the directory layout on June 28th, 2016:

$ tree -L 2
.
├── Makefile
├── conf.py
├── documentation.rst
├── getting-started-guide
│   ├── api.rst
│   ├── concepts_and_tools.rst
│   ├── experimental_features.rst
│   ├── index.rst
│   ├── installing_opendaylight.rst
│   ├── introduction.rst
│   ├── karaf_features.rst
│   ├── other_features.rst
│   ├── overview.rst
│   └── who_should_use.rst
├── index.rst
├── make.bat
├── opendaylight-with-openstack
│   ├── images
│   ├── index.rst
│   ├── openstack-with-gbp.rst
│   ├── openstack-with-ovsdb.rst
│   └── openstack-with-vtn.rst
└── submodules
    └── releng
        └── builder

The getting-started-guide and opendaylight-with-openstack directories correspond to two guides hosted in the docs repository, while the submodules/releng/builder directory houses documentation for the RelEng/Builder project.

Each guide includes an index.rst file, which uses a toctree directive that includes the other files associated with the guide. For example:

.. toctree::
   :maxdepth: 1

   getting-started-guide/index
   opendaylight-with-openstack/index
   submodules/releng/builder/docs/index

This example creates a table of contents on that page where each heading of the table of contents is the root of the files that are included.

Note

When including .rst files using the toctree directive, omit the .rst file extension at the end of the file name.

Adding a submodule

If you want to import a project underneath the documentation project so that the docs can be kept in the separate repo, you can do it by using the git submodule add command as follows:

git submodule add -b master ../integration/packaging docs/submodules/integration/packaging
git commit -s

Note

Most projects will not want to use -b master, but instead use the branch ., which tracks whatever branch of the documentation project you happen to be on.

Unfortunately, -b . does not work, so you have to manually edit the .gitmodules file to add branch = . and then commit it. For example:

<edit the .gitmodules file>
git add .gitmodules
git commit --amend

When you’re done you should have a git commit something like:

$ git show
commit 7943ce2cb41cd9d36ce93ee9003510ce3edd7fa9
Author: Daniel Farrell <dfarrell@redhat.com>
Date:   Fri Dec 23 14:45:44 2016 -0500

    Add Int/Pack to git submodules for RTD generation

    Change-Id: I64cd36ca044b8303cb7fc465b2d91470819a9fe6
    Signed-off-by: Daniel Farrell <dfarrell@redhat.com>

diff --git a/.gitmodules b/.gitmodules
index 91201bf6..b56e11c8 100644
--- a/.gitmodules
+++ b/.gitmodules
@@ -38,3 +38,7 @@
        path = docs/submodules/ovsdb
        url = ../ovsdb
        branch = .
+[submodule "docs/submodules/integration/packaging"]
+       path = docs/submodules/integration/packaging
+       url = ../integration/packaging
+       branch = master
diff --git a/docs/submodules/integration/packaging b/docs/submodules/integration/packaging
new file mode 160000
index 00000000..fd5a8185
--- /dev/null
+++ b/docs/submodules/integration/packaging
@@ -0,0 +1 @@
+Subproject commit fd5a81853e71d45945471d0f91bbdac1a1444386

As usual, you can push it to Gerrit with git review.

Important

It is critical that the Gerrit patch be merged before the git commit hash of the submodule changes. Otherwise, Gerrit is not able to automatically keep it up-to-date for you.

Documentation Layout and Style

As mentioned previously, OpenDaylight aims to follow the Python documentation style guidelines, which defines a few types of sections:

# with overline, for parts
* with overline, for chapters
=, for sections
-, for subsections
^, for subsubsections
", for paragraphs

OpenDaylight documentation is organized around the following structure based on that recommendation:

docs/index.rst                 -> entry point
docs/____-guide/index.rst      -> part
docs/____-guide/<chapter>.rst  -> chapter

In the ____-guide/index.rst file, we use # with overline at the very top of the file to indicate that it is a part. Within each chapter file, we start the document with a section using * with overline to denote the chapter heading. Everything in the rest of the chapter should then use:

=, for sections
-, for subsections
^, for subsubsections
", for paragraphs
Referencing Sections

This section provides a quick primer for creating references in OpenDaylight documentation. For more information, refer to Cross-referencing documents.

Within a single document, you can reference another section simply by:

This is a reference to `The title of a section`_

Assuming that somewhere else in the same file there is a section title something like:

The title of a section
^^^^^^^^^^^^^^^^^^^^^^

It is typically better to use :ref: syntax and labels to provide links as they work across files and are resilient to sections being renamed. First, you need to create a label something like:

.. _a-label:

The title of a section
^^^^^^^^^^^^^^^^^^^^^^

Note

The underscore (_) before the label is required.

Then you can reference the section anywhere by simply doing:

This is a reference to :ref:`a-label`

or:

This is a reference to :ref:`a section I really liked <a-label>`

Note

When using :ref:-style links, you don’t need a trailing underscore (_).

Because the labels have to be unique, a best practice is to prefix the labels with the project name to help share the label space; for example, use sfc-user-guide instead of just user-guide.

Troubleshooting
Nested formatting does not work

As stated in the reStructuredText guide, inline markup for bold, italic, and fixed-width font cannot be nested. Furthermore, inline markup cannot be mixed with hyperlinks, so you cannot have a link with bold text.

This is tracked in a Docutils FAQ question, but there is no clear current plan to fix this.

Make sure you have cloned submodules

If you see an error like this:

./build-integration-robot-libdoc.sh: line 6: cd: submodules/integration/test/csit/libraries: No such file or directory
Resource file '*.robot' does not exist.

It means that you have not pulled down the git submodule for the integration/test project. The fastest way to do that is:

git submodule update --init

In some cases, you might wind up with submodules which are somehow out-of-sync. In that case, the easiest way to fix them is to delete the submodules directory and then re-clone the submodules:

rm -rf docs/submodules/
git submodule update --init

Warning

These steps delete any local changes or information you made in the submodules, which would only occur if you manually edited files in that directory.

Clear your tox directory and try again

Sometimes, tox will not detect when your requirements.txt file has changed and so will try to run things without the correct dependencies. This issue usually manifests as No module named X errors or an ExtensionError and can be fixed by deleting the .tox directory and building again:

rm -rf .tox
tox
Builds on Read the Docs

Read the Docs builds do not automatically clear the file structure between builds and clones. The result is that you may have to clean up the state of old runs of the build script.

As an example, refer to the following patch: https://git.opendaylight.org/gerrit/c/docs/+/41679/

This patch fixed an issue where builds failed because they took too long while removing directories of generated javadoc files left over from previous runs.

Errors from Coala

As part of running tox, two environments run: coala which does a variety of reStructuredText (and other) linting, and docs, which runs Sphinx to build HTML and PDF documentation. You can run them independently by doing tox -ecoala or tox -edocs.

The coala linter for reStructuredText is not always the most helpful in explaining why it failed. So, here are some common ones. There should also be Jenkins Failure Cause Management rules that will highlight these for you.

Git Commit Message Errors

Coala checks that git commit messages adhere to the following rules:

  • Shortlog (1st line of commit message) is less than 50 characters

  • Shortlog (1st line of commit message) is in the imperative mood. For example, “Add foo unit test” is good, but “Adding foo unit test” is bad.

  • Body (all lines but 1st line of commit message) are less than 72 characters. Some exceptions seem to exist, such as for long URLs.
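
For example, a commit created as follows would satisfy these checks (a sketch; the message text is hypothetical):

# Shortlog in the imperative mood and under 50 characters;
# body lines wrapped under 72 characters; -s adds the Signed-off-by line
git commit -s -m "Add hello-world RPC documentation" -m \
"Document the hello-world RPC example, including the YANG model and the
RESTCONF call used to exercise it."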

Some examples of those being logged are:

Project wide: | | [NORMAL] GitCommitBear: | | Shortlog of HEAD commit isn’t in imperative mood! Bad words are ‘Adding’

Project wide: | | [NORMAL] GitCommitBear: | | Body of HEAD commit contains too long lines. Commit body lines should not exceed 72 characters.

Error in “code-block” directive

If you see an error like this:

docs/gerrit.rst | 89| ···..·code-block::·bash | | [MAJOR] RSTcheckBear: | | (ERROR/3) Error in “code-block” directive:

It means that the relevant code-block is not valid for the language specified, in this case bash.

Note

If you do not specify a language, the default language is Python. If you want the code-block to not be in any particular language, use the :: directive instead. For example:

::

    This is a code block that will not be parsed in any particular language

Project Documentation Requirements

Submitting Documentation Outlines (M2)
  1. Determine the features your project will have and which ones will be user-facing.

    • In general, a feature is user-facing if it creates functionality that a user would directly interact with.

    • For example, odl-openflowplugin-flow-services-ui is likely user-facing since it installs user-facing OpenFlow features, while odl-openflowplugin-flow-services is not because it provides only developer-facing features.

  2. Determine pieces of documentation that you need to provide based on the features your project will have and which ones will be user-facing.

    Note

    You might need to create multiple documents for the same kind of documentation. For example, the controller project will likely want to have a developer section for the config subsystem as well as for the MD-SAL.

  3. Clone the docs repo: git clone https://git.opendaylight.org/gerrit/docs

  4. For each piece of documentation find the corresponding template in the docs repo.

    • For user documentation: docs.git/docs/templates/template-user-guide.rst

    • For developer documentation: docs/templates/template-developer-guide.rst

    • For installation documentation (if any): docs/templates/template-install-guide.rst

    Note

    You can find the rendered templates here:

    <Feature> User Guide

    Refer to this template to identify the required sections and information that you should provide for a User Guide. The user guide should contain configuration, administration, management, using, and troubleshooting sections for the feature.

    Overview

    Provide an overview of the feature and the use case. Also include the audience for the feature. For example, the audience can be network administrators, cloud administrators, network engineers, system administrators, and so on.

    <Feature> Architecture

    Provide information about feature components and how they work together. Also include information about how the feature integrates with OpenDaylight. An architecture diagram could help.

    Note

    Please do not include detailed internals that somebody using the feature wouldn’t care about. For example, the fact that there are four layers of APIs between a user command and a message being sent to a device is probably not useful to know unless they have some way to influence how those layers work and a reason to do so.

    Configuring <feature>

    Describe how to configure the feature or the project after installation. Configuration information could include day-one activities for a project such as configuring users, configuring clients/servers and so on.

    Administering or Managing <feature>

    Include related command reference or operations that you could perform using the feature. For example viewing network statistics, monitoring the network, generating reports, and so on.

    For example:

    To configure L2switch components perform the following steps.

    1. Step 1:

    2. Step 2:

    3. Step 3:

    Tutorials

    optional

    If there is only one tutorial, you can skip the “Tutorials” section and instead just lead with the single tutorial’s name. If you do, also increase the header level by one; that is, replace the carets (^^^) with dashes (---) and the dashes with equals signs (===).

    <Tutorial Name>

    Ensure that the title starts with a gerund. For example using, monitoring, creating, and so on.

    Overview

    An overview of the use case.

    Prerequisites

    Provide any prerequisite information, assumed knowledge, or environment required to execute the use case.

    Target Environment

    Include any topology requirement for the use case. Ideally, provide a visual (abstract) layout of network diagrams and any other useful visual aids.

    Instructions

    A use case could be a set of configuration procedures. Including screenshots to help demonstrate what is happening is especially useful. Ensure that you describe each procedure separately. For example:

    Setting up the VM

    To set up a VM perform the following steps.

    1. Step 1

    2. Step 2

    3. Step 3

    Installing the feature

    To install the feature perform the following steps.

    1. Step 1

    2. Step 2

    3. Step 3

    Configuring the environment

    To configure the system perform the following steps.

    1. Step 1

    2. Step 2

    3. Step 3

    <Feature> Developer Guide
    Overview

    Provide an overview of the feature, what logical functionality it provides, and why you might use it as a developer. To be clear, the target audience for this guide is a developer who will use the feature to build something separate, not somebody who will develop code for this feature itself.

    Note

    More so than with user guides, the guide may cover more than one feature. If that is the case, be sure to list all of the features this covers.

    <Feature> Architecture

    Provide information about feature components and how they work together. Also include information about how the feature integrates with OpenDaylight. An architecture diagram could help. This may be the same as the diagram used in the user guide, but it should likely be less abstract and provide more information that would be applicable to a developer.

    Key APIs and Interfaces

    Document the key things a user would want to use. For some features, there will only be one logical grouping of APIs. For others there may be more than one grouping.

    Assuming the API is MD-SAL- and YANG-based, the APIs will be available both via RESTCONF and via Java APIs. Giving a few examples using each is likely a good idea.

    API Group 1

    Provide a description of what the API does and some examples of how to use it.

    API Group 2

    Provide a description of what the API does and some examples of how to use it.

    API Reference Documentation

    Provide links to JavaDoc, REST API documentation, etc.

    <Feature> Installation Guide

    Note

    Only use this template if installation is more complicated than simply installing a feature in the Karaf distribution. Otherwise simply provide the names of all user-facing features in your M3 readout.

    This is a template for installing a feature or a project developed in the ODL project. The feature could be interfaces, protocol plug-ins, or applications.

    Overview

    Add an overview of the feature. Include an architecture diagram and the positioning of this feature in the overall controller architecture. Highlighting the feature in a different color within the overall architecture can help. Include information describing whether the project is part of the ODL installation package or is to be installed separately.

    Prerequisites for Installing <Feature>
    • Hardware Requirements

    • Software Requirements

    Preparing for Installation

    Include any pre-configuration, database, or other software downloads required to install <feature>.

    Installing <Feature>

    Include if you have separate procedures for Windows and Linux

    Verifying your Installation

    Describe how to verify the installation.

    Troubleshooting

    optional

    Text goes here.

    Post Installation Configuration

    The Post Installation Configuration section must include any basic (must-do) procedures needed to get started.

    Mandatory instructions to get started with the product.

    • Logging in

    • Getting Started

    • Integration points with controller

    Upgrading From a Previous Release

    Text goes here.

    Uninstalling <Feature>

    Text goes here.

  5. Copy the template into the appropriate directory for your project.

    • For user documentation: docs.git/docs/user-guide/${feature-name}-user-guide.rst

    • For developer documentation: docs.git/docs/developer-guide/${feature-name}-developer-guide.rst

    • For installation documentation (if any): docs.git/docs/getting-started-guide/project-specific-guides/${project-name}.rst

    Note

    These naming conventions are not set in stone, but are used to maintain a consistent document taxonomy. If these conventions are not appropriate or do not make sense for a document in development, use the convention that you think is more appropriate and the documentation team will review it and give feedback on the gerrit patch.

  6. Edit the template to fill in the outline of what you will provide using the suggestions in the template. If you feel like a section is not needed, feel free to omit it.

  7. Link the template into the appropriate core .rst file.

    • For user documentation: docs.git/docs/user-guide/index.rst

    • For developer documentation: docs.git/docs/developer-guide/index.rst

    • For installation documentation (if any): docs.git/docs/getting-started-guide/project-specific-guides/index.rst

    • In each file, it should be pretty clear what line you need to add. In general, if you have an .rst file named project-name.rst, you include it by adding a new line containing project-name, without the .rst at the end.

  8. Make sure the documentation project still builds.

  9. Commit and submit the patch.

    1. Commit using:

      git add --all && git commit -sm "Documentation outline for ${project-shortname}"
      
    2. Submit using:

      git review
      

      See the Git-review Workflow page if you don’t have git-review installed.

  10. Wait for the patch to be merged or to get feedback

    • If you get feedback, make the requested changes and resubmit the patch.

    • When you resubmit the patch, it is helpful if you also post a “+0” reply to the patch in Gerrit, stating what patch set you just submitted and what you fixed in the patch set.

Expected Output From Documentation Project

The expected output is (at least) 3 PDFs and equivalent web-based documentation:

  • User/Operator Guide

  • Developer Guide

  • Installation Guide

These guides will consist of “front matter” produced by the documentation group and the per-project/per-feature documentation provided by the projects.

Note

This requirement is intended for the person responsible for the documentation and should not be interpreted as preventing people not normally in the documentation group from helping with front matter nor preventing people from the documentation group from helping with per-project/per-feature documentation.

Project Documentation Requirements
Content Types

These are the expected kinds of documentation and target audiences for each kind.

  • User/Operator: for people looking to use the feature without writing code

    • Should include an overview of the project/feature

    • Should include a description of available configuration options and what they do

  • Developer: for people looking to use the feature in code without modifying it

    • Should include API documentation, such as Enunciate for REST, Javadoc for Java, ??? for RESTCONF/models

  • Contributor: for people looking to extend or modify the feature’s source code

    Note

    You can find this information on the wiki.

  • Installation: for people looking for instructions to install the feature after they have downloaded the ODL release

    Note

    The audience for this content is the same as User/Operator docs

    • For most projects, this will be just a list of top-level features and options

      • As an example, l2switch-switch as the top-level feature with the -rest and -ui options

      • Features should also note if the options should be checkboxes (that is, they can each be turned on/off independently) or a drop down (that is, at most one can be selected)

      • What other top-level features in the release are incompatible with each feature

      • This will likely be presented as a table in the documentation and the data will likely also be consumed by automated installers/configurators/downloaders

    • For some projects, there are extra installation instructions (for external components) and/or configuration

      • In that case, there will be a (sub)section in the documentation describing this process.

  • HowTo/Tutorial: walk throughs and examples that are not general-purpose documentation

    • Generally, these should be done as a (sub)section of either user/operator or developer documentation.

    • If they are especially long or complex, they may belong on their own

  • Release Notes:

    • Release notes are required as part of each project’s release review. They must also be translated into reStructuredText for inclusion in the formal documentation.

Requirements for projects
  • Projects must provide reStructuredText documentation including:

    • Developer documentation for every feature

      • Most projects will want to logically nest the documentation for individual features under a single project-wide chapter or section

      • The feature documentation can be provided as a single .rst file or multiple .rst files if the features fall into different groups

      • Feature documentation should start with an approximately 300-word overview of the project and include references to any automatically-generated API documentation as well as more general developer information (see Content Types).

    • User/Operator documentation for every user-facing feature (if any)

      • This documentation should be per-feature, not per-project. Users should not have to know which project a feature came from.

      • Intimately related features can be documented together. For example, l2switch-switch, l2switch-switch-rest, and l2switch-switch-ui can be documented as one, noting the differences.

      • This documentation can be provided as a single .rst file or multiple .rst files if the features fall into different groups

    • Installation documentation

      • Most projects will simply provide a list of user-facing features and options. See Content Types above.

    • Release Notes (both on the wiki and reStructuredText) as part of the release review.

  • Documentation must be contributed to the docs repo (or possibly imported from the project’s own repo with tooling that is under development)

    • Projects may be encouraged to instead provide this from their own repository if the tooling is developed

    • Projects choosing to meet the requirement in this way must provide a patch to docs repo to import the project’s documentation

  • Projects must cooperate with the documentation group on edits and enhancements to documentation

Timeline for Deliverables from Projects
  • M2: Documentation Started

    The following tasks for documentation deliverables must be completed for the M2 readout:

    • The kinds of documentation that will be provided and for what features must be identified.

      Note

      Release Notes are not required until release reviews at RC2

    • The appropriate .rst files must be created in the docs repository (or their own repository if the tooling is available).

    • An outline for the expected documentation must be completed in those .rst files including the relevant (sub)sections and a sentence or two explaining what will be contained in these sections.

      Note

      If an outline is not provided, delivering actual documentation in the (sub)sections meets this requirement.

    • M2 readouts should include

      1. the list of kinds of documentation

      2. the list of corresponding .rst files and their location, including repo and path

      3. the list of commits creating those .rst files

      4. the current word counts of those .rst files

  • M3: Documentation Continues

    • The readout at M3 should include the word counts of all .rst files with links to commits

    • The goal is to have draft documentation complete at the M3 readout so that the documentation group can comment on it.

  • M4: Documentation Complete

    • All (sub)sections in all .rst files have complete, readable, usable content.

    • Ideally, there should have been some interaction with the documentation group about any suggested edits and enhancements

  • RC2: Release notes

    • Projects must provide release notes in .rst format pushed to integration (or locally in the project’s repository if the tooling is developed)

OpenDaylight Release Process Guide

Overview

This guide provides details on the various release processes related to OpenDaylight. It documents the steps used by OpenDaylight release engineers when performing release operations.

Release Planning

Managed Release
Managed Release Summary

The Managed Release Process will facilitate timely, stable OpenDaylight releases by allowing the release team to focus on closely managing a small set of core OpenDaylight projects while not imposing undue requirements on projects that prefer more autonomy.

Managed Release Goals
Reduce Overhead on Release Team

The Managed Release Model will allow the release team to focus their efforts on a smaller set of more stable, more responsive projects.

Reduce Overhead on Projects

The Managed Release Model will reduce the overhead both on projects taking part in the Managed Release and Self-Managed Projects.

Managed Projects will have fewer, smaller checkpoints consisting of only information that is maximally helpful for driving the release process. Much of the information collected at checkpoints will be automatically scraped, requiring minimal to no effort from projects. Additionally, Managed Release projects should have a more stable development environment, as the projects that can break the jobs they depend on will be a smaller set, more stable and more responsive.

Projects that are Self-Managed will have less overhead and reporting. They will be free to develop in their own way, providing their artifacts to include in the final release or choosing to release on their own schedule. They will not be required to submit any checkpoints and will not be expected to work closely with the rest of the OpenDaylight community to resolve breakages.

Enable Timely Releases

The Managed Release Process will reduce the set of projects that must simultaneously become stable at release time. The release and test teams will be able to focus on orchestrating a quality release for a smaller set of more stable, more responsive projects. The release team will also have greater latitude to step in and help projects that are required for dependency reasons but may not have a large set of active contributors.

Managed Projects
Managed Projects Summary

Managed Projects are either required by most applications for dependency reasons or are mature, stable, responsive projects that are consistently able to take part in releases without jeopardizing them. Managed Projects will receive additional support from the test and release teams to further their stability and make sure OpenDaylight releases go out on-time. To enable this extra support, Managed Projects will be given less autonomy than OpenDaylight projects have historically been granted.

Managed Projects for Dependency Reasons

Some projects are required by almost all other OpenDaylight projects. These projects must be in the Managed Release for it to support almost every OpenDaylight use case. Such projects will not have a choice about being in the Managed Release; the TSC will decide that they are critical to the OpenDaylight platform and include them. They may not always meet the requirements that would normally be imposed on projects that wish to join the Managed Release. Since they cannot be kicked out of the release, the TSC, test and release teams will do their best to help them meet the Managed Release Requirements. This may involve giving Linux Foundation staff temporary committer rights to merge patches on behalf of unresponsive projects, or appointing committers if projects continue to remain unresponsive. The TSC will strive to work with projects and member companies to proactively keep projects healthy and find active contributors who can become committers in the normal way, without the need to appoint them in times of crisis.

Managed Release Integrated Projects

Some Managed Projects may decide to release on their own, not as a part of the Simultaneous Release with Snapshot Integrated Projects. Such Release Integrated projects will still be subject to Managed Release Requirements, but will need to follow a different release process.

For implementation reasons, the projects that are able to release independently must depend only on other projects that release independently. Therefore the Release Integrated Projects will form a tree starting from odlparent. Currently odlparent, yangtools and mdsal are the only Release Integrated Projects, but others may join them in the future.

Requirements for Managed Projects
Healthy Community

Managed Projects should strive to have a healthy community.

Responsiveness

Managed Projects should be responsive over email, IRC, Gerrit, Jira and in regular meetings.

All committers should be subscribed to their project’s mailing list and the release mailing list.

For the following particularly time-sensitive events, an appropriate response is expected within two business days.

  • RC or SR candidate feedback.

  • Major disruptions to other projects where a Jira weather item was not present and the pending breakage was not reported to the release mailing list.

If anyone feels that a Managed Project is not responsive, a grievance process is in place to clearly handle the situation and keep a record for future consideration by the TSC.

Active Committers

Managed Projects should have sufficient active committers to review contributions in a timely manner, support potential contributors, keep CSIT healthy and generally effectively drive the project.

If a project that the TSC deems is critical to the Managed Release is shown to not have sufficient active committers the TSC may step in and appoint additional committers. Projects that can be dropped from the Managed Release will be dropped instead of having additional committers appointed.

Managed Projects should regularly prune their committer list to remove inactive committers, following the Committer Removal Process.

TSC Attendance

Managed Projects are required to send a representative to attend TSC meetings.

To facilitate quickly acting on problems identified during TSC meetings, representatives must be a committer to the project they are representing. A single person can represent any number of projects.

Representatives will make the following entry into the meeting minutes to record their presence:

#project <project ID>

TSC minutes will be scraped per-release to gather attendance statistics. If a project does not provide a representative for at least half of TSC meetings a grievance will be filed for future consideration.

Checkpoints Submitted On-Time

Managed Projects must submit information required for checkpoints on-time. Submissions must be correct and adequate, as judged by the release team and the TSC. Inadequate or missing submissions will result in a grievance.

Jobs Required for Managed Projects Running

Managed Projects are required to have the following jobs running and healthy.

  • Distribution check job (voting)

  • Validate autorelease job (voting)

  • Merge job (non-voting)

  • Sonar job (non-voting)

  • CLM job (non-voting)

Depend only on Managed Projects

Managed Projects should only depend on other Managed Projects.

If a project wants to be Managed but depends on Self-Managed Projects, they should work with their dependencies to become Managed at the same time or drop any Self-Managed dependencies.

Documentation

Managed Projects are required to produce a user guide, developer guide and release notes for each release.

CLM

Managed Projects are required to handle CLM (Component Lifecycle Management) violations in a timely manner.

Managed Release Process
Managed Release Checkpoints

Checkpoints are designed to be mostly automated, to be maximally effective at driving the release process and to impose as little overhead on projects as possible.

There will be an initial checkpoint two weeks after the start of the release, a midway checkpoint one month before code freeze and a final checkpoint at code freeze.

Initial Checkpoint

An initial checkpoint will be collected two weeks after the start of each release. The release team will review the information collected and report it to the TSC at the next TSC meeting.

Projects will need to create the following artifacts:

  • High-level, human-readable description of what the project plans to do in this release. This should be submitted as a Jira Project Plan issue against the TSC project.

    • Select your project in the ODL Project field

    • Select the release version in the ODL Release field

    • Select the appropriate value in the ODL Participation field: SNAPSHOT_Integrated (Managed) or RELEASE_Integrated (Managed)

    • Select the value Initial in the ODL Checkpoint field

    • In the Summary field, put something like: Project-X Fluorine Release Plan

    • In the Description field, fill in the details of your plan:

      This should list a high-level, human-readable summary of what a project
      plans to do in a release. It should cover the project's planned major
      accomplishments during the release, such as features, bugfixes, scale,
      stability or longevity improvements, additional test coverage, better
      documentation or other improvements. It may cover challenges the project
      is facing and needs help with from other projects, the TSC or the LFN
      umbrella. It should be written in a way that makes it amenable to use
      for external communication, such as marketing to users or a report to
      other LFN projects or the LFN Board.
      
  • If a project is transitioning from Self-Managed to Managed or applying for the first time into a Managed release, raise a Jira Project Plan issue against the TSC project highlighting the request.

    • Select your project in the ODL Project field

    • Select the release version in the ODL Release field

    • Select the NOT_Integrated (Self-Managed) value in the ODL Participation field

    • Select the appropriate value in the ODL New Participation field: SNAPSHOT_Integrated (Managed) or RELEASE_Integrated (Managed)

    • In the Summary field, put something like: Project-X joining/moving to Managed Release for Fluorine

    • In the Description field, fill in the details using the template below:

      Summary
      This is an example of a request for a project to move from Self-Managed
      to Managed. It should be submitted no later than the start of the
      release. The request should make it clear that the requesting project
      meets all of the Managed Release Requirements.
      
      Healthy Community
      The request should make it clear that the requesting project has a
      healthy community. The request may also highlight a history of having a
      healthy community.
      
      Responsiveness
      The request should make it clear that the requesting project is
      responsive over email, IRC, Jira and in regular meetings. All committers
      should be subscribed to the project's mailing list and the release
      mailing list. The request may also highlight a history of
      responsiveness.
      
      Active Committers
      The request should make it clear that the requesting project has a
      sufficient number of active committers to review contributions in a
      timely manner, support potential contributors, keep CSIT healthy and
      generally effectively drive the project. The requesting project should
      also make it clear that they have pruned any inactive committers. The
      request may also highlight a history of having sufficient active
      committers and few inactive committers.
      
      TSC Attendance
      The request should acknowledge that the requesting project is required
      to send a committer to represent the project to at least 50% of TSC
      meetings. The request may also highlight a history of sending
      representatives to attend TSC meetings.
      
      Checkpoints Submitted On-Time
      The request should acknowledge that the requesting project is required
      to submit checkpoints on time. The request may also highlight a history
      of providing deliverables on time.
      
      Jobs Required for Managed Projects Running
      The request should show that the requesting project has the required
      jobs for Managed Projects running and healthy. Links should be provided.
      
      Depend only on Managed Projects
      The request should show that the requesting project only depends on
      Managed Projects.
      
      Documentation
      The request should acknowledge that the requesting project is required
      to produce a user guide, developer guide and release notes for each
      release. The request may also highlight a history of providing quality
      documentation.
      
      CLM
      The request should acknowledge that the requesting project is required
      to handle Component Lifecycle Violations in a timely manner. The request
      should show that the project's CLM job is currently healthy. The request
      may also show that the project has a history of dealing with CLM
      violations in a timely manner.
      
  • If a project is transitioning from Managed to Self-Managed, raise a Jira Project Plan issue against the TSC project highlighting the request.

    • Select your project in the ODL Project field

    • Select the release version in the ODL Release field

    • Select the appropriate value in the ODL Participation field: SNAPSHOT_Integrated (Managed) or RELEASE_Integrated (Managed)

    • Select the NOT_Integrated (Self-Managed) value in the ODL New Participation field

    • In the Summary field, put something like: Project-X joining/moving to Self-Managed for Fluorine

    • In the Description field, fill in the details:

      This is a request for a project to move from Managed to Self-Managed. It
      should be submitted no later than the start of the release. The request
      does not require any additional information, but it may be helpful for
      the requesting project to provide some background and their reasoning.
      
  • Weather items that may impact other projects should be submitted as Jira issues. For a weather item, raise a Jira Weather Item issue against the TSC project highlighting the details.

    • Select your project in the ODL Project field

    • Select the release version in the ODL Release field

    • For the ODL Impacted Projects field, fill in the impacted projects using label values - each label value should correspond to the respective project prefix in Jira, e.g. netvirt is NETVIRT. If all projects are impacted, use the label value ALL.

    • Fill in the expected date of weather event in the ODL Expected Date field

    • Select the appropriate value for ODL Checkpoint (may skip)

    • In the Summary field, summarize the weather event

    • In the Description field, provide the details of the weather event. Provide as much relevant information as possible.

The remaining artifacts will be automatically scraped:

  • Blocker bugs that were raised between the previous code freeze and release.

  • Grievances raised against the project during the last release.

Midway Checkpoint

One month before code freeze, a midway checkpoint will be collected. The release team will review the information collected and report it to the TSC at the next TSC meeting. All information for the midway checkpoint will be automatically collected:

  • Open Jira bugs marked as blockers.

  • Open Jira issues tracking weather items.

  • Statistics about jobs:

    • Autorelease failures per-project.

    • CLM violations.

  • Grievances raised against the project since the last checkpoint.

Since the midway checkpoint is fully automated, the release team may collect this information more frequently, to provide trends over time.

Final Checkpoint

Two weeks after code freeze, a final checkpoint will be collected by the release team and presented to the TSC at the next TSC meeting.

Projects will need to create the following artifacts:

  • High-level, human-readable description of what the project did in this release. This should be submitted as a Jira Project Plan issue against the TSC project. This will be reused for external communication/marketing for the release.

    • Select your project in the ODL Project field

    • Select the release version in the ODL Release field

    • Select the appropriate value in the ODL Participation field: SNAPSHOT_Integrated (Managed) or RELEASE_Integrated (Managed)

    • Select the value Final in the ODL Checkpoint field

    • In the Summary field, put something like: Project-X Fluorine Release details

    • In the Description field, fill in the details of your accomplishments:

      This should be a high-level, human-readable summary of what a project
      did during a release. It should cover the project's major
      accomplishments, such as features, bugfixes, scale, stability or
      longevity improvements, additional test coverage, better documentation
      or other improvements. It may cover challenges the project has faced
      and needs help in the future from other projects, the TSC or the LFN
      umbrella. It should be written in a way that makes it amenable to use
      for external communication, such as marketing to users or a report to
      other LFN projects or the LFN Board.
      
    • In the ODL Gerrit Patch field, fill in the Gerrit patch URL to your project release notes

  • Release notes, user guide, developer guide submitted to the docs project.

The remaining artifacts will be automatically scraped:

  • Open Jira bugs marked as blockers.

  • Open Jira issues tracking weather items.

  • Statistics about jobs:

    • Autorelease failures per-project.

  • Statistics about patches:

    • Number of patches submitted during the release.

    • Number of patches merged during the release.

    • Number of reviews per-reviewer.

  • Grievances raised against the project since the start of the release.

Managed Release Integrated Release Process

Managed Projects that release independently (Release Integrated Projects), not as a part of the Simultaneous Release with Snapshot Integrated Projects, will need to follow a different release process.

Managed Release Integrated (MRI) Projects will provide the releases they want the Managed Snapshot Integrated (MSI) Projects to consume no later than two weeks after the start of the Managed Release. The TSC will decide by a majority vote whether to bump MSI versions to consume the new MRI releases. This should happen as early in the release as possible to get integration woes out of the way and allow projects to focus on developing against a stable base. If the TSC decides to consume the proposed MRI releases, all MSI Projects are required to bump to the new versions within a two-day window. If some projects fail to merge version bump patches in time, the TSC will instruct Linux Foundation staff to temporarily wield committer rights and merge version bump patches. The TSC may vote at any time to back out of a version bump if the new releases are found to be unsuitable.

MRI Projects are expected to provide bugfixes via minor or patch version updates during the release, but should strive to not expect MSI Projects to consume another major version update during the release.

MRI Projects are free to follow their own release cadence as they develop new features during the Managed Release. They need only have a stable version ready for the MSI Projects to consume by the next integration point.

Managed Release Integrated Checkpoints

The MRI Projects will follow similar checkpoints as the MSI Projects, but the timing will be different. At the time MRI Projects provide the releases they wish MSI Projects to consume for the next release, they will also provide their final checkpoints. Their midway checkpoints will be scraped one month before the deadline for them to deliver their artifacts to the MSI Projects. Their initial checkpoints will be due no later than two weeks following the deadline for their delivery of artifacts to the MSI Projects. Their initial checkpoints will cover everything they expect to do in the next Managed Release, which may encompass any number of major version bumps for the MRI Projects.

Moving a Project from Self-Managed to Managed

Self-Managed Projects can request to become Managed by submitting a Project_Plan issue to the TSC project in Jira. See details as described under the Initial Checkpoint section above. Requests should be submitted before the start of a release. The requesting project should make it clear that they meet the Managed Release Project Requirements.

The TSC will evaluate requests to become Managed and inform projects of the result and the TSC’s reasoning no later than the start of the release or one week after the request was submitted, whichever comes last.

For the first release, the TSC will bootstrap the Managed Release with projects that are critical to the OpenDaylight platform. Other projects will need to follow the normal application process defined above.

The following projects are deemed critical to the OpenDaylight platform:

  • aaa

  • controller

  • infrautils

  • mdsal

  • netconf

  • odlparent

  • yangtools

Self-Managed Projects

In general there are two types of Self-Managed (SM) projects:

  1. Self-Managed projects that want to participate in the formal (major or service) OpenDaylight release distribution. This section includes the requirements and release process for these projects.

  2. Self-Managed projects that want to manage their own release schedule or provide their release distribution and installation instructions by the time of the release. There are no specific requirements for these projects.

Requirements for SM projects participating in the release distribution
Use of SNAPSHOT versions

Self-Managed Projects can consume whichever version of their upstream dependencies they want during most of the release cycle, but if they want to be included in the formal (major or service) release distribution, they must have their upstream versions bumped to SNAPSHOT and build successfully no later than one week before the first Managed release candidate (RC) is created. Since bumping and integrating with upstream takes time, it is strongly recommended that Self-Managed projects start this work early: no later than the middle checkpoint if they want to be in a major release, or by the previous release if they want to be in a service release (e.g. by the major release date if they want to be in SR1).

Note

To help with the integration effort, the Weather Page includes API and other important changes during the release cycle. After the formal release, the release notes also include this information.
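
As a rough illustration of the upstream bump itself (the property name, versions, and file layout below are hypothetical; how a project pins its upstream versions varies from project to project):

# Point a hypothetical pinned upstream version at its SNAPSHOT, then rebuild to verify
find . -name pom.xml -exec sed -i \
  's#<example.upstream.version>1.10.0</example.upstream.version>#<example.upstream.version>1.11.0-SNAPSHOT</example.upstream.version>#g' {} +
mvn clean install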

Add to Common Distribution

In order to be included in the formal (major or service) release distribution, Self-Managed Projects must be in the common distribution pom.xml file and the distribution sanity test (see Add Projects to distribution) no later than one week before the first Managed release candidate (RC) is created. Projects should only be added to the final distribution pom.xml after they have successfully published artifacts using upstream SNAPSHOTs. See Use of SNAPSHOT versions.

Note

It is very important that Self-Managed projects do not miss the deadlines for upstream integration and the final distribution check; otherwise, they are very likely to miss the formal release distribution. See Release the project artifacts.

Cut Stable Branch

Self-Managed projects wanting to use the existing release job to release their artifacts (see Release the project artifacts) must have a stable branch in the major release (fluorine, neon, etc.) they are targeting. It is highly recommended to cut the stable branch before the first Managed release candidate (RC) is created.

After creating the stable branch Self-Managed projects should:

  • Bump the master branch version to X.Y+1.0-SNAPSHOT so that any new merge in master will not interfere with the newly created stable branch artifacts (see the sketch after this list).

  • Update .gitreview for the stable branch: change defaultbranch=master to the stable branch. This way folks running “git review” will get the right branch.

  • Update their Jenkins jobs: the current release should point to the newly created stable branch and the next release should point to the master branch. If you do not know how to do this, please open a ticket with the OpenDaylight helpdesk.
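
The following is a minimal sketch of the first two steps for a single project, assuming the Maven versions plugin, a hypothetical project currently at 0.4.0-SNAPSHOT, and a new stable/sodium branch; adjust versions and branch names to your project:

# On master: bump to the next development version (X.Y+1.0-SNAPSHOT)
git checkout master
mvn versions:set -DnewVersion=0.5.0-SNAPSHOT -DgenerateBackupPoms=false
git commit -asm "Bump master to 0.5.0-SNAPSHOT for next dev cycle"
git review

# On the stable branch: point .gitreview at the new branch
git checkout stable/sodium
sed -i 's#defaultbranch=master#defaultbranch=stable/sodium#' .gitreview
git commit -asm "Update .gitreview to stable/sodium"
git review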

Release the project artifacts

Self-Managed projects wanting to participate in the formal (major or service) release distribution must release and publish their artifacts to nexus in the week after the Managed release is published to nexus.

Self-Managed projects having a stable branch with the latest upstream SNAPSHOTs (see the previous requirements) can use the release job in Project Standalone Release to release their artifacts.

After creating the release, Self-Managed projects should bump the stable branch version to X.Y.Z+1-SNAPSHOT so that any new merge in the stable branch will not interfere with the pre-release artifacts.
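
As a rough illustration of this bump (hypothetical versions and branch name, using the Maven versions plugin):

git checkout stable/sodium
mvn versions:set -DnewVersion=0.4.2-SNAPSHOT -DgenerateBackupPoms=false
git commit -asm "Bump stable/sodium to 0.4.2-SNAPSHOT after the 0.4.1 release"
git review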

Note

Self-Managed Projects will not have any leeway for missing deadlines. If projects are not in the final distribution in the allocated time (normally one week) after the Managed projects release, they will not be included in the release distribution.

Checkpoints

There are no checkpoints for Self-Managed Projects.

Moving a Project from Managed to Self-Managed

Managed Projects that are not required for dependency reasons can submit a Project_Plan issue to the TSC project in Jira requesting to become Self-Managed. See details in the Initial Checkpoint section above. Requests should be submitted before the start of a release. Requests will be evaluated by the TSC.

The TSC may withdraw a project from the Managed Release at any time.

Installing Features from Self-Managed Projects

Self-Managed Projects will have their artifacts included in the final release if they are available on-time, but they will not be available to be installed until the user does a repo:add.

To install a Self-Managed Project feature, find the feature description in the system directory. For example, NetVirt’s main feature:

system/org/opendaylight/netvirt/odl-netvirt-openstack/0.6.0-SNAPSHOT/

Then use the Karaf shell to repo:add the feature:

feature:repo-add mvn:org.opendaylight.netvirt/odl-netvirt-openstack/0.6.0-SNAPSHOT/xml/features
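
Adding the repository only makes the feature available; it still needs to be installed. A minimal sketch in the Karaf shell, using the feature name implied by the path above:

feature:install odl-netvirt-openstack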

Grievances

For requirements where it is difficult to automatically ascertain whether a Managed Project is following them, there should be a clear reporting process.

Grievance reports should be filed against the TSC project in Jira. Very urgent grievances can additionally be brought to the TSC’s attention via the TSC’s mailing list.

Process for Reporting Unresponsive Projects

If a Managed Project does not meet the Responsiveness Requirements, a Grievance issue should be filed against the TSC project in Jira.

Unresponsive project reports should include (at least):

  • Select the project being reported in the ODL_Project field

  • Select the release version in the ODL_Release field

  • In the Summary field, put something like: Grievance against Project-X

  • In the Description field, fill in the details:

    Document the details that show ExampleProject was slow to review a change.
    The report should include as much relevant information as possible,
    including a description of the situation, relevant Gerrit change IDs and
    relevant public email list threads.
    
  • In the ODL_Gerrit_Patch, put in a URL to a Gerrit patch, if applicable

Vocabulary Reference
  • Managed Release Process: The release process described in this document.

  • Managed Project: A project taking part in the Managed Release Process.

  • Self-Managed Project: A project not taking part in the Managed Release Process.

  • Simultaneous Release: Event wherein all Snapshot Integrated Project versions are rewritten to release versions and release artifacts are produced.

  • Snapshot Integrated Project: Project that integrates with OpenDaylight projects using snapshot version numbers. These projects release together in the Simultaneous Release.

  • Release Integrated Project: Project that releases independently of the Simultaneous Release. These projects are consumed by Snapshot Integrated Projects based on release version numbers, not snapshot versions.

Release Schedule

OpenDaylight releases twice per year. The six-month cadence is designed to synchronize OpenDaylight releases with OpenStack and OPNFV releases. Dates are adjusted to match available resources and requirements from current OpenDaylight users. Dates are also adjusted when they conflict with holidays, overlap with other releases or are otherwise problematic. Dates include the release of both Managed and Self-Managed projects.

Each entry below lists the event, its Sodium date, its relative and start-relative dates, and a description.

  • Release Start (2019-03-07; Start Date; Start Date +0): Declare Intention: Submit Project_Plan Jira item in TSC project.

  • Initial Checkpoint (2019-03-21; Start Date +2 weeks): Initial Checkpoint. All Managed Projects must have completed Project_Plan Jira items in the TSC project.

  • Release Integrated Deadline (2019-04-11; Initial Checkpoint +2 weeks; Start Date +4 weeks): Deadline for Release Integrated Projects (currently ODLPARENT, YANGTOOLS and MDSAL) to provide the desired version deliverables for downstream Snapshot Integrated Projects to consume. For Sodium, this is +1 more week to resolve a conflict with ONS NA 2019.

  • Version Bump (2019-04-12; Release Integrated Deadline +1 day; Start Date +4 weeks 1 day): Prepare version bump patches and merge them in (RelEng team). Spend the next 2 weeks getting a green build for all MSI Projects and a healthy distribution.

  • Version Bump Checkpoint (2019-04-25; Release Integrated Deadline +2 weeks; Start Date +6 weeks): Check the status of MSI Projects to see if we have green builds and a healthy distribution. Revert the MRI deliverables if deemed necessary.

  • CSIT Checkpoint (2019-05-09; Version Bump Checkpoint +2 weeks; Start Date +8 weeks): All Managed Release CSIT should be in good shape: get all MSI Projects’ CSIT results back to where they were before the version bump. This is the final opportunity to revert the MRI deliverables if deemed necessary.

  • Middle Checkpoint (2019-07-04; CSIT Checkpoint +8 weeks; Start Date +16 weeks; sometimes +2 weeks to avoid December holidays): Checkpoint for the status of Managed Projects, especially Snapshot Integrated Projects.

  • Code Freeze (2019-08-01; Middle Checkpoint +4 weeks; Start Date +20 weeks): Code freeze for all Managed Projects: cut and lock the release branch. Only allow blocker bugfixes in the release branch.

  • Final Checkpoint (2019-08-15; Code Freeze +2 weeks; Start Date +22 weeks): Final Checkpoint for all Managed Projects.

  • Formal Release (2019-09-24; 6 months after Start Date; Start Date +6 months): Formal release.

  • Service Release 1 (2019-11-12; 1.5 months after Formal Release; Start Date +7.5 months): Service Release 1 (SR1).

  • Service Release 2 (2020-02-17; 3 months after SR1; Start Date +10.5 months): Service Release 2 (SR2).

  • Service Release 3 (2020-05-28, actual: 2020-06-03; 4 months after SR2; Start Date +14 months): Service Release 3 (SR3).

  • Service Release 4 (2020-08-28; relative dates not applicable): Service Release 4 (SR4), the final Service Release.

  • Release End of Life (2020-09-05; 4 months after SR3; Start Date +18 months): End of Life. This coincides with the Formal Release of the current release +2 versions and the start of the current release +3 versions.

Fluorine Release Goals
Purpose

This document outlines OpenDaylight’s project-level goals for Fluorine. It is meant for consumption by fellow LFN projects, the LFN TAC and the LFN Board.

Goals
Infrastructure Efficiency

OpenDaylight has major infrastructure requirements that cannot be reduced, due to the large number of tests the community has developed over time. The Integration/Test and RelEng/Builder projects have always striven to use resources efficiently, to make OpenDaylight’s increasingly large test suite fit in the same resource allocation. However, OpenDaylight’s recent move to LFN and the Managed Release model may have unlocked new opportunities to achieve equally good or better test coverage at a lower cost.

A few ideas are outlined below, although it’s expected others will emerge.

Regular Cost Feedback

Getting feedback about the impact of efficiency efforts is critical. OpenDaylight has requested that LFN start sending out infrastructure spending reports. These will allow the community to make data-driven decisions about which changes have substantial impacts and which aren’t a good work-vs-reward trade-off.

Reports should be provided as frequently as possible and should include all available data, like per-flavor usage, to help target efforts.

Other LFN projects may find it helpful to request similar reports.

OpenStack Deployments via OPNFV Images

OpenDaylight currently spends significant infrastructure and developer resources maintaining our own Devstack-based OpenStack deployment logic. OPNFV installer projects already produce VM images with master branch versions of OpenStack and OpenDaylight installed via production tooling. OpenDaylight would like to move to doing our OpenStack testing using these images, updating the version of OpenDaylight to the build under test. Using a pre-baked OpenStack deployment vs deploying it ourselves in every job would result in substantial cost savings, and not having to maintain Devstack deployment logic would make our jobs much more stable and save developer time.

This change wasn’t possible in our previous Rackspace-hosted infrastructure, but we hope it will be enabled by our recent move to Vexxhost or by running jobs that require OpenStack on LFN-managed hardware.

Audit for Unwatched CSIT

As part of OpenDaylight’s move to the Managed Release model, the Test team will have greater freedom to step in and directly manage projects’ tests. This may enable the Test team to disable tests that are not actively watched and make other jobs run less frequently.

Cross-Project CI/CD

OpenDaylight pioneered Cross-Project CI/CD (XCI) in LFN with OPNFV shortly after that project’s creation. Since then, both projects and others that have followed have realized major benefits from continuously integrating recent pre-release versions. OpenDaylight would like to continue and expand this work in Fluorine.

OpenDaylight as Infra

OpenDaylight’s cloud infrastructure runs on OpenStack. We would like to start using a released version of OpenDaylight NetVirt as the Neutron backend in this infrastructure. This “eating our own dogfood” exercise would make for a good production-level test and good marketing.

This change wasn’t possible in our previous Rackspace-hosted infrastructure, but we hope it will be enabled by our recent move to Vexxhost or by running jobs that require OpenStack on LFN-managed hardware.

Expand Contribution Base

OpenDaylight would like to continue bringing new contributors to the community.

For Fluorine, OpenDaylight would like to focus on getting downstream consumers involved in upstream development. In an ideal Open Source world, the users of an Open Source project would contribute back to the projects they consume. OpenDaylight would like to facilitate this by building special relationships between key downstream consumers and the upstream developer community. These downstreams could be companies, universities or Open Source projects. We hope for contributions in the form of code, documentation and bug reports.

OpenDaylight would like to work with the LFN MAC and TAC to identify a small set of downstream users to pilot the program with. The users would provide developers with dedicated cycles and a commitment to stick around for the long-term. In exchange, the OpenDaylight developer community would prioritize training these developers, answering their questions and generally facilitating their bootstrapping into the upstream community.

Support Kernel Projects

Companies allocating contributors to OpenDaylight tend to distribute resources to projects that are directly related to the use cases they are interested in, but neglect to give sufficient resources to the kernel projects that support them. OpenDaylight’s kernel developers are doing a heroic job of keeping the platform healthy, but for the long-term health of the project, special attention needs to be paid to sufficiently staffing these key projects.

OpenDaylight requests that LFN member companies that consume OpenDaylight consider contributing developer resources to kernel projects. The new developers should be allocated for the long-term, to avoid costing cycles for training that aren’t repaid by contributions.

Improve First-Impression Documentation

OpenDaylight has a tremendous amount of documentation, but much of it is written by experienced developers for experienced developers. As with most Open Source projects, the experienced developers typically don’t look at documentation targeted at inexperienced potential contributors. This type of general documentation is also typically not maintained by individual projects, who are focused on making sure their project-specific docs are in good shape.

To facilitate expanding OpenDaylight’s user and contributor base, we would like to focus on improving this “first impression” documentation for Fluorine. Since it’s not realistic to hope for a major improvement from the existing contributor base, OpenDaylight requests the LFN Board create an LF staff position focused on auditing and working with LFN project communities to improve this general, “first impression” documentation. This resource would be shared across all LFN projects.

Improve Release Model

The OpenDaylight community has developed a new release model for Fluorine. The Managed Release Model will facilitate timely releases, provide a more stable development environment for the most active OpenDaylight projects, reduce process overhead for all projects, give more autonomy to Unmanaged Projects and allow the Release and Test teams to give more support to Managed Projects.

See the Managed Release Process for additional details.

Resync Release Cadence

OpenDaylight’s release dates need to synchronize with a number of related Open Source projects. The OpenDaylight TSC will work with those projects, perhaps making use of the LFN TAC, to understand the best time for our releases. The TSC will adjust OpenDaylight’s release schedule accordingly and ensure it’s met. We anticipate that the new Managed Release Process will make it easier for OpenDaylight to consistently meet release date targets going forward.

In-Person Developer Design Forum Per-Release

OpenDaylight would like to continue having a face-to-face Developer Design Forum to plan each release. The community has expressed many times that these events are extremely valuable, that they need to continue happening and that they can’t be replaced by remote DDFs.

OpenDaylight requests that the LFN Board allocate resources for at least one, ideally two, days of DDF for each OpenDaylight six-month release cycle. It has worked well to host these events in conjunction with other large, relevant events like ONS.

Processes

Project Standalone Release

This page explains how a project can release independently, outside of the OpenDaylight simultaneous release.

Preparing your project for release

A project can produce a staging repository by using one of the following methods against the {project-name}-maven-stage-{stream} job:

  • Leave a comment stage-release against any patch for the stream to build

  • Click Build with Parameters in Jenkins Web UI for the job

This job performs the following duties:

  1. Removes -SNAPSHOT from all pom files

  2. Produces taglist.log, project.patch, and project.bundle files

  3. Runs mvn clean deploy to a local staging repo

  4. Pushes the staging repo to a Nexus staging repo https://nexus.opendaylight.org/content/repositories/<REPO_ID> (REPO_ID is saved to staging-repo.txt on the log server)

  5. Archives taglist.log, project.patch, and project.bundle files to log server

The files taglist.log and project.bundle can be used later at release time to reproduce a byte exact commit of what was built by the Jenkins job. This can be used to tag the release at release time.
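
For example, a rough sketch of reproducing the staged commit locally from these artifacts, assuming taglist.log and project.bundle have been downloaded from the job's log server and that the second field of taglist.log is the Git SHA (the taglist.log format is described later in this document):

# Recreate the exact repository state the staging job built
git clone project.bundle staged-project
cd staged-project
git checkout "$(awk '{print $2}' ../taglist.log)"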

Releasing your project

Once testing against the staging repo has been completed and the project has determined that the staged repo is ready for release, a release can then be performed using the self-serve release process: https://docs.releng.linuxfoundation.org/projects/global-jjb/en/latest/jjb/lf-release-jobs.html

  1. Ask helpdesk for the necessary rights on Jenkins if you do not have them

  2. Log in to https://jenkins.opendaylight.org/releng/

  3. Choose your project dashboard

  4. Check your release branch has been successfully staged and note the corresponding log folder

  5. Go back to the dashboard and choose the release-merge job

  6. Click on Build with Parameters

  7. Fill in the form:

  • GERRIT_BRANCH must be changed to the branch name you want to release (e.g. stable/sodium)

  • VERSION with your corresponding project version (e.g. 0.4.1)

  • LOG_DIR with the relative path of the log from the stage release job (e.g. project-maven-stage-master/17/)

  • Choose maven as the DISTRIBUTION_TYPE in the select box

  • Uncheck the USE_RELEASE_FILE box

  8. Launch the Jenkins job

This job performs the following duties:

  • Download and patch your project repository

  • Build the project

  • Publish the artifacts on Nexus

  • Tag and sign the release on Gerrit

Autorelease

The Release Engineering - Autorelease project is targeted at building the artifacts that are used in the release candidates and final full release.

Cloning Autorelease

To clone the autorelease repo, including all of its submodules, simply run the clone command with the --recursive parameter.

git clone --recursive https://git.opendaylight.org/gerrit/releng/autorelease

If you forgot to add the --recursive parameter to your git clone, you can pull the submodules afterwards with the following commands.

git submodule init
git submodule update
Creating Autorelease - Release and RC build

An autorelease release build comes from the autorelease-release-<branch> job, which can be found on the autorelease tab in the releng master.

For example, to create a Boron release candidate build, launch a build from the autorelease-release-boron job by clicking the Build with Parameters button on the left-hand menu:

Note

The only field that needs to be filled in is RELEASE_TAG; leave all other fields at their default settings. Set this to Boron, Boron-RC0, Boron-RC1, etc., depending on the build you’d like to create.

Adding Autorelease staging repo to settings.xml

If you are building or testing this release in such a way that requires pulling some of the artifacts from the Nexus repo, you may need to modify your settings.xml to include the staging repo URL, as this URL is not part of ODL Nexus’ public or snapshot groups. If you’ve already copied the recommended settings.xml for building ODL, you will need to add an additional profile and activate it by adding these sections to the “<profiles>” and “<activeProfiles>” sections (please adjust accordingly).

Note

  • This is an example; you need to add these example sections to your settings.xml, not delete your existing sections.

  • The URLs in the <repository> and <pluginRepository> sections will also need to be updated with the staging repo you want to test.

<profiles>
  <profile>
    <id>opendaylight-staging</id>
    <repositories>
      <repository>
        <id>opendaylight-staging</id>
        <name>opendaylight-staging</name>
        <url>https://nexus.opendaylight.org/content/repositories/automatedweeklyreleases-1062</url>
        <releases>
          <enabled>true</enabled>
          <updatePolicy>never</updatePolicy>
        </releases>
        <snapshots>
          <enabled>false</enabled>
        </snapshots>
      </repository>
    </repositories>
    <pluginRepositories>
      <pluginRepository>
        <id>opendaylight-staging</id>
        <name>opendaylight-staging</name>
        <url>https://nexus.opendaylight.org/content/repositories/automatedweeklyreleases-1062</url>
        <releases>
          <enabled>true</enabled>
          <updatePolicy>never</updatePolicy>
        </releases>
        <snapshots>
          <enabled>false</enabled>
        </snapshots>
      </pluginRepository>
    </pluginRepositories>
  </profile>
</profiles>

<activeProfiles>
  <activeProfile>opendaylight-staging</activeProfile>
</activeProfiles>
Project lifecycle

This page documents the current rules to follow when adding a particular project to, or removing it from, the Simultaneous Release (SR).

List of states for projects in autorelease

The state names are short negative phrases describing what is missing to progress to the following state.

  • non-existent The project is not recognized by Technical Steering Committee (TSC) to be part of OpenDaylight (ODL).

  • non-participating The project is recognized by the TSC to be an ODL project, but the project has not confirmed participation in SR for the given release cycle.

  • non-building The recognized project is willing to participate, but its current codebase is not passing its own merge job, or the project artifacts are otherwise unavailable in Nexus.

  • not-in-autorelease Project merge job passes, but the project is not added to autorelease (git submodule, maven module, validate-autorelease job passes).

  • failing-autorelease The project is added to autorelease (git submodule, maven module, validate-autorelease job passes), but autorelease build fails when building project’s artifact. Temporary state, timing out into not-in-autorelease.

  • repo-not-in-integration Project is successfully built within autorelease, but integration/distribution:features-index is not listing all its public feature repositories.

  • feature-not-in-integration Feature repositories are referenced, distribution-check job is passing, but some user-facing features are absent from integration/distribution:features-test (possibly because adding them does not pass distribution SingleFeatureTest).

  • distribution-check-not-passing Features are in distribution, but distribution-check job is either not running, or it is failing for any reason. Temporary state, timing out into feature-not-in-integration.

  • feature-is-experimental All user-facing features are in features-test, but at least one of the corresponding functional CSIT jobs does not meet Integration/Test requirements.

  • feature-is-not-stable Feature does meet Integration/Test requirements, but it does not meet all requirements for stable features.

  • feature-is-stable

Note

A project may change its state in both directions. This list is to make sure a project is not left in an invalid state, for example with the distribution referencing its feature repositories but without a passing distribution-check job.

Note

Projects can participate in Simultaneous Release even if they are not included in autorelease. Nitrogen example: Odlparent. FIXME: Clarify states for such projects (per version, if they released multiple times within the same cycle).

Branch Cutting

This page documents the branch cutting tasks that need to be performed at RC0; the team with the necessary permissions to perform each task is noted in parentheses.

JJB (releng/builder)
  1. Export ${NEXT_RELEASE} and ${CURR_RELEASE} with new and current release names. (releng/builder committers)

    export CURR_RELEASE="Silicon"
    export NEXT_RELEASE="Phosphorus"
    
  2. Run the script cut-branch-jobs.py to generate next release jobs. (releng/builder committers)

    python scripts/cut-branch-jobs.py $CURR_RELEASE $NEXT_RELEASE jjb/
    pre-commit run --all-files
    

    Note

    pre-commit is necessary to adjust the formatting of the generated YAML.

    This script changes JJB yaml files to insert the next release configuration by updating streams and branches where relevant. For example if master is currently Silicon, the result of this script will update config blocks as follows:

    Update multi-streams:

    stream:
      - Phosphorus:
          branch: master
      - Silicon:
          branch: stable/silicon
    

    Insert project new blocks:

    - project:
        name: aaa-phosphorus
        jobs:
          - '{project-name}-verify-{stream}-{maven}-{jdks}'
        stream: phosphorus
        branch: master
    
    - project:
        name: aaa-silicon
        jobs:
          - '{project-name}-verify-{stream}-{maven}-{jdks}'
        stream: silicon
        branch: stable/silicon
    
  3. Review and submit the changes to releng/builder project. (releng/builder committers)

Autorelease
  1. Block submit permissions for registered users and elevate RE’s committer rights on gerrit. (Helpdesk)

    _images/gerrit-update-committer-rights.png

    Note

    Enable Exclusive checkbox for the submit button to override any existing permissions.

  2. Enable create reference permissions on gerrit for RE’s to submit .gitreview patches. (Helpdesk)

    _images/gerrit-update-create-reference.png

    Note

    Enable the Exclusive checkbox to override any existing permissions.

  3. Start the branch cut job or use the manual steps below for branch cutting autorelease. (Release Engineering Team)

  4. Start the version bump job or use the manual steps below for version bump autorelease. (Release Engineering Team)

  5. Merge all .gitreview patches submitted through the job or manually. (Release Engineering Team)

  6. Remove create reference permissions set on gerrit for RE’s. (Helpdesk)

  7. Merge all version bump patches in the order of dependencies. (Release Engineering Team)

  8. Re-enable submit permissions for registered users and disable elevated RE committer rights on gerrit. (Helpdesk)

  9. Notify release list on branch cutting work completion. (Release Engineering Team)

Branch cut job (Autorelease)

Branch cutting can be performed either through the job or manually.

  1. Start the autorelease-branch-cut job (Release Engineering Team)

Manual steps to branch cut (Autorelease)
  1. Setup releng/autorelease repository. (Release Engineering Team)

    git review -s
    git submodule foreach 'git review -s'
    git checkout master
    git submodule foreach 'git checkout master'
    git pull --rebase
    git submodule foreach 'git pull --rebase'
    
  2. Enable create reference permissions on gerrit for RE’s to submit .gitreview patches. (Helpdesk)

    _images/gerrit-update-create-reference.png

    Note

    Enable the Exclusive checkbox to override any existing permissions.

  3. Create stable/${CURR_RELEASE} branches based on HEAD master. (Release Engineering Team)

    git checkout -b stable/${CURR_RELEASE,,} origin/master
    git submodule foreach 'git checkout -b stable/${CURR_RELEASE,,} origin/master'
    git push gerrit stable/${CURR_RELEASE,,}
    git submodule foreach 'git push gerrit stable/${CURR_RELEASE,,}'
    
  4. Contribute .gitreview updates to stable/${CURR_RELEASE,,}. (Release Engineering Team)

    git submodule foreach sed -i -e "s#defaultbranch=master#defaultbranch=stable/${CURR_RELEASE,,}#" .gitreview
    git submodule foreach git commit -asm "Update .gitreview to stable/${CURR_RELEASE,,}"
    git submodule foreach 'git review -t ${CURR_RELEASE,,}-branch-cut'
    sed -i -e "s#defaultbranch=master#defaultbranch=stable/${CURR_RELEASE,,}#" .gitreview
    git add .gitreview
    git commit -s -v -m "Update .gitreview to stable/${CURR_RELEASE,,}"
    git review -t  ${CURR_RELEASE,,}-branch-cut
    
Version bump job (Autorelease)

Version bump can be performed either through the job or manually.

  1. Start the autorelease-version-bump-${NEXT_RELEASE,,} job (Release Engineering Team)

    Note

    Enable BRANCH_CUT and disable DRY_RUN to run the job for the branch cut workflow. The version bump job can be run only on the master branch.

Manual steps to version bump (Autorelease)
  1. Version bump master by x.(y+1).z. (Release Engineering Team)

    git checkout master
    git submodule foreach 'git checkout master'
    pip install lftools
    lftools version bump ${CURR_RELEASE}
    
  2. Make sure the version bump changes do not modify anything under scripts or pom.xml. (Release Engineering Team)

    git checkout pom.xml scripts/
    
  3. Push version bump master changes to gerrit. (Release Engineering Team)

    git submodule foreach 'git commit -asm "Bump versions by x.(y+1).z for next dev cycle"'
    git submodule foreach 'git review -t ${CURR_RELEASE,,}-branch-cut'
    
  4. Merge the patches in order according to the merge-order.log file found in autorelease jobs. (Release Engineering Team)

    Note

    The version bump patches can be merged more quickly by performing a local build with mvn clean deploy -DskipTests to prime Nexus with the new version updates.

Documentation post branch tasks
  1. Git remove all files/directories from the docs/release-notes/* directory. (Release Engineering Team)

    git checkout master
    git rm -rf docs/release-notes/<project file and/or folder>
    git commit -sm "Reset release notes for next dev cycle"
    git review
    
Simultaneous Release

This page explains how the OpenDaylight release process works once the TSC has approved a release.

Code Freeze

At the first Release Candidate (RC) the Submit button is disabled on the stable branch to prevent projects from merging non-blocking patches into the release.

  1. Disable Submit for Registered Users and allow permission to the Release Engineering Team (Helpdesk)

    _images/gerrit-update-committer-rights.png

    Important

    DO NOT enable Code-Review+2 and Verified+1 for the Release Engineering Team during code freeze.

    Note

    Enable the Exclusive checkbox for the submit button to override any existing permissions. Code-Review and Verify permissions are only needed during version bumping.

Release Preparations

After the release candidate is built, GPG-sign the artifacts using the lftools sign command.

STAGING_REPO=autorelease-1903
STAGING_PROFILE_ID=abc123def456  # This Profile ID is listed in Nexus > Staging Profiles
lftools sign deploy-nexus https://nexus.opendaylight.org $STAGING_REPO $STAGING_PROFILE_ID

Verify the distribution-karaf file with the signature.

gpg2 --verify karaf-x.y.z-${RELEASE}.tar.gz.asc karaf-x.y.z-${RELEASE}.tar.gz
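
When the staging repository contains many signed artifacts, each detached signature can be checked in a loop; a small sketch, assuming the artifacts and their .asc files are in the current directory:

# Verify every .asc signature against its corresponding artifact
for sig in *.asc; do
    gpg2 --verify "$sig" "${sig%.asc}"
done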

Note

Projects such as OpFlex participate in the Simultaneous Release but are not part of the autorelease build. Ping those projects and prep their staging repos as well.

Releasing OpenDaylight

The following describes the Simultaneous Release process for shipping out the binary and source code on release day.

Bulleted actions can be performed in parallel while numbered actions should be done in sequence.

  • Release the Nexus Staging repos (Helpdesk)

    1. Select both the artifacts and signature repos (created previously) and click Release.

    2. Enter Release OpenDaylight $RELEASE for the description and click confirm.

    Perform this step for any additional projects that are participating in the Simultaneous Release but are not part of the autorelease build.

    Tip

    This task takes hours to run so kicking it off early is a good idea.

  • Version bump for next dev cycle (Release Engineering Team)

    1. Run the autorelease-version-bump-${STREAM} job

      Tip

      This task takes hours to run so kicking it off early is a good idea.

    2. Enable Code-Review+2 and Verify+1 voting permissions for the Release Engineering Team (Helpdesk)

      _images/gerrit-update-committer-rights.png

      Note

      Enable the Exclusive checkbox for the submit button to override any existing permissions. Code-Review and Verify permissions are only needed during version bumping. DO NOT enable it during code freeze.

    3. Merge all patches generated by the job

    4. Restore Gerrit permissions for Registered Users and disable elevated Release Engineering Team permissions (Helpdesk)

  • Tag the release (Release Engineering Team)

    1. Install lftools

      lftools contains the version bumping scripts we need to version bump and tag the dev branches. We recommend using a virtualenv for this.

      # Skip mkvirtualenv if you already have an lftools virtualenv
      mkvirtualenv lftools
      workon lftools
      pip install --upgrade lftools
      
    2. Pull latest autorelease repository

      export RELEASE=Nitrogen-SR1
      export STREAM=${RELEASE//-*}
      export BRANCH=origin/stable/${STREAM,,}
      
      # No need to clean if you have already done it.
      git clone --recursive https://git.opendaylight.org/gerrit/releng/autorelease
      cd autorelease
      git fetch origin
      
      # Ensure we are on the right branch. Note that we are wiping out all
      # modifications in the repo so backup unsaved changes before doing this.
      git checkout -f
      git checkout ${BRANCH,,}
      git clean -xdff
      git submodule foreach git checkout -f
      git submodule foreach git clean -xdff
      git submodule update --init
      
      # Ensure git review is setup
      git review -s
      git submodule foreach 'git review -s'
      
    3. Publish release tags

      export BUILD_NUM=55
      export OPENJDKVER="openjdk8"
      export PATCH_URL="https://logs.opendaylight.org/releng/vex-yul-odl-jenkins-1/autorelease-release-${STREAM,,}-mvn35-${OPENJDKVER}/${BUILD_NUM}/patches.tar.gz"
      ./scripts/release-tags.sh "${RELEASE}" /tmp/patches "$PATCH_URL"
      
  • Notify Community and Website teams

    1. Update downloads page

      Submit a patch to the ODL docs project to update the downloads page with the latest binaries and packages (Release Engineering Team)

    2. Email dev/release/tsc mailing lists announcing release binaries location (Release Engineering Team)

    3. Email dev/release/tsc mailing lists to notify of tagging and version bump completion (Release Engineering Team)

      Note

      This step is performed after Version Bump and Tagging steps are complete.

  • Generate Service Release notes

    Warning

    If this is a major release (e.g. Sodium), as opposed to a Service Release (e.g. Sodium-SR1), skip this step.

    For major releases the notes come from the projects themselves in the docs repo via the docs/release-notes/projects directory.

    For service releases (SRs) we need to generate service release notes. This can be performed by running the autorelease-generate-release-notes-$STREAM job.

    1. Run the autorelease-generate-release-notes-${STREAM} job (Release Engineering Team)

      Trigger this job by leaving a Gerrit comment generate-release-notes Carbon-SR2

    Release notes can also be manually generated with the script:

    git checkout stable/${STREAM,,}
    ./scripts/release-notes-generator.sh ${RELEASE}
    

    A release-notes.rst will be generated in the working directory. Submit this file as release-notes-sr1.rst (update the sr as necessary) to the docs project.

Super Committers

Super committers are a group of TSC-approved individuals within the OpenDaylight community with the power to merge patches on behalf of projects during approved Release Activities.

Super Committer Activities

Super committer powers are granted ONLY during TSC-approved activities and are not active on a regular basis. Once one of the TSC-approved activities is triggered, helpdesk will enable the permissions listed for the respective activity for the duration of that activity.

Code Freeze

Note

This activity has been pre-approved by the TSC and does not require a TSC vote. Helpdesk should be notified to enable the permissions and again to disable the permissions once activities are complete.

Super committers are granted powers to merge blocking patches for the duration of code freeze, until a release is approved and code freeze is lifted. This permission is only granted for the specific branch that is frozen.

The following powers are granted:

  • Submit button access

During this time Super Committers can ONLY merge patches that have a +2 Code-Review by a project committer approving the merge and that pass the Jenkins Verify check. If either of these conditions is not met, DO NOT merge the patch.

Version bumping

Note

This activity has been pre-approved by the TSC and does not require a TSC vote. Helpdesk should be notified to enable the permissions and again to disable the permissions once activities are complete.

Super committers are granted powers to merge version bump related patches for the duration of version bumping activities.

The following powers are granted:

  • Vote Code-Review +2

  • Vote Verified +1

  • Remove Reviewer

  • Submit button access

These permissions are granted to allow super committers to push through version bump patches with haste. The Remove Reviewer permission is to be used only for removing a Jenkins vote caused by a failed distribution-check job, if that failure is caused by a temporary version inconsistency present while the bump activity is being performed.

Version bumping activities come in two forms.

  1. Post-release Autorelease version bumping

  2. MRI project version bumping

In case 1, the TSC has approved an official OpenDaylight release, and after the binaries are released to the world, all Autorelease-managed projects are version bumped appropriately to the next development release number.

In case 2, during the Release Integrated Deadline phase of the release schedule, MRI projects submit desired version updates. Once approved by the TSC, Super Committers can merge these patches across the projects.

Ideally, version bumping activities should not include code modifications; if they do, a +2 Code-Review vote should be given by a committer on the project to indicate that they approve the code changes.

Once version bump patches are merged these permissions are removed.

Exceptional cases

Any activities not in the list above fall under the exceptional case, which requires TSC approval before Super Committers can merge changes. These cases should be brought up to the TSC for voting.

Super Committers

  • Anil Belur (IRC: abelur), abelur@linuxfoundation.org

  • Daniel Farrell (IRC: dfarrell07), dfarrell@redhat.com

  • Jamo Luhrsen (IRC: jamoluhrsen), jluhrsen@gmail.com

  • Luis Gomez (IRC: LuisGomez), ecelgp@gmail.com

  • Michael Vorburger (IRC: vorburger), mike@vorburger.ch

  • Sam Hague (IRC: shague), shague@redhat.com

  • Stephen Kitt (IRC: skitt), skitt@redhat.com

  • Robert Varga (IRC: rovarga), nite@hq.sk

  • Thanh Ha (IRC: zxiiro), zxiiro@gmail.com

Supporting Documentation

Identifying Managed Projects in an OpenDaylight Version
What are Managed Projects?

Managed Projects are simply projects that take part in the Managed Release Process. Managed Projects are either core components of OpenDaylight or have demonstrated their maturity and ability to successfully take part in the Managed Release.

For more information, see the full description of Managed Projects.

What is a Managed Distribution?

Managed Projects are aggregated together by a POM file that defines a Managed Distribution. The Managed Distribution is the focus of OpenDaylight development. It’s continuously built, tested, packaged and released into Continuous Delivery pipelines. As prescribed by the Managed Release Process, Managed Distributions are eventually blessed as formal OpenDaylight releases.

NB: OpenDaylight’s Fluorine release actually included Managed and Self-Managed Projects, but the community is working towards the formal release being exactly the Managed Distribution, with an option for Self-Managed Projects to release independently on top of the Managed Distribution later.

Finding the Managed Projects given a Managed Distribution

Given a Managed Distribution (tar.gz, .zip, RPM, Deb), the Managed Projects that constitute it can be found in the taglist.log file in the root of the archive.

taglist.log files are of the format:

<Managed Project> <Git SHA of built commit> <Codename of release>
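
For illustration only, a taglist.log entry might look like the following (the project name, Git SHA, and codename here are hypothetical):

netconf 0123456789abcdef0123456789abcdef01234567 sodium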
Finding the Managed Projects Given a Branch

To find the current set of Managed Projects in a given OpenDaylight branch, examine the integration/distribution/features/repos/index/pom.xml file that defines the Managed Distribution.
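
If you already have a local checkout of integration/distribution, a quick (if rough) way to list the feature repositories referenced by that POM is to grep it; the grep pattern below is illustrative and may also match other artifacts in the file:

grep "<artifactId>" features/repos/index/pom.xml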

The release management team maintains several documents in Google Drive to track releases. These documents can be found at the following link:

https://drive.google.com/drive/folders/0ByPlysxjHHJaUXdfRkJqRGo4aDg

Java API Documentation

Release Integrated Projects

OpenDaylight User Guide

Overview

This first part of the user guide covers the basic user operations of the OpenDaylight Release using the generic base functionality.

OpenDaylight Controller Overview

The OpenDaylight controller is JVM software and can run on any operating system and hardware that supports Java. The controller is an implementation of the Software Defined Networking (SDN) concept and makes use of the following tools:

  • Maven: OpenDaylight uses Maven for easier build automation. Maven uses pom.xml (Project Object Model) files to script the dependencies between bundles and to describe which bundles to load and start.

  • OSGi: This framework is the back-end of OpenDaylight, as it allows for dynamically loading bundles and packaged JAR files, and for binding bundles together to exchange information.

  • Java interfaces: Java interfaces are used for event listening, specifications, and forming patterns. This is the main way in which specific bundles implement call-back functions for events and indicate awareness of specific state.

  • REST APIs: These are northbound APIs such as topology manager, host tracker, flow programmer, static routing, and so on.

The controller exposes open northbound APIs which are used by applications. The OSGi framework and bidirectional REST are supported for the northbound APIs. The OSGi framework is used for applications that run in the same address space as the controller, while the REST (web-based) API is used for applications that do not run in the same address space (or even on the same system) as the controller. The business logic and algorithms reside in the applications. These applications use the controller to gather network intelligence, run their algorithms to perform analytics, and then orchestrate the new rules throughout the network. On the southbound side, multiple protocols are supported as plugins, e.g. OpenFlow 1.0, OpenFlow 1.3, BGP-LS, and so on. The OpenDaylight controller started with an OpenFlow 1.0 southbound plugin; other OpenDaylight contributors have since added to the controller code. These modules are linked dynamically into a Service Abstraction Layer (SAL).

The SAL exposes services to which the modules north of it are written. The SAL figures out how to fulfill the requested service irrespective of the underlying protocol used between the controller and the network devices. This provides investment protection to the applications as OpenFlow and other protocols evolve over time. For the controller to control devices in its domain, it needs to know about the devices, their capabilities, reachability, and so on. This information is stored and managed by the Topology Manager. The other components like ARP handler, Host Tracker, Device Manager, and Switch Manager help in generating the topology database for the Topology Manager.

For a more detailed overview of the OpenDaylight controller, see the OpenDaylight Developer Guide.

Project-specific User Guides

Distribution Version reporting
Overview

This section provides an overview of odl-distribution-version feature.

A remote user of OpenDaylight usually has access to the RESTCONF and NETCONF northbound interfaces, but does not have access to the system OpenDaylight is running on. OpenDaylight has released multiple versions, including Service Releases, and there are incompatible changes between them. In order to know which YANG modules to use, which bugs to expect, and which workarounds to apply, such a user would need to know the exact version of at least one OpenDaylight component.

There are indirect ways to deduce such a version, but the direct way is enabled by the odl-distribution-version feature. An administrator can specify version strings, which are then available to users via NETCONF, or via RESTCONF if OpenDaylight is configured to initiate a NETCONF connection to its own config subsystem northbound interface.

By default, users have write access to the config subsystem, so they can add, modify or delete any version strings present there. Admins can only influence whether the feature is installed and what the initial values are.

The config subsystem is local only, not cluster aware, so each member reports its versions independently. This is suitable for heterogeneous clusters.

Default config file

Initial version values are set via the config file odl-version.xml, which is created in $KARAF_HOME/etc/opendaylight/karaf/ upon installation of the odl-distribution-version feature. If the admin wants to use different content, a file with the desired content has to be created there before the feature is installed.

By default, the config file defines two config modules, named odl-distribution-version and odl-odlparent-version.

RESTCONF usage

The OpenDaylight config subsystem NETCONF northbound is not made available just by installing odl-distribution-version, but most other feature installations will enable it. RESTCONF interfaces are enabled by installing the odl-restconf feature, but that by itself does not allow access to the config subsystem.

On single node deployments, installation of odl-netconf-connector-ssh is recommended, which configures the controller-config device and its MD-SAL mount point.

For cluster deployments, installing odl-netconf-clustered-topology is recommended. See the clustering documentation on how to create similar devices for each member, as the controller-config name is not unique in that context.
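
For reference, the corresponding Karaf console commands are shown below; pick the one matching your deployment type:

feature:install odl-netconf-connector-ssh
feature:install odl-netconf-clustered-topology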

Assuming a single node deployment and a user located on the same system, here is an example curl command accessing the odl-odlparent-version config module:

curl 127.0.0.1:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/controller-config/yang-ext:mount/config:modules/module/odl-distribution-version:odl-version/odl-odlparent-version
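
An analogous query for the odl-distribution-version config module only changes the final path segment (a sketch, assuming the same mount point layout as above):

curl 127.0.0.1:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/controller-config/yang-ext:mount/config:modules/module/odl-distribution-version:odl-version/odl-distribution-version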
Neutron Service User Guide
Overview

This Karaf feature (odl-neutron-service) provides integration support for OpenStack Neutron via the OpenDaylight ML2 mechanism driver. The Neutron Service is only one of the components necessary for OpenStack integration; for the related components, please refer to each component's documentation.

Use cases and who will use the feature

If you want OpenStack integration with OpenDaylight, you will need this feature together with an OpenDaylight provider feature such as netvirt, group based policy, VTN, or lisp mapper. For provider configuration, please refer to each individual provider's documentation. The Neutron service only provides the northbound API for the OpenStack Neutron ML2 mechanism driver; without those provider features, the Neutron service itself isn't useful.

Neutron Service feature Architecture

The Neutron service provides the northbound API for OpenStack Neutron via RESTCONF and also its dedicated REST API. It communicates with providers through its YANG model.

Neutron Service Architecture

Neutron Service Architecture

Configuring Neutron Service feature

As the Karaf feature includes everything necessary for communicating northbound, no special configuration is needed. Usually this feature is used together with an OpenDaylight southbound plugin that implements the actual network virtualization functionality, and with OpenStack Neutron. You will need to set up those components; refer to the related documentation for each configuration.

Administering or Managing odl-neutron-service

There is no specific configuration for the Neutron service itself. For related configuration, please refer to the OpenStack Neutron configuration and the OpenDaylight services that act as providers for OpenStack.

Installing odl-neutron-service while the controller is running
  1. While OpenDaylight is running, in Karaf prompt, type: feature:install odl-neutron-service.

  2. Wait a while until the initialization is done and the controller stabilizes.

odl-neutron-service provides only a unified interface for OpenStack Neutron. It doesn’t provide actual functionality for network virtualization. Refer to each OpenDaylight project documentation for actual configuration with OpenStack Neutron.

Neutron Logger

Another service, the Neutron Logger, is provided for debugging/logging purposes. It logs changes on Neutron YANG models and can be installed with:

feature:install odl-neutron-logger
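
Once installed, one way to watch its output is to filter the Karaf log from the console; log:display is a standard Karaf command and the filter string below is only illustrative:

log:display | grep Neutron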
Service Function Chaining
OpenDaylight Service Function Chaining (SFC) Overview

OpenDaylight Service Function Chaining (SFC) provides the ability to define an ordered list of network services (e.g. firewalls, load balancers). These services are then “stitched” together in the network to create a service chain. This project provides the infrastructure (chaining logic, APIs) needed for ODL to provision a service chain in the network and an end-user application for defining such chains.

  • ACE - Access Control Entry

  • ACL - Access Control List

  • SCF - Service Classifier Function

  • SF - Service Function

  • SFC - Service Function Chain

  • SFF - Service Function Forwarder

  • SFG - Service Function Group

  • SFP - Service Function Path

  • RSP - Rendered Service Path

  • NSH - Network Service Header

SFC User Interface
Overview

The SFC User Interface comes with a Command Line Interface (CLI): it provides several Karaf console commands to show the SFC model (SFs, SFFs, etc.) provisioned in the datastore.

SFC Web Interface (SFC-UI)
Architecture

SFC-UI operates purely by using RESTCONF.

SFC-UI integration into ODL

SFC-UI integration into ODL

How to access
  1. Run ODL distribution (run karaf)

  2. In Karaf console execute: feature:install odl-sfc-ui

  3. Visit SFC-UI on: http://<odl_ip_address>:8181/sfc/index.html

SFC Command Line Interface (SFC-CLI)
Overview

The Karaf Container offers a complete Unix-like console that allows managing the container. This console can be extended with custom commands to manage the features deployed on it. This feature will add some basic commands to show the provisioned SFC entities.

How to use it

The SFC-CLI implements commands to show some of the provisioned SFC entities: Service Functions, Service Function Forwarders, Service Function Chains, Service Function Paths, Service Function Classifiers, Service Nodes and Service Function Types:

  • List one/all provisioned Service Functions:

    sfc:sf-list [--name <name>]
    
  • List one/all provisioned Service Function Forwarders:

    sfc:sff-list [--name <name>]
    
  • List one/all provisioned Service Function Chains:

    sfc:sfc-list [--name <name>]
    
  • List one/all provisioned Service Function Paths:

    sfc:sfp-list [--name <name>]
    
  • List one/all provisioned Service Function Classifiers:

    sfc:sc-list [--name <name>]
    
  • List one/all provisioned Service Nodes:

    sfc:sn-list [--name <name>]
    
  • List one/all provisioned Service Function Types:

    sfc:sft-list [--name <name>]
    
SFC Southbound REST Plug-in
Overview

The Southbound REST Plug-in is used to send configuration from datastore down to network devices supporting a REST API (i.e. they have a configured REST URI). It supports POST/PUT/DELETE operations, which are triggered accordingly by changes in the SFC data stores.

  • Access Control List (ACL)

  • Service Classifier Function (SCF)

  • Service Function (SF)

  • Service Function Group (SFG)

  • Service Function Schedule Type (SFST)

  • Service Function Forwarder (SFF)

  • Rendered Service Path (RSP)

Southbound REST Plug-in Architecture

From the user perspective, the REST plug-in is another SFC Southbound plug-in used to communicate with network devices.

Southbound REST Plug-in integration into ODL

Southbound REST Plug-in integration into ODL

Configuring Southbound REST Plugin
  1. Run ODL distribution (run karaf)

  2. In Karaf console execute: feature:install odl-sfc-sb-rest

  3. Configure REST URIs for SF/SFF through the SFC User Interface or RESTCONF (the required configuration steps can be found in the tutorial referenced below)

Tutorial

A comprehensive tutorial on how to use the Southbound REST Plug-in and control network devices with it can be found at: https://wiki-archive.opendaylight.org/view/Service_Function_Chaining:Main

SFC-OVS integration
Overview

SFC-OVS provides integration of SFC with Open vSwitch (OVS) devices. Integration is realized through mapping of SFC objects (like SF, SFF, Classifier, etc.) to OVS objects (like Bridge, TerminationPoint=Port/Interface). The mapping takes care of automatic instantiation (setup) of the corresponding object whenever its counterpart is created. For example, when a new SFF is created, the SFC-OVS plug-in will create a new OVS bridge.

The feature is intended for SFC users willing to use Open vSwitch as an underlying network infrastructure for deploying RSPs (Rendered Service Paths).

SFC-OVS Architecture

SFC-OVS uses the OVSDB MD-SAL Southbound API for getting/writing information from/to OVS devices. From the user perspective SFC-OVS acts as a layer between SFC datastore and OVSDB.

SFC-OVS integration into ODL

SFC-OVS integration into ODL

Configuring SFC-OVS
  1. Run ODL distribution (run karaf)

  2. In Karaf console execute: feature:install odl-sfc-ovs

  3. Configure Open vSwitch to use ODL as a manager, using the following command: ovs-vsctl set-manager tcp:<odl_ip_address>:6640

Tutorials
Verifying mapping from SFF to OVS
Overview

This tutorial shows the usual workflow when creating an OVS Bridge using the SFC APIs.

Prerequisites
  • Open vSwitch installed (ovs-vsctl command available in shell)

  • SFC-OVS feature configured as stated above

Instructions
  1. In a shell execute: ovs-vsctl set-manager tcp:<odl_ip_address>:6640

  2. Send a POST request to the URL: http://<odl_ip_address>:8181/restconf/operations/service-function-forwarder-ovs:create-ovs-bridge Use Basic auth with credentials “admin”/“admin” and set Content-Type: application/json. The content of the POST request should be the following:

{
    "input":
    {
        "name": "br-test",
        "ovs-node": {
            "ip": "<Open_vSwitch_ip_address>"
        }
    }
}

Open_vSwitch_ip_address is the IP address of the machine where Open vSwitch is installed.
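
For reference, the same request can be sent as a curl sketch similar to the other examples in this guide; replace ${JSON} with the body shown above and <odl_ip_address> accordingly:

curl -i -H "Content-Type: application/json" --data '${JSON}' -X POST --user admin:admin http://<odl_ip_address>:8181/restconf/operations/service-function-forwarder-ovs:create-ovs-bridge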

Verification

In a shell execute: ovs-vsctl show. There should be a Bridge with the name br-test and one port/interface called br-test.

Also, the corresponding SFF for this OVS Bridge should be configured, which can be verified through the SFC User Interface or RESTCONF as follows.

  1. Visit the SFC User Interface: http://<odl_ip_address>:8181/sfc/index.html#/sfc/serviceforwarder

  2. Use pure RESTCONF and send a GET request to URL: http://<odl_ip_address>:8181/restconf/config/service-function-forwarder:service-function-forwarders

There should be an SFF whose name ends with br1, and the SFF should contain two data plane locators: br1 and testPort.

SFC Classifier User Guide
Overview

A description of the classifier can be found in: https://datatracker.ietf.org/doc/draft-ietf-sfc-architecture/

There are two types of classifier:

  1. OpenFlow Classifier

  2. Iptables Classifier

OpenFlow Classifier

The OpenFlow Classifier implements the classification criteria based on OpenFlow rules deployed into an OpenFlow switch. An Open vSwitch takes the role of a classifier and performs various encapsulations such as NSH, VLAN, MPLS, etc. In the existing implementation, the classifier supports NSH encapsulation. Matching information is based on ACLs for MAC addresses, ports, protocol, IPv4 and IPv6. Supported protocols are TCP, UDP and SCTP. The action in the OF rules is to forward the encapsulated packets with specific information related to the RSP.

Classifier Architecture

The OVSDB Southbound interface is used to create an instance of a bridge in a specific location (via IP address). This bridge contains the OpenFlow rules that perform the classification of the packets and react accordingly. The OpenFlow Southbound interface is used to translate the ACL information into OF rules within the Open vSwitch.

Note

In order to create the instance of the bridge that takes the role of a classifier, an “empty” SFF must be created.

Configuring Classifier
  1. An empty SFF must be created in order to host the ACL that contains the classification information.

  2. SFF data plane locator must be configured

  3. Classifier interface must be manually added to SFF bridge.

Administering or Managing Classifier

Classification information is based on MAC addresses, protocol, ports and IP. An ACL gathers this information and is assigned to an RSP, which is effectively a specific path for a Service Chain.

Iptables Classifier

The classifier manages everything from starting the packet listener to creating (and removing) the appropriate ip(6)tables rules and marking received packets accordingly. Its functionality is available only on Linux as it leverages NetfilterQueue, which provides access to packets matched by an iptables rule. The classifier requires root privileges to operate.

So far it is capable of processing ACLs for MAC addresses, ports, IPv4 and IPv6. Supported protocols are TCP and UDP.

Classifier Architecture

The Python code is located in the project repository at sfc-py/common/classifier.py.

Note

The classifier assumes that the Rendered Service Path (RSP) already exists in ODL when an ACL referencing it is obtained.

  1. sfc_agent receives an ACL and passes it to the classifier for processing

  2. the RSP (its SFF locator) referenced by the ACL is requested from ODL

  3. if the RSP exists in ODL, then ACL-based iptables rules for it are applied

After this process is over, every packet successfully matched to an iptables rule (i.e. successfully classified) will be NSH encapsulated and forwarded to a related SFF, which knows how to traverse the RSP.

Rules are created using the appropriate iptables command. If the Access Control Entry (ACE) rule is MAC address related, both iptables and ip6tables rules are issued. If the ACE rule is IPv4 address related, only iptables rules are issued; likewise, IPv6-related rules result in ip6tables rules only.

Note

iptables raw table contains all created rules
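
As a purely illustrative sketch (these are not the exact rules the classifier generates), a rule of this general shape in the raw table would hand matching TCP traffic to Netfilter Queue 2, which the classifier reads via NetfilterQueue:

sudo iptables -t raw -A PREROUTING -p tcp --dport 80 -j NFQUEUE --queue-num 2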

Configuring Classifier
The classifier does not need any configuration.
Its only requirement is that Netfilter Queue number 2 is not used by any other process and is available for the classifier.
Administering or Managing Classifier

The classifier runs alongside sfc_agent; therefore, the command for starting it locally is:

sudo python3.4 sfc-py/sfc_agent.py --rest --odl-ip-port localhost:8181
--auto-sff-name --nfq-class
SFC OpenFlow Renderer User Guide
Overview

The Service Function Chaining (SFC) OpenFlow Renderer (SFC OF Renderer) implements Service Chaining on OpenFlow switches. It listens for the creation of a Rendered Service Path (RSP) in the operational data store, and once received it programs Service Function Forwarders (SFF) that are hosted on OpenFlow capable switches to forward packets through the service chain. Currently the only tested OpenFlow capable switch is OVS 2.9.

Common acronyms used in the following sections:

  • SF - Service Function

  • SFF - Service Function Forwarder

  • SFC - Service Function Chain

  • SFP - Service Function Path

  • RSP - Rendered Service Path

SFC OpenFlow Renderer Architecture

The SFC OF Renderer is invoked after a RSP is created in the operational data store using an MD-SAL listener called SfcOfRspDataListener. Upon SFC OF Renderer initialization, the SfcOfRspDataListener registers itself to listen for RSP changes. When invoked, the SfcOfRspDataListener processes the RSP and calls the SfcOfFlowProgrammerImpl to create the necessary flows in the Service Function Forwarders configured in the RSP. Refer to the following diagram for more details.

SFC OpenFlow Renderer High Level Architecture

SFC OpenFlow Renderer High Level Architecture

SFC OpenFlow Switch Flow pipeline

The SFC OpenFlow Renderer uses the following tables for its Flow pipeline:

  • Table 0, Classifier

  • Table 1, Transport Ingress

  • Table 2, Path Mapper

  • Table 3, Path Mapper ACL

  • Table 4, Next Hop

  • Table 10, Transport Egress

The OpenFlow Table Pipeline is intended to be generic to work for all of the different encapsulations supported by SFC.

All of the tables are explained in detail in the following section.

The SFFs (SFF1 and SFF2), SFs (SF1), and topology used for the flow tables in the following sections are as described in the following diagram.

SFC OpenFlow Renderer Typical Network Topology

SFC OpenFlow Renderer Typical Network Topology

Classifier Table detailed

It is possible for the SFF to also act as a classifier. This table maps subscriber traffic to RSPs, and is explained in detail in the classifier documentation.

If the SFF is not a classifier, then this table will just have a simple Goto Table 1 flow.
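
In ovs-ofctl terms, such a default flow could be added roughly as follows; the switch name s1 matches the debugging examples later in this guide, and the flow is illustrative rather than the exact flow the renderer programs:

sudo ovs-ofctl -O OpenFlow13 add-flow s1 "table=0,priority=5,actions=goto_table:1"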

Transport Ingress Table detailed

The Transport Ingress table has an entry per expected tunnel transport type to be received in a particular SFF, as established in the SFC configuration.

Here are two examples on SFF1: one where the RSP ingress tunnel is MPLS, assuming VLAN is used for the SFF-SF link, and the other where the RSP ingress tunnel is either Eth+NSH or just NSH with no Ethernet.

Priority | Match | Action
256 | EtherType==0x8847 (MPLS unicast) | Goto Table 2
256 | EtherType==0x8100 (VLAN) | Goto Table 2
250 | EtherType==0x894f (Eth+NSH) | Goto Table 2
250 | PacketType==0x894f (NSH no Eth) | Goto Table 2
5 | Match Any | Drop

Table: Table Transport Ingress

Path Mapper Table detailed

The Path Mapper table has an entry per expected tunnel transport info to be received in a particular SFF, as established in the SFC configuration. The tunnel transport info is used to determine the RSP Path ID, and is stored in the OpenFlow Metadata. This table is not used for NSH, since the RSP Path ID is stored in the NSH header.

For SF nodes that do not support NSH tunneling, the IP header DSCP field is used to store the RSP Path Id. The RSP Path Id is written to the DSCP field in the Transport Egress table for those packets sent to an SF.

Here is an example on SFF1, assuming the following details:

  • VLAN ID 1000 is used for the SFF-SF

  • The RSP Path 1 tunnel uses MPLS label 100 for ingress and 101 for egress

  • The RSP Path 2 (symmetric downlink path) uses MPLS label 101 for ingress and 100 for egress

Priority | Match | Action
256 | MPLS Label==100 | RSP Path=1, Pop MPLS, Goto Table 4
256 | MPLS Label==101 | RSP Path=2, Pop MPLS, Goto Table 4
256 | VLAN ID==1000, IP DSCP==1 | RSP Path=1, Pop VLAN, Goto Table 4
256 | VLAN ID==1000, IP DSCP==2 | RSP Path=2, Pop VLAN, Goto Table 4
5 | Match Any | Goto Table 3

Table: Table Path Mapper

Path Mapper ACL Table detailed

This table is only populated when PacketIn packets are received from the switch for TcpProxy type SFs. These flows are created with an inactivity timer of 60 seconds and will be automatically deleted upon expiration.

Next Hop Table detailed

The Next Hop table uses the RSP Path Id and appropriate packet fields to determine where to send the packet next. For NSH, only the NSP (Network Services Path, RSP ID) and NSI (Network Services Index, next hop) fields from the NSH header are needed to determine the VXLAN tunnel destination IP. For VLAN or MPLS, the source MAC address is used to determine the destination MAC address.

Here are two examples on SFF1, assuming SFF1 is connected to SFF2. RSP Paths 1 and 2 are symmetric VLAN paths. RSP Paths 3 and 4 are symmetric NSH paths. RSP Path 1 ingress packets come from external to SFC, for which we don’t have the source MAC address (MacSrc).

Priority | Match | Action
256 | RSP Path==1, MacSrc==SF1 | MacDst=SFF2, Goto Table 10
256 | RSP Path==2, MacSrc==SF1 | Goto Table 10
256 | RSP Path==2, MacSrc==SFF2 | MacDst=SF1, Goto Table 10
246 | RSP Path==1 | MacDst=SF1, Goto Table 10
550 | dl_type=0x894f, nsh_spi=3, nsh_si=255 (NSH, SFF Ingress RSP 3, hop 1) | load:0xa000002→NXM_NX_TUN_IPV4_DST[], Goto Table 10
550 | dl_type=0x894f, nsh_spi=3, nsh_si=254 (NSH, SFF Ingress from SF, RSP 3, hop 2) | load:0xa00000a→NXM_NX_TUN_IPV4_DST[], Goto Table 10
550 | dl_type=0x894f, nsh_spi=4, nsh_si=254 (NSH, SFF1 Ingress from SFF2) | load:0xa00000a→NXM_NX_TUN_IPV4_DST[], Goto Table 10
5 | Match Any | Drop

Table: Table Next Hop

Transport Egress Table detailed

The Transport Egress table prepares egress tunnel information and sends the packets out.

Here are two examples on SFF1. RSP Paths 1 and 2 are symmetric MPLS paths that use VLAN for the SFF-SF. RSP Paths 3 and 4 are symmetric NSH paths. Since it is assumed that switches used for NSH will only have one VXLAN port, the NSH packets are just sent back where they came from.

Priority | Match | Action
256 | RSP Path==1, MacDst==SF1 | Push VLAN ID 1000, Port=SF1
256 | RSP Path==1, MacDst==SFF2 | Push MPLS Label 101, Port=SFF2
256 | RSP Path==2, MacDst==SF1 | Push VLAN ID 1000, Port=SF1
246 | RSP Path==2 | Push MPLS Label 100, Port=Ingress
256 | in_port=1, dl_type=0x894f, nsh_spi=0x3, nsh_si=255 (NSH, SFF Ingress RSP 3) | IN_PORT
256 | in_port=1, dl_type=0x894f, nsh_spi=0x3, nsh_si=254 (NSH, SFF Ingress from SF, RSP 3) | IN_PORT
256 | in_port=1, dl_type=0x894f, nsh_spi=0x4, nsh_si=254 (NSH, SFF1 Ingress from SFF2) | IN_PORT
5 | Match Any | Drop

Table: Table Transport Egress

Administering SFC OF Renderer

To use the SFC OpenFlow Renderer in Karaf, at least the following Karaf features must be installed:

  • odl-openflowplugin-nxm-extensions

  • odl-openflowplugin-flow-services

  • odl-sfc-provider

  • odl-sfc-model

  • odl-sfc-openflow-renderer

  • odl-sfc-ui (optional)

Since OpenDaylight Karaf features internally install their dependent features, all of the above features can be installed by simply installing the odl-sfc-openflow-renderer feature.

The following command can be used to view all of the currently installed Karaf features:

opendaylight-user@root>feature:list -i

Or, pipe the command to a grep to see a subset of the currently installed Karaf features:

opendaylight-user@root>feature:list -i | grep sfc

To install a particular feature, use the Karaf feature:install command.
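
For example, to install the renderer feature mentioned above:

opendaylight-user@root>feature:install odl-sfc-openflow-renderer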

SFC OF Renderer Tutorial
Overview

In this tutorial, the VXLAN-GPE NSH encapsulations will be shown. The following Network Topology diagram is a logical view of the SFFs and SFs involved in creating the Service Chains.

SFC OpenFlow Renderer Typical Network Topology

SFC OpenFlow Renderer Typical Network Topology

Prerequisites

To use this example, SFF OpenFlow switches must be created and connected as illustrated above. Additionally, the SFs must be created and connected.

Note that RSP symmetry depends on the Service Function Path symmetric field, if present. If not, the RSP will be symmetric if any of the SFs involved in the chain has the bidirectional field set to true.

Target Environment

The target environment is not important, but this use-case was created and tested on Linux.

Instructions

The steps to use this tutorial are as follows. The referenced configuration in the steps is listed in the following sections.

There are numerous ways to send the configuration. In the following configuration chapters, the appropriate curl command is shown for each configuration to be sent, including the URL.

Steps to configure the SFC OF Renderer tutorial:

  1. Send the SF RESTCONF configuration

  2. Send the SFF RESTCONF configuration

  3. Send the SFC RESTCONF configuration

  4. Send the SFP RESTCONF configuration

  5. The RSP will be created internally when the SFP is created.

Once the configuration has been successfully created, query the Rendered Service Paths with either the SFC UI or via RESTCONF. Notice that the RSP is symmetrical, so the following 2 RSPs will be created:

  • sfc-path1-Path-<RSP-ID>

  • sfc-path1-Path-<RSP-ID>-Reverse

At this point the Service Chains have been created, and the OpenFlow Switches are programmed to steer traffic through the Service Chain. Traffic can now be injected from a client into the Service Chain. To debug problems, the OpenFlow tables can be dumped with the following commands, assuming SFF1 is called s1 and SFF2 is called s2.

sudo ovs-ofctl -O OpenFlow13  dump-flows s1
sudo ovs-ofctl -O OpenFlow13  dump-flows s2

In all the following configuration sections, replace the ${JSON} string with the appropriate JSON configuration. Also, change the localhost destination in the URL accordingly.
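
One convenient way to do this is to save each JSON body to a file and let curl read it with the @file syntax; the file name below is illustrative:

curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" --data @service-functions.json -X PUT --user admin:admin http://localhost:8181/restconf/config/service-function:service-functions/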

SFC OF Renderer NSH Tutorial

The following configuration sections show how to create the different elements using NSH encapsulation.

NSH Service Function configuration

The Service Function configuration can be sent with the following command:

curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache"
--data '${JSON}' -X PUT --user
admin:admin http://localhost:8181/restconf/config/service-function:service-functions/

SF configuration JSON.

{
 "service-functions": {
   "service-function": [
     {
       "name": "sf1",
       "type": "http-header-enrichment",
       "ip-mgmt-address": "10.0.0.2",
       "sf-data-plane-locator": [
         {
           "name": "sf1dpl",
           "ip": "10.0.0.10",
           "port": 4789,
           "transport": "service-locator:vxlan-gpe",
           "service-function-forwarder": "sff1"
         }
       ]
     },
     {
       "name": "sf2",
       "type": "firewall",
       "ip-mgmt-address": "10.0.0.3",
       "sf-data-plane-locator": [
         {
           "name": "sf2dpl",
            "ip": "10.0.0.20",
            "port": 4789,
            "transport": "service-locator:vxlan-gpe",
           "service-function-forwarder": "sff2"
         }
       ]
     }
   ]
 }
}
NSH Service Function Forwarder configuration

The Service Function Forwarder configuration can be sent with the following command:

curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" --data '${JSON}' -X PUT --user admin:admin http://localhost:8181/restconf/config/service-function-forwarder:service-function-forwarders/

SFF configuration JSON.

{
 "service-function-forwarders": {
   "service-function-forwarder": [
     {
       "name": "sff1",
       "service-node": "openflow:2",
       "sff-data-plane-locator": [
         {
           "name": "sff1dpl",
           "data-plane-locator":
           {
               "ip": "10.0.0.1",
               "port": 4789,
               "transport": "service-locator:vxlan-gpe"
           }
         }
       ],
       "service-function-dictionary": [
         {
           "name": "sf1",
           "sff-sf-data-plane-locator":
           {
               "sf-dpl-name": "sf1dpl",
               "sff-dpl-name": "sff1dpl"
           }
         }
       ]
     },
     {
       "name": "sff2",
       "service-node": "openflow:3",
       "sff-data-plane-locator": [
         {
           "name": "sff2dpl",
           "data-plane-locator":
           {
               "ip": "10.0.0.2",
               "port": 4789,
               "transport": "service-locator:vxlan-gpe"
           }
         }
       ],
       "service-function-dictionary": [
         {
           "name": "sf2",
           "sff-sf-data-plane-locator":
           {
               "sf-dpl-name": "sf2dpl",
               "sff-dpl-name": "sff2dpl"
           }
         }
       ]
     }
   ]
 }
}
NSH Service Function Chain configuration

The Service Function Chain configuration can be sent with the following command:

curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache"
--data '${JSON}' -X PUT --user
admin:admin http://localhost:8181/restconf/config/service-function-chain:service-function-chains/

SFC configuration JSON.

{
 "service-function-chains": {
   "service-function-chain": [
     {
       "name": "sfc-chain1",
       "sfc-service-function": [
         {
           "name": "hdr-enrich-abstract1",
           "type": "http-header-enrichment"
         },
         {
           "name": "firewall-abstract1",
           "type": "firewall"
         }
       ]
     }
   ]
 }
}
NSH Service Function Path configuration

The Service Function Path configuration can be sent with the following command:

curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" --data '${JSON}' -X PUT --user admin:admin http://localhost:8181/restconf/config/service-function-path:service-function-paths/

SFP configuration JSON.

{
  "service-function-paths": {
    "service-function-path": [
      {
        "name": "sfc-path1",
        "service-chain-name": "sfc-chain1",
        "transport-type": "service-locator:vxlan-gpe",
        "symmetric": true
      }
    ]
  }
}
NSH Rendered Service Path Query

The following command can be used to query all of the created Rendered Service Paths:

curl -H "Content-Type: application/json" -H "Cache-Control: no-cache" -X GET --user admin:admin http://localhost:8181/restconf/operational/rendered-service-path:rendered-service-paths/
SFC OF Renderer MPLS Tutorial

The following configuration sections show how to create the different elements using MPLS encapsulation.

MPLS Service Function configuration

The Service Function configuration can be sent with the following command:

curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache"
--data '${JSON}' -X PUT --user
admin:admin http://localhost:8181/restconf/config/service-function:service-functions/

SF configuration JSON.

{
 "service-functions": {
   "service-function": [
     {
       "name": "sf1",
       "type": "http-header-enrichment",
       "ip-mgmt-address": "10.0.0.2",
       "sf-data-plane-locator": [
         {
           "name": "sf1-sff1",
           "mac": "00:00:08:01:02:01",
           "vlan-id": 1000,
           "transport": "service-locator:mac",
           "service-function-forwarder": "sff1"
         }
       ]
     },
     {
       "name": "sf2",
       "type": "firewall",
       "ip-mgmt-address": "10.0.0.3",
       "sf-data-plane-locator": [
         {
           "name": "sf2-sff2",
           "mac": "00:00:08:01:03:01",
           "vlan-id": 2000,
           "transport": "service-locator:mac",
           "service-function-forwarder": "sff2"
         }
       ]
     }
   ]
 }
}
MPLS Service Function Forwarder configuration

The Service Function Forwarder configuration can be sent with the following command:

curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" --data '${JSON}' -X PUT --user admin:admin http://localhost:8181/restconf/config/service-function-forwarder:service-function-forwarders/

SFF configuration JSON.

{
 "service-function-forwarders": {
   "service-function-forwarder": [
     {
       "name": "sff1",
       "service-node": "openflow:2",
       "sff-data-plane-locator": [
         {
           "name": "ulSff1Ingress",
           "data-plane-locator":
           {
               "mpls-label": 100,
               "transport": "service-locator:mpls"
           },
           "service-function-forwarder-ofs:ofs-port":
           {
               "mac": "11:11:11:11:11:11",
               "port-id" : "1"
           }
         },
         {
           "name": "ulSff1ToSff2",
           "data-plane-locator":
           {
               "mpls-label": 101,
               "transport": "service-locator:mpls"
           },
           "service-function-forwarder-ofs:ofs-port":
           {
               "mac": "33:33:33:33:33:33",
               "port-id" : "2"
           }
         },
         {
           "name": "toSf1",
           "data-plane-locator":
           {
               "mac": "22:22:22:22:22:22",
               "vlan-id": 1000,
               "transport": "service-locator:mac",
           },
           "service-function-forwarder-ofs:ofs-port":
           {
               "mac": "33:33:33:33:33:33",
               "port-id" : "3"
           }
         }
       ],
       "service-function-dictionary": [
         {
           "name": "sf1",
           "sff-sf-data-plane-locator":
           {
               "sf-dpl-name": "sf1-sff1",
               "sff-dpl-name": "toSf1"
           }
         }
       ]
     },
     {
       "name": "sff2",
       "service-node": "openflow:3",
       "sff-data-plane-locator": [
         {
           "name": "ulSff2Ingress",
           "data-plane-locator":
           {
               "mpls-label": 101,
               "transport": "service-locator:mpls"
           },
           "service-function-forwarder-ofs:ofs-port":
           {
               "mac": "44:44:44:44:44:44",
               "port-id" : "1"
           }
         },
         {
           "name": "ulSff2Egress",
           "data-plane-locator":
           {
               "mpls-label": 102,
               "transport": "service-locator:mpls"
           },
           "service-function-forwarder-ofs:ofs-port":
           {
               "mac": "66:66:66:66:66:66",
               "port-id" : "2"
           }
         },
         {
           "name": "toSf2",
           "data-plane-locator":
           {
               "mac": "55:55:55:55:55:55",
               "vlan-id": 2000,
               "transport": "service-locator:mac"
           },
           "service-function-forwarder-ofs:ofs-port":
           {
               "port-id" : "3"
           }
         }
       ],
       "service-function-dictionary": [
         {
           "name": "sf2",
           "sff-sf-data-plane-locator":
           {
               "sf-dpl-name": "sf2-sff2",
               "sff-dpl-name": "toSf2"

           },
           "service-function-forwarder-ofs:ofs-port":
           {
               "port-id" : "3"
           }
         }
       ]
     }
   ]
 }
}
MPLS Service Function Chain configuration

The Service Function Chain configuration can be sent with the following command:

curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache"
 --data '${JSON}' -X PUT --user admin:admin
 http://localhost:8181/restconf/config/service-function-chain:service-function-chains/

SFC configuration JSON.

{
 "service-function-chains": {
   "service-function-chain": [
     {
       "name": "sfc-chain1",
       "sfc-service-function": [
         {
           "name": "hdr-enrich-abstract1",
           "type": "http-header-enrichment"
         },
         {
           "name": "firewall-abstract1",
           "type": "firewall"
         }
       ]
     }
   ]
 }
}
MPLS Service Function Path configuration

The Service Function Path configuration can be sent with the following command. This will internally trigger the Rendered Service Paths to be created.

curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache"
--data '${JSON}' -X PUT --user admin:admin
 http://localhost:8181/restconf/config/service-function-path:service-function-paths/

SFP configuration JSON.

{
  "service-function-paths": {
    "service-function-path": [
      {
        "name": "sfc-path1",
        "service-chain-name": "sfc-chain1",
        "transport-type": "service-locator:mpls",
        "symmetric": true
      }
    ]
  }
}

The following command can be used to query all of the Rendered Service Paths that were created when the Service Function Path was created:

curl -H "Content-Type: application/json" -H "Cache-Control: no-cache" -X GET
--user admin:admin http://localhost:8181/restconf/operational/rendered-service-path:rendered-service-paths/
SFC IOS XE Renderer User Guide
Overview

The early Service Function Chaining (SFC) renderer for IOS-XE devices (SFC IOS-XE renderer) implements Service Chaining functionality on IOS-XE capable switches. It listens for the creation of a Rendered Service Path (RSP) and sets up Service Function Forwarders (SFF) that are hosted on IOS-XE switches to steer traffic through the service chain.

Common acronyms used in the following sections:

  • SF - Service Function

  • SFF - Service Function Forwarder

  • SFC - Service Function Chain

  • SP - Service Path

  • SFP - Service Function Path

  • RSP - Rendered Service Path

  • LSF - Local Service Forwarder

  • RSF - Remote Service Forwarder

SFC IOS-XE Renderer Architecture

When the SFC IOS-XE renderer is initialized, all required listeners are registered to handle incoming data. It involves CSR/IOS-XE NodeListener which stores data about all configurable devices including their mountpoints (used here as databrokers), ServiceFunctionListener, ServiceForwarderListener (see mapping) and RenderedPathListener used to listen for RSP changes. When the SFC IOS-XE renderer is invoked, RenderedPathListener calls the IosXeRspProcessor which processes the RSP change and creates all necessary Service Paths and Remote Service Forwarders (if necessary) on IOS-XE devices.

Service Path details

Each Service Path is defined by an index (represented by the NSP) and contains service path entries. Each entry has an appropriate service index (NSI) and a definition of the next hop. The next hop can be a Service Function, a different Service Function Forwarder, or the end-of-chain marker, terminate. After terminating, the packet is sent to its destination. If an SFF is defined as the next hop, it has to be present on the device in the form of a Remote Service Forwarder. RSFs are also created during RSP processing.

Example of Service Path:

service-chain service-path 200
   service-index 255 service-function firewall-1
   service-index 254 service-function dpi-1
   service-index 253 terminate
Mapping to IOS-XE SFC entities

The renderer contains mappers for SFs and SFFs. An IOS-XE capable device uses its own definition of Service Functions and Service Function Forwarders according to the appropriate .yang file. ServiceFunctionListener serves as a listener for SF changes. If an SF appears in the datastore, the listener extracts its management IP address and looks into the cached IOS-XE nodes. If one of the available nodes matches, the Service Function is mapped in IosXeServiceFunctionMapper so that it is understandable by the IOS-XE device and is written into the device's config. ServiceForwarderListener is used in a similar way: all SFFs with a suitable management IP address are mapped in IosXeServiceForwarderMapper. Remapped SFFs are configured as Local Service Forwarders. It is not possible to directly create a Remote Service Forwarder using the IOS-XE renderer; an RSF is created only during RSP processing.

Administering SFC IOS-XE renderer

To use the SFC IOS-XE Renderer in Karaf, at least the following Karaf features must be installed:

  • odl-aaa-shiro

  • odl-sfc-model

  • odl-sfc-provider

  • odl-restconf

  • odl-netconf-topology

  • odl-sfc-ios-xe-renderer

SFC IOS-XE renderer Tutorial
Overview

This tutorial is a simple example of how to create a Service Path on an IOS-XE capable device using the IOS-XE renderer.

Preconditions

To connect to an IOS-XE device, it is necessary to use several modified YANG models and override the device's own models. All .yang files are in the Yang/netconf folder of the sfc-ios-xe-renderer module in the SFC project. These files have to be copied to the cache/schema directory before Karaf is started. After that, custom capabilities have to be sent to network-topology:

  • PUT ./config/network-topology:network-topology/topology/topology-netconf/node/<device-name>

    <node xmlns="urn:TBD:params:xml:ns:yang:network-topology">
      <node-id>device-name</node-id>
      <host xmlns="urn:opendaylight:netconf-node-topology">device-ip</host>
      <port xmlns="urn:opendaylight:netconf-node-topology">2022</port>
      <username xmlns="urn:opendaylight:netconf-node-topology">login</username>
      <password xmlns="urn:opendaylight:netconf-node-topology">password</password>
      <tcp-only xmlns="urn:opendaylight:netconf-node-topology">false</tcp-only>
      <keepalive-delay xmlns="urn:opendaylight:netconf-node-topology">0</keepalive-delay>
      <yang-module-capabilities xmlns="urn:opendaylight:netconf-node-topology">
         <override>true</override>
         <capability xmlns="urn:opendaylight:netconf-node-topology">
            urn:ietf:params:xml:ns:yang:ietf-inet-types?module=ietf-inet-types&amp;revision=2013-07-15
         </capability>
         <capability xmlns="urn:opendaylight:netconf-node-topology">
            urn:ietf:params:xml:ns:yang:ietf-yang-types?module=ietf-yang-types&amp;revision=2013-07-15
         </capability>
         <capability xmlns="urn:opendaylight:netconf-node-topology">
            urn:ios?module=ned&amp;revision=2016-03-08
         </capability>
         <capability xmlns="urn:opendaylight:netconf-node-topology">
            http://tail-f.com/yang/common?module=tailf-common&amp;revision=2015-05-22
         </capability>
         <capability xmlns="urn:opendaylight:netconf-node-topology">
            http://tail-f.com/yang/common?module=tailf-meta-extensions&amp;revision=2013-11-07
         </capability>
         <capability xmlns="urn:opendaylight:netconf-node-topology">
            http://tail-f.com/yang/common?module=tailf-cli-extensions&amp;revision=2015-03-19
         </capability>
      </yang-module-capabilities>
    </node>
    

Note

The device name in the URL and in the XML must match.
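
As a sketch, the PUT above can be sent with curl as follows; the device-node.xml file holds the XML payload shown, and the file name, credentials and localhost address are illustrative:

curl -i -H "Content-Type: application/xml" --data @device-node.xml -X PUT --user admin:admin http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/device-name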

Instructions

When the IOS-XE renderer is installed, all NETCONF nodes in topology-netconf are processed and all capable nodes with accessible mountpoints are cached. The first step is to create an LSF on the node.

Service Function Forwarder configuration

  • PUT ./config/service-function-forwarder:service-function-forwarders

    {
        "service-function-forwarders": {
            "service-function-forwarder": [
                {
                    "name": "CSR1Kv-2",
                    "ip-mgmt-address": "172.25.73.23",
                    "sff-data-plane-locator": [
                        {
                            "name": "CSR1Kv-2-dpl",
                            "data-plane-locator": {
                                "transport": "service-locator:vxlan-gpe",
                                "port": 6633,
                                "ip": "10.99.150.10"
                            }
                        }
                    ]
                }
            ]
        }
    }
    

If an IOS-XE node with the appropriate management IP exists, this configuration is mapped and an LSF is created on the device. The same approach is used for Service Functions.

  • PUT ./config/service-function:service-functions

    {
        "service-functions": {
            "service-function": [
                {
                    "name": "Firewall",
                    "ip-mgmt-address": "172.25.73.23",
                    "type": "firewall",
                    "sf-data-plane-locator": [
                        {
                            "name": "firewall-dpl",
                            "port": 6633,
                            "ip": "12.1.1.2",
                            "transport": "service-locator:gre",
                            "service-function-forwarder": "CSR1Kv-2"
                        }
                    ]
                },
                {
                    "name": "Dpi",
                    "ip-mgmt-address": "172.25.73.23",
                    "type":"dpi",
                    "sf-data-plane-locator": [
                        {
                            "name": "dpi-dpl",
                            "port": 6633,
                            "ip": "12.1.1.1",
                            "transport": "service-locator:gre",
                            "service-function-forwarder": "CSR1Kv-2"
                        }
                    ]
                },
                {
                    "name": "Qos",
                    "ip-mgmt-address": "172.25.73.23",
                    "type":"qos",
                    "sf-data-plane-locator": [
                        {
                            "name": "qos-dpl",
                            "port": 6633,
                            "ip": "12.1.1.4",
                            "transport": "service-locator:gre",
                            "service-function-forwarder": "CSR1Kv-2"
                        }
                    ]
                }
            ]
        }
    }
    

All these SFs are configured on the same device as the LSF. The next step is to prepare the Service Function Chain.

  • PUT ./config/service-function-chain:service-function-chains/

    {
        "service-function-chains": {
            "service-function-chain": [
                {
                    "name": "CSR3XSF",
                    "sfc-service-function": [
                        {
                            "name": "Firewall",
                            "type": "firewall"
                        },
                        {
                            "name": "Dpi",
                            "type": "dpi"
                        },
                        {
                            "name": "Qos",
                            "type": "qos"
                        }
                    ]
                }
            ]
        }
    }
    

Service Function Path:

  • PUT ./config/service-function-path:service-function-paths/

    {
        "service-function-paths": {
            "service-function-path": [
                {
                    "name": "CSR3XSF-Path",
                    "service-chain-name": "CSR3XSF",
                    "starting-index": 255,
                    "symmetric": "true"
                }
            ]
        }
    }
    

Without a classifier, it is possible to POST the RSP directly.

  • POST ./operations/rendered-service-path:create-rendered-path

    {
      "input": {
          "name": "CSR3XSF-Path-RSP",
          "parent-service-function-path": "CSR3XSF-Path"
      }
    }
    

The resulting configuration:

!
service-chain service-function-forwarder local
  ip address 10.99.150.10
!
service-chain service-function firewall
ip address 12.1.1.2
  encapsulation gre enhanced divert
!
service-chain service-function dpi
ip address 12.1.1.1
  encapsulation gre enhanced divert
!
service-chain service-function qos
ip address 12.1.1.4
  encapsulation gre enhanced divert
!
service-chain service-path 1
  service-index 255 service-function firewall
  service-index 254 service-function dpi
  service-index 253 service-function qos
  service-index 252 terminate
!
service-chain service-path 2
  service-index 255 service-function qos
  service-index 254 service-function dpi
  service-index 253 service-function firewall
  service-index 252 terminate
!

Service Path 1 is direct, Service Path 2 is reversed. Path numbers may vary.

Service Function Scheduling Algorithms
Overview

When creating a Rendered Service Path, the SFC controller originally chose the first available service function from a list of service function names. This can result in issues such as overloaded service functions and longer service paths, as SFC has no means to understand the status of service functions and the network topology. The service function selection framework supports at least four algorithms (Random, Round Robin, Load Balancing and Shortest Path) to select the most appropriate service function when instantiating the Rendered Service Path. In addition, it is an extensible framework that allows third-party selection algorithms to be plugged in.

Architecture

The following figure illustrates the service function selection framework and algorithms.

SF Selection Architecture

SF Selection Architecture

A user has three different ways to select one service function selection algorithm:

  1. Integrated RESTCONF calls. OpenStack and/or other administration systems could provide plugins that call the APIs to select a scheduling algorithm.

  2. Command line tools. Command line tools such as curl, or browser plugins such as POSTMAN (for Google Chrome) and RESTClient (for Mozilla Firefox), can select a scheduling algorithm by making RESTCONF calls.

  3. SFC-UI. The SFC-UI provides an option for choosing a selection algorithm when creating a Rendered Service Path.

The RESTCONF northbound SFC API provides GUI/RESTCONF interactions for choosing the service function selection algorithm. MD-SAL data store provides all supported service function selection algorithms, and provides APIs to enable one of the provided service function selection algorithms. Once a service function selection algorithm is enabled, the service function selection algorithm will work when creating a Rendered Service Path.

Select SFs with Scheduler

An administrator can use either of the following ways to select one of the selection algorithms when creating a Rendered Service Path.

  • Command line tools. Command line tools include the Linux curl command or browser plugins such as POSTMAN (for Google Chrome) or RESTClient (for Mozilla Firefox). In this case, the following JSON content is needed at the moment: Service_function_schudule_type.json

    {
      "service-function-scheduler-types": {
        "service-function-scheduler-type": [
          {
            "name": "random",
            "type": "service-function-scheduler-type:random",
            "enabled": false
          },
          {
            "name": "roundrobin",
            "type": "service-function-scheduler-type:round-robin",
            "enabled": true
          },
          {
            "name": "loadbalance",
            "type": "service-function-scheduler-type:load-balance",
            "enabled": false
          },
          {
            "name": "shortestpath",
            "type": "service-function-scheduler-type:shortest-path",
            "enabled": false
          }
        ]
      }
    }
    

    If using the Linux curl command, it could be:

    curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache"
    --data '$${Service_function_schudule_type.json}' -X PUT
    --user admin:admin http://localhost:8181/restconf/config/service-function-scheduler-type:service-function-scheduler-types/
    

Here is also a snapshot for using the RESTClient plugin:

Mozilla Firefox RESTClient

Mozilla Firefox RESTClient

  • SFC-UI. The SFC-UI provides a drop-down menu for the service function selection algorithm. Here is a snapshot of the user interaction from the SFC-UI when creating a Rendered Service Path.

Karaf Web UI

Karaf Web UI

Note

Some service function selection algorithms in the drop-down list are not implemented yet. Only the first three algorithms are committed at the moment.

Random

Select Service Function from the name list randomly.

Overview

The Random algorithm randomly selects one Service Function from the name list that it gets from the Service Function Type.

Prerequisites
  • Service Function information is stored in the datastore.

  • Either no algorithm or the Random algorithm is selected.

Target Environment

The Random algorithm will work when either no algorithm type is selected or the Random algorithm is selected.

Instructions

Once the plugins are installed into Karaf successfully, a user can use their preferred method to select the Random scheduling algorithm type. There are no special instructions for using the Random algorithm.

Round Robin

Select a Service Function from the name list in a round-robin manner.

Overview

The Round Robin algorithm selects one Service Function from the name list that it gets from the Service Function Type in a round-robin manner; this balances the workload across all Service Functions. However, this method cannot ensure that all Service Functions carry the same workload, because it is a flow-based Round Robin.

Prerequisites
  • Service Function information is stored in the datastore.

  • The Round Robin algorithm is selected.

Target Environment

The Round Robin algorithm will work once the Round Robin algorithm is selected.

Instructions

Once the plugins are installed into Karaf successfully, a user can use their preferred method to select the Round Robin scheduling algorithm type. There are no special instructions for using the Round Robin algorithm.

Load Balance Algorithm

Select the appropriate Service Function based on actual CPU utilization.

Overview

The Load Balance algorithm is used to select the appropriate Service Function based on the actual CPU utilization of the service functions. The CPU utilization of a service function is obtained from monitoring information reported via NETCONF.

Prerequisites
  • CPU-utilization for Service Function.

  • NETCONF server.

  • NETCONF client.

  • Each VM has a NETCONF server that works correctly with the NETCONF client.

Instructions

Set up VMs as Service Functions and enable the NETCONF server in the VMs. Ensure that you specify them separately. For example:

  1. Set up 4 VMs as Service Functions: 2 of type Firewall and 2 of type Napt44. Name them firewall-1, firewall-2, napt44-1 and napt44-2. The four VMs can run on either the same server or different servers.

  2. Install NETCONF server on every VM and enable it. More information on NETCONF can be found on the OpenDaylight wiki here: https://wiki-archive.opendaylight.org/view/OpenDaylight_Controller:Config:Examples:Netconf:Manual_netopeer_installation

  3. Get monitoring data from the NETCONF server running in each VM. The following static XML data is an example:

<?xml version="1.0" encoding="UTF-8"?>
<service-function-description-monitor-report>
  <SF-description>
    <number-of-dataports>2</number-of-dataports>
    <capabilities>
      <supported-packet-rate>5</supported-packet-rate>
      <supported-bandwidth>10</supported-bandwidth>
      <supported-ACL-number>2000</supported-ACL-number>
      <RIB-size>200</RIB-size>
      <FIB-size>100</FIB-size>
      <ports-bandwidth>
        <port-bandwidth>
          <port-id>1</port-id>
          <ipaddress>10.0.0.1</ipaddress>
          <macaddress>00:1e:67:a2:5f:f4</macaddress>
          <supported-bandwidth>20</supported-bandwidth>
        </port-bandwidth>
        <port-bandwidth>
          <port-id>2</port-id>
          <ipaddress>10.0.0.2</ipaddress>
          <macaddress>01:1e:67:a2:5f:f6</macaddress>
          <supported-bandwidth>10</supported-bandwidth>
        </port-bandwidth>
      </ports-bandwidth>
    </capabilities>
  </SF-description>
  <SF-monitoring-info>
    <liveness>true</liveness>
    <resource-utilization>
        <packet-rate-utilization>10</packet-rate-utilization>
        <bandwidth-utilization>15</bandwidth-utilization>
        <CPU-utilization>12</CPU-utilization>
        <memory-utilization>17</memory-utilization>
        <available-memory>8</available-memory>
        <RIB-utilization>20</RIB-utilization>
        <FIB-utilization>25</FIB-utilization>
        <power-utilization>30</power-utilization>
        <SF-ports-bandwidth-utilization>
          <port-bandwidth-utilization>
            <port-id>1</port-id>
            <bandwidth-utilization>20</bandwidth-utilization>
          </port-bandwidth-utilization>
          <port-bandwidth-utilization>
            <port-id>2</port-id>
            <bandwidth-utilization>30</bandwidth-utilization>
          </port-bandwidth-utilization>
        </SF-ports-bandwidth-utilization>
    </resource-utilization>
  </SF-monitoring-info>
</service-function-description-monitor-report>
  4. Unzip the SFC release tarball.

  5. Run SFC: ${sfc}/bin/karaf. More information on Service Function Chaining can be found on the OpenDaylight SFC wiki page: https://wiki-archive.opendaylight.org/view/Service_Function_Chaining:Main

  6. Deploy the SFC2 (firewall-abstract2⇒napt44-abstract2) and click the button to Create Rendered Service Path in the SFC UI (http://localhost:8181/sfc/index.html). A RESTCONF alternative is sketched after this list.

  7. Verify the Rendered Service Path to ensure that the CPU utilization of the selected hop is the minimum among all service functions of the same type. The correct RSP is firewall-1⇒napt44-2.
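
If RESTCONF is preferred over the SFC UI, a Rendered Service Path can also be created with an RPC similar to the sketch below. The RPC and its input fields are taken from the SFC rendered-service-path model and may differ between releases; SFP1 and RSP1 are illustrative names, not values defined in this example.

curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache"
--data '{"input": {"parent-service-function-path": "SFP1", "name": "RSP1"}}' -X POST
--user admin:admin http://localhost:8181/restconf/operations/rendered-service-path:create-rendered-path/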

Shortest Path Algorithm

Selects an appropriate Service Function using Dijkstra’s algorithm, an algorithm for finding the shortest paths between nodes in a graph.

Overview

The Shortest Path algorithm selects the appropriate Service Function based on the actual topology.

Prerequisites
Instructions
  1. Unzip the SFC release tarball.

  2. Run SFC: ${sfc}/bin/karaf.

  3. Deploy SFFs and SFs: import service-function-forwarders.json and service-functions.json in the UI (http://localhost:8181/sfc/index.html#/sfc/config). A curl alternative is shown after the JSON listings below.

service-function-forwarders.json:

{
  "service-function-forwarders": {
    "service-function-forwarder": [
      {
        "name": "SFF-br1",
        "service-node": "OVSDB-test01",
        "rest-uri": "http://localhost:5001",
        "sff-data-plane-locator": [
          {
            "name": "eth0",
            "service-function-forwarder-ovs:ovs-bridge": {
              "uuid": "4c3778e4-840d-47f4-b45e-0988e514d26c",
              "bridge-name": "br-tun"
            },
            "data-plane-locator": {
              "port": 5000,
              "ip": "192.168.1.1",
              "transport": "service-locator:vxlan-gpe"
            }
          }
        ],
        "service-function-dictionary": [
          {
            "sff-sf-data-plane-locator": {
               "sf-dpl-name": "sf1dpl",
               "sff-dpl-name": "sff1dpl"
            },
            "name": "napt44-1",
            "type": "napt44"
          },
          {
            "sff-sf-data-plane-locator": {
               "sf-dpl-name": "sf2dpl",
               "sff-dpl-name": "sff2dpl"
            },
            "name": "firewall-1",
            "type": "firewall"
          }
        ],
        "connected-sff-dictionary": [
          {
            "name": "SFF-br3"
          }
        ]
      },
      {
        "name": "SFF-br2",
        "service-node": "OVSDB-test01",
        "rest-uri": "http://localhost:5002",
        "sff-data-plane-locator": [
          {
            "name": "eth0",
            "service-function-forwarder-ovs:ovs-bridge": {
              "uuid": "fd4d849f-5140-48cd-bc60-6ad1f5fc0a1",
              "bridge-name": "br-tun"
            },
            "data-plane-locator": {
              "port": 5000,
              "ip": "192.168.1.2",
              "transport": "service-locator:vxlan-gpe"
            }
          }
        ],
        "service-function-dictionary": [
          {
            "sff-sf-data-plane-locator": {
               "sf-dpl-name": "sf1dpl",
               "sff-dpl-name": "sff1dpl"
            },
            "name": "napt44-2",
            "type": "napt44"
          },
          {
            "sff-sf-data-plane-locator": {
               "sf-dpl-name": "sf2dpl",
               "sff-dpl-name": "sff2dpl"
            },
            "name": "firewall-2",
            "type": "firewall"
          }
        ],
        "connected-sff-dictionary": [
          {
            "name": "SFF-br3"
          }
        ]
      },
      {
        "name": "SFF-br3",
        "service-node": "OVSDB-test01",
        "rest-uri": "http://localhost:5005",
        "sff-data-plane-locator": [
          {
            "name": "eth0",
            "service-function-forwarder-ovs:ovs-bridge": {
              "uuid": "fd4d849f-5140-48cd-bc60-6ad1f5fc0a4",
              "bridge-name": "br-tun"
            },
            "data-plane-locator": {
              "port": 5000,
              "ip": "192.168.1.2",
              "transport": "service-locator:vxlan-gpe"
            }
          }
        ],
        "service-function-dictionary": [
          {
            "sff-sf-data-plane-locator": {
               "sf-dpl-name": "sf1dpl",
               "sff-dpl-name": "sff1dpl"
            },
            "name": "test-server",
            "type": "dpi"
          },
          {
            "sff-sf-data-plane-locator": {
               "sf-dpl-name": "sf2dpl",
               "sff-dpl-name": "sff2dpl"
            },
            "name": "test-client",
            "type": "dpi"
          }
        ],
        "connected-sff-dictionary": [
          {
            "name": "SFF-br1"
          },
          {
            "name": "SFF-br2"
          }
        ]
      }
    ]
  }
}

service-functions.json:

{
  "service-functions": {
    "service-function": [
      {
        "rest-uri": "http://localhost:10001",
        "ip-mgmt-address": "10.3.1.103",
        "sf-data-plane-locator": [
          {
            "name": "preferred",
            "port": 10001,
            "ip": "10.3.1.103",
            "service-function-forwarder": "SFF-br1"
          }
        ],
        "name": "napt44-1",
        "type": "napt44"
      },
      {
        "rest-uri": "http://localhost:10002",
        "ip-mgmt-address": "10.3.1.103",
        "sf-data-plane-locator": [
          {
            "name": "master",
            "port": 10002,
            "ip": "10.3.1.103",
            "service-function-forwarder": "SFF-br2"
          }
        ],
        "name": "napt44-2",
        "type": "napt44"
      },
      {
        "rest-uri": "http://localhost:10003",
        "ip-mgmt-address": "10.3.1.103",
        "sf-data-plane-locator": [
          {
            "name": "1",
            "port": 10003,
            "ip": "10.3.1.102",
            "service-function-forwarder": "SFF-br1"
          }
        ],
        "name": "firewall-1",
        "type": "firewall"
      },
      {
        "rest-uri": "http://localhost:10004",
        "ip-mgmt-address": "10.3.1.103",
        "sf-data-plane-locator": [
          {
            "name": "2",
            "port": 10004,
            "ip": "10.3.1.101",
            "service-function-forwarder": "SFF-br2"
          }
        ],
        "name": "firewall-2",
        "type": "firewall"
      },
      {
        "rest-uri": "http://localhost:10005",
        "ip-mgmt-address": "10.3.1.103",
        "sf-data-plane-locator": [
          {
            "name": "3",
            "port": 10005,
            "ip": "10.3.1.104",
            "service-function-forwarder": "SFF-br3"
          }
        ],
        "name": "test-server",
        "type": "dpi"
      },
      {
        "rest-uri": "http://localhost:10006",
        "ip-mgmt-address": "10.3.1.103",
        "sf-data-plane-locator": [
          {
            "name": "4",
            "port": 10006,
            "ip": "10.3.1.102",
            "service-function-forwarder": "SFF-br3"
          }
        ],
        "name": "test-client",
        "type": "dpi"
      }
    ]
  }
}
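
As an alternative to importing the files through the UI, the same JSON documents can be pushed with curl to the RESTCONF URLs used elsewhere in this guide. This is only a sketch and assumes the two files are saved locally as service-function-forwarders.json and service-functions.json:

curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache"
--data @service-function-forwarders.json -X PUT
--user admin:admin http://localhost:8181/restconf/config/service-function-forwarder:service-function-forwarders/

curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache"
--data @service-functions.json -X PUT
--user admin:admin http://localhost:8181/restconf/config/service-function:service-functions/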

The deployed topology looks like this:

          +----+           +----+          +----+
          |sff1|+----------|sff3|---------+|sff2|
          +----+           +----+          +----+
            |                                  |
     +--------------+                   +--------------+
     |              |                   |              |
+----------+   +--------+          +----------+   +--------+
|firewall-1|   |napt44-1|          |firewall-2|   |napt44-2|
+----------+   +--------+          +----------+   +--------+
  • Deploy the SFC2 (firewall-abstract2⇒napt44-abstract2), select “Shortest Path” as the schedule type, and click the button to Create Rendered Service Path in the SFC UI (http://localhost:8181/sfc/index.html).

select schedule type

  • Verify the Rendered Service Path to ensure that the selected hops are linked to the same SFF. The correct RSP is firewall-1⇒napt44-1 or firewall-2⇒napt44-2. The first SF type in the Service Function Chain is Firewall, so the algorithm selects the first hop randomly among all SFs of type Firewall. Assume the first selected SF is firewall-2. All paths from firewall-2 to an SF of type Napt44 are listed below:

    • Path1: firewall-2 → sff2 → napt44-2

    • Path2: firewall-2 → sff2 → sff3 → sff1 → napt44-1

The shortest path is Path1, so the selected next hop is napt44-2.

rendered service path

Service Function Load Balancing User Guide
Overview

The SFC Load-Balancing feature implements load balancing of Service Functions, rather than a one-to-one mapping between a Service Function Forwarder and a Service Function.

Load Balancing Architecture

Service Function Groups (SFG) can replace Service Functions (SF) in the Rendered Path model. A Service Path can only be defined using SFGs or SFs, but not a combination of both.

Relevant objects in the YANG model are as follows:

  1. Service-Function-Group-Algorithm:

    Service-Function-Group-Algorithms {
        Service-Function-Group-Algorithm {
            String name
            String type
        }
    }
    
    Available types: ALL, SELECT, INDIRECT, FAST_FAILURE
    
  2. Service-Function-Group:

    Service-Function-Groups {
        Service-Function-Group {
            String name
            String serviceFunctionGroupAlgorithmName
            String type
            String groupId
            Service-Function-Group-Element {
                String service-function-name
                int index
            }
        }
    }
    
  3. ServiceFunctionHop: holds a reference to the name of an SFG (or SF)

Tutorials

This tutorial explains how to create a simple SFC configuration, with an SFG instead of an SF. In this example, the SFG includes two existing SFs.

Setup SFC

For general SFC setup and scenarios, please see the SFC wiki page: https://wiki-archive.opendaylight.org/view/Service_Function_Chaining:Main

Create an algorithm

POST - http://127.0.0.1:8181/restconf/config/service-function-group-algorithm:service-function-group-algorithms

{
    "service-function-group-algorithm": [
      {
        "name": "alg1"
        "type": "ALL"
      }
   ]
}

(Header “content-type”: application/json)

Create a group

POST - http://127.0.0.1:8181/restconf/config/service-function-group:service-function-groups

{
    "service-function-group": [
    {
        "rest-uri": "http://localhost:10002",
        "ip-mgmt-address": "10.3.1.103",
        "algorithm": "alg1",
        "name": "SFG1",
        "type": "napt44",
        "sfc-service-function": [
            {
                "name":"napt44-104"
            },
            {
                "name":"napt44-103-1"
            }
        ]
      }
    ]
}
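
To verify the created algorithm and group, the configuration can be read back from the same URLs used in the POST requests above; a minimal sketch:

curl -i -H "Accept: application/json" -X GET
--user admin:admin http://127.0.0.1:8181/restconf/config/service-function-group:service-function-groups/
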
SFC Proof of Transit User Guide
Overview

Several deployments use traffic engineering, policy routing, segment routing or service function chaining (SFC) to steer packets through a specific set of nodes. In certain cases, regulatory obligations or a compliance policy require proof that all packets that are supposed to follow a specific path are indeed being forwarded across the exact set of nodes specified. That is, if a packet flow is supposed to go through a series of service functions or network nodes, it has to be proven that all packets of the flow actually went through the service chain or collection of nodes specified by the policy. In case the packets of a flow were not appropriately processed, a proof of transit egress device would be required to identify the policy violation and take corresponding actions (e.g. drop or redirect the packet, send an alert, etc.).

Service Function Chaining (SFC) Proof of Transit (SFC PoT) implements Service Chaining Proof of Transit functionality on capable network devices. Proof of Transit defines mechanisms to securely prove that traffic transited the defined path. After the creation of a Rendered Service Path (RSP), a user can enable SFC Proof of Transit on the selected RSP to put the proof of transit into effect.

To ensure that the data traffic follows a specified path or a function chain, meta-data is added to user traffic in the form of a header. The meta-data is based on a ‘share of a secret’ and is provisioned by the SFC PoT configuration from ODL over a secure channel to each of the nodes in the SFC. This meta-data is updated at each service hop, while a designated node called the verifier checks whether the collected meta-data allows the retrieval of the secret.

The following diagram shows the overview. The approach essentially uses Shamir’s secret sharing algorithm: each service is given a point on a curve; as the packet travels through each service, it collects these points (meta-data); and a verifier node tries to reconstruct the curve using the collected points, thus verifying that the packet traversed all the service functions along the chain.

SFC Proof of Transit overview

Transport options for different protocols include a new TLV in the SR header for Segment Routing, NSH Type-2 meta-data, IPv6 extension headers, IPv4 variants, and VXLAN-GPE. More details are captured in the following link.

In-situ OAM: https://github.com/CiscoDevNet/iOAM

Common acronyms used in the following sections:

  • SF - Service Function

  • SFF - Service Function Forwarder

  • SFC - Service Function Chain

  • SFP - Service Function Path

  • RSP - Rendered Service Path

  • SFC PoT - Service Function Chain Proof of Transit

SFC Proof of Transit Architecture

The SFC PoT feature is implemented in two parts: a north-bound handler that augments the RSP, and a south-bound renderer that auto-generates the required parameters and passes them on to the nodes that belong to the SFC.

The north-bound functionality is enabled via the odl-sfc-pot feature, while the south-bound renderer is enabled via the odl-sfc-pot-netconf-renderer feature. For SFC PoT handling, both features must be installed.

RPC handlers to augment the RSP are part of SfcPotRpc, while the RSP augmentation to enable or disable the SFC PoT feature is done via SfcPotRspProcessor.

SFC Proof of Transit entities

In order to implement SFC Proof of Transit for a service function chain, an RSP is a prerequisite, as it identifies the SFC on which to enable SFC PoT. SFC Proof of Transit for a particular RSP is enabled by an RPC request to the controller along with the necessary parameters to control some aspects of the SFC Proof of Transit process.

The RPC handler identifies the RSP and adds PoT feature meta-data such as enable/disable, the number of PoT profiles, profile refresh parameters, etc., which direct the south-bound renderer appropriately when RSP changes are noticed via callbacks in the renderer handlers.

Administering SFC Proof of Transit

To use SFC Proof of Transit, at least the following Karaf features must be installed:

  • odl-sfc-model

  • odl-sfc-provider

  • odl-sfc-netconf

  • odl-restconf

  • odl-netconf-topology

  • odl-netconf-connector-all

  • odl-sfc-pot

Please note that the odl-sfc-pot-netconf-renderer (or other renderers in the future) must be installed for the feature to take full effect. The details of the renderer features are described in other parts of this document.
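
For example, assuming a standard Karaf distribution, these features (together with the NETCONF renderer) could be installed from the Karaf console as follows:

feature:install odl-sfc-model odl-sfc-provider odl-sfc-netconf odl-restconf odl-netconf-topology odl-netconf-connector-all odl-sfc-pot odl-sfc-pot-netconf-renderer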

SFC Proof of Transit Tutorial
Overview

This tutorial is a simple example of how to configure Service Function Chain Proof of Transit using the SFC PoT feature.

Preconditions

To enable a device to handle SFC Proof of Transit, the NETCONF device is expected to advertise the capability defined in ioam-sb-pot.yang, present under the sfc-model/src/main/yang folder. It is also expected that base NETCONF support is enabled and advertised as a capability.

NETCONF support: urn:ietf:params:netconf:base:1.0

PoT support: (urn:cisco:params:xml:ns:yang:sfc-ioam-sb-pot?revision=2017-01-12)sfc-ioam-sb-pot

It is also expected that the devices are NETCONF-mounted and available in the topology-netconf store.
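
If a device still needs to be NETCONF-mounted, this is commonly done by writing a node entry into the topology-netconf configuration. The sketch below assumes a hypothetical device sf-node-1 reachable at 192.168.1.10 on port 830 with admin/admin credentials; the exact payload may vary between OpenDaylight releases:

curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache"
--data '{"node": [{"node-id": "sf-node-1",
  "netconf-node-topology:host": "192.168.1.10",
  "netconf-node-topology:port": 830,
  "netconf-node-topology:username": "admin",
  "netconf-node-topology:password": "admin",
  "netconf-node-topology:tcp-only": false}]}' -X PUT
--user admin:admin http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/sf-node-1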

Instructions

When SFC Proof of Transit is installed, all netconf nodes in topology-netconf are processed and all capable nodes with accessible mountpoints are cached.

The first step is to create the required RSP, following the usual RSP creation steps in SFC main.

Once the RSP name is available, it is used to send a POST RPC to the controller, similar to the one below:

POST - http://127.0.0.1:8181/restconf/operations/sfc-ioam-nb-pot:enable-sfc-ioam-pot-rendered-path/

{
    "input":
    {
        "sfc-ioam-pot-rsp-name": "sfc-path-3sf3sff",
        "ioam-pot-enable":true,
        "ioam-pot-num-profiles":2,
        "ioam-pot-bit-mask":"bits32",
        "refresh-period-time-units":"milliseconds",
        "refresh-period-value":5000
    }
}

The following can be used to disable SFC Proof of Transit on an RSP, which turns off the PoT feature:

POST - http://127.0.0.1:8181/restconf/operations/sfc-ioam-nb-pot:disable-sfc-ioam-pot-rendered-path/

{
    "input":
    {
        "sfc-ioam-pot-rsp-name": "sfc-path-3sf3sff",
    }
}
SFC PoT NETCONF Renderer User Guide
Overview

The SFC Proof of Transit (PoT) NETCONF renderer implements SFC Proof of Transit functionality on NETCONF-capable devices that have advertised support for in-situ OAM (iOAM).

It listens for updates to an existing RSP that enable or disable proof of transit support, and adds the auto-generated SFC PoT configuration parameters to all the SFC hop nodes. The last node in the SFC is configured as a verifier node so that the SFC PoT process can be completed.

Common acronyms are used as below:

  • SF - Service Function

  • SFC - Service Function Chain

  • RSP - Rendered Service Path

  • SFF - Service Function Forwarder

Mapping to SFC entities

The renderer module listens to RSP updates in SfcPotNetconfRSPListener and triggers configuration generation in the SfcPotNetconfIoam class. Node arrival and departure are managed via SfcPotNetconfNodeManager and SfcPotNetconfNodeListener. In addition, a timer thread runs to periodically generate configuration in order to refresh the profiles in the nodes that are part of the SFC.

Administering SFC PoT NETCONF Renderer

To use the SFC PoT NETCONF renderer, the following Karaf features must be installed:

  • odl-sfc-model

  • odl-sfc-provider

  • odl-sfc-netconf

  • odl-restconf-all

  • odl-netconf-topology

  • odl-netconf-connector-all

  • odl-sfc-pot

  • odl-sfc-pot-netconf-renderer

SFC PoT NETCONF Renderer Tutorial
Overview

This tutorial is a simple example of how to enable SFC PoT on NETCONF-capable devices.

Preconditions

The NETCONF-capable device has to support the sfc-ioam-sb-pot.yang model.

It is expected that a NETCONF-capable VPP device has the Honeycomb (Hc2vpp) Java-based agent, which helps to translate between NETCONF and the VPP internal APIs.

More details are here: In-situ OAM: https://github.com/CiscoDevNet/iOAM

Steps

When the SFC PoT NETCONF renderer module is installed, all NETCONF nodes in topology-netconf are processed and all sfc-ioam-sb-pot yang capable nodes with accessible mountpoints are cached.

The first step is to create an RSP for the SFC, as per the SFC guidelines above.

SFC PoT is then enabled on the RSP via RESTCONF to ODL, as outlined above.

Internally, the NETCONF renderer will act on the callback to a modified RSP that has PoT enabled.

SFC PoT parameters are auto-generated using in-situ OAM algorithms and sent to these nodes via NETCONF.

Logical Service Function Forwarder
Overview
Rationale

When the current SFC is deployed in a cloud environment, it is assumed that each switch connected to a Service Function is configured as a Service Function Forwarder and each Service Function is connected to its Service Function Forwarder depending on the Compute Node where the Virtual Machine is located.

Deploying SFC in Cloud Environments

As shown in the picture above, this solution fulfills the basic cloud use cases, for example the ones required in OPNFV Brahmaputra. However, some advanced use cases, such as the transparent migration of VMs, cannot be implemented. The Logical Service Function Forwarder enables the following advanced use cases:

  1. Service Function mobility without service disruption

  2. Service Functions load balancing and failover

As shown in the picture below, the Logical Service Function Forwarder concept extends the current SFC northbound API to provide an abstraction of the underlying Data Center infrastructure. The Data Center underlying network can be abstracted by a single SFF. This single SFF uses the logical port UUID as the data plane locator to connect SFs globally and in a location-transparent manner. SFC makes use of the Genius project to track the location of the SFs’ logical ports.

Single Logical SFF concept

The SFC internally distributes the necessary flow state over the relevant switches based on the internal Data Center topology and the deployment of SFs.

Changes in data model

The Logical Service Function Forwarder concept extends the current SFC northbound API to provide an abstraction of the underlying Data Center infrastructure.

The Logical SFF simplifies the configuration of the current SFC data model by reducing the number of parameters to be configured in every SFF, since the controller discovers those parameters by interacting with the services offered by the Genius project.

The following picture shows the Logical SFF data model. The model is simplified, as most of the configuration parameters of the current SFC data model are discovered at runtime. The complete YANG model can be found here: logical SFF model.

Logical SFF data model
How to configure the Logical SFF

The following are examples to configure the Logical SFF:

curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache"
--data '${JSON}' -X PUT --user
admin:admin http://localhost:8181/restconf/config/service-function:service-functions/

Service Functions JSON.

{
"service-functions": {
    "service-function": [
        {
            "name": "firewall-1",
            "type": "firewall",
            "sf-data-plane-locator": [
                {
                    "name": "firewall-dpl",
                    "interface-name": "eccb57ae-5a2e-467f-823e-45d7bb2a6a9a",
                    "transport": "service-locator:eth-nsh",
                    "service-function-forwarder": "sfflogical1"

                }
            ]
        },
        {
            "name": "dpi-1",
            "type": "dpi",
            "sf-data-plane-locator": [
                {
                    "name": "dpi-dpl",
                    "interface-name": "df15ac52-e8ef-4e9a-8340-ae0738aba0c0",
                    "transport": "service-locator:eth-nsh",
                    "service-function-forwarder": "sfflogical1"
                }
            ]
        }
    ]
}
}
curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache"
--data '${JSON}' -X PUT --user
admin:admin http://localhost:8181/restconf/config/service-function-forwarder:service-function-forwarders/

Service Function Forwarders JSON.

{
"service-function-forwarders": {
    "service-function-forwarder": [
       {
            "name": "sfflogical1"
        }
    ]
}
}
curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache"
--data '${JSON}' -X PUT --user
admin:admin http://localhost:8181/restconf/config/service-function-chain:service-function-chains/

Service Function Chains JSON.

{
"service-function-chains": {
    "service-function-chain": [
        {
            "name": "SFC1",
            "sfc-service-function": [
                {
                    "name": "dpi-abstract1",
                    "type": "dpi"
                },
                {
                    "name": "firewall-abstract1",
                    "type": "firewall"
                }
            ]
        },
        {
            "name": "SFC2",
            "sfc-service-function": [
                {
                    "name": "dpi-abstract1",
                    "type": "dpi"
                }
            ]
        }
    ]
}
}
curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache"
--data '${JSON}' -X PUT --user
admin:admin http://localhost:8181/restconf/config/service-function-path:service-function-paths/

Service Function Paths JSON.

{
"service-function-paths": {
    "service-function-path": [
        {
            "name": "SFP1",
            "service-chain-name": "SFC1",
            "starting-index": 255,
            "symmetric": "true",
            "context-metadata": "NSH1",
            "transport-type": "service-locator:vxlan-gpe"

        }
    ]
}
}

As a result of the above configuration, OpenDaylight renders the needed flows in all involved SFFs. Those flows implement:

  • Two Rendered Service Paths:

    • dpi-1 (SF1), firewall-1 (SF2)

    • firewall-1 (SF2), dpi-1 (SF1)

  • The communication between SFFs and SFs based on eth-nsh

  • The communication between SFFs based on vxlan-gpe

The following picture shows a topology and traffic flow (in green) which corresponds to the above configuration.

Logical SFF Example

The Logical SFF functionality allows OpenDaylight to find out which SFFs hold the SFs involved in a path. In this example the affected SFFs are Node3 and Node4, so the controller renders the flows containing NSH parameters only in those SFFs.

Below are the new flows rendered in Node3 and Node4 which implement the NSH protocol. Every Rendered Service Path is represented by an NSP value. We provisioned a symmetric RSP, so we get two NSPs: 8388613 and 5. Node3 holds the first SF of NSP 8388613 and the last SF of NSP 5. Node4 holds the first SF of NSP 5 and the last SF of NSP 8388613. Both Node3 and Node4 pop the NSH header when the received packet has gone through the last SF of its path.

Rendered flows Node 3

cookie=0x14, duration=59.264s, table=83, n_packets=0, n_bytes=0, priority=250,nsp=5 actions=goto_table:86
cookie=0x14, duration=59.194s, table=83, n_packets=0, n_bytes=0, priority=250,nsp=8388613 actions=goto_table:86
cookie=0x14, duration=59.257s, table=86, n_packets=0, n_bytes=0, priority=550,nsi=254,nsp=5 actions=load:0x8e0a37cc9094->NXM_NX_ENCAP_ETH_SRC[],load:0x6ee006b4c51e->NXM_NX_ENCAP_ETH_DST[],goto_table:87
cookie=0x14, duration=59.189s, table=86, n_packets=0, n_bytes=0, priority=550,nsi=255,nsp=8388613 actions=load:0x8e0a37cc9094->NXM_NX_ENCAP_ETH_SRC[],load:0x6ee006b4c51e->NXM_NX_ENCAP_ETH_DST[],goto_table:87
cookie=0xba5eba1100000203, duration=59.213s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=253,nsp=5 actions=pop_nsh,set_field:6e:e0:06:b4:c5:1e->eth_src,resubmit(,17)
cookie=0xba5eba1100000201, duration=59.213s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=254,nsp=5 actions=load:0x800->NXM_NX_REG6[],resubmit(,220)
cookie=0xba5eba1100000201, duration=59.188s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=255,nsp=8388613 actions=load:0x800->NXM_NX_REG6[],resubmit(,220)
cookie=0xba5eba1100000201, duration=59.182s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=254,nsp=8388613 actions=set_field:0->tun_id,output:6

Rendered Flows Node 4

cookie=0x14, duration=69.040s, table=83, n_packets=0, n_bytes=0, priority=250,nsp=5 actions=goto_table:86
cookie=0x14, duration=69.008s, table=83, n_packets=0, n_bytes=0, priority=250,nsp=8388613 actions=goto_table:86
cookie=0x14, duration=69.040s, table=86, n_packets=0, n_bytes=0, priority=550,nsi=255,nsp=5 actions=load:0xbea93873f4fa->NXM_NX_ENCAP_ETH_SRC[],load:0x214845ea85d->NXM_NX_ENCAP_ETH_DST[],goto_table:87
cookie=0x14, duration=69.005s, table=86, n_packets=0, n_bytes=0, priority=550,nsi=254,nsp=8388613 actions=load:0xbea93873f4fa->NXM_NX_ENCAP_ETH_SRC[],load:0x214845ea85d->NXM_NX_ENCAP_ETH_DST[],goto_table:87
cookie=0xba5eba1100000201, duration=69.029s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=255,nsp=5 actions=load:0x1100->NXM_NX_REG6[],resubmit(,220)
cookie=0xba5eba1100000201, duration=69.029s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=254,nsp=5 actions=set_field:0->tun_id,output:1
cookie=0xba5eba1100000201, duration=68.999s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=254,nsp=8388613 actions=load:0x1100->NXM_NX_REG6[],resubmit(,220)
cookie=0xba5eba1100000203, duration=68.996s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=253,nsp=8388613 actions=pop_nsh,set_field:02:14:84:5e:a8:5d->eth_src,resubmit(,17)

An interesting scenario that shows the strength of the Logical SFF is the migration of an SF from one compute node to another. OpenDaylight learns the new topology by itself and then re-renders the flows on the newly affected SFFs.

Logical SFF - SF Migration Example

In our example, SF2 is moved from Node4 to Node2; OpenDaylight then removes the NSH-specific flows from Node4 and installs them in Node2. The flows below show this effect. Node3 still holds the first SF of NSP 8388613 and the last SF of NSP 5, but Node2 becomes the new holder of the first SF of NSP 5 and the last SF of NSP 8388613.

Rendered Flows Node 3 After Migration

cookie=0x14, duration=64.044s, table=83, n_packets=0, n_bytes=0, priority=250,nsp=5 actions=goto_table:86
cookie=0x14, duration=63.947s, table=83, n_packets=0, n_bytes=0, priority=250,nsp=8388613 actions=goto_table:86
cookie=0x14, duration=64.044s, table=86, n_packets=0, n_bytes=0, priority=550,nsi=254,nsp=5 actions=load:0x8e0a37cc9094->NXM_NX_ENCAP_ETH_SRC[],load:0x6ee006b4c51e->NXM_NX_ENCAP_ETH_DST[],goto_table:87
cookie=0x14, duration=63.947s, table=86, n_packets=0, n_bytes=0, priority=550,nsi=255,nsp=8388613 actions=load:0x8e0a37cc9094->NXM_NX_ENCAP_ETH_SRC[],load:0x6ee006b4c51e->NXM_NX_ENCAP_ETH_DST[],goto_table:87
cookie=0xba5eba1100000201, duration=64.034s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=254,nsp=5 actions=load:0x800->NXM_NX_REG6[],resubmit(,220)
cookie=0xba5eba1100000203, duration=64.034s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=253,nsp=5 actions=pop_nsh,set_field:6e:e0:06:b4:c5:1e->eth_src,resubmit(,17)
cookie=0xba5eba1100000201, duration=63.947s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=255,nsp=8388613 actions=load:0x800->NXM_NX_REG6[],resubmit(,220)
cookie=0xba5eba1100000201, duration=63.942s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=254,nsp=8388613 actions=set_field:0->tun_id,output:2

Rendered Flows Node 2 After Migration

cookie=0x14, duration=56.856s, table=83, n_packets=0, n_bytes=0, priority=250,nsp=5 actions=goto_table:86
cookie=0x14, duration=56.755s, table=83, n_packets=0, n_bytes=0, priority=250,nsp=8388613 actions=goto_table:86
cookie=0x14, duration=56.847s, table=86, n_packets=0, n_bytes=0, priority=550,nsi=255,nsp=5 actions=load:0xbea93873f4fa->NXM_NX_ENCAP_ETH_SRC[],load:0x214845ea85d->NXM_NX_ENCAP_ETH_DST[],goto_table:87
cookie=0x14, duration=56.755s, table=86, n_packets=0, n_bytes=0, priority=550,nsi=254,nsp=8388613 actions=load:0xbea93873f4fa->NXM_NX_ENCAP_ETH_SRC[],load:0x214845ea85d->NXM_NX_ENCAP_ETH_DST[],goto_table:87
cookie=0xba5eba1100000201, duration=56.823s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=255,nsp=5 actions=load:0x1100->NXM_NX_REG6[],resubmit(,220)
cookie=0xba5eba1100000201, duration=56.823s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=254,nsp=5 actions=set_field:0->tun_id,output:4
cookie=0xba5eba1100000201, duration=56.755s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=254,nsp=8388613 actions=load:0x1100->NXM_NX_REG6[],resubmit(,220)
cookie=0xba5eba1100000203, duration=56.750s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=253,nsp=8388613 actions=pop_nsh,set_field:02:14:84:5e:a8:5d->eth_src,resubmit(,17)

Rendered Flows Node 4 After Migration

-- No flows for NSH processing --
Classifier impacts

As previously mentioned, in the Logical SFF rationale, the Logical SFF feature relies on Genius to get the dataplane IDs of the OpenFlow switches, in order to properly steer the traffic through the chain.

Since one of the classifier’s objectives is to steer the packets into the SFC domain, the classifier has to be aware of where the first Service Function is located. If it migrates somewhere else, the classifier table has to be updated accordingly, thus enabling the seamless migration of Service Functions.

For this feature, mobility of the client VM is out of scope, and should be managed by its high-availability module, or VNF manager.

Keep in mind that classification always occurs in the compute node where the client VM (i.e. the traffic origin) is running.

How to attach the classifier to a Logical SFF

In order to leverage this functionality, the classifier has to be configured using a Logical SFF as an attachment-point, specifying within it the neutron port to classify.

The following examples show how to configure an ACL, and a classifier having a Logical SFF as an attachment-point:

Configure an ACL

The following ACL enables traffic intended for port 80 within the subnetwork 192.168.2.0/24, for RSP1 and RSP1-Reverse.

{
  "access-lists": {
    "acl": [
      {
        "acl-name": "ACL1",
        "acl-type": "ietf-access-control-list:ipv4-acl",
        "access-list-entries": {
          "ace": [
            {
              "rule-name": "ACE1",
              "actions": {
                "service-function-acl:rendered-service-path": "RSP1"
              },
              "matches": {
                "destination-ipv4-network": "192.168.2.0/24",
                "source-ipv4-network": "192.168.2.0/24",
                "protocol": "6",
                "source-port-range": {
                    "lower-port": 0
                },
                "destination-port-range": {
                    "lower-port": 80
                }
              }
            }
          ]
        }
      },
      {
        "acl-name": "ACL2",
        "acl-type": "ietf-access-control-list:ipv4-acl",
        "access-list-entries": {
          "ace": [
            {
              "rule-name": "ACE2",
              "actions": {
                "service-function-acl:rendered-service-path": "RSP1-Reverse"
              },
              "matches": {
                "destination-ipv4-network": "192.168.2.0/24",
                "source-ipv4-network": "192.168.2.0/24",
                "protocol": "6",
                "source-port-range": {
                    "lower-port": 80
                },
                "destination-port-range": {
                    "lower-port": 0
                }
              }
            }
          ]
        }
      }
    ]
  }
}
curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache"
--data '${JSON}' -X PUT --user
admin:admin http://localhost:8181/restconf/config/ietf-access-control-list:access-lists/

Configure a classifier JSON

The following JSON provisions a classifier, having a Logical SFF as an attachment point. The value of the field ‘interface’ is where you indicate the neutron ports of the VMs you want to classify.

{
  "service-function-classifiers": {
    "service-function-classifier": [
      {
        "name": "Classifier1",
        "scl-service-function-forwarder": [
          {
            "name": "sfflogical1",
            "interface": "09a78ba3-78ba-40f5-a3ea-1ce708367f2b"
          }
        ],
        "acl": {
            "name": "ACL1",
            "type": "ietf-access-control-list:ipv4-acl"
         }
      }
    ]
  }
}
curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache"
--data '${JSON}' -X PUT --user
admin:admin http://localhost:8181/restconf/config/service-function-classifier:service-function-classifiers/
SFC pipeline impacts

After binding SFC service with a particular interface by means of Genius, as explained in the Genius User Guide, the entry point in the SFC pipeline will be table 82 (SFC_TRANSPORT_CLASSIFIER_TABLE), and from that point, packet processing will be similar to the SFC OpenFlow pipeline, just with another set of specific tables for the SFC service.

This picture shows the SFC pipeline after service integration with Genius:

SFC Logical SFF OpenFlow pipeline

SFC Logical SFF OpenFlow pipeline

Directional data plane locators for symmetric paths
Overview

A symmetric path results from a Service Function Path with the symmetric field set, or when any of the constituent Service Functions is set as bidirectional. Such a path is defined by two Rendered Service Paths, where one of them steers the traffic through the same Service Functions as the other but in the opposite order. These two Rendered Service Paths are said to be symmetric to each other, which gives each path a sense of direction: the Rendered Service Path that corresponds to the same order of Service Functions as that defined on the Service Function Chain is tagged as the forward or up-link path, while the Rendered Service Path that corresponds to the opposite order is tagged as the reverse or down-link path.

Directional data plane locators allow the use of different interfaces or interface details between the Service Function Forwarder and the Service Function depending on the direction of the path for which they are being used. This is relevant for Service Functions that would otherwise have no way of discerning the direction of the traffic, such as legacy bump-in-the-wire network devices.

                    +-----------------------------------------------+
                    |                                               |
                    |                                               |
                    |                      SF                       |
                    |                                               |
                    |  sf-forward-dpl                sf-reverse-dpl |
                    +--------+-----------------------------+--------+
                             |                             |
                     ^       |      +              +       |      ^
                     |       |      |              |       |      |
                     |       |      |              |       |      |
                     +       |      +              +       |      +
                Forward Path | Reverse Path   Forward Path | Reverse Path
                     +       |      +              +       |      +
                     |       |      |              |       |      |
                     |       |      |              |       |      |
                     |       |      |              |       |      |
                     +       |      v              v       |      +
                             |                             |
                 +-----------+-----------------------------------------+
  Forward Path   |     sff-forward-dpl               sff-reverse-dpl   |   Forward Path
+--------------> |                                                     | +-------------->
                 |                                                     |
                 |                         SFF                         |
                 |                                                     |
<--------------+ |                                                     | <--------------+
  Reverse Path   |                                                     |   Reverse Path
                 +-----------------------------------------------------+

As shown in the previous figure, the forward path egress from the Service Function Forwarder towards the Service Function is defined by the sff-forward-dpl and sf-forward-dpl data plane locators. The forward path ingress from the Service Function to the Service Function Forwarder is defined by the sf-reverse-dpl and sff-reverse-dpl data plane locators. For the reverse path, it’s the opposite: the sff-reverse-dpl and sf-reverse-dpl define the egress from the Service Function Forwarder to the Service Function, and the sf-forward-dpl and sff-forward-dpl define the ingress into the Service Function Forwarder from the Service Function.

Note

Directional data plane locators are only supported in combination with the SFC OF Renderer at this time.

Configuration

Directional data plane locators are configured within the service-function-forwarder in the service-function-dictionary entity, which describes the association between a Service Function Forwarder and Service Functions:

service-function-forwarder.yang
     list service-function-dictionary {
         key "name";
         leaf name {
           type sfc-common:sf-name;
           description
               "The name of the service function.";
         }
         container sff-sf-data-plane-locator {
           description
             "SFF and SF data plane locators to use when sending
              packets from this SFF to the associated SF";
           leaf sf-dpl-name {
             type sfc-common:sf-data-plane-locator-name;
             description
               "The SF data plane locator to use when sending
                packets to the associated service function.
                Used both as forward and reverse locators for
                paths of a symmetric chain.";
           }
           leaf sff-dpl-name {
             type sfc-common:sff-data-plane-locator-name;
             description
               "The SFF data plane locator to use when sending
                packets to the associated service function.
                Used both as forward and reverse locators for
                paths of a symmetric chain.";
           }
           leaf sf-forward-dpl-name {
             type sfc-common:sf-data-plane-locator-name;
             description
               "The SF data plane locator to use when sending
                packets to the associated service function
                on the forward path of a symmetric chain";
           }
           leaf sf-reverse-dpl-name {
             type sfc-common:sf-data-plane-locator-name;
             description
               "The SF data plane locator to use when sending
                packets to the associated service function
                on the reverse path of a symmetric chain";
           }
           leaf sff-forward-dpl-name {
             type sfc-common:sff-data-plane-locator-name;
             description
               "The SFF data plane locator to use when sending
                packets to the associated service function
                on the forward path of a symmetric chain.";
           }
           leaf sff-reverse-dpl-name {
             type sfc-common:sff-data-plane-locator-name;
             description
               "The SFF data plane locator to use when sending
                packets to the associated service function
                on the reverse path of a symmetric chain.";
           }
         }
     }
Example

The following configuration example is based on the Logical SFF configuration example. Only the Service Function and Service Function Forwarder configuration changes with respect to that example:

curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache"
--data '${JSON}' -X PUT --user
admin:admin http://localhost:8181/restconf/config/service-function:service-functions/

Service Functions JSON.

{
"service-functions": {
    "service-function": [
        {
            "name": "firewall-1",
            "type": "firewall",
            "sf-data-plane-locator": [
                {
                    "name": "sf-firewall-net-A-dpl",
                    "interface-name": "eccb57ae-5a2e-467f-823e-45d7bb2a6a9a",
                    "transport": "service-locator:mac",
                    "service-function-forwarder": "sfflogical1"

                },
                {
                    "name": "sf-firewall-net-B-dpl",
                    "interface-name": "7764b6f1-a5cd-46be-9201-78f917ddee1d",
                    "transport": "service-locator:mac",
                    "service-function-forwarder": "sfflogical1"

                }
            ]
        },
        {
            "name": "dpi-1",
            "type": "dpi",
            "sf-data-plane-locator": [
                {
                    "name": "sf-dpi-net-A-dpl",
                    "interface-name": "df15ac52-e8ef-4e9a-8340-ae0738aba0c0",
                    "transport": "service-locator:mac",
                    "service-function-forwarder": "sfflogical1"
                },
                {
                    "name": "sf-dpi-net-B-dpl",
                    "interface-name": "1bb09b01-422d-4ccf-8d7a-9ebf00d1a1a5",
                    "transport": "service-locator:mac",
                    "service-function-forwarder": "sfflogical1"
                }
            ]
        }
    ]
}
}
curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache"
--data '${JSON}' -X PUT --user
admin:admin http://localhost:8181/restconf/config/service-function-forwarder:service-function-forwarders/

Service Function Forwarders JSON.

{
"service-function-forwarders": {
    "service-function-forwarder": [
        {
            "name": "sfflogical1"
            "sff-data-plane-locator": [
                {
                    "name": "sff-firewall-net-A-dpl",
                    "data-plane-locator": {
                        "interface-name": "eccb57ae-5a2e-467f-823e-45d7bb2a6a9a",
                        "transport": "service-locator:mac"
                    }
                },
                {
                    "name": "sff-firewall-net-B-dpl",
                    "data-plane-locator": {
                        "interface-name": "7764b6f1-a5cd-46be-9201-78f917ddee1d",
                        "transport": "service-locator:mac"
                    }
                },
                {
                    "name": "sff-dpi-net-A-dpl",
                    "data-plane-locator": {
                        "interface-name": "df15ac52-e8ef-4e9a-8340-ae0738aba0c0",
                        "transport": "service-locator:mac"
                    }
                },
                {
                    "name": "sff-dpi-net-B-dpl",
                    "data-plane-locator": {
                        "interface-name": "1bb09b01-422d-4ccf-8d7a-9ebf00d1a1a5",
                        "transport": "service-locator:mac"
                    }
                }
            ],
            "service-function-dictionary": [
                {
                    "name": "firewall-1",
                    "sff-sf-data-plane-locator": {
                        "sf-forward-dpl-name": "sf-firewall-net-A-dpl",
                        "sf-reverse-dpl-name": "sf-firewall-net-B-dpl",
                        "sff-forward-dpl-name": "sff-firewall-net-A-dpl",
                        "sff-reverse-dpl-name": "sff-firewall-net-B-dpl",
                    }
                },
                {
                    "name": "dpi-1",
                    "sff-sf-data-plane-locator": {
                        "sf-forward-dpl-name": "sf-dpi-net-A-dpl",
                        "sf-reverse-dpl-name": "sf-dpi-net-B-dpl",
                        "sff-forward-dpl-name": "sff-dpi-net-A-dpl",
                        "sff-reverse-dpl-name": "sff-dpi-net-B-dpl",
                    }
                }
            ]
        }
    ]
}
}

In comparison with the Logical SFF example, notice that each Service Function is configured with two data plane locators instead of one, so that each can be used in a different direction of the path. To specify which locator is used in which direction, the Service Function Forwarder configuration is also more extensive compared to the previous example.

When comparing this example with the Logical SFF one, note that the Service Function Forwarder is configured with data plane locators and that they hold the same interface name values as the corresponding Service Function interfaces. This is because in the Logical SFF case, a single logical interface fully describes the attachment of a Service Function Forwarder to a Service Function on both the Service Function and Service Function Forwarder sides. For non-Logical SFF scenarios, the data plane locators would be expected to have different values, as seen in other examples throughout this user guide. For example, if MAC addresses are specified in the locators, the Service Function would have a different MAC address than the Service Function Forwarder.

As a result of the overall configuration, two Rendered Service Paths are implemented. The forward path:

                      +------------+                +-------+
                      | firewall-1 |                | dpi- 1 |
                      +---+---+----+                +--+--+-+
                          ^   |                        ^  |
                 net-A-dpl|   |net-B-dpl      net-A-dpl|  |net-B-dpl
                          |   |                        |  |
+----------+              |   |                        |  |             +----------+
| client A +--------------+   +------------------------+  +------------>+ server B |
+----------+                                                            +----------+

And the reverse path:

                      +------------+                +-------+
                      | firewall 1 |                | dpi-1 |
                      +---+---+----+                +--+--+-+
                          |   ^                        |  ^
                 net-A-dpl|   |net-B-dpl      net-A-dpl|  |net-B-dpl
                          |   |                        |  |
+----------+              |   |                        |  |             +----------+
| client A +<-------------+   +------------------------+  +-------------+ server B |
+----------+                                                            +----------+

Consider the following notes to put the example in context:

  • The classification function is omitted from the illustration.

  • The forward path is up-link traffic from a client in network A to a server in network B.

  • The reverse path is down-link traffic from a server in network B to a client in network A.

  • The service functions might be legacy bump-in-the-wire network devices that need to use different interfaces for each network.

SFC Statistics User Guide

Statistics can be queried for Rendered Service Paths created on OVS bridges. Support for Service Function Forwarders and Service Functions will be added in the future, as will support for VPP and IOS-XE devices.

To use SFC statistics, the ‘odl-sfc-statistics’ Karaf feature needs to be installed.
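
For example, from the Karaf console:

feature:install odl-sfc-statistics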

Statistics are queried by sending a RESTCONF RPC message to ODL. For RSPs, it is possible to query statistics either for one individual RSP or for all RSPs, as follows:

Querying statistics for a specific RSP:

curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache"
--data '{ "input": { "name" : "path1-Path-42" } }' -X POST --user admin:admin
http://localhost:8181/restconf/operations/sfc-statistics-operations:get-rsp-statistics

Querying statistics for all RSPs:

curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache"
--data '{ "input": { } }' -X POST --user admin:admin
http://localhost:8181/restconf/operations/sfc-statistics-operations:get-rsp-statistics

The following is the sort of output that can be expected for each RSP.

{
    "output": {
        "statistics": [
            {
                "name": "sfc-path-1sf1sff-Path-34",
                "statistic-by-timestamp": [
                    {
                        "service-statistic": {
                            "bytes-in": 0,
                            "bytes-out": 0,
                            "packets-in": 0,
                            "packets-out": 0
                        },
                        "timestamp": 1518561500480
                    }
                ]
            }
        ]
    }
}
NETCONF User Guide

The guide has moved. Please navigate to the document from here.

Developer Guide

Overview

Integrating Animal Sniffer with OpenDaylight projects

This section provides the information required to set up OpenDaylight projects with the Maven Animal Sniffer plugin for testing API compatibility with OpenJDK.

Steps to set up the Animal Sniffer plugin with your project
  1. Clone odlparent and check out the required branch. The example below uses the branch ‘origin/master/2.0.x’.

git clone https://git.opendaylight.org/gerrit/odlparent
cd odlparent
git checkout origin/master/2.0.x
  2. Modify the file odlparent/pom.xml to install the Animal Sniffer plugin as shown in the example below, or refer to the odlparent gerrit patch.

<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>animal-sniffer-maven-plugin</artifactId>
  <version>1.16</version>
  <configuration>
      <signature>
          <groupId>org.codehaus.mojo.signature</groupId>
          <artifactId>java18</artifactId>
          <version>1.0</version>
      </signature>
  </configuration>
  <executions>
      <execution>
          <id>animal-sniffer</id>
          <phase>verify</phase>
          <goals>
              <goal>check</goal>
          </goals>
      </execution>
      <execution>
          <id>check-java-version</id>
          <phase>package</phase>
          <goals>
              <goal>build</goal>
          </goals>
          <configuration>
            <signature>
              <groupId>org.codehaus.mojo.signature</groupId>
              <artifactId>java18</artifactId>
              <version>1.0</version>
            </signature>
          </configuration>
      </execution>
  </executions>
</plugin>
  3. Run mvn clean install in odlparent.

mvn clean install
  4. Clone the project to be tested with the plugin. As shown in the example in the yangtools gerrit patch, modify the relevant pom.xml files to reference the checked-out version of odlparent. As shown in the example below, change the version to 2.0.6-SNAPSHOT, or to whichever 2.0.x-SNAPSHOT version of odlparent is checked out.

<parent>
    <groupId>org.opendaylight.odlparent</groupId>
    <artifactId>odlparent</artifactId>
    <version>2.0.6-SNAPSHOT</version>
    <relativePath/>
</parent>
  5. Run mvn clean install in your project.

mvn clean install
  6. Run mvn animal-sniffer:check on your project and fix any relevant issues.

mvn animal-sniffer:check

Project-specific Developer Guides

Distribution Version reporting
Overview

This section provides an overview of the odl-distribution-version feature.

A remote user of OpenDaylight usually has access to RESTCONF and NETCONF northbound interfaces, but does not have access to the system OpenDaylight is running on. OpenDaylight has released multiple versions, including Service Releases, and there are incompatible changes between them. In order to know which YANG modules to use, which bugs to expect and which workarounds to apply, such a user would need to know the exact version of at least one OpenDaylight component.

There are indirect ways to deduce such a version, but the direct way is enabled by the odl-distribution-version feature. An administrator can specify version strings, which are then available to users via NETCONF, or via RESTCONF if OpenDaylight is configured to initiate a NETCONF connection to its config subsystem northbound interface.

By default, users have write access to the config subsystem, so they can add, modify or delete any version strings present there. Admins can only influence whether the feature is installed, and its initial values.

Config subsystem is local only, not cluster aware, so each member reports versions independently. This is suitable for heterogeneous clusters. On homogeneous clusters, make sure you set and check every member.

Key APIs and Interfaces

The current implementation relies heavily on the config-parent parent POM file from the Controller project.

YANG model for config subsystem

Throughout this chapter, model denotes a YANG module, and module denotes an item in the config subsystem module list.

Version functionality relies on the config subsystem and its config YANG model. The YANG model odl-distribution-version adds an identity odl-version and augments /config:modules/module/configuration, adding a new case for the odl-version type. This case contains a single leaf, version, which holds the version string.

The config subsystem can hold multiple modules; the version string should contain the version of the OpenDaylight component corresponding to the module name. As this is pure metadata with no consequence on OpenDaylight behavior, there is no prescribed scheme for choosing config module names, but see the default configuration file for examples.

Java API

Each config module needs to come with Java classes which override customValidation() and createInstance(). Version-related modules have no impact on OpenDaylight internal behavior, so the methods return void and a dummy closeable respectively, without any side effects.
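
For illustration, here is a minimal sketch of such a module. The class and base-class names are hypothetical stand-ins for the classes generated from the config subsystem YANG model (the real base class is generated by the config-parent build); only the two overridden methods matter.

// Hypothetical sketch of a version-reporting config module. The abstract base
// class stands in for the generated Abstract*Module class; it is defined here
// only to keep the example self-contained.
abstract class AbstractOdlVersionModule {
    public abstract void customValidation();
    public abstract AutoCloseable createInstance();
}

public class OdlVersionModule extends AbstractOdlVersionModule {

    @Override
    public void customValidation() {
        // Version strings are pure metadata, so there is nothing to validate.
    }

    @Override
    public AutoCloseable createInstance() {
        // No runtime behavior is needed; return a dummy closeable without side effects.
        return () -> { };
    }
}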

Default config file

Initial version values are set via the config file odl-version.xml, which is created in $KARAF_HOME/etc/opendaylight/karaf/ upon installation of the odl-distribution-version feature. If an admin wants to use different content, a file with the desired content has to be created there before the feature installation happens.

By default, the config file defines two config modules, named odl-distribution-version and odl-odlparent-version.

Currently the default version values are set to Maven property strings (as opposed to valid values), as the needed new functionality did not make it into Controller project in Boron. See Bug number 6003.

Karaf Feature

The odl-distribution-version feature is currently the only feature defined in feature repository of artifactId features-distribution, which is available (transitively) in OpenDaylight Karaf distribution.

RESTCONF usage

The OpenDaylight config subsystem NETCONF northbound is not made available just by installing odl-distribution-version, but most other feature installations will enable it. RESTCONF interfaces are enabled by installing the odl-restconf feature, but that does not allow access to the config subsystem by itself.

On single node deployments, installation of odl-netconf-connector-ssh is recommended, which will configure the controller-config device and its MD-SAL mount point. See the clustering documentation on how to create similar devices for member nodes, as the controller-config name is not unique in that context.

Assuming single node deployment and user located on the same system, here is an example curl command accessing odl-odlparent-version config module:

curl 127.0.0.1:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/controller-config/yang-ext:mount/config:modules/module/odl-distribution-version:odl-version/odl-odlparent-version
Distribution features
Overview

This section provides an overview of odl-integration-compatible-with-all and odl-integration-all features.

The Integration/Distribution project produces a Karaf 4 distribution which gives users access to many Karaf features provided by upstream OpenDaylight projects. Users are free to install an arbitrary subset of those features, but not every feature combination is expected to work properly.

Some features are pro-active, which means OpenDaylight, in contact with other network elements, starts driving changes in the network even without prompting by users, in order to satisfy the initial conditions their use case expects. Such activity from one feature may in turn affect the behavior of another feature.

In some cases, there exist features which offer different implementations of the same service; they may fail to initialize properly (e.g. failing to bind a port already bound by the other feature).

The Integration/Test project maintains system test (CSIT) jobs. Aside from testing scenarios with only a minimal set of features installed (-only- jobs), the scenarios are also tested with a large set of features installed (-all- jobs).

In order to define a proper set of features to test with, Integration/Distribution project defines two “aggregate” features. Note that these features are not intended for production use, so the feature repository which defines them is not enabled by default.

The content of these features is determined by upstream OpenDaylight contributions, with Integration/Test providing insight on observed compatibility relations. The Integration/Distribution team is focused only on making sure the build process is reliable.

Feature repositories
features-index

This feature repository is enabled by default. It does not refer to any new features directly; instead it refers to upstream feature repositories, enabling any feature contained therein to be available for installation.

features-test

This feature repository defines the two aggregate features. To enable this repository, change the featuresRepositories line of the org.apache.karaf.features.cfg file by copy-pasting the features-index value and editing the name.

Karaf features

The two aggregate features define sets of user-facing features based on compatibility requirements. Note that if the compatibility relation differs between single node and cluster deployments, the single node point of view takes precedence.

odl-integration-all

This feature contains the largest set of user-facing features which may affect each other’s operation, but the set does not affect the usability of the Karaf infrastructure.

Note that port binding conflicts and “server is unhealthy” status of config subsystem are considered to affect usability, as is a failure of Restconf to respond to GET on /restconf/modules with HTTP status 200.

This feature is used in verification process for Integration/Distribution contributions.

odl-integration-compatible-with-all

This feature contains the largest set of user-facing features which are not pro-active and do not affect each other’s operation.

Installing this set together with just one more feature from odl-integration-all should still result in a fully operational installation, as one pro-active feature should not lead to any conflicts. This should also hold if the single added feature is outside odl-integration-all, even if it is one of the conflicting implementations (and no such implementation is in odl-integration-all).

This feature is used in the aforementioned -all- CSIT jobs.

Neutron Service Developer Guide
Overview

This Karaf feature (odl-neutron-service) provides integration support for OpenStack Neutron via the OpenDaylight ML2 mechanism driver. The Neutron Service is only one of the components necessary for OpenStack integration. It defines YANG models for OpenStack Neutron data models and northbound API via REST API and YANG model RESTCONF.

Developers who want to add a new provider for new OpenStack Neutron extensions/services (Neutron constantly adds new extensions/services, and OpenDaylight will keep up with them) need to communicate with this Neutron Service or add models to it. Adding new extensions/services themselves to the Neutron Service requires adding new YANG data models, which is out of scope of this document: this guide is for developers who will use the feature to build something separate, not for those developing code for the feature itself.

Neutron Service Architecture
Neutron Service Architecture

Neutron Service Architecture

The Neutron Service defines YANG models for OpenStack Neutron integration. When OpenStack admins/users request changes (creation/update/deletion) of Neutron resources, e.g., Neutron network, Neutron subnet, Neutron port, the corresponding YANG model within OpenDaylight will be modified. The OpenDaylight OpenStack provider subscribes to changes on those models and is notified of those modifications through MD-SAL when changes are made. Then the provider performs the necessary tasks to realize OpenStack integration. How to realize it (or even whether to realize it) is up to each provider. The Neutron Service itself does not take care of it.

How to Write a SB Neutron Consumer

In Boron, there is only one option for SB Neutron Consumers:

  • Listening for changes via the Neutron YANG model

Until Beryllium there was another way, via the legacy I*Aware interfaces. As of Boron, those interfaces were eliminated, so all SB Neutron Consumers have to use the Neutron YANG model.

Neutron YANG models

Neutron service defines YANG models for Neutron. The details can be found at

Basically those models are based on OpenStack Neutron API definitions. For exact definitions, the OpenStack Neutron source code needs to be referred to, as the above documentation doesn’t always cover the necessary details. There is nothing special about utilizing those Neutron YANG models. The basic procedure is:

  1. subscribe for changes made to the model

  2. respond to the data change notification for each model

Note

Currently there is no way to refuse the requested configuration at this point. That is left to future work.

public class NeutronNetworkChangeListener implements DataChangeListener, AutoCloseable {
    private static final Logger LOG = LoggerFactory.getLogger(NeutronNetworkChangeListener.class);

    private ListenerRegistration<DataChangeListener> registration;
    private DataBroker db;

    public NeutronNetworkChangeListener(DataBroker db){
        this.db = db;
        // create identity path to register on service startup
        InstanceIdentifier<Network> path = InstanceIdentifier
                .create(Neutron.class)
                .child(Networks.class)
                .child(Network.class);
        LOG.debug("Register listener for Neutron Network model data changes");
        // register for Data Change Notification
        registration =
                this.db.registerDataChangeListener(LogicalDatastoreType.CONFIGURATION, path, this, DataChangeScope.ONE);

    }

    @Override
    public void onDataChanged(
            AsyncDataChangeEvent<InstanceIdentifier<?>, DataObject> changes) {
        LOG.trace("Data changes : {}",changes);

        // handle data change notification
        Object[] subscribers = NeutronIAwareUtil.getInstances(INeutronNetworkAware.class, this);
        createNetwork(changes, subscribers);
        updateNetwork(changes, subscribers);
        deleteNetwork(changes, subscribers);
    }
}
Neutron configuration

Since Boron, new configuration models allow OpenDaylight to tell OpenStack neutron/networking-odl its configuration/capability.

hostconfig

This is for OpenDaylight to tell per-node configuration to Neutron. In particular, it is used heavily by pseudo agent port binding.

The model definition can be found at

How to populate this for pseudo agent port binding is documented at

Neutron extension config

In Boron this is experimental. The model definition can be found at

Each Neutron Service provider has its own feature set. Some support the full feature set of OpenStack, others only a subset. Even with the same supported Neutron API, some functionality may or may not be supported. So there is a need for a way for OpenDaylight to tell networking-odl its capability, so that networking-odl can initialize Neutron properly based on the reported capability.

Neutron Logger

There is another small Karaf feature, odl-neutron-logger, which logs changes of the Neutron YANG models; it can be used for debugging/auditing.

It also helps to understand how to listen for the changes.

Neutron Northbound
How to add new API support

OpenStack Neutron is a moving target. It continuously adds new features as new REST APIs. Here are the basic steps to add support for a new API:

In the Neutron Northbound project:

  • Add new YANG model for it under neutron/model/src/main/yang and update neutron.yang

  • Add northbound API for it, and neutron-spi

    • Implement Neutron<New API>Request.java and Neutron<New API>Northbound.java under neutron/northbound-api/src/main/java/org/opendaylight/neutron/northbound/api/

    • Implement INeutron<New API>CRUD.java and new data structure if any under neutron/neutron-spi/src/main/java/org/opendaylight/neutron/spi/

    • update neutron/neutron-spi/src/main/java/org/opendaylight/neutron/spi/NeutronCRUDInterfaces.java to wire new CRUD interface

    • Add unit tests, Neutron<New structure>JAXBTest.java under neutron/neutron-spi/src/test/java/org/opendaylight/neutron/spi/

  • update neutron/northbound-api/src/main/java/org/opendaylight/neutron/northbound/api/NeutronNorthboundRSApplication.java to wire new northbound api to RSApplication

  • Add transcriber, Neutron<New API>Interface.java under transcriber/src/main/java/org/opendaylight/neutron/transcriber/

  • update transcriber/src/main/java/org/opendaylight/neutron/transcriber/NeutronTranscriberProvider.java to wire a new transcriber

    • Add integration tests Neutron<New API>Tests.java under integration/test/src/test/java/org/opendaylight/neutron/e2etest/

    • update integration/test/src/test/java/org/opendaylight/neutron/e2etest/ITNeutronE2E.java to run the newly added tests.

In OpenStack networking-odl

  • Add new driver (or plugin) for new API with tests.

In a southbound Neutron Provider

  • implement the actual backend to realize the new API by listening to the related YANG models.

How to write transcriber

For each Neutron data object, there is a Neutron*Interface defined within the transcriber artifact that will write that object to the MD-SAL configuration datastore.

All Neutron*Interface classes extend AbstractNeutronInterface, in which two methods are defined:

  • one takes the Neutron object as input, and will create a data object from it.

  • one takes a UUID as input, and will create a data object containing the UUID.

protected abstract T toMd(S neutronObject);
protected abstract T toMd(String uuid);

In addition the AbstractNeutronInterface class provides several other helper methods (addMd, updateMd, removeMd), which handle the actual writing to the configuration datastore.

The semantics of the toMD() methods

Each of the Neutron YANG models defines structures containing data. Further, each YANG-modeled structure has its own builder. A particular toMD() method instantiates the correct builder, fills in the builder’s properties from the corresponding values of the Neutron object, and then creates the YANG-modeled structure via the build() method.

As an example, the toMD() code for Neutron Networks is presented below:

protected Network toMd(NeutronNetwork network) {
    NetworkBuilder networkBuilder = new NetworkBuilder();
    networkBuilder.setAdminStateUp(network.getAdminStateUp());
    if (network.getNetworkName() != null) {
        networkBuilder.setName(network.getNetworkName());
    }
    if (network.getShared() != null) {
        networkBuilder.setShared(network.getShared());
    }
    if (network.getStatus() != null) {
        networkBuilder.setStatus(network.getStatus());
    }
    if (network.getSubnets() != null) {
        List<Uuid> subnets = new ArrayList<Uuid>();
        for( String subnet : network.getSubnets()) {
            subnets.add(toUuid(subnet));
        }
        networkBuilder.setSubnets(subnets);
    }
    if (network.getTenantID() != null) {
        networkBuilder.setTenantId(toUuid(network.getTenantID()));
    }
    if (network.getNetworkUUID() != null) {
        networkBuilder.setUuid(toUuid(network.getNetworkUUID()));
    } else {
        logger.warn("Attempting to write neutron network without UUID");
    }
    return networkBuilder.build();
}
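
For comparison, the UUID-only variant described earlier can be sketched as follows; this is a simplified illustration rather than the project’s actual transcriber code, reusing the NetworkBuilder and toUuid helpers seen above.

// Sketch of toMd(String uuid): build a Network data object carrying only the UUID.
protected Network toMd(String uuid) {
    NetworkBuilder networkBuilder = new NetworkBuilder();
    networkBuilder.setUuid(toUuid(uuid));
    return networkBuilder.build();
}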
ODL Parent Developer Guide
Parent POMs
Overview

The ODL Parent component for OpenDaylight provides a number of Maven parent POMs which allow Maven projects to be easily integrated in the OpenDaylight ecosystem. Technically, the aim of projects in OpenDaylight is to produce Karaf features, and these parent projects provide common support for the different types of projects involved.

These parent projects are:

  • odlparent-lite — the basic parent POM for Maven modules which don’t produce artifacts (e.g. aggregator POMs)

  • odlparent — the common parent POM for Maven modules containing Java code

  • bundle-parent — the parent POM for Maven modules producing OSGi bundles

The following parent projects are deprecated, but still used in Carbon:

  • feature-parent — the parent POM for Maven modules producing Karaf 3 feature repositories

  • karaf-parent — the parent POM for Maven modules producing Karaf 3 distributions

The following parent projects are new in Carbon, for Karaf 4 support (which won’t be complete until Nitrogen):

  • single-feature-parent — the parent POM for Maven modules producing a single Karaf 4 feature

  • feature-repo-parent — the parent POM for Maven modules producing Karaf 4 feature repositories

  • karaf4-parent — the parent POM for Maven modules producing Karaf 4 distributions

odlparent-lite

This is the base parent for all OpenDaylight Maven projects and modules. It provides the following, notably to allow publishing artifacts to Maven Central:

  • license information;

  • organization information;

  • issue management information (a link to our Bugzilla);

  • continuous integration information (a link to our Jenkins setup);

  • default Maven plugins (maven-clean-plugin, maven-deploy-plugin, maven-install-plugin, maven-javadoc-plugin with HelpMojo support, maven-project-info-reports-plugin, maven-site-plugin with Asciidoc support, jdepend-maven-plugin);

  • distribution management information.

It also defines two profiles which help during development:

  • q (-Pq), the quick profile, which disables tests, code coverage, Javadoc generation, code analysis, etc. — anything which isn’t necessary to build the bundles and features (see this blog post for details);

  • addInstallRepositoryPath (-DaddInstallRepositoryPath=…/karaf/system) which can be used to drop a bundle in the appropriate Karaf location, to enable hot-reloading of bundles during development (see this blog post for details).

For modules which don’t produce any useful artifacts (e.g. aggregator POMs), you should add the following to avoid processing artifacts:

<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-deploy-plugin</artifactId>
            <configuration>
                <skip>true</skip>
            </configuration>
        </plugin>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-install-plugin</artifactId>
            <configuration>
                <skip>true</skip>
            </configuration>
        </plugin>
    </plugins>
</build>
odlparent

This inherits from odlparent-lite and mainly provides dependency and plugin management for OpenDaylight projects.

If you use any of the following libraries, you should rely on odlparent to provide the appropriate versions:

  • Akka (and Scala)

  • Apache Commons:

    • commons-codec

    • commons-fileupload

    • commons-io

    • commons-lang

    • commons-lang3

    • commons-net

  • Apache Shiro

  • Guava

  • JAX-RS with Jersey

  • JSON processing:

    • GSON

    • Jackson

  • Logging:

    • Logback

    • SLF4J

  • Netty

  • OSGi:

    • Apache Felix

    • core OSGi dependencies (core, compendium…)

  • Testing:

    • Hamcrest

    • JSON assert

    • JUnit

    • Mockito

    • Pax Exam

    • PowerMock

  • XML/XSL:

    • Xerces

    • XML APIs

Note

This list isn’t exhaustive. It’s also not cast in stone; if you’d like to add a new dependency (or migrate a dependency), please contact the mailing list.

odlparent also enforces some Checkstyle verification rules. In particular, it enforces the common license header used in all OpenDaylight code:

/*
 * Copyright © ${year} ${holder} and others.  All rights reserved.
 *
 * This program and the accompanying materials are made available under the
 * terms of the Eclipse Public License v1.0 which accompanies this distribution,
 * and is available at http://www.eclipse.org/legal/epl-v10.html
 */

where “${year}” is initially the first year of publication, then (after a year has passed) the first and latest years of publication, separated by commas (e.g. “2014, 2016”), and “${holder}” is the initial copyright holder (typically, the first author’s employer). “All rights reserved” is optional.

If you need to disable this license check, e.g. for files imported under another license (EPL-compatible of course), you can override the maven-checkstyle-plugin configuration. features-test does this for its CustomBundleUrlStreamHandlerFactory class, which is ASL-licensed:

<plugin>
    <artifactId>maven-checkstyle-plugin</artifactId>
    <executions>
        <execution>
            <id>check-license</id>
            <goals>
                <goal>check</goal>
            </goals>
            <phase>process-sources</phase>
            <configuration>
                <configLocation>check-license.xml</configLocation>
                <headerLocation>EPL-LICENSE.regexp.txt</headerLocation>
                <includeResources>false</includeResources>
                <includeTestResources>false</includeTestResources>
                <sourceDirectory>${project.build.sourceDirectory}</sourceDirectory>
                <excludes>
                    <!-- Skip Apache Licensed files -->
                    org/opendaylight/odlparent/featuretest/CustomBundleUrlStreamHandlerFactory.java
                </excludes>
                <failsOnError>false</failsOnError>
                <consoleOutput>true</consoleOutput>
            </configuration>
        </execution>
    </executions>
</plugin>
bundle-parent

This inherits from odlparent and enables functionality useful for OSGi bundles:

  • maven-javadoc-plugin is activated, to build the Javadoc JAR;

  • maven-source-plugin is activated, to build the source JAR;

  • maven-bundle-plugin is activated (including extensions), to build OSGi bundles (using the “bundle” packaging).

In addition to this, JUnit is included as a default dependency in “test” scope.

features-parent

This inherits from odlparent and enables functionality useful for Karaf features:

  • karaf-maven-plugin is activated, to build Karaf features — but for OpenDaylight, projects need to use “jar” packaging (not “feature” or “kar”);

  • features.xml files are processed from templates stored in src/main/features/features.xml;

  • Karaf features are tested after build to ensure they can be activated in a Karaf container.

The features.xml processing allows versions to be omitted from certain feature dependencies, and replaced with “{{VERSION}}”. For example:

<features name="odl-mdsal-${project.version}" xmlns="http://karaf.apache.org/xmlns/features/v1.2.0"
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xsi:schemaLocation="http://karaf.apache.org/xmlns/features/v1.2.0 http://karaf.apache.org/xmlns/features/v1.2.0">

    <repository>mvn:org.opendaylight.odlparent/features-odlparent/{{VERSION}}/xml/features</repository>

    [...]
    <feature name='odl-mdsal-broker-local' version='${project.version}' description="OpenDaylight :: MDSAL :: Broker">
        <feature version='${yangtools.version}'>odl-yangtools-common</feature>
        <feature version='${mdsal.version}'>odl-mdsal-binding-dom-adapter</feature>
        <feature version='${mdsal.model.version}'>odl-mdsal-models</feature>
        <feature version='${project.version}'>odl-mdsal-common</feature>
        <feature version='${config.version}'>odl-config-startup</feature>
        <feature version='${config.version}'>odl-config-netty</feature>
        <feature version='[3.3.0,4.0.0)'>odl-lmax</feature>
        [...]
        <bundle>mvn:org.opendaylight.controller/sal-dom-broker-config/{{VERSION}}</bundle>
        <bundle start-level="40">mvn:org.opendaylight.controller/blueprint/{{VERSION}}</bundle>
        <configfile finalname="${config.configfile.directory}/${config.mdsal.configfile}">mvn:org.opendaylight.controller/md-sal-config/{{VERSION}}/xml/config</configfile>
    </feature>

As illustrated, versions can be omitted in this way for repository dependencies, bundle dependencies and configuration files. They must be specified traditionally (either hard-coded, or using Maven properties) for feature dependencies.

karaf-parent

This allows building a Karaf 3 distribution, typically for local testing purposes. Any runtime-scoped feature dependencies will be included in the distribution, and the karaf.localFeature property can be used to specify the boot feature (in addition to standard).

single-feature-parent

This inherits from odlparent and enables functionality useful for Karaf 4 features:

  • karaf-maven-plugin is activated, to build Karaf features, typically with “feature” packaging (“kar” is also supported);

  • feature.xml files are generated based on the compile-scope dependencies defined in the POM, optionally initialised from a stub in src/main/feature/feature.xml.

  • Karaf features are tested after build to ensure they can be activated in a Karaf container.

The feature.xml processing adds transitive dependencies by default, which allows features to be defined using only the most significant dependencies (those that define the feature); other requirements are determined automatically as long as they exist as Maven dependencies.

“configfiles” need to be defined both as Maven dependencies (with the appropriate type and classifier) and as <configfile> elements in the feature.xml stub.

Other features which a feature depends on need to be defined as Maven dependencies with type “xml” and classifier “features” (note the plural here).

feature-repo-parent

This inherits from odlparent and enables functionality useful for Karaf 4 feature repositories. It follows the same principles as single-feature-parent, but is designed specifically for repositories and should be used only for this type of artifacts.

It builds a feature repository referencing all the (feature) dependencies listed in the POM.

karaf4-parent

This allows building a Karaf 4 distribution, typically for local testing purposes. Any runtime-scoped feature dependencies will be included in the distribution, and the karaf.localFeature property can be used to specify the boot feature (in addition to standard).

Features (for Karaf 3)

The ODL Parent component for OpenDaylight provides a number of Karaf 3 features which can be used by other Karaf 3 features to use certain third-party upstream dependencies.

These features are:

  • Akka features (in the features-akka repository):

    • odl-akka-all — all Akka bundles;

    • odl-akka-scala-2.11 — Scala runtime for OpenDaylight;

    • odl-akka-system-2.4 — Akka actor framework bundles;

    • odl-akka-clustering-2.4 — Akka clustering bundles and dependencies;

    • odl-akka-leveldb-0.7 — LevelDB;

    • odl-akka-persistence-2.4 — Akka persistence;

  • general third-party features (in the features-odlparent repository):

    • odl-netty-4 — all Netty bundles;

    • odl-guava-18 — Guava 18;

    • odl-guava-21 — Guava 21 (not intended for use in Carbon);

    • odl-lmax-3 — LMAX Disruptor;

    • odl-triemap-0.2 — Concurrent Trie HashMap.

To use these, you need to declare a dependency on the appropriate repository in your features.xml file:

<repository>mvn:org.opendaylight.odlparent/features-odlparent/{{VERSION}}/xml/features</repository>

and then include the feature, e.g.:

<feature name='odl-mdsal-broker-local' version='${project.version}' description="OpenDaylight :: MDSAL :: Broker">
    [...]
    <feature version='[3.3.0,4.0.0)'>odl-lmax</feature>
    [...]
</feature>

You also need to depend on the features repository in your POM:

<dependency>
    <groupId>org.opendaylight.odlparent</groupId>
    <artifactId>features-odlparent</artifactId>
    <classifier>features</classifier>
    <type>xml</type>
</dependency>

assuming the appropriate dependency management:

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.opendaylight.odlparent</groupId>
            <artifactId>odlparent-artifacts</artifactId>
            <version>1.8.0-SNAPSHOT</version>
            <scope>import</scope>
            <type>pom</type>
        </dependency>
    </dependencies>
</dependencyManagement>

(the version number there is appropriate for Carbon). For the time being you also need to depend separately on the individual JARs as compile-time dependencies to build your dependent code; the relevant dependencies are managed in odlparent’s dependency management.

The suggested version ranges are as follows:
  • odl-netty: [4.0.37,4.1.0) or [4.0.37,5.0.0);

  • odl-guava: [18,19) (if your code is ready for it, [19,20) is also available, but the current default version of Guava in OpenDaylight is 18);

  • odl-lmax: [3.3.4,4.0.0)

Features (for Karaf 4)

There are equivalent features to all the Karaf 3 features, for Karaf 4. The repositories use “features4” instead of “features”, and the features use “odl4” instead of “odl”.

The following new features are specific to Karaf 4:

  • Karaf wrapper features (also in the features4-odlparent repository) — these can be used to pull in a Karaf feature using a Maven dependency in a POM:

    • odl-karaf-feat-feature — the Karaf feature feature;

    • odl-karaf-feat-jdbc — the Karaf jdbc feature;

    • odl-karaf-feat-jetty — the Karaf jetty feature;

    • odl-karaf-feat-war — the Karaf war feature.

To use these, all you need to do now is add the appropriate dependency in your feature POM; for example:

<dependency>
    <groupId>org.opendaylight.odlparent</groupId>
    <artifactId>odl4-guava-18</artifactId>
    <classifier>features</classifier>
    <type>xml</type>
</dependency>

assuming the appropriate dependency management:

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.opendaylight.odlparent</groupId>
            <artifactId>odlparent-artifacts</artifactId>
            <version>1.8.0-SNAPSHOT</version>
            <scope>import</scope>
            <type>pom</type>
        </dependency>
    </dependencies>
</dependencyManagement>

(the version number there is appropriate for Carbon). We no longer use version ranges; the feature dependencies all use the odlparent version (but you should rely on the artifacts POM).

Service Function Chaining
OpenDaylight Service Function Chaining (SFC) Overview

OpenDaylight Service Function Chaining (SFC) provides the ability to define an ordered list of network services (e.g. firewalls, load balancers). These services are then “stitched” together in the network to create a service chain. This project provides the infrastructure (chaining logic, APIs) needed for ODL to provision a service chain in the network and an end-user application for defining such chains.

  • ACE - Access Control Entry

  • ACL - Access Control List

  • SCF - Service Classifier Function

  • SF - Service Function

  • SFC - Service Function Chain

  • SFF - Service Function Forwarder

  • SFG - Service Function Group

  • SFP - Service Function Path

  • RSP - Rendered Service Path

  • NSH - Network Service Header

SFC Classifier Control and Data Plane Developer Guide
Overview

A description of the classifier can be found in: https://datatracker.ietf.org/doc/draft-ietf-sfc-architecture/

Classifier manages everything from starting the packet listener to creation (and removal) of appropriate ip(6)tables rules and marking received packets accordingly. Its functionality is available only on Linux as it leverages NetfilterQueue, which provides access to packets matched by an iptables rule. Classifier requires root privileges to be able to operate.

So far it is capable of processing ACL for MAC addresses, ports, IPv4 and IPv6. Supported protocols are TCP and UDP.

Classifier Architecture

The Python code is located in the project repository at sfc-py/common/classifier.py.

Note

The classifier assumes that the Rendered Service Path (RSP) already exists in ODL when an ACL referencing it is obtained.

  1. sfc_agent receives an ACL and passes it for processing to the classifier

  2. the RSP (its SFF locator) referenced by ACL is requested from ODL

  3. if the RSP exists in the ODL then ACL based iptables rules for it are applied

After this process is over, every packet successfully matched to an iptables rule (i.e. successfully classified) will be NSH encapsulated and forwarded to a related SFF, which knows how to traverse the RSP.

Rules are created using the appropriate iptables command. If the Access Control Entry (ACE) rule is MAC address related, both iptables and ip6tables rules are issued. If the ACE rule is IPv4 address related, only iptables rules are issued; likewise, only ip6tables rules are issued for IPv6.

Note

iptables raw table contains all created rules

Information regarding already registered RSP(s) is stored in an internal data store, which is represented as a dictionary:

{rsp_id: {'name': <rsp_name>,
          'chains': {'chain_name': (<ipv>,),
                     ...
                     },
          'sff': {'ip': <ip>,
                  'port': <port>,
                  'starting-index': <starting-index>,
                  'transport-type': <transport-type>
                  },
          },
...
}
  • name: name of the RSP

  • chains: dictionary of iptables chains related to the RSP with information about IP version for which the chain exists

  • SFF: SFF forwarding parameters

    • ip: SFF IP address

    • port: SFF port

    • starting-index: index given to packet at first RSP hop

    • transport-type: encapsulation protocol

Key APIs and Interfaces

This feature exposes an API to configure the classifier (corresponding to service-function-classifier.yang).

API Reference Documentation

See: sfc-model/src/main/yang/service-function-classifier.yang

SFC-OVS Plug-in
Overview

SFC-OVS provides integration of SFC with Open vSwitch (OVS) devices. Integration is realized through mapping of SFC objects (like SF, SFF, Classifier, etc.) to OVS objects (like Bridge, TerminationPoint=Port/Interface). The mapping takes care of automatic instantiation (setup) of corresponding object whenever its counterpart is created. For example, when a new SFF is created, the SFC-OVS plug-in will create a new OVS bridge and when a new OVS Bridge is created, the SFC-OVS plug-in will create a new SFF.

SFC-OVS Architecture

SFC-OVS uses the OVSDB MD-SAL Southbound API for getting/writing information from/to OVS devices. The core functionality consists of two types of mapping:

  1. mapping from OVS to SFC

    • OVS Bridge is mapped to SFF

    • OVS TerminationPoints are mapped to SFF DataPlane locators

  2. mapping from SFC to OVS

    • SFF is mapped to OVS Bridge

    • SFF DataPlane locators are mapped to OVS TerminationPoints

SFC < — > OVS mapping flow diagram

SFC < — > OVS mapping flow diagram

Key APIs and Interfaces
  • SFF to OVS mapping API (methods to convert SFF object to OVS Bridge and OVS TerminationPoints)

  • OVS to SFF mapping API (methods to convert OVS Bridge and OVS TerminationPoints to SFF object)

SFC Southbound REST Plug-in
Overview

The Southbound REST Plug-in is used to send configuration from datastore down to network devices supporting a REST API (i.e. they have a configured REST URI). It supports POST/PUT/DELETE operations, which are triggered accordingly by changes in the SFC data stores.

  • Access Control List (ACL)

  • Service Classifier Function (SCF)

  • Service Function (SF)

  • Service Function Group (SFG)

  • Service Function Schedule Type (SFST)

  • Service Function Forwarder (SFF)

  • Rendered Service Path (RSP)

Southbound REST Plug-in Architecture
  1. listeners - used to listen on changes in the SFC data stores

  2. JSON exporters - used to export JSON-encoded data from binding-aware data store objects

  3. tasks - used to collect REST URIs of network devices and to send JSON-encoded data down to these devices

Southbound REST Plug-in Architecture diagram

Southbound REST Plug-in Architecture diagram

Key APIs and Interfaces

The plug-in provides a Southbound REST API designated for listening REST devices. It supports POST/PUT/DELETE operations. The operation (with the corresponding JSON-encoded data) is sent to the unique REST URL belonging to a certain data type.

  • Access Control List (ACL): http://<host>:<port>/config/ietf-acl:access-lists/access-list/

  • Service Function (SF): http://<host>:<port>/config/service-function:service-functions/service-function/

  • Service Function Group (SFG): http://<host>:<port>/config/service-function:service-function-groups/service-function-group/

  • Service Function Schedule Type (SFST): http://<host>:<port>/config/service-function-scheduler-type:service-function-scheduler-types/service-function-scheduler-type/

  • Service Function Forwarder (SFF): http://<host>:<port>/config/service-function-forwarder:service-function-forwarders/service-function-forwarder/

  • Rendered Service Path (RSP): http://<host>:<port>/operational/rendered-service-path:rendered-service-paths/rendered-service-path/

Therefore, network devices willing to receive REST messages must listen on these REST URLs.
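
As a rough sketch of what such a listening device could look like (this is not part of SFC itself; the port, handled URL and logging behavior are arbitrary placeholders), a minimal JDK-based receiver for the Service Function URL might be:

import com.sun.net.httpserver.HttpServer;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Minimal sketch of a device-side receiver for the Southbound REST plug-in.
// It accepts POST/PUT/DELETE on the Service Function URL and simply logs the payload.
public final class SfRestReceiver {

    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/config/service-function:service-functions/service-function/", exchange -> {
            ByteArrayOutputStream payload = new ByteArrayOutputStream();
            try (InputStream body = exchange.getRequestBody()) {
                byte[] buffer = new byte[4096];
                int read;
                while ((read = body.read(buffer)) != -1) {
                    payload.write(buffer, 0, read);
                }
            }
            System.out.println(exchange.getRequestMethod() + " " + exchange.getRequestURI()
                    + " -> " + new String(payload.toByteArray(), StandardCharsets.UTF_8));
            // Acknowledge the operation with an empty response.
            exchange.sendResponseHeaders(204, -1);
            exchange.close();
        });
        server.start();
    }

    private SfRestReceiver() {
        // not instantiated
    }
}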

Note

Service Classifier Function (SCF) URL does not exist, because the SCF is considered to be one of the network devices willing to receive REST messages. However, there is a listener hooked on the SCF data store, which triggers POST/PUT/DELETE operations on the ACL object, because the ACL is referenced in service-function-classifier.yang

Service Function Load Balancing Developer Guide
Overview

The SFC Load-Balancing feature implements load balancing of Service Functions, rather than a one-to-one mapping between a Service Function Forwarder and a Service Function.

Load Balancing Architecture

Service Function Groups (SFG) can replace Service Functions (SF) in the Rendered Path model. A Service Path can only be defined using SFGs or SFs, but not a combination of both.

Relevant objects in the YANG model are as follows:

  1. Service-Function-Group-Algorithm:

    Service-Function-Group-Algorithms {
        Service-Function-Group-Algorithm {
            String name
            String type
        }
    }
    
    Available types: ALL, SELECT, INDIRECT, FAST_FAILURE
    
  2. Service-Function-Group:

    Service-Function-Groups {
        Service-Function-Group {
            String name
            String serviceFunctionGroupAlgorithmName
            String type
            String groupId
            Service-Function-Group-Element {
                String service-function-name
                int index
            }
        }
    }
    
  3. ServiceFunctionHop: holds a reference to a name of SFG (or SF)

Key APIs and Interfaces

This feature enhances the existing SFC API.

REST API commands include:

  • For Service Function Group (SFG): read existing SFG, write new SFG, delete existing SFG, add Service Function (SF) to SFG, and delete SF from SFG

  • For Service Function Group Algorithm (SFG-Alg): read, write, delete

  • Bundle providing the REST API: sfc-sb-rest

  • Service Function Groups and Algorithms are defined in: sfc-sfg and sfc-sfg-alg

  • Relevant Java API: SfcProviderServiceFunctionGroupAPI, SfcProviderServiceFunctionGroupAlgAPI

Service Function Scheduling Algorithms
Overview

When creating the Rendered Service Path (RSP), earlier releases of SFC chose the first available service function from a list of service function names. Now a new API is introduced to allow developers to develop their own scheduling algorithms when creating the RSP. Four scheduling algorithms (Random, Round Robin, Load Balance and Shortest Path) are provided as examples for the API definition. This guide gives a simple introduction of how to develop service function scheduling algorithms based on the current extensible framework.

Architecture

The following figure illustrates the service function selection framework and algorithms.

SF Scheduling Algorithm framework Architecture

SF Scheduling Algorithm framework Architecture

The YANG Model defines the Service Function Scheduling Algorithm type identities and how they are stored in the MD-SAL data store for the scheduling algorithms.

The MD-SAL data store stores all information for the scheduling algorithms, including their types, names, and status.

The API provides basic operations to manage the information stored in the MD-SAL data store, such as putting new items into it, getting all scheduling algorithms, etc.

The RESTCONF API provides APIs to manage the information stored in the MD-SAL data store through RESTful calls.

The Service Function Chain Renderer gets the enabled scheduling algorithm type, and schedules the service functions using the corresponding scheduling algorithm implementation.

Key APIs and Interfaces

While developing a new Service Function Scheduling Algorithm, a new class should be added that extends the base scheduler class SfcServiceFunctionSchedulerAPI and implements the abstract function (a hedged sketch follows the parameter list below):

public List<String> scheduleServiceFuntions(ServiceFunctionChain chain, int serviceIndex).

  • ``ServiceFunctionChain chain``: the chain which will be rendered

  • ``int serviceIndex``: the initial service index for this rendered service path

  • ``List<String>``: a list of service function names which are scheduled by the Service Function Scheduling Algorithm.
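
The following is an illustrative sketch only: the base class is not extended and the candidate service function names are passed in as a plain list, because the exact SFC model accessors are not covered in this guide. A real implementation would extend SfcServiceFunctionSchedulerAPI and resolve the names from the ServiceFunctionChain.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Rough sketch of a custom scheduling algorithm (round robin). A real
// implementation would extend SfcServiceFunctionSchedulerAPI and resolve the
// candidate service function names from the ServiceFunctionChain.
public class RoundRobinSfScheduler {

    private final AtomicInteger next = new AtomicInteger(0);

    // serviceIndex is the initial service index of the rendered service path;
    // it is unused in this simplified sketch.
    public List<String> scheduleServiceFunctions(List<String> candidateSfNames, int serviceIndex) {
        List<String> scheduled = new ArrayList<>();
        if (candidateSfNames.isEmpty()) {
            return scheduled;
        }
        // Pick one service function per hop, cycling through the candidates.
        for (int hop = 0; hop < candidateSfNames.size(); hop++) {
            int index = next.getAndIncrement() % candidateSfNames.size();
            scheduled.add(candidateSfNames.get(index));
        }
        return scheduled;
    }
}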

API Reference Documentation

Please refer to the API docs generated in the mdsal-apidocs.

SFC Proof of Transit Developer Guide
Overview

SFC Proof of Transit implements the in-situ OAM (iOAM) Proof of Transit verification for SFCs and other paths. The implementation is broadly divided into the North-bound (NB) and the South-bound (SB) side of the application. The NB side is primarily charged with augmenting the RSP with user inputs for enabling PoT on the RSP, while the SB side is dedicated to auto-generating the SFC PoT parameters, periodically refreshing these parameters and delivering them to the NETCONF- and iOAM-capable nodes (e.g. VPP instances).

Architecture

The following diagram gives the high level overview of the different parts.

SFC Proof of Transit Internal Architecture

SFC Proof of Transit Internal Architecture

The Proof of Transit feature is enabled by two sub-features:

  1. ODL SFC PoT: feature:install odl-sfc-pot

  2. ODL SFC PoT NETCONF Renderer: feature:install odl-sfc-pot-netconf-renderer

Details

The following classes and handlers are involved.

  1. The class (SfcPotRpc) sets up RPC handlers for enabling the feature.

  2. There are new RPC handlers for two new RPCs (EnableSfcIoamPotRenderedPath and DisableSfcIoamPotRenderedPath) and effected via SfcPotRspProcessor class.

  3. When a user configures via a POST RPC call to enable Proof of Transit on a particular SFC (via the Rendered Service Path), the configuration drives the creation of necessary augmentations to the RSP (to modify the RSP) to effect the Proof of Transit configurations.

  4. The augmentation meta-data added to the RSP are defined in the sfc-ioam-nb-pot.yang file.

    Note

    There are no auto generated configuration parameters added to the RSP to avoid RSP bloat.

  5. Adding SFC Proof of Transit meta-data to the RSP is done in the SfcPotRspProcessor class.

  6. Once the RSP is updated, the RSP data listeners in the SB renderer modules (odl-sfc-pot-netconf-renderer) will listen to the RSP changes and send out configurations to the necessary network nodes that are part of the SFC.

  7. The configurations are handled mainly in the SfcPotAPI, SfcPotConfigGenerator, SfcPotPolyAPI, SfcPotPolyClass and SfcPotPolyClassAPI classes.

  8. There is a sfc-ioam-sb-pot.yang file that shows the format of the iOAM PoT configuration data sent to each node of the SFC.

  9. A timer is started based on the “ioam-pot-refresh-period” value in the SB renderer module that handles configuration refresh periodically.

  10. The SB and timer handling are done in the odl-sfc-pot-netconf-renderer module. Note: This is NOT done in the NB odl-sfc-pot module to avoid periodic updates to the RSP itself.

  11. ODL creates a new profile of a set of keys and secrets at a constant rate and updates an internal data store with the configuration. The controller labels the configurations per RSP as “even” or “odd” – and the controller cycles between “even” and “odd” labeled profiles. The rate at which these profiles are communicated to the nodes is configurable and in future, could be automatic based on profile usage. Once the profile has been successfully communicated to all nodes (all Netconf transactions completed), the controller sends an “enable pot-profile” request to the ingress node.

  12. The nodes are to maintain two profiles (an even and an odd pot-profile). One profile is currently active and in use, and one profile is about to get used. A flag in the packet is indicating whether the odd or even pot-profile is to be used by a node. This is to ensure that during profile change we’re not disrupting the service. I.e. if the “odd” profile is active, the controller can communicate the “even” profile to all nodes and only if all the nodes have received it, the controller will tell the ingress node to switch to the “even” profile. Given that the indicator travels within the packet, all nodes will switch to the “even” profile. The “even” profile gets active on all nodes – and nodes are ready to receive a new “odd” profile.

  13. A HashedTimerWheel implementation is used to support the periodic configuration refresh. The default refresh period is 5 seconds to start with (a sketch of this mechanism follows this list).

  14. Depending on the last updated profile, the odd or the even profile is updated in the fresh timer pop and the configurations are sent down appropriately.

  15. SfcPotTimerQueue, SfcPotTimerWheel, SfcPotTimerTask, SfcPotTimerData and SfcPotTimerThread are the classes that handle the Proof of Transit protocol profile refresh implementation.

  16. The RSP data store is NOT changed periodically; the timer and configuration refresh modules are present in the SB renderer module handler, hence there are no scale or RSP churn issues affecting the design.
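
The refresh mechanism can be pictured with the sketch below. It assumes Netty's io.netty.util.HashedWheelTimer as the hashed timer wheel and uses a hypothetical refreshPotProfiles() callback; the renderer's actual classes are the SfcPotTimer* classes listed above.

import java.util.concurrent.TimeUnit;

import io.netty.util.HashedWheelTimer;
import io.netty.util.Timeout;
import io.netty.util.Timer;
import io.netty.util.TimerTask;

// Illustrative sketch of a periodic PoT configuration refresh driven by a
// hashed timer wheel. On every timer pop the next ("even" or "odd") profile
// would be regenerated and pushed to the SFC nodes, then the timer is re-armed.
public class PotProfileRefresher {

    private static final long DEFAULT_REFRESH_SECONDS = 5;

    private final Timer timer = new HashedWheelTimer();

    public void start() {
        schedule();
    }

    private void schedule() {
        timer.newTimeout(new TimerTask() {
            @Override
            public void run(Timeout timeout) {
                refreshPotProfiles();
                schedule();
            }
        }, DEFAULT_REFRESH_SECONDS, TimeUnit.SECONDS);
    }

    private void refreshPotProfiles() {
        // Hypothetical placeholder: regenerate keys/secrets for the next profile
        // and deliver them to the nodes over NETCONF.
    }

    public void stop() {
        timer.stop();
    }
}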

The following diagram gives the overall sequence diagram of the interactions between the different classes.

SFC Proof of Transit Sequence Diagram

SFC Proof of Transit Sequence Diagram

Logical Service Function Forwarder
Overview
Rationale

When the current SFC is deployed in a cloud environment, it is assumed that each switch connected to a Service Function is configured as a Service Function Forwarder and each Service Function is connected to its Service Function Forwarder depending on the Compute Node where the Virtual Machine is located. This solution allows the basic cloud use cases to be fulfilled, such as the ones required in OPNFV Brahmaputra; however, some advanced use cases, like the transparent migration of VMs, cannot be implemented. The Logical Service Function Forwarder enables the following advanced use cases:

  1. Service Function mobility without service disruption

  2. Service Functions load balancing and failover

As shown in the picture below, the Logical Service Function Forwarder concept extends the current SFC northbound API to provide an abstraction of the underlying Data Center infrastructure. The Data Center underlying network can be abstracted by a single SFF. This single SFF uses the logical port UUID as a data plane locator to connect SFs globally and in a location-transparent manner. SFC makes use of the Genius project to track the location of the SF’s logical ports.

Single Logical SFF concept

The SFC internally distributes the necessary flow state over the relevant switches based on the internal Data Center topology and the deployment of SFs.

Changes in data model

The Logical Service Function Forwarder concept extends the current SFC northbound API to provide an abstraction of the underlying Data Center infrastructure.

The Logical SFF simplifies the configuration of the current SFC data model by reducing the number of parameters to be configured in every SFF, since the controller will discover those parameters by interacting with the services offered by the Genius project.

The following picture shows the Logical SFF data model. The model gets simplified as most of the configuration parameters of the current SFC data model are discovered in runtime. The complete YANG model can be found here logical SFF model.

Logical SFF data model

There are other minor changes in the data model; the SFC encapsulation type has been added or moved in the following files:

Interaction with Genius

Feature sfc-genius functionally enables SFC integration with Genius. This allows configuring a Logical SFF and SFs attached to this Logical SFF via logical interfaces (i.e. neutron ports) that are registered with Genius.

As shown in the following picture, SFC will interact with Genius project’s services to provide the Logical SFF functionality.

SFC and Genius

The following are the main Genius’ services used by SFC:

  1. Interaction with Interface Tunnel Manager (ITM)

  2. Interaction with the Interface Manager

  3. Interaction with Resource Manager

SFC Service registration with Genius

Genius handles the coexistence of different network services. As such, SFC service is registered with Genius performing the following actions:

SFC Service Binding

As soon as a Service Function associated to the Logical SFF is involved in a Rendered Service Path, SFC service is bound to its logical interface via Genius Interface Manager. This has the effect of forwarding every incoming packet from the Service Function to the SFC pipeline of the attached switch, as long as it is not consumed by a different bound service with higher priority.

SFC Service Terminating Action

As soon as SFC service is bound to the interface of a Service Function for the first time on a specific switch, a terminating service action is configured on that switch via Genius Interface Tunnel Manager. This has the effect of forwarding every incoming packet from a different switch to the SFC pipeline as long as the traffic is VXLAN encapsulated on VNI 0.

The following sequence diagrams depict how the overall process takes place:

sfc-genius at RSP render

SFC genius module interaction with Genius at RSP creation.

sfc-genius at RSP removal

SFC genius module interaction with Genius at RSP removal.

For more information on how Genius allows different services to coexist, see the Genius User Guide.

Path Rendering

During path rendering, Genius is queried to obtain needed information, such as:

  • Location of a logical interface on the data-plane.

  • Tunnel interface for a specific pair of source and destination switches.

  • Egress OpenFlow actions to output packets to a specific interface.

See RSP Rendering section for more information.

VM migration

Upon VM migration, its logical interface is first unregistered and then registered with Genius, possibly at a new physical location. sfc-genius reacts to this by re-rendering all the RSPs in which the associated SF participates, if any.

The following picture illustrates the process:

sfc-genius at VM migration

SFC genius module at VM migration.

RSP Rendering changes for paths using the Logical SFF
  1. Construction of the auxiliary rendering graph

    When starting the rendering of a RSP, the SFC renderer builds an auxiliary graph with information about the required hops for traffic traversing the path. RSP processing is achieved by iteratively evaluating each of the entries in the graph, writing the required flows in the proper switch for each hop.

    It is important to note that the graph includes both traffic ingress (i.e. traffic entering into the first SF) and traffic egress (i.e. traffic leaving the chain from the last SF) as hops. Therefore, the number of entries in the graph equals the number of SFs in the chain plus one.

    Auxiliary rendering graph example (sfc-genius-example-auxiliary-graph.png)

    The process of rendering a chain when the switches involved are part of the Logical SFF also starts with the construction of the hop graph. The difference is that when the SFs used in the chain are using a logical interface, the SFC renderer will also retrieve from Genius the DPIDs for the switches, storing them in the graph. In this context, those switches are the ones in the compute nodes each SF is hosted on at the time the chain is rendered.

    Auxiliary rendering graph example for a Logical SFF (sfc-genius-example-auxiliary-graph-logical-sff.png)
  2. New transport processor

    Transport processors are classes which calculate and write the correct flows for a chain. Each transport processor specializes on writing the flows for a given combination of transport type and SFC encapsulation.

    A specific transport processor has been created for paths using a Logical SFF. A particularity of this transport processor is that its use is determined not only by the transport / SFC encapsulation combination, but also by the fact that the chain is using a Logical SFF. The actual condition evaluated for selecting the Logical SFF transport processor is that the SFs in the chain are using logical interface locators, and that the DPIDs for those locators can be successfully retrieved from Genius.

    Transport processors class diagram (transport_processors_class_diagram.png)

    The main differences between the Logical SFF transport processor and other processors are the following:

    • Instead of srcSff, dstSff fields in the hops graph (which are all equal in a path using a Logical SFF), the Logical SFF transport processor uses previously stored srcDpnId, dstDpnId fields in order to know whether an actual hop between compute nodes must be performed or not (it is possible that two consecutive SFs are collocated in the same compute node).

    • When a hop between switches really has to be performed, it relies on Genius for getting the actions to perform that hop. The retrieval of those actions involve two steps:

      • First, Genius’ Overlay Tunnel Manager module is used in order to retrieve the target interface for a jump between the source and the destination DPIDs.

      • Then, egress instructions for that interface are retrieved from Genius’s Interface Manager.

    • There are no next hop rules between compute nodes, only egress instructions (the transport zone tunnels have all the required routing information).

    • Next hop information towards SFs uses MAC addresses which are also retrieved from the Genius datastore.

    • The Logical SFF transport processor performs NSH decapsulation in the last switch of the chain.

  3. Post-rendering update of the operational data model

    When the rendering of a chain finishes successfully, the Logical SFF Transport Processor perform two operational datastore modifications in order to provide some relevant runtime information about the chain. The exposed information is the following:

    • Rendered Service Path state: when the chain uses a Logical SFF, DPIDs for the switches in the compute nodes on which the SFs participating in the chain are hosted are added to the hop information.

    • SFF state: A new list of all RSPs which use each DPID has been added. It is updated on each RSP addition / deletion.

Classifier impacts

This section explains the changes made to the SFC classifier, enabling it to be attached to Logical SFFs.

Refer to the following image to better understand the concept, and the required steps to implement the feature.

SFC classifier integration with Genius.

As stated in the SFC User Guide, the classifier needs to be provisioned using logical interfaces as attachment points.

When that happens, MDSAL will trigger an event in the odl-sfc-scf-openflow feature (i.e. the sfc-classifier), which is responsible for installing the classifier flows in the classifier switches.

The first step of the process is to bind the interfaces to classify in Genius, in order for the desired traffic (originating from the VMs having the provisioned attachment-points) to enter the SFC pipeline. This will make traffic reach table 82 (the SFC classifier table), coming from table 0 (the table managed by Genius, shared by all applications).

The next step is deciding which flows to install in the SFC classifier table. A table-miss flow will be installed, having a MatchAny clause, whose action is to jump to Genius's egress dispatcher table. This enables traffic intended for other applications to still be processed.

The flow that allows the SFC pipeline to continue is added next, having a higher match priority than the table-miss flow. This flow has two responsibilities:

  1. Push the NSH header, along with its metadata (required within the SFC pipeline)

    Uses the specified ACL matches as match criteria, and pushes NSH along with its metadata into the Action list.

  2. Advance the SFC pipeline

    Forward the traffic to the first Service Function in the RSP. This steers packets into the SFC domain, and how it is done depends on whether the classifier is co-located with the first service function in the specified RSP.

    Should the classifier be co-located (i.e. in the same compute node), a new instruction is appended to the flow, telling all matches to jump to the transport ingress table.

    If not, Genius’s tunnel manager service is queried to get the tunnel interface connecting the classifier node with the compute node where the first Service Function is located, and finally, Genius’s interface manager service is queried asking for instructions on how to reach that tunnel interface.

    These actions are then appended to the Action list already containing the push NSH and push NSH metadata actions, and written as an Apply-Actions instruction into the datastore, as sketched below.
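
The decision logic of the two steps above can be summarized in the following sketch. It is an illustration only, not the actual odl-sfc-scf-openflow code: the TunnelManager and InterfaceManager interfaces, the ClassifierFlow holder and all string values are hypothetical placeholders standing in for the real Genius lookups and OpenFlow constructs.

// Hypothetical sketch only: these class and method names are placeholders and do
// not correspond to the actual odl-sfc-scf-openflow or Genius APIs.
import java.util.ArrayList;
import java.util.List;

public class ClassifierFlowSketch {

    /** Hypothetical facade over Genius's tunnel manager lookup. */
    interface TunnelManager {
        String getTunnelInterface(String classifierDpnId, String firstSfDpnId);
    }

    /** Hypothetical facade over Genius's interface manager lookup. */
    interface InterfaceManager {
        List<String> getEgressActions(String tunnelInterface);
    }

    /** Simplified view of the classifier flow: an Apply-Actions list plus an optional Goto-Table. */
    static class ClassifierFlow {
        final List<String> applyActions = new ArrayList<>();
        String gotoTable;
    }

    ClassifierFlow buildRspFlow(String classifierDpnId, String firstSfDpnId,
            TunnelManager tunnelManager, InterfaceManager interfaceManager) {
        ClassifierFlow flow = new ClassifierFlow();

        // Responsibility 1: push the NSH header and its metadata.
        flow.applyActions.add("push-nsh");
        flow.applyActions.add("load-nsh-metadata");

        // Responsibility 2: advance the SFC pipeline.
        if (classifierDpnId.equals(firstSfDpnId)) {
            // Co-located case: jump to the transport ingress table on the same switch.
            flow.gotoTable = "transport-ingress";
        } else {
            // Remote case: resolve the tunnel towards the first SF's compute node and
            // append the egress actions reported for that tunnel interface.
            String tunnel = tunnelManager.getTunnelInterface(classifierDpnId, firstSfDpnId);
            flow.applyActions.addAll(interfaceManager.getEgressActions(tunnel));
        }
        return flow;
    }
}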

YANG Tools Developer Guide
Overview

YANG Tools is a set of libraries and tooling that provides support for using YANG in Java (or other JVM-based language) projects and applications.

YANG Tools provides the following features in OpenDaylight:

  • parsing of YANG sources and semantic inference of relationships across YANG models as defined in RFC6020

  • representation of YANG-modeled data in Java

    • Normalized Node representation - a DOM-like tree model which uses a conceptual meta-model more tailored to YANG and OpenDaylight use-cases than a standard XML DOM model allows for.

  • serialization / deserialization of YANG-modeled data driven by YANG models

Architecture

The YANG Tools project consists of the following logical subsystems:

  • Commons - a set of general-purpose code which is not specific to YANG, but is also useful outside the YANG Tools implementation.

  • YANG Model and Parser - YANG semantic model and lexical and semantic parser of YANG models, which creates an in-memory, cross-referenced representation of YANG models that is used by other components to determine their behaviour based on the model.

  • YANG Data - Definition of Normalized Node APIs and Data Tree APIs, reference implementation of these APIs and implementation of XML and JSON codecs for Normalized Nodes.

  • YANG Maven Plugin - a Maven plugin which integrates the YANG parser into the Maven build lifecycle and provides a code-generation framework for components that want to generate code or other artefacts based on a YANG model.

Concepts

The project defines base concepts and helper classes which are project-agnostic and can be used outside of the YANG Tools project scope.

Components
  • yang-common

  • yang-data-api

  • yang-data-codec-gson

  • yang-data-codec-xml

  • yang-data-impl

  • yang-data-jaxen

  • yang-data-transform

  • yang-data-util

  • yang-maven-plugin

  • yang-maven-plugin-it

  • yang-maven-plugin-spi

  • yang-model-api

  • yang-model-export

  • yang-model-util

  • yang-parser-api

  • yang-parser-impl

YANG Model API

Class diagram of yang model API

_images/yang-model-api.png

YANG Model API

YANG Parser

The YANG Statement Parser works on the idea of statement concepts as defined in RFC6020, section 6.3. It introduces a basic ModelStatement and StatementDefinition, following the RFC6020 idea of having a sequence of statements, where every statement contains a keyword and zero or one argument. ModelStatement is extended by DeclaredStatement (as it comes from the source, e.g. a YANG source) and EffectiveStatement, which contains other substatements and tends to represent the result of semantic processing of other statements (uses, augment for YANG). IdentifierNamespace represents the common superclass for YANG model namespaces.

The input of the YANG Statement Parser is a collection of StatementStreamSource objects. The StatementStreamSource interface is used for inference of the effective model and is required to emit its statements using the supplied StatementWriter. Each source (e.g. a YANG source) has to be processed in three steps in order to emit different statements for each step. This package provides support for the various namespaces used across the statement parser in order to map relations during the declaration phase.

Currently, there are two implementations of StatementStreamSource in Yangtools:

  • YangStatementSourceImpl - intended for YANG sources

  • YinStatementSourceImpl - intended for YIN sources

YANG Data API

Class diagram of yang data API

_images/yang-data-api.png

YANG Data API

YANG Data Codecs

Codecs which enable serialization of NormalizedNodes into YANG-modeled data in XML or JSON format and deserialization of YANG-modeled data in XML or JSON format into NormalizedNodes.

YANG Maven Plugin

A Maven plugin which integrates the YANG parser into the Maven build lifecycle and provides a code-generation framework for components that want to generate code or other artefacts based on a YANG model.

How to / Tutorials
Working with YANG Model

The first thing you need to do if you want to work with YANG models is to instantiate a SchemaContext object. This object type describes one or more parsed YANG modules.

In order to create it, you need to utilize the YANG statement parser, which takes one or more StatementStreamSource objects as input and then produces the SchemaContext object.

StatementStreamSource object contains the source file information. It has two implementations, one for YANG sources - YangStatementSourceImpl, and one for YIN sources - YinStatementSourceImpl.

Here is an example of creating StatementStreamSource objects for YANG files, providing them to the YANG statement parser and building the SchemaContext:

StatementStreamSource yangModuleSource = new YangStatementSourceImpl("/example.yang", false);
StatementStreamSource yangModuleSource2 = new YangStatementSourceImpl("/example2.yang", false);

CrossSourceStatementReactor.BuildAction reactor = YangInferencePipeline.RFC6020_REACTOR.newBuild();
reactor.addSources(yangModuleSource, yangModuleSource2);

SchemaContext schemaContext = reactor.buildEffective();

First, StatementStreamSource objects should be instantiated with two constructor arguments: the path to the YANG source file (which is a regular String object) and a boolean which determines whether the path is absolute or relative.

Next comes the initiation of a new YANG parsing cycle, which is represented by a CrossSourceStatementReactor.BuildAction object. You can get it by calling the method newBuild() on the CrossSourceStatementReactor object (RFC6020_REACTOR) in the YangInferencePipeline class.

Then you should feed YANG sources to it by calling the method addSources(), which takes one or more StatementStreamSource objects as arguments.

Finally, you call the method buildEffective() on the reactor object, which returns an EffectiveSchemaContext (a concrete implementation of SchemaContext). Now you are ready to work with the contents of the added YANG sources.

Let us explain how to work with models contained in the newly created SchemaContext. If you want to get all the modules in the schemaContext, you have to call the method getModules(), which returns a Set of modules. If you want to get all the data definitions in the schemaContext, you need to call the method getDataDefinitions(), etc.

Set<Module> modules = schemaContext.getModules();
Set<DataSchemaNode> dataSchemaNodes = schemaContext.getDataDefinitions();

Usually you want to access specific modules. Getting a concrete module from SchemaContext is a matter of calling one of these methods:

  • findModuleByName(),

  • findModuleByNamespace(),

  • findModuleByNamespaceAndRevision().

In the first case, you need to provide the module name as it is defined in the YANG source file and the module revision date if it is specified in the YANG source file (if it is not defined, you can just pass a null value). In order to provide the revision date in the proper format, you can use a utility class named SimpleDateFormatUtil.

Module exampleModule = schemaContext.findModuleByName("example-module", null);
// or
Date revisionDate = SimpleDateFormatUtil.getRevisionFormat().parse("2015-09-02");
Module exampleModule = schemaContext.findModuleByName("example-module", revisionDate);

In the second case, you have to provide the module namespace in the form of a URI object.

Module exampleModule = schema.findModuleByNamespace(new URI("opendaylight.org/example-module"));

In the third case, you provide both module namespace and revision date as arguments.
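
For example, assuming findModuleByNamespaceAndRevision() accepts the namespace URI followed by the revision Date, a call might look like this (a sketch reusing the values from the previous examples):

Date revisionDate = SimpleDateFormatUtil.getRevisionFormat().parse("2015-09-02");
Module exampleModule = schemaContext.findModuleByNamespaceAndRevision(
        new URI("opendaylight.org/example-module"), revisionDate);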

Once you have a Module object, you can access its contents as they are defined in the YANG Model API. One way to do this is to use methods like getIdentities() or getRpcs(), which will give you a Set of objects. Alternatively, you can access a DataSchemaNode directly via the method getDataChildByName(), which takes a QName object as its only argument. Here are a few examples.

Set<AugmentationSchema> augmentationSchemas = exampleModule.getAugmentations();
Set<ModuleImport> moduleImports = exampleModule.getImports();

ChoiceSchemaNode choiceSchemaNode = (ChoiceSchemaNode) exampleModule.getDataChildByName(QName.create(exampleModule.getQNameModule(), "example-choice"));

ContainerSchemaNode containerSchemaNode = (ContainerSchemaNode) exampleModule.getDataChildByName(QName.create(exampleModule.getQNameModule(), "example-container"));

The YANG statement parser can work in three modes:

  • default mode

  • mode with active resolution of if-feature statements

  • mode with active semantic version processing

The default mode is active when you initialize the parsing cycle as usual by calling the method newBuild() without passing any arguments to it. The second and third modes can be activated by invoking newBuild() with a special argument. You can activate either just one of them or both by passing the proper arguments. Let us explain how these modes work.

Mode with active resolution of if-features makes YANG statements containing an if-feature statement conditional based on the supported features. These features are provided in the form of a QName-based java.util.Set object. In the example below, only two features are supported: example-feature-1 and example-feature-2. The Set which contains this information is passed to the method newBuild() and the mode is activated.

Set<QName> supportedFeatures = ImmutableSet.of(
    QName.create("example-namespace", "2016-08-31", "example-feature-1"),
    QName.create("example-namespace", "2016-08-31", "example-feature-2"));

CrossSourceStatementReactor.BuildAction reactor = YangInferencePipeline.RFC6020_REACTOR.newBuild(supportedFeatures);

If no features should be supported, you should provide an empty Set<QName> object.

Set<QName> supportedFeatures = ImmutableSet.of();

CrossSourceStatementReactor.BuildAction reactor = YangInferencePipeline.RFC6020_REACTOR.newBuild(supportedFeatures);

When this mode is not activated, all features in the processed YANG sources are supported.

Mode with active semantic version processing changes the way YANG import statements work - each module import is processed based on the specified semantic version statement and the revision-date statement is ignored. In order to activate this mode, you have to provide the StatementParserMode.SEMVER_MODE enum constant as an argument to the method newBuild().

CrossSourceStatementReactor.BuildAction reactor = YangInferencePipeline.RFC6020_REACTOR.newBuild(StatementParserMode.SEMVER_MODE);

Before you use a semantic version statement in a YANG module, you need to define an extension for it so that the YANG statement parser can recognize it.

module semantic-version {
    namespace "urn:opendaylight:yang:extension:semantic-version";
    prefix sv;
    yang-version 1;

    revision 2016-02-02 {
        description "Initial version";
    }
    sv:semantic-version "0.0.1";

    extension semantic-version {
        argument "semantic-version" {
            yin-element false;
        }
    }
}

In the example above, you see a YANG module which defines semantic version as an extension. This extension can be imported to other modules in which we want to utilize the semantic versioning concept.

Below is a simple example of the semantic versioning usage. With semantic version processing mode being active, the foo module imports the bar module based on its semantic version. Notice how both modules import the module with the semantic-version extension.

module foo {
    namespace foo;
    prefix foo;
    yang-version 1;

    import semantic-version { prefix sv; revision-date 2016-02-02; sv:semantic-version "0.0.1"; }
    import bar { prefix bar; sv:semantic-version "0.1.2";}

    revision "2016-02-01" {
        description "Initial version";
    }
    sv:semantic-version "0.1.1";

    ...
}
module bar {
    namespace bar;
    prefix bar;
    yang-version 1;

    import semantic-version { prefix sv; revision-date 2016-02-02; sv:semantic-version "0.0.1"; }

    revision "2016-01-01" {
        description "Initial version";
    }
    sv:semantic-version "0.1.2";

    ...
}

Every semantic version must have the following form: x.y.z. The x corresponds to a major version, the y corresponds to a minor version and the z corresponds to a patch version. If no semantic version is specified in a module or an import statement, then the default one is used - 0.0.0.

A major version number of 0 indicates that the model is still in development and is subject to change.

Following a release of major version 1, all modules will increment major version number when backwards incompatible changes to the model are made.

The minor version is changed when features are added to the model that do not impact current clients use of the model.

The patch version is incremented when non-feature changes (such as bugfixes or clarifications of human-readable descriptions that do not impact model functionality) are made that maintain backwards compatibility.

When importing a module with activated semantic version processing mode, only the module with the newest (highest) compatible semantic version is imported. Two semantic versions are compatible when all of the following conditions are met:

  • the major version in the import statement and major version in the imported module are equal. For instance, 1.5.3 is compatible with 1.5.3, 1.5.4, 1.7.2, etc., but it is not compatible with 0.5.2 or 2.4.8, etc.

  • the combination of minor version and patch version in the import statement is not higher than the one in the imported module. For instance, 1.5.2 is compatible with 1.5.2, 1.5.4, 1.6.8 etc. In fact, 1.5.2 is also compatible with versions like 1.5.1, 1.4.9 or 1.3.7 as they have equal major version. However, they will not be imported because their minor and patch version are lower (older).

If the import statement does not specify a semantic version, then the default one is chosen - 0.0.0. Thus, the module is imported only if it has a semantic version compatible with the default one, for example 0.0.0, 0.1.3, 0.3.5 and so on.
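
The two compatibility conditions and the selection of the newest compatible candidate can be illustrated with a small sketch. This is not yangtools code; it simply restates the rules above, with versions represented as {major, minor, patch} arrays:

// Illustration only: can a candidate module version satisfy the version in the import statement?
static boolean canSatisfyImport(int[] imported, int[] candidate) {
    // Condition 1: the major versions must be equal.
    if (imported[0] != candidate[0]) {
        return false;
    }
    // Condition 2: the minor/patch combination in the import statement must not be
    // higher than the one in the candidate module.
    if (imported[1] != candidate[1]) {
        return imported[1] < candidate[1];
    }
    return imported[2] <= candidate[2];
}

// Example: an import of 1.5.3 is satisfied by 1.5.3, 1.5.4 or 1.7.2, but not by 0.5.2 or 2.4.8.
// Among all candidates that satisfy the import, the newest (highest) version is the one imported.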

Working with YANG Data

If you want to work with YANG Data you are going to need NormalizedNode objects that are specified in the YANG Data API. NormalizedNode is an interface at the top of the YANG Data hierarchy. It is extended through sub-interfaces which define the behaviour of specific NormalizedNode types like AnyXmlNode, ChoiceNode, LeafNode, ContainerNode, etc. Concrete implementations of these interfaces are defined in the yang-data-impl module. Once you have one or more NormalizedNode instances, you can perform CRUD operations on the YANG data tree, which is an in-memory database designed to store normalized nodes in a tree-like structure.

In some cases it is clear which NormalizedNode type belongs to which YANG statement (e.g. AnyXmlNode, ChoiceNode, LeafNode). However, there are some normalized nodes which are named differently from their YANG counterparts. They are listed below:

  • LeafSetNode - leaf-list

  • OrderedLeafSetNode - leaf-list that is ordered-by user

  • LeafSetEntryNode - concrete entry in a leaf-list

  • MapNode - keyed list

  • OrderedMapNode - keyed list that is ordered-by user

  • MapEntryNode - concrete entry in a keyed list

  • UnkeyedListNode - unkeyed list

  • UnkeyedListEntryNode - concrete entry in an unkeyed list

In order to create a concrete NormalizedNode object, you can use the utility classes Builders or ImmutableNodes. These classes can be found in the yang-data-impl module and they provide methods for building each type of normalized node. Here is a simple example of building a normalized node:

// example 1
ContainerNode containerNode = Builders.containerBuilder().withNodeIdentifier(new YangInstanceIdentifier.NodeIdentifier(QName.create(moduleQName, "example-container"))).build();

// example 2
ContainerNode containerNode2 = Builders.containerBuilder(containerSchemaNode).build();

Both examples produce the same result. NodeIdentifier is one of the four types of YangInstanceIdentifier (these types are described in the javadoc of YangInstanceIdentifier). The purpose of YangInstanceIdentifier is to uniquely identify a particular node in the data tree. In the first example, you have to add the NodeIdentifier before building the resulting node. In the second example, it is added using the provided ContainerSchemaNode object.

The ImmutableNodes class offers similar builder methods and also adds an overloaded method called fromInstanceId(), which allows you to create a NormalizedNode object based on a YangInstanceIdentifier and a SchemaContext. Below is an example which shows the use of this method.

YangInstanceIdentifier.NodeIdentifier contId = new YangInstanceIdentifier.NodeIdentifier(QName.create(moduleQName, "example-container"));

NormalizedNode<?, ?> contNode = ImmutableNodes.fromInstanceId(schemaContext, YangInstanceIdentifier.create(contId));

Let us show a more complex example of creating a NormalizedNode. First, consider the following YANG module:

module example-module {
    namespace "opendaylight.org/example-module";
    prefix "example";

    container parent-container {
        container child-container {
            list parent-ordered-list {
                ordered-by user;

                key "parent-key-leaf";

                leaf parent-key-leaf {
                    type string;
                }

                leaf parent-ordinary-leaf {
                    type string;
                }

                list child-ordered-list {
                    ordered-by user;

                    key "child-key-leaf";

                    leaf child-key-leaf {
                        type string;
                    }

                    leaf child-ordinary-leaf {
                        type string;
                    }
                }
            }
        }
    }
}

In the following example, two normalized nodes based on the module above are written to and read from the data tree.

TipProducingDataTree inMemoryDataTree = InMemoryDataTreeFactory.getInstance().create(TreeType.OPERATIONAL);
inMemoryDataTree.setSchemaContext(schemaContext);

// first data tree modification
MapEntryNode parentOrderedListEntryNode = Builders.mapEntryBuilder().withNodeIdentifier(
        new YangInstanceIdentifier.NodeIdentifierWithPredicates(
                parentOrderedListQName, parentKeyLeafQName, "pkval1"))
        .withChild(Builders.leafBuilder().withNodeIdentifier(
                new YangInstanceIdentifier.NodeIdentifier(parentOrdinaryLeafQName))
                .withValue("plfval1").build()).build();

OrderedMapNode parentOrderedListNode = Builders.orderedMapBuilder().withNodeIdentifier(
        new YangInstanceIdentifier.NodeIdentifier(parentOrderedListQName))
        .withChild(parentOrderedListEntryNode).build();

ContainerNode parentContainerNode = Builders.containerBuilder().withNodeIdentifier(
        new YangInstanceIdentifier.NodeIdentifier(parentContainerQName))
        .withChild(Builders.containerBuilder().withNodeIdentifier(
                new NodeIdentifier(childContainerQName)).withChild(parentOrderedListNode).build()).build();

YangInstanceIdentifier path1 = YangInstanceIdentifier.of(parentContainerQName);

DataTreeModification treeModification = inMemoryDataTree.takeSnapshot().newModification();
treeModification.write(path1, parentContainerNode);

// second data tree modification
MapEntryNode childOrderedListEntryNode = Builders.mapEntryBuilder().withNodeIdentifier(
        new YangInstanceIdentifier.NodeIdentifierWithPredicates(
                childOrderedListQName, childKeyLeafQName, "chkval1"))
        .withChild(Builders.leafBuilder().withNodeIdentifier(
                new YangInstanceIdentifier.NodeIdentifier(childOrdinaryLeafQName))
                .withValue("chlfval1").build()).build();

OrderedMapNode childOrderedListNode = Builders.orderedMapBuilder().withNodeIdentifier(
        new YangInstanceIdentifier.NodeIdentifier(childOrderedListQName))
        .withChild(childOrderedListEntryNode).build();

ImmutableMap.Builder<QName, Object> builder = ImmutableMap.builder();
ImmutableMap<QName, Object> keys = builder.put(parentKeyLeafQName, "pkval1").build();

YangInstanceIdentifier path2 = YangInstanceIdentifier.of(parentContainerQName).node(childContainerQName)
        .node(parentOrderedListQName).node(new NodeIdentifierWithPredicates(parentOrderedListQName, keys)).node(childOrderedListQName);

treeModification.write(path2, childOrderedListNode);
treeModification.ready();
inMemoryDataTree.validate(treeModification);
inMemoryDataTree.commit(inMemoryDataTree.prepare(treeModification));

DataTreeSnapshot snapshotAfterCommits = inMemoryDataTree.takeSnapshot();
Optional<NormalizedNode<?, ?>> readNode = snapshotAfterCommits.readNode(path1);
Optional<NormalizedNode<?, ?>> readNode2 = snapshotAfterCommits.readNode(path2);

First comes the creation of in-memory data tree instance. The schema context (containing the model mentioned above) of this tree is set. After that, two normalized nodes are built. The first one consists of a parent container, a child container and a parent ordered list which contains a key leaf and an ordinary leaf. The second normalized node is a child ordered list that also contains a key leaf and an ordinary leaf.

In order to add a child node to a node, method withChild() is used. It takes a NormalizedNode as argument. When creating a list entry, YangInstanceIdentifier.NodeIdentifierWithPredicates should be used as its identifier. Its arguments are the QName of the list, QName of the list key and the value of the key. Method withValue() specifies a value for the ordinary leaf in the list.

Before writing a node to the data tree, a path (YangInstanceIdentifier) which determines its place in the data tree needs to be defined. The path of the first normalized node starts at the parent container. The path of the second normalized node points to the child ordered list contained in the parent ordered list entry specified by the key value “pkval1”.

The write operation is performed with both normalized nodes mentioned earlier. It consists of several steps. The first step is to instantiate a DataTreeModification object based on a DataTreeSnapshot. DataTreeSnapshot gives you the current state of the data tree. Then comes the write operation which writes a normalized node at the provided path in the data tree. After doing both write operations, the method ready() has to be called, marking the modification as ready for application to the data tree. No further operations within the modification are allowed. The modification is then validated - checked whether it can be applied to the data tree. Finally we commit it to the data tree.

Now you can access the written nodes. In order to do this, you have to create a new DataTreeSnapshot instance and call the method readNode() with a path argument pointing to a particular node in the tree.
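
For instance, the node written at path1 can be unwrapped from the returned Optional as follows:

if (readNode.isPresent()) {
    NormalizedNode<?, ?> parentContainer = readNode.get();
    // parentContainer now holds the parent-container subtree written earlier
}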

Serialization / deserialization of YANG Data

If you want to deserialize YANG-modeled data which have the form of an XML document, you can use the XML parser found in the module yang-data-codec-xml. The parser walks through the XML document containing YANG-modeled data based on the provided SchemaContext and emits node events into a NormalizedNodeStreamWriter. The parser disallows multiple instances of the same element except for leaf-list and list entries. The parser also expects that the YANG-modeled data in the XML source are wrapped in a root element. Otherwise it will not work correctly.

Here is an example of using the XML parser. The XML resource name used below (example-module-data.xml) is just an illustrative placeholder for a document containing YANG-modeled data.

InputStream resourceAsStream = ExampleClass.class.getResourceAsStream("/example-module-data.xml");

XMLInputFactory factory = XMLInputFactory.newInstance();
XMLStreamReader reader = factory.createXMLStreamReader(resourceAsStream);

NormalizedNodeResult result = new NormalizedNodeResult();
NormalizedNodeStreamWriter streamWriter = ImmutableNormalizedNodeStreamWriter.from(result);

XmlParserStream xmlParser = XmlParserStream.create(streamWriter, schemaContext);
xmlParser.parse(reader);

NormalizedNode<?, ?> transformedInput = result.getResult();

The XML parser utilizes the javax.xml.stream.XMLStreamReader for parsing an XML document. First, you should create an instance of this reader using XMLInputFactory and then load an XML document (in the form of an InputStream object) into it.

In order to emit node events while parsing the data you need to instantiate a NormalizedNodeStreamWriter. This writer is actually an interface and therefore you need to use a concrete implementation of it. In this example it is the ImmutableNormalizedNodeStreamWriter, which constructs immutable instances of NormalizedNodes.

There are two ways to create an instance of this writer, using the static overloaded method from(). One version of this method takes a NormalizedNodeResult as argument. This object type is a result holder in which the resulting NormalizedNode will be stored. The other version takes a NormalizedNodeContainerBuilder as argument. All created nodes will be written to this builder.

The next step is to create an instance of the XML parser. The parser itself is represented by a class named XmlParserStream. You can use one of two versions of the static overloaded method create() to construct this object. One version accepts a NormalizedNodeStreamWriter and a SchemaContext as arguments, the other version takes the same arguments plus a SchemaNode. Node events are emitted to the writer. The SchemaContext is used to check if the YANG data in the XML source comply with the provided YANG model(s). The last argument, a SchemaNode object, describes the node that is the parent of nodes defined in the XML data. If you do not provide this argument, the parser sets the SchemaContext as the parent node.
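
As a sketch of the variant that takes a parent SchemaNode (assuming the arguments keep the same order with the parent node appended last, and reusing the parentContainerQName from the data tree example above):

DataSchemaNode parentNode = schemaContext.getDataChildByName(parentContainerQName);
XmlParserStream xmlParserWithParent = XmlParserStream.create(streamWriter, schemaContext, parentNode);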

The parser is now ready to walk through the XML. Parsing is initiated by calling the method parse() on the XmlParserStream object with XMLStreamReader as its argument.

Finally you can access the result of parsing - a tree of NormalizedNodes containing the data as they are defined in the parsed XML document - by calling the method getResult() on the NormalizedNodeResult object.

Introducing schema source repositories
Writing YANG driven generators
Introducing specific extension support for YANG parser
Diagnostics