Welcome to OpenDaylight Documentation¶
The OpenDaylight documentation site acts as a central clearinghouse for OpenDaylight project and release documentation. If you would like to contribute to documentation, refer to the Documentation Guide.
Getting Started with OpenDaylight¶
OpenDaylight Downloads¶
Supported Releases¶
Sodium-SR4¶
(Current Release)
- Announcement
- Original Release Date
September 24, 2019
- Service Release Date
August 28, 2020
- Downloads
- Documentation
Neon-SR3¶
- Announcement
- Original Release Date
March 26, 2019
- Service Release Date
December 20, 2019
- Downloads
- Documentation
Fluorine-SR3¶
- Announcement
Fluorine Release: Streamlined Support for Cloud, Edge and WAN Solutions
- Original Release Date
August 30, 2018
- Service Release Date
June 21, 2019
- Downloads
- Documentation
Oxygen-SR4¶
- Announcement
- Original Release Date
March 22, 2018
- Service Release Date
December 12, 2018
- Downloads
- Documentation
Release Notes¶
Execution¶
OpenDaylight includes Karaf containers, OSGi (Open Service Gateway Initiative) bundles, and Java class files, which are portable and can run on any Java 8-compliant JVM (Java virtual machine). Any add-on project or feature of a specific project may have additional requirements.
Development¶
OpenDaylight is written in Java and uses Maven as its build tool. Therefore, the only requirements for developing projects within OpenDaylight are the following (a quick version check is shown after the list):
A Java 8-compliant JDK
Apache Maven 3.5.2 or later
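A quick way to confirm both requirements on a development machine (the version strings in the comments are illustrative):
java -version     # expect a Java 8 version string such as "1.8.0_..."
mvn --version     # expect Apache Maven 3.5.2 or later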
If an application or tool is built on top of OpenDaylight’s REST APIs, it has no special requirements beyond what is necessary to run the application or tool and make REST calls.
In some instances, OpenDaylight uses the Xtend language. Although Maven downloads all the tools needed to build applications, additional plugins may be required for IDE support.
Projects with additional requirements for execution typically have similar or additional requirements for development. See the platform release notes for details.
Platform Release Notes¶
Sodium Platform Upgrade¶
This document describes the steps to help users upgrade to the Sodium platform. Refer to the Managed Release Integrated (MRI) project for more information.
Before performing the platform upgrade, do the following to bump the platform versions (for example, with the bump-odl-version script; a combined sketch follows these steps):
Update the odlparent version from 4.0.9 to 5.0.4. There should not be any reference to org.opendaylight.odlparent other than 5.0.4, including in the custom feature.xml template (src/main/feature/feature.xml). The version range there should be “[5,6)” instead of “[4,5)”, “[4.0.5,5)” or any other variation.
For example:
bump-odl-version odlparent 4.0.9 5.0.4
Update the direct yangtools version references from 2.1.8 to 3.0.7. There should not be any reference to org.opendaylight.yangtools other than 3.0.7, including in the custom feature.xml templates (src/main/feature/feature.xml). The version range there should be “[3,4)” instead of “[2.1,3)”.
Update the MDSAL version from 3.0.6 to 4.0.8. There should not be any reference to org.opendaylight.mdsal other than 4.0.8. For example:
rpl -R 3.0.6 4.0.8
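Taken together, the bumps can be scripted roughly as follows. This is a sketch only: it assumes the bump-odl-version helper is on your PATH, that the rpl utility is installed, and that the yangtools invocation mirrors the odlparent one shown above.
bump-odl-version odlparent 4.0.9 5.0.4     # odlparent references
bump-odl-version yangtools 2.1.8 3.0.7     # assumed analogue for yangtools
rpl -R 3.0.6 4.0.8 .                       # MDSAL references across the tree
# sanity check: no old platform versions should remain in any pom.xml
grep -rn --include=pom.xml -e 4.0.9 -e 2.1.8 -e 3.0.6 .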
Before performing the platform upgrade, users must also install any dependent projects. To locally install a dependent project, pull and install the respective sodium-mri changes for that project. At a minimum, pull and install controller, AAA and NETCONF.
Perform the following steps to save time when locally installing any dependent project (a combined sketch follows):
For quick install:
mvn -Pq clean install
If previously installed, go offline and/or use the no-snapshot-update option.
mvn -Pq -o -nsu clean install
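As a rough sketch, the minimal set of dependent projects can be checked out and installed like this (the repository URLs follow the usual OpenDaylight Gerrit layout; pulling the respective sodium-mri changes into each checkout remains a manual step):
for project in controller aaa netconf; do
    git clone "https://git.opendaylight.org/gerrit/$project"
    (cd "$project" && mvn -Pq clean install)
done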
The following sub-section describes how to upgrade to the ODL Parent version 5. Refer to the ODL Parent Release Notes for more information.
The following changes are required:
Change any version range referencing version 4 of ODL Parent to “[5,6)” for ODL Parent 5, for example:
<feature name="odl-infrautils-caches">
    <feature version="[5,6)">odl-guava</feature>
</feature>
JSR305 annotations are no longer pulled into a project by default. Users have the option of migrating annotations to JDT (@Nullable et al.), Checker Framework (@GuardedBy), SpotBugs (@CheckReturnValue), or simply pulling the JSR305 dependency into a project by adding the following to each pom.xml that uses these annotations:
<dependency>
    <groupId>com.google.code.findbugs</groupId>
    <artifactId>jsr305</artifactId>
    <optional>true</optional>
</dependency>
The findbugs-maven-plugin is no longer supported by odlparent, so upgrade to the spotbugs-maven-plugin by changing the following:
<groupId>org.codehaus.mojo</groupId>
<artifactId>findbugs-maven-plugin</artifactId>
To:
<groupId>com.github.spotbugs</groupId>
<artifactId>spotbugs-maven-plugin</artifactId>
Before declaring dependencies on Hamcrest, make sure to update the order of JUnit and Hamcrest references to match the required order (http://hamcrest.org/JavaHamcrest/distributables#maven-upgrade-example). Alternatively, remove the declarations completely, since odlparent provides them by default (at scope=test).
An unfortunate interaction exists between powermock-2.0.0 and mockito-2.25.0 where the latter requires a newer byte-buddy library. This leads to an odd exception when powermock tests are run. For example:
13:15:50 Underlying exception : java.lang.IllegalArgumentException: Could not create type
13:15:50 at org.opendaylight.genius.itm.tests.ItmTestModule.configureBindings(ItmTestModule.java:97)
13:15:50 at org.opendaylight.infrautils.inject.guice.testutils.AbstractGuiceJsr250Module.checkedConfigure(AbstractGuiceJsr250Module.java:23)
13:15:50 at org.opendaylight.infrautils.inject.guice.testutils.AbstractCheckedModule.configure(AbstractCheckedModule.java:35)
13:15:50 ... 27 more
13:15:50 Caused by: java.lang.IllegalArgumentException: Could not create type
13:15:50 at net.bytebuddy.TypeCache.findOrInsert(TypeCache.java:154)
13:15:50 at net.bytebuddy.TypeCache$WithInlineExpunction.findOrInsert(TypeCache.java:365)
13:15:50 at net.bytebuddy.TypeCache.findOrInsert(TypeCache.java:174)
13:15:50 at net.bytebuddy.TypeCache$WithInlineExpunction.findOrInsert(TypeCache.java:376)
13:15:50 at org.mockito.internal.creation.bytebuddy.TypeCachingBytecodeGenerator.mockClass(TypeCachingBytecodeGenerator.java:32)
13:15:50 at org.mockito.internal.creation.bytebuddy.SubclassByteBuddyMockMaker.createMockType(SubclassByteBuddyMockMaker.java:71)
13:15:50 at org.mockito.internal.creation.bytebuddy.SubclassByteBuddyMockMaker.createMock(SubclassByteBuddyMockMaker.java:42)
13:15:50 at org.mockito.internal.creation.bytebuddy.ByteBuddyMockMaker.createMock(ByteBuddyMockMaker.java:25)
13:15:50 at org.powermock.api.mockito.mockmaker.PowerMockMaker.createMock(PowerMockMaker.java:41)
13:15:50 at org.mockito.internal.util.MockUtil.createMock(MockUtil.java:35)
13:15:50 at org.mockito.internal.MockitoCore.mock(MockitoCore.java:62)
13:15:50 at org.mockito.Mockito.mock(Mockito.java:1907)
13:15:50 at org.mockito.Mockito.mock(Mockito.java:1816)
13:15:50 ... 30 more
13:15:50 Caused by: java.lang.NoSuchMethodError: net.bytebuddy.dynamic.loading.MultipleParentClassLoader$Builder.appendMostSpecific(Ljava/util/Collection;)Lnet/bytebuddy/dynamic/loading/MultipleParentClassLoader$Builder;
13:15:50 at org.mockito.internal.creation.bytebuddy.SubclassBytecodeGenerator.mockClass(SubclassBytecodeGenerator.java:83)
13:15:50 at org.mockito.internal.creation.bytebuddy.TypeCachingBytecodeGenerator$1.call(TypeCachingBytecodeGenerator.java:37)
13:15:50 at org.mockito.internal.creation.bytebuddy.TypeCachingBytecodeGenerator$1.call(TypeCachingBytecodeGenerator.java:34)
13:15:50 at net.bytebuddy.TypeCache.findOrInsert(TypeCache.java:152)
13:15:50 ... 42 more
The solution is to declare a dependency on mockito-core before the powermock dependencies. For example:
<dependency>
    <groupId>org.mockito</groupId>
    <artifactId>mockito-core</artifactId>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.powermock</groupId>
    <artifactId>powermock-api-mockito2</artifactId>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.powermock</groupId>
    <artifactId>powermock-module-junit4</artifactId>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.powermock</groupId>
    <artifactId>powermock-reflect</artifactId>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.powermock</groupId>
    <artifactId>powermock-core</artifactId>
    <scope>test</scope>
</dependency>
The default configuration of blueprint-maven-plugin was tightened to only consider classes within ${project.groupId}. For classes outside of the assigned namespace, such as those NETCONF has in org.opendaylight.restconf (instead of org.opendaylight.netconf), users must override this configuration:
<plugin>
    <groupId>org.apache.aries.blueprint</groupId>
    <artifactId>blueprint-maven-plugin</artifactId>
    <configuration>
        <scanPaths>
            <scanPath>org.opendaylight.restconf</scanPath>
        </scanPaths>
    </configuration>
</plugin>
The default configuration of the maven-javadoc-plugin was updated: javadoc generation now defaults to HTML5 when built with JDK9+. This can result in javadoc failures, for example:
/w/workspace/autorelease-release-sodium-mvn35-openjdk11/openflowplugin/extension/openflowplugin-extension-api/src/main/java/org/opendaylight/openflowplugin/extension/api/GroupingLooseResolver.java:71: error: tag not supported in the generated HTML version: tt * @param data expected to match <T extends Augmentable<T>>
There are two options to fix this:
Fix the Javadoc. This is preferred, since it is simple to do.
Add an override for an artifact by creating (and committing to git) an empty file named “odl-javadoc-html5-optout” in the artifact's root directory (that is, where its pom.xml is located), as in the sketch below.
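A minimal sketch of the opt-out (the module directory name is hypothetical):
cd my-module                          # hypothetical directory holding the artifact's pom.xml
touch odl-javadoc-html5-optout        # empty marker file
git add odl-javadoc-html5-optout
git commit -m "Opt out of HTML5 javadoc"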
To comply with RFC 7950, the default YANG parser configuration validates constructs like the following. The path is not treated as an opaque XPath: any prefixes used must be validly imported.
leaf foo {
    type leafref {
        path "/foo:bar";
    }
}
Besides the above issue, the following bugs, enhancements and features were delivered in the Sodium Simultaneous Release.
The Java mapping for the “type empty” construct was changed. For the following model:
leaf foo {
    type empty;
}
Changed from:
java.lang.Boolean isFoo();
to:
org.opendaylight.yangtools.yang.common.Empty getFoo();
In addition, code interacting with these models must be updated; see ProtocolUtil for an example.
The DataContainer.getImplementedInterface() method was renamed to just implementedInterface(). In addition, it is now correctly type-narrowed in generated interfaces, which also provide a default implementation. When implementing a type registry, update the references to point to the new implementedInterface() method.
For hand-crafting interfaces or providing mock implementations, provide a proper implementedInterface() implementation such as this one.
implementedInterface(), the replacement for getImplementedInterface(), is narrowed when generating intermediate interfaces. This allows groupings to provide a default implementation in container-like interfaces. For example:
public interface Grp extends DataObject {
    @Override
    Class<? extends Grp> implementedInterface();
}
Users of the grouping look like this:
public interface Cont extends ChildOf<Mdsal437Data>, Augmentable<Cont>, Grp {
    @Override
    default Class<Cont> implementedInterface() {
        return Cont.class;
    }
}
The preceding construct works, but was unfortunately seen to trigger a javac bug (or something forbidden by the JLS; the available information is neither complete nor digestible), where the following construct involving two unrelated groupings fails to compile:
<T extends Grp1 & Grp2> void doSomething(Builder<T>);
The intent is to say “require a Builder of a type T, which extends both Grp1 and Grp2”. It seems javac (tested with JDK8 and JDK11) internally performs the equivalent of the following, which fails to compile with the same error javac reports in the <T ...> case, since T would have to do the equivalent of what Cont does: narrow implementedInterface() to resolve the ambiguity. That should not be a reason to disallow the construct; Eclipse (that is, the JDT compiler), for example, accepts it without any issues.
interface T extends Grp1, Grp2 { }
Both the binding and DOM definitions of DataBroker were updated to include a createMergingTransactionChain() method, which integrates the functionality formerly provided by the odl:type=”pingpong” data broker instance. Downstream users will need to update their code to use the default instance and create the appropriate transaction chain manually. Note that this impacts only the org.opendaylight.mdsal interfaces, not the org.opendaylight.controller ones.
An example of the changes can be found in AppPeerBenchmark and bgp-app-peer. Note that the same broker can be used both ways; thus, the proper createTransactionChain() call site must be updated.
Project Release Notes¶
AAA¶
AAA (Authentication, Authorization, and Accounting) refers to services that help improve the security posture of an OpenDaylight deployment. By default, the majority of OpenDaylight’s northbound APIs (and all RESTCONF APIs) are protected by AAA after installing the odl-restconf feature. When an API is not protected by AAA, it is noted in the release notes.
Do you have any external interfaces other than RESTCONF?
No
Other security issues?
No
Is it possible to migrate from the previous release? If so, how?
Yes, no specific steps needed.
Is this release compatible with the previous release?
Yes
Any API changes?
No
Any configuration changes?
No
Bug ID | Description
---|---
 | Eliminate the OAuth2 Provider implementation that was based on Apache Oltu.
List of features/APIs that were EOLed, deprecated, and/or removed from this release.
None
List of standards implemented and to what extent.
N/A
N/A
BGP LS PCEP¶
The OpenDaylight controller provides an implementation of BGP (Border Gateway Protocol), based on RFC 4271, as a south-bound protocol plugin. The implementation provides all basic BGP speaker capabilities, including:
inter/intra-AS peering
route advertising
route origination
route storage
The plugin’s north-bound API (REST/Java) provides the user with:
fully dynamic runtime standardized BGP configuration
read-only access to all RIBs
read-write programmable RIBs
read-only reachability/linkstate topology view
The OpenDaylight Path Computation Element Communication Protocol (PCEP) plugin provides all the basic service units necessary to build up a PCE-based controller. Defined by RFC 8231, PCEP offers LSP management functionality for Active Stateful PCE, which is the cornerstone for the majority of PCE-enabled SDN solutions. It consists of the following components:
Protocol library
PCEP session handling
Stateful PCE LSP-DB
Active Stateful PCE LSP Operations
Feature URL: BGPCEP BGP
Feature Description: OpenDaylight Border Gateway Protocol (BGP) plugin.
Top Level: Yes
User Facing: Yes
Experimental: No
CSIT Test: CSIT
Feature URL: BGPCEP BMP
Feature Description: OpenDaylight BGP Monitoring Protocol (BMP) plugin.
Top Level: Yes
User Facing: Yes
Experimental: No
CSIT Test: CSIT
Feature URL: BGPCEP PCEP
Feature Description: OpenDaylight Path Computation Element Communication Protocol (PCEP) plugin.
Top Level: Yes
User Facing: Yes
Experimental: No
CSIT Test: CSIT
None known. All protocols implement the TCP Authentication Option (TCP MD5).
Sonar Report (72.4%)
The BGP extensions were tested manually against a vendor’s BGP router implementation and other software implementations (exaBGP, bagpipeBGP). They are also covered by unit tests and automated system tests.
No additional migration steps needed.
Is this release compatible with the previous release?
Yes
Any API changes?
No
Any configuration changes?
BGP CSS configuration is no longer supported. BMP CSS configuration is no longer supported. PCEP CSS configuration is no longer supported.
This release provides the following new and modified features:
BGPCEP-871: RPC to provide PCEP session statistics
BGPCEP-868: Support for draft-ietf-idr-ext-opt-param
BGP CSS Configuration.
PCEP CSS Configuration.
BMP CSS Configuration.
N/A
Data Export/Import¶
The Data Export/Import (Daexim) feature allows OpenDaylight administrators to export the current system state to the file system or to import the state from the file system.
This release provides the following features:
Feature URL: Daexim Feature
Feature Description: This wrapper feature includes all the sub-features provided by the Daexim project.
Top Level: Yes
User Facing: Yes
Experimental: Yes
CSIT Test:
User Guide:
Developer Guide:
Do you have any external interfaces other than RESTCONF?
No
Other security issues?
None
Code coverage is 78.8%
There are extensive unit-tests in the code.
Is it possible to migrate from the previous release? If so, how?
Migration should work across all releases.
Is this release compatible with the previous release?
Yes
Any API changes?
No
Any configuration changes?
No
The following table lists the resolved issues fixed in this release.
Key | Summary
---|---
General commit | Address Sonar warnings found in the code. No behavior changes.
List of features/APIs that were EOLed, deprecated, and/or removed from this release
None
List of standards implemented.
None
Describe any major shifts in release schedule from the release plan.
None
Distribution¶
The Distribution project is the placeholder for the ODL Karaf distribution. The project currently generates two artifacts:
The Managed distribution (e.g. karaf-<version>.tar.gz): This includes the Managed projects in OpenDaylight (See Managed Release).
The Common distribution (e.g. opendaylight-<version>.tar.gz): This includes Managed and Self-Managed projects (See Managed Release).
The distribution project is also the placeholder for the distribution scripts.
Gitweb URL: Managed Archive
Description: Zip or tar.gz; when extracted, a self-consistent ODL installation with Managed projects is created.
Top Level: Yes.
User Facing: Yes.
Experimental: No.
CSIT Test: CSIT
Gitweb URL: Distribution Archive
Description: Zip or tar.gz; when extracted, a self-consistent ODL installation with all projects is created.
Top Level: Yes.
User Facing: Yes.
Experimental: No.
CSIT Test: CSIT
User Guide
Developer Guide
Every distribution major release comes with new and deprecated project features, as well as a new Karaf version. Because of this, it is recommended to perform a fresh ODL installation, as shown in the example after this paragraph.
Test features change every release, but these are only intended for distribution testing.
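For example, a fresh installation from the Managed distribution archive can be as simple as the following (the archive name follows the karaf-<version>.tar.gz pattern described above):
tar -xzf karaf-<version>.tar.gz
cd karaf-<version>
./bin/karaf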
No issues were resolved in this release.
- Successive feature installation from the Karaf 4 console causes bundle refreshes.
Workaround:
Use the --no-auto-refresh option in the karaf feature install command.
feature:install --no-auto-refresh odl-netconf-topology
List all the features you need in the Karaf boot configuration file (see the sketch after this list).
Install all features at once in console, for example:
feature:install odl-restconf odl-netconf-mdsal odl-mdsal-apidocs odl-clustering-test-app odl-netconf-topology
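A sketch of the boot-configuration alternative; the featuresBoot property lives in etc/org.apache.karaf.features.cfg, and the ellipsis stands for the distribution's existing boot features:
featuresBoot = ..., odl-restconf, odl-netconf-mdsal, odl-mdsal-apidocs, odl-clustering-test-app, odl-netconf-topology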
- The ssh-dss key algorithm is used by the Karaf SSH console, but it is no longer supported by clients such as OpenSSH.
Workaround:
Use the bin/client script, which uses karaf:karaf as the default credentials.
Use this ssh option:
ssh -oHostKeyAlgorithms=+ssh-dss -p 8101 karaf@localhost
- After restart, Karaf is unable to re-use the generated host.key file.
Workaround: Delete the etc/host.key file before starting Karaf again.
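A minimal sketch of this workaround, run from the Karaf installation directory:
rm etc/host.key
./bin/karaf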
No standards are implemented directly (see upstream projects).
Genius¶
The Genius project provides Generic Network Interfaces, Utilities & Services. Any ODL application can use these to achieve interference-free co-existence with other applications using Genius. OpenDaylight Genius provides the following modules:
Module | Description
---|---
Interface (logical port) Manager | Allows binding/registration of multiple services to logical ports/interfaces.
Overlay Tunnel Manager | Creates and maintains overlay tunnels between configured tunnel endpoints.
Aliveness Monitor | Provides tunnel/nexthop aliveness monitoring services.
ID Manager | Generates cluster-wide persistent unique integer IDs.
MD-SAL Utils | Provides common generic APIs for interaction with MD-SAL.
Resource Manager | Provides a resource sharing framework for applications sharing common resources, e.g. table-ids, group-ids.
FCAPS Application | Generates various alarms and counters for the different Genius modules.
FCAPS Framework | Collectively fetches all data generated by the FCAPS application. Any underlying infrastructure can subscribe to its events to get a generic overview of the various alarms and counters.
Feature URL: ODL API
Feature Description: This feature includes API for all the functionalities provided by Genius.
Top Level: Yes
User Facing: Yes
Experimental: No
CSIT Tests:
Feature URL: ODL
Feature Description: This feature provides all the functionality of the Genius modules, including the interface manager, tunnel manager, resource manager, ID manager and MD-SAL utils. It includes the Genius APIs and implementation.
Top Level: Yes
User Facing: Yes
Experimental: No
CSIT Tests:
In addition, the feature is well tested by the netvirt CSIT suites.
Feature URL: REST
Feature Description: This feature includes RESTCONF with ‘odl-genius’ feature.
Top Level: Yes
User Facing: Yes
Experimental: No
CSIT Test:
Feature URL: FCAPS Application
Feature Description: includes genius FCAPS application.
Top Level: Yes
User Facing: Yes
Experimental: Yes
CSIT Test: None
Feature URL: FCAPS Framework
Feature Description: Includes genius FCAPS framework.
Top Level: Yes
User Facing: Yes
Experimental: Yes
CSIT Test: None
Developer Guide
Do you have any external interfaces other than RESTCONF?
No
Other security issues?
N/A
CSIT Jobs:
Note
Genius is used extensively in NetVirt, so NetVirt’s CSIT also provides confidence in genius.
Is it possible to migrate from the previous release? If so, how?
Yes, a normal upgrade of the software should work.
Is this release compatible with the previous release?
Yes
Any API changes?
No
Any configuration changes?
No
List of features/APIs that were EOLed, deprecated, and/or removed from this release.
None
List of standards implemented.
N/A
Infrautils¶
The Infrautils project provides low-level utilities for use by other OpenDaylight projects, including:
@Inject DI
Utils incl. org.opendaylight.infrautils.utils.concurrent
Test Utilities
Job Coordinator
Ready Service
Integration Test Utilities (itestutils)
Caches
Diagstatus
Metrics
Feature URL: All features
Feature Description: This feature exposes all infrautils framework features.
Top Level: Yes
User Facing: No
Experimental: Yes
CSIT Test:
Feature URL: Jobcoordinator
Feature Description: This feature provides technical utilities and infrastructures for other projects to use.
Top Level: Yes
User Facing: No
Experimental: No
CSIT Test: Covered by Netvirt and Genius CSITs
Feature URL: Metrics
Feature Description: This feature exposes the new infrautils.metrics API with labels and first implementation based on Dropwizard incl. thread watcher.
Top Level: Yes
User Facing: No
Experimental: Yes
CSIT Test: Covered by Netvirt and Genius CSITs.
Feature URL: Ready
Feature Description: This feature exposes the system readiness framework.
Top Level: Yes
User Facing: Yes
Experimental: No
CSIT Test: Covered by Netvirt and Genius CSITs.
Feature URL: Cache
Feature Description: This feature exposes new infrautils.caches API, CLI commands for monitoring, and first implementation based on Guava.
Top Level: Yes
User Facing: Yes
Experimental: Yes
CSIT Test: Covered by Netvirt and Genius CSITs.
Feature URL: Diagstatus
Feature Description: This feature exposes the status and diagnostics framework.
Top Level: Yes
User Facing: Yes
Experimental: No
CSIT Test: Covered by Netvirt and Genius CSITs.
Feature URL: Prometheus
Feature Description: This feature exposes metrics by HTTP on /metrics/prometheus from the local ODL to an external Prometheus setup.
Top Level: Yes
User Facing: No
Experimental: Yes
CSIT Test: None
Developer Guide(s):
Do you have any external interfaces other than RESTCONF?
No
Other security issues?
N/A
The infrautils project provides low-level technical framework utilities, and therefore no dedicated CSIT automated system testing is available. However, it is covered by the CSIT of infrautils consumers (e.g., Genius, NetVirt).
CSIT Jobs:
Other manual testing and QA information.
N/A
Is it possible to migrate from the previous release? If so, how?
Yes, a normal upgrade of the software should work.
Is this release compatible with the previous release?
Yes
Any API changes?
No
Any configuration changes?
No
There were no significant bugs fixed since the previous release.
List of features/APIs that were EOLed, deprecated, and/or removed from this release.
Counters infrastructure (replaced by metrics).
List of standards implemented and to what extent.
N/A
LISP Flow Mapping¶
LISP (Locator ID Separation Protocol) Flow Mapping service provides mapping services, including LISP Map-Server and LISP Map-Resolver services that store and serve mapping data to dataplane nodes and to OpenDaylight applications. Mapping data can include mappings of virtual addresses to the physical network addresses where the virtual nodes are reachable or hosted. Mapping data can also include a variety of routing policies, including traffic engineering and load balancing. To leverage this service, OpenDaylight applications and services can use the northbound REST API to define the mappings and policies in the LISP Mapping Service. Dataplane devices capable of the LISP control protocol can leverage this service through a southbound LISP plugin. LISP-enabled devices must be configured to use this OpenDaylight service as their Map-Server and/or Map-Resolver.
The southbound LISP plugin supports the LISP control protocol (that is, Map-Register, Map-Request and Map-Reply messages). It can also be used to register mappings in the OpenDaylight mapping service.
User Guide(s):
Do you have any external interfaces other than RESTCONF?
Yes, the southbound plugin.
If so, how are they secure?
LISP southbound plugin follows LISP RFC6833 security guidelines.
What port numbers do they use?
Port used: 4342
Other security issues?
None
Sonar Report (59.6%)
All modules have been unit tested. Integration tests have been performed for all major features. System tests have been performed on most major features.
Registering and retrieval of basic mappings have been tested more thoroughly. More complicated mapping policies have gone through less testing.
Is it possible to migrate from the previous release? If so, how?
LISP Flow Mapping service will auto-populate the data structures from existing MD-SAL data upon service start if the data has already been migrated separately. No automated way for transferring the data is provided in this release.
Is this release compatible with the previous release?
Yes
Any API changes?
No
Any configuration changes?
No
None
List of features/APIs that were EOLed, deprecated, and/or removed from this release.
N/A
The LISP implementation module and southbound plugin conform to IETF RFC 6830 and RFC 6833, with the following exceptions:
In the Map-Request message, the M bit (Map-Reply Record exists in the Map-Request) is processed, but any mapping data at the bottom of a Map-Request is discarded.
LISP LCAFs are limited to at most one level of recursion, as described in the IETF LISP YANG draft.
No standards exist for the LISP Mapping System northbound API as of this date.
NETCONF¶
Feature URL: NETCONF Topology
Feature Description: NETCONF southbound plugin single-node, configuration through MD-SAL.
Top Level: Yes
User Facing: Yes
Experimental: No
CSIT Test: NETCONF CSIT
Feature URL: Clustered Topology
Feature Description: NETCONF southbound plugin clustered, configuration through MD-SAL.
Top Level: Yes
User Facing: Yes
Experimental: Yes
CSIT Test: Cluster CSIT
Feature URL: Console
Feature Description: NETCONF southbound configuration with Karaf CLI.
Top Level: Yes
User Facing: Yes
Experimental: Yes
Feature URL: MD-SAL
Feature Description: NETCONF server for MD-SAL.
Top Level: Yes
User Facing: Yes
Experimental: No
CSIT Test: MD-SAL CSIT
Feature URL: RESTCONF
Feature Description: RESTCONF
Top Level: Yes
User Facing: Yes
Experimental: No
CSIT Test: Tested by any suite that uses RESTCONF.
Feature URL: API Docs
Feature Description: MD-SAL - apidocs
Top Level: Yes
User Facing: Yes
Experimental: No
Feature URL: YANG Lib
Feature Description: Yanglib server.
Top Level: Yes
User Facing: Yes
Experimental: No
Feature URL: Call Home SSH
Feature Description: NETCONF Call Home.
Top Level: Yes
User Facing: Yes
Experimental: No
CSIT Test: Call Home CSIT.
The following list describes the new and modified features introduced in this release:
An option was provided in YANG tools that preserves the ordering of requests as defined in the YANG file when formulating the NETCONF payload. This helps devices that are strict about the ordering of elements. To enable it, the Java system property org.opendaylight.yangtools.yang.data.impl.schema.builder.retain-child-order needs to be set to true before starting Karaf, as in the sketch below.
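For example, the property can be passed via the EXTRA_JAVA_OPTS environment variable honored by the Karaf start script (alternatively, it can be added to etc/system.properties); a minimal sketch:
EXTRA_JAVA_OPTS="-Dorg.opendaylight.yangtools.yang.data.impl.schema.builder.retain-child-order=true" ./bin/karaf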
NETCONF-608: NETCONF keepalives are no longer sent during large payload replies; the keepalive RPC is not sent to the device while ODL is waiting for or processing a response from it.
An option was added to optionally skip lock/unlock for NETCONF edit-config operations. This is only for devices that can handle multiple requests through a queue. Please contact the vendor before enabling this option, since transaction semantics are effectively disabled when it is set for a device. This option can be set by issuing a PUT RESTCONF call. For example:
/restconf/config/netconf-node-optional:netconf-node-fields-optional/topology/topology-netconf/node/{node-id}/datastore-lock
{
    "netconf-node-optional:datastore-lock" : {
        "datastore-lock-allowed" : false
    }
}
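A sketch of issuing this call with curl, assuming the default admin:admin credentials, the default RESTCONF port 8181, and a hypothetical node-id of my-device:
curl -u admin:admin -X PUT -H "Content-Type: application/json" \
  -d '{"netconf-node-optional:datastore-lock":{"datastore-lock-allowed":false}}' \
  "http://localhost:8181/restconf/config/netconf-node-optional:netconf-node-fields-optional/topology/topology-netconf/node/my-device/datastore-lock"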
An option was added at device mount time to lock or unlock the datastore before issuing an edit-config command. The default value is true; if set to false, no lock/unlock is issued before edit-config.
The get-config RPC functionality of the ietf-netconf.yang file is available for mounted NETCONF devices. This functionality enables users to work around features not supported by RESTCONF, such as NETCONF filtering. Using this method, users can construct any custom NETCONF request, as in the sketch below.
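As a sketch, invoking get-config on the running datastore of a mounted device could look like this; the node-id my-device is hypothetical, and default credentials/port are assumed:
curl -u admin:admin -X POST -H "Content-Type: application/json" \
  -d '{"input":{"source":{"running":[null]}}}' \
  "http://localhost:8181/restconf/operations/network-topology:network-topology/topology/topology-netconf/node/my-device/yang-ext:mount/ietf-netconf:get-config"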
A flexible mount point naming strategy was added: users can now configure mount point names to contain either the IP address and port (default) or just the IP address. This was added for the NETCONF call-home feature.
User Guide:
Developer Guide:
Do you have any external interfaces other than RESTCONF?
Yes, we have MD-SAL and CSS NETCONF servers, as well as a server for NETCONF Call Home.
If so, how are they secure?
NETCONF over SSH
What port numbers do they use?
Refer to Ports. NETCONF Call Home uses TCP port 6666.
Other security issues?
None
Sonar Report Test coverage percent: 64.8%
Is it possible to migrate from the previous release? If so, how?
Yes. No additional steps required.
Is this release compatible with the previous release?
Yes
Any API changes?
No
Any configuration changes?
No
Bug ID | Description
---|---
 | There is an assumption that a RESTCONF URL behaves just as an HTTP URL does by squashing multiple slashes into one. However, an error is still thrown when there is an empty element in this case.
 | The query parameter field does not work when there is more than one nested field.
 | An output-less RPC must either return an output element or status code 204. Currently, this does not occur.
 | Support for a YANG 1.1 action should be added to MD-SAL.
 | Currently, netconf-testtool uses the /tmp directory to save a temporary key file. However, writing temporary data to the file system must be avoided, because it makes some test tool deployments difficult.
 | The netconf-testtool configuration should accept Set<YangModuleInfo> as a model list. Currently, this does not occur.
 | Currently, NETCONF keepalives are sent during large payload replies. This should not occur.
 | In corner cases, there is a security issue when logging passwords in plain text.
 | In some cases, an attempt is made by NETCONF to remount regardless of the error-type.
 | In corner cases, a NETCONF mount failed in the master.
 | In rare cases, adding a device configuration using POST failed in Sodium.
 | The NETCONF callhome server does not display the disconnect cause.
 | Callhome will throw NPEs in DTCL.
 | Yangtools does not process the output of the get-config RPC in the ietf-netconf YANG model.
 | Implementing code changed for the YANG 1.1 action for the RESTCONF layer.
 | An action contained in an augment of a request failed.
 | Starting Karaf in the latest distribution failed with an exception.
 | Currently, it is not possible to receive notifications through the RESTCONF RFC 8040 implementation.
 | In corner cases, the NETCONF testtool did not connect to OpenDaylight.
 | Currently, there is no support for disabling the lock/unlock feature for NETCONF requests.
 | The acceptance/E2E test needs to be added to the testtool.
 | Updates are required for the user guide with information on how to use custom RPCs with the test-tool.
 | In some cases, RESTCONF does not initialize when the used models have deviations.
Bug ID | Description
---|---
 | In some cases, the standard edit-config failed when the module augmenting base NETCONF was retrieved from a device.
List of features/APIs that were EOLed, deprecated, and/or removed from this release:
N/A
NetVirt¶
Feature Name: odl-netvirt-openstack
Feature URL: odl-netvirt-openstack
Feature Description: NetVirt is a network virtualization solution that includes the following components:
Open vSwitch based virtualization for software switches.
Hardware VTEP for hardware switches.
Service Function Chaining support within a virtualized environment.
Support for OVS and DPDK-accelerated OVS data paths.
L3VPN (BGPVPN), EVPN, ELAN, distributed L2 and L3, NAT and Floating IPs, IPv6, Security Groups, MAC and IP learning.
Top Level: Yes
User Facing: Yes
Experimental: No
CSIT Test: NetVirt CSIT
User Guide(s):
Developer Guide(s):
Contributor Guide(s):
No known issues.
Nothing beyond general migration requirements.
Nothing beyond general compatibility requirements.
Both SFC Netvirt and COE Netvirt Integration are reaching an EOL due to lack of support from their respective projects. COE Netvirt CSIT jobs are already disabled, and SFC is deprecated for Sodium and will be removed for Magnesium if support does not come from the SFC project.
N/A
OpenFlow Plugin¶
The OpenFlow Plugin project provides the following functionality:
OpenFlow 1.0/1.3 Implementation Project provides the implementation of the OpenFlow 1.0 and OpenFlow 1.3 specifications.
ONF Approved Extensions Project provides the implementation of the following ONF OpenFlow 1.4 feature, which is approved as an extension to the OpenFlow 1.3 specification.
OpenFlow 1.4 Bundle Feature:
Nicira Extensions Project provides the implementation of the Nicira Extensions. Some of the important extensions implemented are the Connection Tracking Extension and the Group Add-Mod Extension.
OpenFlow-Based Applications Project provides the following applications that users can leverage out-of-the-box when developing their applications, or as direct end consumers:
Forwarding Rules Manager: Application provides functionality to add/remove/update flows/groups/meters.
LLDP Speaker: Application sends periodic LLDP packets out on each OpenFlow switch port for link discovery.
Topology LLDP Discovery: Application intercepts the LLDP packets and discovers the link information.
Topology Manager: Application receives the discovered link information from the Topology LLDP Discovery application and stores it in the topology YANG model datastore.
Reconciliation Framework: Framework that exposes the APIs that consumer applications (in-controller) can leverage to participate in the switch reconciliation process in the event of switch connection/reconnection.
Arbitrator Reconciliation: Application exposes the APIs that consumer applications or direct users can leverage to trigger device configuration reconciliation.
OpenFlow Java Library Project provides the OpenFlow Java library, which converts data based on the OpenFlow plugin data models to the OpenFlow Java models before sending it down the wire to the device.
This release provides the following new and modified features:
Feature: OVS-based NA responder for the IPv6 default gateway.
Feature Description: The feature implements an OVS-based service that responds to Neighbor Advertisement requests for the IPv6 default gateway.
Feature URL: JAVA Protocol
Feature Description: OpenFlow protocol implementation.
Top Level: Yes
User Facing: No
Experimental: No
CSIT Test: JAVA CSIT
Feature URL: Config Pusher
Feature Description: Pushes node configuration changes to OpenFlow device.
Top Level: Yes
User Facing: No
Experimental: No
CSIT Test: Pusher CSIT
Feature URL: Forwarding Rules Manager
Feature Description: Sends changes in config datastore to OpenFlow device incrementally. forwardingrules-manager can be replaced with forwardingrules-sync and vice versa.
Top Level: Yes
User Facing: No
Experimental: No
CSIT Test: FR Manager CSIT
Feature URL: Forwarding Rules Sync
Feature Description: Sends changes in config datastore to OpenFlow devices taking previous state in account and doing diffs between previous and new state. forwardingrules-sync can be replaced with forwardingrules-manager and vice versa.
Top Level: Yes
User Facing: No
Experimental: Yes
CSIT Test: FR Sync CSIT
Feature URL: Miss Enforcer
Feature Description: Sends table miss flows to OpenFlow device when it connects.
Top Level: Yes
User Facing: No
Experimental: No
CSIT Test: Enforcer CSIT
Feature URL: App Topology
Feature Description: Discovers the topology of connected OpenFlow devices. It is a wrapper feature that loads the following features:
odl-openflowplugin-app-lldp-speaker
odl-openflowplugin-app-topology-lldp-discovery
odl-openflowplugin-app-topology-manager
Top Level: Yes
User Facing: No
Experimental: No
CSIT Test: App Topology CSIT
Feature URL: LLDP Speaker
Feature Description: Sends periodic LLDP packets on all ports of all connected OpenFlow devices.
Top Level: Yes
User Facing: No
Experimental: No
CSIT Test: LLDP Speaker CSIT
Feature URL: LLDP Discovery
Feature Description: Receives the LLDP packets sent by the LLDP speaker service, generates the link information, and publishes it to downstream services looking for link notifications.
Top Level: Yes
User Facing: No
Experimental: No
CSIT Test: LLDP Discovery CSIT
Feature URL: Topology Manager
Feature Description: Listens to link added/removed and node connect/disconnect notifications and updates the link information in the OpenFlow topology.
Top Level: Yes
User Facing: No
Experimental: No
CSIT Test: Topology Manager CSIT
Feature URL: NXM Extensions
Feature Description: Support for OpenFlow Nicira Extensions.
Top Level: Yes
User Facing: Yes
Experimental: No
CSIT Test: NXM Extensions CSIT
Feature URL: ONF Extensions
Feature Description: Support for Open Networking Foundation Extensions.
Top Level: Yes
User Facing: Yes
Experimental: Yes
CSIT Test: No
Feature URL: Flow Services
Feature Description: Wrapper feature for standard applications.
Top Level: Yes
User Facing: Yes
Experimental: No
CSIT Test: Flow Services CSIT
Feature URL: Flow Services Rest
Feature Description: Wrapper + REST interface.
Top Level: Yes
User Facing: Yes
Experimental: No
CSIT Test: Flow Services Rest CSIT
Feature URL: Services UI
Feature Description: Wrapper + REST interface + UI.
Top Level: Yes
User Facing: Yes
Experimental: No
CSIT Test: Flow Services UI CSIT
Feature URL: Southbound
Feature Description: Southbound API implementation.
Top Level: Yes
User Facing: No
Experimental: No
CSIT Test: Southbound CSIT
Features Documentation:
Do you have any external interfaces other than RESTCONF?
Yes, OpenFlow devices
Other security issues?
N/A
Sonar Report (67.6%)
Is it possible to migrate from the previous release? If so, how?
Yes, APIs from the previous release are supported in the Sodium release.
Is this release compatible with the previous release? Yes
List key known issues with workarounds:
Bug ID | Description
---|---
 | Group tx-chain closed by port event thread.
 | Table stats not available after a switch flap.
List of features/APIs that were EOLed, deprecated, and/or removed from this release.
None
OVSDB Project¶
The OVSDB Project provides the following functionality:
OVSDB Southbound Plugin handles OVS devices that support the OVSDB schema and use the OVSDB protocol. This feature provides the implementation of the defined YANG models. Developers building in-controller applications that want to leverage OVSDB for device configuration can leverage this functionality.
HWvTep Southbound Plugin handles OVS devices that support the OVSDB Hardware vTEP schema and use the OVSDB protocol. This feature provides the implementation of the project-defined YANG models. Developers building in-controller applications that want to leverage the OVSDB Hardware vTEP plugin for device configuration can leverage this functionality.
Feature URL: Southbound API
Feature Description: This feature provides the YANG models for northbound users to configure the OVSDB device. These YANG models are designed based on the OVSDB schema. This feature does not provide the implementation of the YANG models. If users/developers prefer to write their own implementation, they can use this feature to load the YANG models into the controller.
Top Level: Yes
User Facing: Yes
Experimental: No
CSIT Test:
Feature URL: Southbound IMPL
Feature Description: This feature is the main feature of the OVSDB Southbound plugin. This plugin handles the OVS device that supports the OVSDB schema and uses the OVSDB protocol. This feature provides the implementation of the defined YANG models. Developers developing the in-controller application that want to leverage OVSDB for device configuration can add a dependency on this feature and all the required modules will be loaded.
Top Level: Yes
User Facing: No
Experimental: No
CSIT Test:
Feature URL: Southbound IMPL Rest
Feature Description: This feature is the wrapper feature that installs the odl-ovsdb-southbound-api & odl-ovsdb-southbound-impl feature with other required features for restconf access to provide a functional OVSDB southbound plugin. Users who want to develop applications that manage the OVSDB supported devices but want to run the application outside of the OpenDaylight controller must install this feature.
Top Level: Yes
User Facing: Yes
Experimental: No
CSIT Test:
Feature URL: HWVT Southbound API
Feature Description: This feature provides the YANG models for northbound users to configure devices that support the OVSDB Hardware vTEP schema. These YANG models are designed based on the OVSDB Hardware vTEP schema. This feature does not provide the implementation of the YANG models. If users/developers prefer to write their own implementation of the defined YANG models, they can use this feature to install the YANG models in the controller.
Top Level: Yes
User Facing: Yes
Experimental: Yes
CSIT Test: A minimal set of CSIT tests is already in place. More work is in progress and will provide better functional coverage in a future release: CSIT
Feature URL: HWVTEP Southbound
Feature Description: This feature is the main feature of the OVSDB Hardware vTep Southbound plugin. This plugin handles the OVS device that supports the OVSDB Hardware vTEP schema and uses the OVSDB protocol. This feature provides the implementation of the defined YANG models. Developers developing the in-controller application that want to leverage OVSDB Hardware vTEP plugin for device configuration can add a dependency on this feature, and all the required modules will be loaded.
Top Level: Yes
User Facing: No
Experimental: Yes
CSIT Test: A minimal set of CSIT tests is already in place. More work is in progress and will provide better functional coverage in a future release. CSIT
Feature URL: HWVTEP Southbound Rest
Feature Description: This feature is the wrapper feature that installs the odl-ovsdb-hwvtepsouthbound-api & odl-ovsdb-hwvtepsouthbound features with other required features for restconf access to provide a functional OVSDB Hardware vTEP plugin. Users who want to develop applications that manage the Hardware vTEP supported devices but want to run the applications outside of the OpenDaylight controller must install this feature.
Top Level: Yes
User Facing: Yes
Experimental: Yes
CSIT Test: A minimal set of CSIT tests is already in place. More work is in progress and will provide better functional coverage in a future release. CSIT
N/A
Do you have any external interfaces other than RESTCONF?
Yes, Southbound Connection to OVSDB/Hardware vTEP devices.
Other security issues?
The plugin’s connection to the device is unsecured by default. Users need to explicitly enable TLS support through the OVSDB library configuration file; refer to the wiki page for instructions.
Sonar Report (57%)
The OVSDB southbound plugin is extensively tested through unit tests, IT tests and system tests, in both single-node and three-node cluster setups. The Hardware vTEP plugin is currently tested through:
Unit testing
CSIT testing
NetVirt project L2 Gateway features CSIT tests
Manual testing
Is it possible to migrate from the previous release? If so, how?
Yes. User-facing features and interfaces are unchanged; only enhancements were made.
Is this release compatible with the previous release?
Yes
Any API changes?
No changes in the YANG models from previous release.
Any configuration changes?
No
There were no significant issues resolved in the Sodium release.
List of features/APIs that were EOLed, deprecated, and/or removed from this release.
N/A
SERVICEUTILS¶
The ServiceUtils infrastructure project provides utilities that assist in the operation and maintenance of the different services provided by OpenDaylight. A service is a functionality provided by the ODL controller. These services can be categorized as networking services (for example, L2, L3/VPN, NAT) and infra services (for example, OpenFlow). These services are provided by different ODL projects, such as NetVirt, Genius and the OpenFlow plugin. They consist of a set of Java Karaf bundles and associated MD-SAL datastores.
Feature URL: SRM
Feature Description: This feature provides service recovery functionality for ODL services.
Top Level: Yes
User Facing: Yes
Experimental: No
CSIT Test:
ServiceRecovery is tested by Genius CSIT.
Developer Guide:
Do you have any external interfaces other than RESTCONF?
No
Other security issues?
N/A
Link to CSIT Jobs:
Note
Serviceutils is used extensively in Genius, NetVirt and SFC, so the respective project CSITs cover the serviceutils functionality.
Other manual testing and QA information.
N/A
Is it possible to migrate from the previous release? If so, how?
Yes, a normal upgrade of the software should work.
Is this release compatible with the previous release?
Yes
Any API changes?
No
Any configuration changes?
No
There were no significant issues resolved in the Sodium release.
There were no significant known issues in the Sodium release.
List of features/APIs that were EOLed, deprecated, and/or removed from this release.
None
List of standards implemented.
N/A
Transport PCE¶
Component | Description
---|---
Service Handler | Translates WDM optical service creation requests so they can be treated by the different modules; northbound API based on the OpenROADM service models.
Topology management | Provides topology management.
Path Calculation Engine (PCE) | Provides path computation (the term has a different meaning here than in the BGPCEP project, since it is not based on (G)MPLS).
Renderer | Responsible for the path configuration through optical equipment, based on the NETCONF protocol and OpenROADM specifications. Southbound plugin.
Optical Line Management (OLM) | Provides optical fiber line monitoring and management.
User Guide(s):
Developer Guide(s):
No security issues were found.
Functional tests: look at the Jenkins releng tox job, or download the sources and launch tox from the root folder, for example:
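A minimal sketch, assuming tox is installed and run from the root of a source checkout (the directory name is illustrative):
cd transportpce    # root folder of the sources
tox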
Supports OpenROADM device version 2.2.1 (this support was experimental in Neon).
OpenROADM and TransportPCE are now based on the official IETF RFC 8345 network models (contrary to Fluorine, which relied on an IETF I2RS draft).
Discrepancies between the topology DB and the portmapping have been fixed in this release.
TransportPCE has used flexmap since Neon. The Sodium release fixes a bug in the map formula used by Neon: https://git.opendaylight.org/gerrit/c/transportpce/+/84197
TransportPCE now relies on the new ODL databroker implementation instead of the deprecated controller one: 83996
Other deprecated functions related to transaction services have also been migrated; refer to 83839
N/A
Do you have any external interfaces other than RESTCONF?
No
Other security issues?
N/A
N/A
Is it possible to migrate from the previous release? If so, how?
Yes, a normal upgrade of the software should work.
Is this release compatible with the previous release?
Yes
Any API changes?
No
Any configuration changes?
No
N/A
N/A
List of features/APIs that were EOLed, deprecated, and/or removed from this release
N/A
List of standards implemented.
N/A
N/A
Service Release Notes¶
Sodium-SR1 Release Notes¶
This page details changes and bug fixes between the Sodium Release and the Sodium Stability Release 1 (Sodium-SR1) of OpenDaylight.
51c8d31e67 CONTROLLER-1919 : Use careful byte-masking/shifting in Mg Input
1be865aa2c : Disable slf4j SSL link
6d0313e289 : Register MXBean only during start
d6875afc77 CONTROLLER-1920 : Split up transaction chunks
bc6c86ebab : Use ReusableNormalizedNodeReceiver
6168e3ada0 : Fixup docs references
36b223eaa4 : Do not update term from unreachable members
219d4854c0 CONTROLLER-1919 : Define PayloadVersion.MAGNESIUM
17c2cdf512 CONTROLLER-1919 : Define DataStoreVersions.MAGNESIUM_VERSION
27cd8569a6 CONTROLLER-1919 : Add cds-access-api MAGNESIUM version
f19d751c62 CONTROLLER-1919 : Define PayloadVersion.SODIUM_SR1
9e0a82e03d CONTROLLER-1919 : Switch ABIVersion/DataStoreVersions back to Neon SR2
816b8424db CONTROLLER-1919 : Define DataStoreVersions.SODIUM_SR1_VERSION
1650d581fe CONTROLLER-1919 : Add cds-access-api SODIUM_SR1 version
66086b54b2 CONTROLLER-1919 : Add Magnesium stream version
33b722cd65 CONTROLLER-1919 : Add LithiumSR1 input/output support
d250ac0243 CONTROLLER-1919 : Add Magnesium encoding tokens
2d9e3b5a95 CONTROLLER-1919 : Add an explicit namespace sharing test
82503776b9 CONTROLLER-1919 : Move Neon SR2 tokens into its own class
a9b52bf8b7 CONTROLLER-1919 : Rename ValueTypes to LithiumValue
2059bc2827 CONTROLLER-1919 : Rename PathArgumentTypes to LithiumPathArgument
86519138c9 CONTROLLER-1919 : Rename NodeTypes to LithiumNode
03d96bfe16 CONTROLLER-1919 : Move Lithium tokens to their own class
2b957f3805 : Suppress modernization
49809b5cff : Migrate from YangInstanceIdentifier.EMPTY
e1ab83451c : Mark historic DataStoreVersions deprecated
e7583b7058 : Bump mdsal to 4.0.6
61631d3719 : Bump yangtools to 3.0.5
c3839997c6 : Bump odlparent to 5.0.2
45ee083e86 CONTROLLER-1919 : Use explicit versioning in MetadataShardDataTreeSnapshot
28c91e1e6c : Add more serialization assertions
2c60dab1b2 CONTROLLER-1919 : Add a 100K-entry test
b843cd2e6a CONTROLLER-1919 : Add encoding size asserts
d71ff207d7 : Separate out AbstractNormalizedNodeDataInput
3e54a7ee64 : Add @SupressFBWarnings around Await.result()
5d9451d60b : Make sure we know the version we encountered
199bd8bd24 : Optimize anyxml output
da733a0fd4 : Move Lithium-specific logic from AbstractNormalizedNodeDataOutput
4bab20f395 : Remove ensureHeaderWritten() from writeNode()
7086c00686 : Reorganize AbstractNormalizedNodeDataOutput
e9fbfe9cca : Split out AbstractLithiumDataInput
86aed4a9a3 : Cleanup PathArgumentTypes
239cc7b989 : AbstractNormalizedNodeDataOutput fails to write out header
0c7f9afaca : Tighten AbstractLithiumDataOutput.writeString()
65f9a729fd : Remove NormalizedNodeOutputStreamWriter
5750501908 : Disconnect {Lithium,NeonSR2} implementations
44c2a672ae : Lower ValueTypes constant visibility
05f092890b : Fix checkstyle/spotbugs violations
51404ca5fc : Cleanup ValueTypes lookup
b0d6c04dfc CONTROLLER-1908 : Deduplicate MapNode key leaf values
4ceac9e1c4 : Move common code to startNode()
dc73c367b4 : Reduce reliance on Guava Fuction/Supplier
37cdbd3d1e : Fix modernization issues
a28080b967 : Clean up opendaylight-inventory model
294c1ccaad : Revert “Bug 3871: Deprecate opendaylight-inventory model.”
4c48966122 : Fixup chunk offset movement on resend
887d45367b : Lost commit index when a snapshot is captured
1032f539c0 : Drop public modifier from NodeTypes
84c278d49a : Rename SodiumNormalizedNode* classes
c28d67e3ba : Move DataNormalizationOperation methods
a04084f57c : Final bits of NodeIdentifier migration
940361985c : Another round of checkstyle fixes
975f420ff : Add pagination for mounted resources of apidocs
723a83ca8 NETCONF-352 : Reorganize transactionChainHandler usage.
6f5deb203 : Migrate YangInstanceIdentifier.EMPTY users
7dd051ef0 : Remove use NodeIdentifierWithPredices.getKeyValues()
47fc3bf9d : Separate out DeviceSources(Resolver)
892276900 : Simplify base schema lookups
ef66f2aad NETCONF-639 : Fix choice action request test
8aa0cfe74 : Propagate MountPointContext to NetconfMessageTransformer
75e306196 : Update NodeIdentifierWithPredicates construction
4e77b03ae NETCONF-639 : Fix action lookups
eafd00e52 : Teach BaseSchema about schema mounts
f53a84015 : More SchemaContext reuse
121008c97 : Reuse schemacontext in ListenerAdapterTest
1576b451b : Reuse SchemaContext in RuntimeRpcTest
50c0a463d : Reuse schemaContext in mdsal-netconf-connector tests
66c5a4233 : Reuse SchemaContext in NetconfCommandsImplTest
14757a264 : Reuse SchemaContext in NetconfDeviceTopologyAdapterTest
4435526f5 : Share test model SchemaContext
e5b8c699a : Close module URL stream as soon as possible
cea6b159d : Use constant NodeIdentifiers in LibraryModulesSchemas
24f9babdf : Reduce code duplication in LibraryModulesSchemas
5350d2516 : Shorten nested class references
e44407442 : Simplify guessJsonFromFileName()
fd287393a : LibraryModulesSchemas.availableModels is immutable
920a998c2 : Cleanup state checking
9ce3a5679 : Centralize NETCONF_(DATA)_QNAME
cd90b42ac : Simplify GET_SCHEMAS_RPC initialization
7aa9f6ba7 : Improve action lookup
bfb98ea90 : Make NetconfMessageTransformer.getActions() static
abccfa85e : Reuse schema in NetconfMessageTransformerTest
6c177b8a0 : Remove unneeded type arguments
e700e3106 : Cleanup toRpcResult()
f67f8c229 : Make mappedRpcs an ImmutableMap
1ea17d0dc : Make notification filter a simple lambda
21f231413 : Fix schema source registrations not being cleared
50e58b477 : Introduce CREATE_SUBSCRIPTION_RPC_PATH
9cba5885e : Fix mdsal reference
4f496bbf4 : Bump mdsal to 4.0.6
9dca3efa9 : Bump yangtools to 3.0.5
a1dc9a431 : Bump odlparent to 5.0.2
a8d8326c8 NETCONF-618 : Teach RFC8040 restconf about actions
218bcbb83 : Fix checkstyle
a88ce37a5 : Fix checkstyle
f0525c56b NETCONF-635 : Teach NETCONF about YANG 1.1 actions in cluster topology
ce55cfb19 NETCONF-538 : Teach AbstractGet how to transform MapNodes
a1b5f0e56 : Simplify RestconfValidationUtils users
Sodium-SR2 Release Notes¶
This page details changes and bug fixes between the Sodium Stability Release 1 (Sodium-SR1) and the Sodium Stability Release 2 (Sodium-SR2) of OpenDaylight.
ad7885e2 : Bump to odlparent-5.0.5/yangtools-3.0.9/mdsal-4.0.11
5b45485a : Drop dependencies on commons-text
e2bf56b7 : Bump odlparent/yangtools/mdsal to 5.0.4/3.0.7/4.0.8
f5320468 : Updated the user guide after testing
9d954789 : Remove comons-beanutils overrides
aed88fc4 : Bump to odlparent-5.0.3/yangtools-3.0.6/mdsal-4.0.7
6790a800 : Remove install/deploy plugin configuration
653a7430 : Fixup aaa-cert-mdsal pyang warnings
3df33ea7 : Update docs header to Sodium in stable/sodium
de1990344 BGPCEP-893 : Fix buffer read for unsupported LLGR Safi
61083064d BGPCEP-892 : Ignore unknown subobjects while parsing RRO/ERO objects in PCEP messages
57741dafc : Bump to odlparent-5.0.5/yangtools-3.0.9/mdsal-4.0.11
d37ac3d4b : Bump odlparent/yangtools/mdsal to 5.0.4/3.0.7/4.0.8
b40db38a6 BGPCEP-878 : Fix CSIT regression due to BGPCEP-878 fix
3aa922d3e BGPCEP-889 : Register PCEP session to stats handler only after it is fully initialized
12f9f73ab BGPCEP-878 : Fix NPE while accessing DomTxChain when bgp/app peer singleton service is restarted
d50e0d3a4 BGPCEP-884 : Address deadlock scenarios in BGP peer, session mgmt code
aa5d512b5 : Bump to odlparent-5.0.3/yangtools-3.0.6/mdsal-4.0.7
78e77c6ef BGPCEP-880 : Fix bgp-segment-routing
265383c5a BGPCEP-880 : Fix rsvp.yang
6a9c0f66e : Remove unused imports
7bda69703 : Fixup ProtectionCommonParser.serializeBody()
1c3bbe5b76 CONTROLLER-1927 : Revert “Leader should always apply modifications as local”
8b74aa768b CONTROLLER-1927 : Leader should always apply modifications as local
b167cd30f0 : Bump to odlparent-5.0.5/yangtools-3.0.9/mdsal-4.0.11
6294d3756c : Remove unneeded checkstyle suppression
0c8422a41d : Remove jsr173-ri from dependencies
4aa141bc1c : Bump odlparent/yangtools/mdsal to 5.0.4/3.0.7/4.0.8
2dbaea6573 : Bail faster on not found module
07c18ba075 : Add javadoc links to yangtools-docs and mdsal-docs
2cb624b5f1 CONTROLLER-1914 : Allow shard settle timeout to be tuned
a0e649cf20 : Bump to odlparent-5.0.3/yangtools-3.0.6/mdsal-4.0.7
b182c31152 CONTROLLER-1889 : Move DataTreeCandidate extraction to where it is applied
f1ecd20014 CONTROLLER-1889 : Rework AbstractNormalizedNodePruner
67afa21bc1 CONTROLLER-1626 : Allow AbstractClientActor generation to start from non-zero
e99739f7ee CONTROLLER-1626 : Add the ability to report known connected clients
f176c27a04 : Add locate-shard RPC
ee49d6d1b1 : Cleanup cluster-admin.yang
aa92ac9de4 : Use ConcurrentHashMap.newKeySet()
90ab895c08 : Remove unneeded version declaration
f81d95dd83 : Remove unused model imports
d575a48 : Bump odlparent-5.0.5/yangtools-3.0.9/mdsal-4.0.11
366f17f : Bump odlparent/yangtools/mdsal to 5.0.4/3.0.7/4.0.8
822fc17 : Update version after sodium SR1
15acae6 : Add missing packaging pom
f5f03af INTDIST-106 : Add Sodium ONAP distribution
def120f : Re-add TPCE to sodium
527ca66 : Bump to odlparent-5.0.3/yangtools-3.0.6/mdsal-4.0.7
29f7c07 : Fixup platform versions
fc011b75e : Fixed wrong exception types
dde16f406 : Bump to odlparent-5.0.5/yangtools-3.0.9/mdsal-4.0.11
4500c9cbb NETCONF-652 : Add namespace to action request XML
ad3308e23 : Remove jsr173-ri from dependencies
75908d20b : Remove websocket-server override
42366fd3b : Bump odlparent/yangtools/mdsal to 5.0.4/3.0.7/4.0.8
60da4823e : Bump to odlparent-5.0.3/yangtools-3.0.6/mdsal-4.0.7
9d3a276b7 : Update for sshd-2.3.0 changes
8f20fa402 : Correctly close NormalizedNodeStreamWriters
f4cee0dda : Properly close stream writer
189d139d9 : Do not use toString() in logging messages
2442f207c : Fix config/oper reconciliation for leaf-lists
98620c855 : Lower visibility to package
bbaf1cca0 : Acquire RFC8528 mount point map
27887ec99 : Apply modernizations
349af093f : Untangle NetconfDevice setup
6fad3d14d : Convert to using requireNonNull()
a3e16e30a : Bump to odlparent-5.0.5/yangtools-3.0.9/mdsal-4.0.11
2a5da734f : Bump odlparent/yangtools/mdsal to 5.0.4/3.0.7/4.0.8
f82e3f867 NETVIRT-1636 : Check network presence
6d7370b36 NETVIRT-1636 : Fix another VpnSubnetRouteHandler NPE source
eed19f721 : Bump to odlparent-5.0.3/yangtools-3.0.6/mdsal-4.0.7
8fdf7aba9 NETVIRT-1636 : Fix VpnSubnetRouteHandler handling of getSubnetToDpn()
6a1bd2bd0 NETVIRT-1636 : Fix Acl.getAccessListEntries() NPE
e10c2f298 : Bump to odlparent-5.0.5/yangtools-3.0.9/mdsal-4.0.11
226e45a26 : Bump odlparent/yangtools/mdsal to 5.0.4/3.0.7/4.0.8
2fe595fdd : Failed to cancel service reconciliation when controller becomes slave
f50ff6361 OPNFLWPLUG-1078 : Notify device TLS authentication failure messages
48475e2dc OPNFLWPLUG-1075 : Making Device Oper transactions atomic
bb626f8e7 : Read action throwing NPE
0a7f87bd5 : Use String(byte[], Charset)
0690fb0ce : Bump to odlparent-5.0.3/yangtools-3.0.6/mdsal-4.0.7
2c10245e2 : Fix meter-id overlap
e71e31449 OVSDB-454 : Get rid of useless (Hwvtep)SouthboundProvider thread
75ca1ad0c OVSDB-454 : Migrate OvsdbDataTreeChangeListenerTest
9b597af70 OVSDB-331 : Add support for using epoll Netty transport
85b6d1a08 OVSDB-411 : Add NettyBootstrapFactory to hold OVSDB network threads
fd925bf08 OVSDB-428 : Eliminate TransactionInvokerImpl.successfulTransactionQueue
8310eabe7 : Bump to odlparent-5.0.5/yangtools-3.0.9/mdsal-4.0.11
a0f2e7018 : Bump odlparent/yangtools/mdsal to 5.0.4/3.0.7/4.0.8
9930827c4 : Rework TypedRowInvocationHandler invocation path
c6f7bc7bc : Migrate TyperUtils.getTableSchema() users
dfb657b23 : Simplify exception instantiation
9af87d9b0 : Migrate TyperUtils methods to TypedDatabaseSchemaImpl
5ee9ed22e : Make OvsdbClient return TypedDatabaseSchemas
c1c79b70c : Extract TypedRowInvocationHandler
7a6fe0e5c : Eliminate OvsdbClientImpl duplication
82723d831 : De-confuse InvocationHandler and target methods
e57992121 : Hide TyperUtils.extractRowUpdates()
8a8f8cfdf : Add TypedReflections
d97430282 : Add @NonNull annotation to OvsdbConnectionListener.connected()
9f030b429 : Add TypedDatabaseSchema
8115ecf71 : Turn DatabaseSchema into an interface
562d45084 : Make TableSchema/DatabaseSchema immutable
32d9f1ad9 : Split out BaseTypeFactories
11f8540ae : Use singleton BaseType instances for simple definitions
91b242822 : Split out BaseTypes
db4b48270 : Do not use reflection in TransactCommandAggregator
f9ba04906 : Reuse StringEncoders for all connections
4424150e6 : Reuse MappingJsonFactory across all sessions
2e9ba8f8b : Cleanup HwvtepConnectionManager.getHwvtepGlobalTableEntry()
eb330aac7 : Do not allow DatabaseSchema name/version to be mutated
88adf2528 : Do not allow TableSchema columns to be directly set
0ff47ed78 : Refactor ColumnType
aac8875db : Cleanup ColumnSchema
cb6c0ea4e : Add generated serialVersionUUID to exceptions
1ee2e4bfe : Make GenericTableSchema.fromJson() a factory method
d306338b5 : Move ObjectMapper to JsonRpcEndpoint
2c95ccc22 : Improve schemas population
16ff45fde : Turn JsonRpcEndpoint into a proper OvsdbRPC implementation
e8adc8639 : Reuse ObjectMapper across all connections
12a1c60ae : Use a constant ObjectMapper in UpdateNotificationDeser
4650cff9a : Use proper constant in JsonUtils
de91d31e7 : Do not reconfigure ObjectMapper in FutureTransformUtils
1c06606a7 : Bump to odlparent-5.0.3/yangtools-3.0.6/mdsal-4.0.7
c2919d47d : Do not use Foo.toString() when logging
Sodium-SR3 Release Notes¶
This page details changes and bug fixes between the Sodium Stability Release 2 (Sodium-SR2) and the Sodium Stability Release 3 (Sodium-SR3) of OpenDaylight.
701c04d9 : Bump odlparent/yangtools/mdsal to 5.0.7/3.0.11/4.0.14
28c6a5ff AAA-194 : Fix for Pattern Matching in Shiro
1bd4f300 : Remove jetty-servlet-tester references
44a4cc40 : Migrate OSGi compendium references
092b77c9 : Bump odlparent/yangtools/mdsal to 5.0.6/3.0.10/4.0.13
2dfd1182 : Fix variable name s/newUser/new_user/
ae2e14242 : Bump odlparent/yangtools/mdsal to 5.0.7/3.0.11/4.0.14
97eafeff8 : Upgrade compendium dependency
246fb0e27 BGPCEP-900 : Handle race-conditions in BGP shutdown code
99fa6030b : Remove use of projectinfo property
7abbf30ff : Bump odlparent/yangtools/mdsal to 5.0.6/3.0.10/4.0.13
5893a9396 : Use HashMap.computeIfAbsent() in getNode()
c3ea32ecbe CONTROLLER-911 : Add documentation about per-shard settings
f728bdc4d2 CONTROLLER-1932 : Add tests for new RootDataTreeChangeListenerProxy and Actor
35f4829f85 CONTROLLER-1935 : Do not bump follower term while it is isolated
4f0fc3a788 CONTROLLER-1932 : Add support for root DTCL listening on all shards in DS
ea3c45eb94 CONTROLLER-1934 : Fix DeleteEntries persisting with wrong index
01bbed0e8c : Bump odlparent/yangtools/mdsal to 5.0.7/3.0.11/4.0.14
d0e35ca48b : Migrate OSGI compendium reference
78e33f2690 CONTROLLER-1919 : Switch current {ABI,DataStore,Payload}Version to Sodium SR1
afe60bac11 CONTROLLER-1927 : Fixup “Leader should always apply modifications as local” regression
7c8ba32614 : Allow programmatic module sharding configuration
42c4744ef1 AAA-195 : Do not use passive connections
02161666c9 : Bump odlparent/yangtools/mdsal to 5.0.6/3.0.10/4.0.13
4a8aa00e56 CONTROLLER-1929 : Expose more fine-grained shutdown methods
b4d8af6a74 CONTROLLER-1929 : Propagate TimeoutException when ActorSystem fails to terminate
6b3fe87 : Enable SM projects for Sodium SR3
06130ca : Bump odlparent/yangtools/mdsal to 5.0.7/3.0.11/4.0.14
4567e52 : Add cluster scripts to ONAP distribution
15fcd55 : Update common versions for Sodium SR3
da082b6 : Bump odlparent/yangtools/mdsal to 5.0.6/3.0.10/4.0.13
484309c : Update platform versions
d6bcd4e : Add dlux for Sodium SR2
c09fd58 : Bump TPCE project
d2e6a73b0 NETCONF-676 : Correct POST Location with lists
5b2a2e768 NETCONF-641 : Allow SshClient to be customized via NetconfClientConfiguration
b74237267 : Clean up PostDataTransactionUtil
30231f156 NETCONF-663 : Get notification streams error.
3622c671c : Eliminate useless allocation
f6c58f4e5 NETCONF-338 : NETCONF southbound requires notifications.yang model to be present on the device
8a3d6ca7c NETCONF-643 : Add Markers.confidential() to netconf protocol messages
d1145cd25 : Bump odlparent/yangtools/mdsal to 5.0.7/3.0.11/4.0.14
65f621ffc NETCONF-686 : Adjust window on read
f9f2f83fa NETCONF-674 : Re-integrate ssh client
99c85b9d6 : Upgrade mina-sshd to 2.4.0
c383955c2 NETCONF-677 : Shade mina-sshd
48973379f : Eliminate CallHomeSessionContext.nettyChannel
e4d58f680 NETCONF-674 : Do not require NetconfSessionImpl
979f5dfc1 : Add sources to shaded-exificient
3fe280884 : Exclude xmlpull’s MANIFEST.MF
39e314458 : Remove unneeded sshd dependency
359ffe454 : Add eddsa dependency to netconf-testtool
5d3e0e488 : Remove unneeded blueprint-core dependency
6db74b29d : Move eddsa dependency
d7e66a237 : Pull eddsa into netconf-netty-util
155954a2f NETCONF-657 : Add plain PATCH capability to RFC8040 server
d2facdbf8 : Reuse SchemaContext.NAME for base NETCONF data qname
2b7482c46 NETCONF-665 : Add a dedicated AuthenticationFailedException
d442f2c30 NETCONF-664 : Fix defensive subscriber removal
5a24eec00 : Files should not be executable
50803c603 NETCONF-497 : Do not consider query depth in initial namespace
a002f5a40 : Fix default value check
c20e3a3b4 : Remove references to sal-common-impl
68640d232 : Migrate Compendium reference
d7db13578 : Bring doc building up to python3
df7f08126 : Bump odlparent/yangtools/mdsal to 5.0.6/3.0.10/4.0.13
d5b99cffb NETCONF-125 : connection timeout and between timeout are fixed
5f4aab80d : Remove unneeded override
359c24a5f NETCONF-653 : Reject multiple sessions with the same host key
00b782700 NETCONF-568 : Do not attempt to parse empty RPC/action reply
3c0626b42 NETCONF-641 : Add option to provide custom SshClient for netconf-client
de6bc64fd NETCONF-610 : Custom scheme-cache-directory yang models are not replicated among cluster members
a63030b7c : Bump odlparent/yangtools/mdsal to 5.0.7/3.0.11/4.0.14
c63e6d659 : Bump odlparent/yangtools/mdsal to 5.0.6/3.0.10/4.0.13
4e6394c5e OPNFLWPLUG-1086 : Reconciliation framework failure when starting cbench tool for the first time
79477e580 OPNFLWPLUG-1084 : Device operational is not getting created if device reconciliation is not enabled
2d5f53916 OPNFLWPLUG-1074 : table stats not available after a switch flap
b21d86660 OPNFLWPLUG-1083 : Stats frozen after applying 2 sec delay in OF channel
Sodium-SR4 Release Notes¶
This page details changes and bug fixes between the Sodium Stability Release 3 (Sodium-SR3) and the Sodium Stability Release 4 (Sodium-SR4) of OpenDaylight.
aa617da07 BGPCEP-910 : Non ipv4 advertising peer causes BGP session flaps
c075b12c0 BGPCEP-915 : Process open message more defensively
d0834d968 : Bump odlparent/yangtools/mdsal to 5.0.11/3.0.16/4.0.17
7e452e6e3 BGPCEP-916 : Add an explanatory messages around TCP-MD5
9d219c452 : Attach sources to test-jar
c0c9fa6bc : Do not fail on warnings for docs-linkcheck
7ccb00121 BGPCEP-901 : Prevent deadlock when updating PCEP stats when Tx chain fails
78dbae481c : Bump odlparent/yangtools/mdsal to 5.0.11/3.0.16/4.0.17
438014a18e : Do not fail on warnings for docs-linkcheck
5c78ed0702 : Do not deploy opendaylight/model pom
4aec5ce1f2 : Fix intermittent IT hangs
688da70993 CONTROLLER-1913 : Enable overwrite test suite
75aa76c635 CONTROLLER-1950 : Split modifications on datastore root
678cb6ade2 : Simplify LocalTransactionContext
8ec5f25a20 CONTROLLER-1951 : Refactor TransactionContext.executeModification()
918285d16d : Split TransactionChainProxy.combineWithPriorReadOnlyTxFutures()
8c3a40e5c5 CONTROLLER-1913 : Add a missing space
d63c249c30 CONTROLLER-1950 : Move checking/logging out of executeModification()
443a2f66bd : Clean up TransactionProxy a bit
2114b53e07 : Fix NoSuchElementException in toString()
b7ead23301 CONTROLLER-1913 : Add an option to trigger snapshot creation on root overwrites
e830296759 CONTROLLER-1949 : Add option to disable default ActorSystemQuarantinedEvent handling
11abfbefca : Add UnsignedLongRangeSet.toString()
16c36370f6 CONTROLLER-1941 : Apply a workaround for the isolation of quarantined node
0aa788f7c4 : Add sender actor to the ForwardingDataTreeChangeListener
2562991c45 CONTROLLER-1915 : Allow incremental recovery
47a9c28947 CONTROLLER-1943 : Fix OpsManager registry instantiation
3d815f72 : Bump odlparent to 5.0.11
944b65bf INFRAUTILS-65 : Remove unneeded dependencies on odl-infrautils-inject
afc11ac7 INFRAUTILS-65 : Split out ready-guice
924aa272 INFRAUTILS-65 : Do not pollute annotations to runtime
355dda90 : Fix hangs in integration tests
8ba93b740 NETCONF-312 : Return Location in resp header for notif subscrip
ebe6e071c : Bump odlparent/yangtools/mdsal to 5.0.11/3.0.16/4.0.17
0f42b1589 : Expand rsa-ssh2 signatures
60534520f NETCONF-700 : Fix missing stream leaf value
40fcd1009 NETCONF-696 : Fix Nested YANG 1.1 Action invocation
7d79923bd : Bump mina-sshd to 2.5.1
890c30119 NETCONF-603 : Bind operation prefix to correct namespace
d56cfd2f8 : Add rsa-sha2 signatures to default client
512463195 NETCONF-682 : Report HTTP status 409 on DATA_MISSING error
599966a95 NETCONF-666 : Handle multiple rpc-error in the same rpc-reply
5ce9c1989 NETCONF-702 : Revert “Fix nested YANG 1.1 Action invocation”
2b148710e NETCONF-694 : Use censor attribute for CLI commands
fce007c3b NETCONF-696 : Fix nested YANG 1.1 Action invocation
4a6984171 NETCONF-685 : Correctly propagate ‘pageNum’ query parameter
About this Document¶
This document is a verbatim copy of the Mature Release Process section of the Project Lifecycle & Releases document (http://www.opendaylight.org/project-lifecycle-releases#MatureReleaseProcess), which has information about the release review document.
Both the release plan and release review document are intended to be short and simple. They are both posted publicly on the ODL wiki to assist in project coordination.
Important
When copying, remove the entire "About this Document" section and fill out the following sections. Do not remove any other section. Also, use short sentences rather than "n/a" or "none," since it is confusing to the reader whether that means there are no issues or you did not address the issue.
Project Name¶
The overview section is for users to identify and describe the features that will be used by end-users (remove this paragraph).
This release provides the following new and modified features:
Feature URL: https://git.opendaylight.org/gerrit/gitweb?p=sample.git;a=blob;f=features/src/main/features/features.xml
Feature Description: This sample feature performs various sample tasks to provide the implementation of RFC 0000.
Top Level: Yes
User Facing: Yes
Experimental: Yes
CSIT Test: https://jenkins.opendaylight.org/releng/view/sample/job/sample-csit-1node-feature-all-carbon/
The following table lists the resolved issues fixed in this release.

Key | Summary
---|---
<bug ID> | <summary>
The following table lists the known issues that exist in this release.

Key | Summary
---|---
<bug ID> | <summary>
Getting Started Guide¶
Introduction¶
The OpenDaylight project is an open source platform for Software Defined Networking (SDN) that uses open protocols to provide centralized, programmatic control and network device monitoring.
Much as your operating system provides an interface for the devices that comprise your computer, OpenDaylight provides an interface that allows you to control and manage network devices.
What’s different about OpenDaylight¶
Major distinctions of OpenDaylight’s SDN compared to other SDN options are the following:
A microservices architecture, in which a “microservice” is a particular protocol or service that a user wants to enable within their installation of the OpenDaylight controller, for example:
A plugin that provides connectivity to devices via the OpenFlow protocols (openflowplugin).
A platform service such as Authentication, Authorization, and Accounting (AAA).
A network service providing VM connectivity for OpenStack (netvirt).
Support for a wide and growing range of network protocols: OpenFlow, P4, BGP, PCEP, LISP, NETCONF, OVSDB, SNMP and more.
Model Driven Service Abstraction Layer (MD-SAL). YANG models play a key role in OpenDaylight and are used for:
Creating datastore schemas (tree based structure).
Generating application REST API (RESTCONF).
Automatic code generation (Java interfaces and Data Transfer Objects).
OpenDaylight concepts and tools¶
In this section we discuss some of the concepts and tools you encounter with basic use of OpenDaylight. The guide walks you through the installation process in a subsequent section, but for now familiarize yourself with the information below.
To date, OpenDaylight developers have formed more than 50 projects to address ways to extend network functionality. The projects are a formal structure for developers from the community to meet, document release plans, code, and release the functionality they create in an OpenDaylight release.
The typical OpenDaylight user will not join a project team, but you should know what projects are as we refer to their activities and the functionality they create. The Karaf features to install that functionality often share the project team’s name.
Apache Karaf provides a lightweight runtime to install the Karaf features you want to implement and is included in the OpenDaylight platform software. By default, OpenDaylight has no pre-installed features.
Features and feature repositories can be managed in the Karaf configuration file etc/org.apache.karaf.features.cfg using the featuresRepositories and featuresBoot variables, as sketched below.
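A minimal sketch of these entries (the repository URL and feature list here are illustrative placeholders, not a recommended configuration):

# etc/org.apache.karaf.features.cfg (excerpt, illustrative values)
featuresRepositories = mvn:org.opendaylight.integration/features-index/<version>/xml/features
featuresBoot = odl-restconf, odl-mdsal-clustering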
Model-Driven Service Abstraction Layer (MD-SAL) is the OpenDaylight framework that allows developers to create new Karaf features in the form of services and protocol drivers and connects them to one another. You can think of the MD-SAL as having the following two components:
A shared datastore that maintains the following tree-based structures:
The Config Datastore, which maintains a representation of the desired network state.
The Operational Datastore, which is a representation of the actual network state based on data from the managed network elements.
A message bus that provides a way for the various services and protocol drivers to notify and communicate with one another.
If you’re interacting with OpenDaylight through the REST APIs while using the OpenDaylight interfaces, the microservices architecture allows you to select available services, protocols, and REST APIs.
Installing OpenDaylight¶
You complete the following steps to install your networking environment, with specific instructions provided in the subsections below.
Before detailing the instructions for these, we address the following:

Java Runtime Environment (JRE) and operating system information
Target environment
Known issues and limitations
Install OpenDaylight¶
Downloading and installing OpenDaylight¶
The default distribution can be found on the OpenDaylight software download page: https://docs.opendaylight.org/en/latest/downloads.html
The Karaf distribution has no features enabled by default. However, all of the features are available to be installed.
Note
For compatibility reasons, you cannot enable all the features simultaneously. We try to document known incompatibilities in the Install the Karaf features section below.
To run the Karaf distribution:
Unzip the zip file.
Navigate to the directory.
Run ./bin/karaf.

For example:
$ ls karaf-0.8.x-Oxygen.zip
karaf-0.8.x-Oxygen.zip
$ unzip karaf-0.8.x-Oxygen.zip
Archive: karaf-0.8.x-Oxygen.zip
creating: karaf-0.8.x-Oxygen/
creating: karaf-0.8.x-Oxygen/configuration/
creating: karaf-0.8.x-Oxygen/data/
creating: karaf-0.8.x-Oxygen/data/tmp/
creating: karaf-0.8.x-Oxygen/deploy/
creating: karaf-0.8.x-Oxygen/etc/
creating: karaf-0.8.x-Oxygen/externalapps/
...
inflating: karaf-0.8.x-Oxygen/bin/start.bat
inflating: karaf-0.8.x-Oxygen/bin/status.bat
inflating: karaf-0.8.x-Oxygen/bin/stop.bat
$ cd karaf-0.8.x-Oxygen
$ ./bin/karaf
________                       ________                .__  .__       .__     __
\_____  \ ______   ____   ____ \______ \ _____  ___.__.|  | |__| ____ |  |___/  |_
 /   |   \\____ \_/ __ \ /    \ |    |  \\__  \<   |  ||  | |  |/ ___\|  |  \   __\
/    |    \  |_> >  ___/|   |  \|    `   \/ __ \\___  ||  |_|  / /_/ >   Y  \  |
\_______  /   __/ \___  >___|  /_______  (____  / ____||____/__\___  /|___|  /__|
        \/|__|        \/     \/        \/     \/\/            /_____/      \/
Press tab for a list of available commands.
Typing [cmd] --help will show help for a specific command.
Press ctrl-d or type system:shutdown or logout to shut down OpenDaylight.
Note
Please take a look at the Deployment Recommendations and following sections under Security Considerations if you’re planning on running OpenDaylight outside of an isolated test lab environment.
Install the Karaf features¶
To install a feature, use the following command, where feature1 is the feature name listed in the table below:
feature:install <feature1>
You can install multiple features using the following command:
feature:install <feature1> <feature2> ... <featureN>
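For example, a sketch that installs RESTCONF northbound support together with clustered datastores (both features are referenced elsewhere in this guide):

feature:install odl-restconf odl-mdsal-clustering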
Note
For compatibility reasons, you cannot enable all Karaf features simultaneously. The table below documents feature installation names and known incompatibilities. Compatibility values indicate the following:
all - the feature can be run with other features.
self+all - the feature can be installed with other features with a value of all, but may interact badly with other features that have a value of self+all. Not every combination has been tested.
Uninstalling features¶
To uninstall a feature, you must shut down OpenDaylight, delete the data directory, and start OpenDaylight up again.
Important
Uninstalling a feature using the Karaf feature:uninstall command is not supported and can cause unexpected and undesirable behavior.
Listing available features¶
To find the complete list of Karaf features, run the following command:
feature:list
To list the installed Karaf features, run the following command:
feature:list -i
The description of these features is in the Project-specific Release Notes section.
Karaf running on Windows 10¶
Windows 10 cannot be identified by Karaf (Equinox). The issue occurs during installation of Karaf features, e.g.:
opendaylight-user@root>feature:install odl-restconf
Error executing command: Can't install feature odl-restconf/0.0.0:
Could not start bundle mvn:org.fusesource.leveldbjni/leveldbjni-all/1.8-odl in feature(s) odl-akka-leveldb-0.7: The bundle "org.fusesource.leveldbjni.leveldbjni-all_1.8.0 [300]" could not be resolved. Reason: No match found for native code: META-INF/native/windows32/leveldbjni.dll; processor=x86; osname=Win32, META-INF/native/windows64/leveldbjni.dll; processor=x86-64; osname=Win32, META-INF/native/osx/libleveldbjni.jnilib; processor=x86; osname=macosx, META-INF/native/osx/libleveldbjni.jnilib; processor=x86-64; osname=macosx, META-INF/native/linux32/libleveldbjni.so; processor=x86; osname=Linux, META-INF/native/linux64/libleveldbjni.so; processor=x86-64; osname=Linux, META-INF/native/sunos64/amd64/libleveldbjni.so; processor=x86-64; osname=SunOS, META-INF/native/sunos64/sparcv9/libleveldbjni.so; processor=sparcv9; osname=SunOS
The workaround is to add

org.osgi.framework.os.name = Win32

to the Karaf file etc/system.properties. The workaround and further information can be found in this thread: https://stackoverflow.com/questions/35679852/karaf-exception-is-thrown-while-installing-org-fusesource-leveldbjni
Setting Up Clustering¶
Clustering Overview¶
Clustering is a mechanism that enables multiple processes and programs to work together as one entity. For example, when you search for something on google.com, it may seem like your search request is processed by only one web server. In reality, your search request is processed by many web servers connected in a cluster. Similarly, you can have multiple instances of OpenDaylight working together as one entity.
Advantages of clustering are:
Scaling: If you have multiple instances of OpenDaylight running, you can potentially do more work and store more data than you could with only one instance. You can also break up your data into smaller chunks (shards) and either distribute that data across the cluster or perform certain operations on certain members of the cluster.
High Availability: If you have multiple instances of OpenDaylight running and one of them crashes, you will still have the other instances working and available.
Data Persistence: You will not lose any data stored in OpenDaylight after a manual restart or a crash.
The following sections describe how to set up clustering on both individual and multiple OpenDaylight instances.
Multiple Node Clustering¶
The following sections describe how to set up multiple node clusters in OpenDaylight.
Deployment Considerations¶
To implement clustering, the deployment considerations are as follows:
To set up a cluster with multiple nodes, we recommend that you use a minimum of three machines. You can set up a cluster with just two nodes. However, if one of the two nodes fails, the cluster will not be operational.
Note
This is because clustering in OpenDaylight requires a majority of the nodes to be up and one node cannot be a majority of two nodes.
Every device that belongs to a cluster needs to have an identifier. OpenDaylight uses the node's role for this purpose. After you define the first node's role as member-1 in the akka.conf file, OpenDaylight uses member-1 to identify that node.

Data shards are used to contain all or a certain segment of an OpenDaylight MD-SAL datastore. For example, one shard can contain all the inventory data while another shard contains all of the topology data.
If you do not specify a module in the modules.conf file and do not specify a shard in module-shards.conf, then (by default) all the data is placed in the default shard (which must also be defined in the module-shards.conf file). Each shard has replicas configured. You can specify the details of where the replicas reside in the module-shards.conf file.

If you have a three node cluster and would like to be able to tolerate any single node crashing, a replica of every defined data shard must be running on all three cluster nodes.
Note
This is because OpenDaylight’s clustering implementation requires a majority of the defined shard replicas to be running in order to function. If you define data shard replicas on two of the cluster nodes and one of those nodes goes down, the corresponding data shards will not function.
If you have a three node cluster and have defined replicas for a data shard on each of those nodes, that shard will still function even if only two of the cluster nodes are running. Note that if one of those remaining two nodes goes down, the shard will not be operational.
It is recommended that you have multiple seed nodes configured. After a cluster member is started, it sends a message to all of its seed nodes. The cluster member then sends a join command to the first seed node that responds. If none of its seed nodes reply, the cluster member repeats this process until it successfully establishes a connection or it is shut down.
After a node becomes unreachable, it remains down for a configurable period of time (10 seconds, by default). Once a node goes down, you need to restart it so that it can rejoin the cluster. Once a restarted node joins a cluster, it will synchronize with the lead node automatically.
Clustering Scripts¶
OpenDaylight includes some scripts to help with the clustering configuration.
Note
Scripts are stored in the OpenDaylight distribution/bin folder, and maintained in the distribution project repository in the folder distribution-karaf/src/main/assembly/bin/.
Configure Cluster Script¶
This script is used to configure the cluster parameters (e.g. akka.conf, module-shards.conf) on a member of the controller cluster. The user should restart the node to apply the changes.
Note
The script can be used at any time, even before the controller is started for the first time.
Usage:
bin/configure_cluster.sh <index> <seed_nodes_list>
index: Integer within 1..N, where N is the number of seed nodes. This indicates which controller node (1..N) is configured by the script.
seed_nodes_list: List of seed nodes (IP address), separated by comma or space.
The IP address at the provided index should belong to the member executing the script. When running this script on multiple seed nodes, keep the seed_nodes_list the same, and vary the index from 1 through N.
Optionally, shards can be configured in a more granular way by modifying the file “custom_shard_configs.txt” in the same folder as this tool. Please see that file for more details.
Example:
bin/configure_cluster.sh 2 192.168.0.1 192.168.0.2 192.168.0.3
The above command will configure member 2 (IP address 192.168.0.2) of a cluster made up of nodes 192.168.0.1, 192.168.0.2 and 192.168.0.3.
Setting Up a Multiple Node Cluster¶
To run OpenDaylight in a three node cluster, perform the following:
First, determine the three machines that will make up the cluster. After that, do the following on each machine:
Copy the OpenDaylight distribution zip file to the machine.
Unzip the distribution.
Open the following .conf files:
configuration/initial/akka.conf
configuration/initial/module-shards.conf
In each configuration file, make the following changes:
Find every instance of the following lines and replace 127.0.0.1 with the hostname or IP address of the machine on which this file resides and OpenDaylight will run:
netty.tcp { hostname = "127.0.0.1"
Note
The value you need to specify will be different for each node in the cluster.
Find the following lines and replace 127.0.0.1 with the hostname or IP address of any of the machines that will be part of the cluster:
cluster { seed-nodes = ["akka.tcp://opendaylight-cluster-data@${IP_OF_MEMBER1}:2550", <url-to-cluster-member-2>, <url-to-cluster-member-3>]
Find the following section and specify the role for each member node. Here we assign the first node with the member-1 role, the second node with the member-2 role, and the third node with the member-3 role:
roles = [ "member-1" ]
Note
This step should use a different role on each node.
Open the configuration/initial/module-shards.conf file and update the replicas so that each shard is replicated to all three nodes:
replicas = [ "member-1", "member-2", "member-3" ]
For reference, view the sample config files below.
Move into the <karaf-distribution-directory>/bin directory.
Run the following command:
JAVA_MAX_MEM=4G JAVA_MAX_PERM_MEM=512m ./karaf
Enable clustering by running the following command at the Karaf command line:
feature:install odl-mdsal-clustering
OpenDaylight should now be running in a three node cluster. You can use any of the three member nodes to access the data residing in the datastore.
Sample akka.conf
file:
odl-cluster-data {
bounded-mailbox {
mailbox-type = "org.opendaylight.controller.cluster.common.actor.MeteredBoundedMailbox"
mailbox-capacity = 1000
mailbox-push-timeout-time = 100ms
}
metric-capture-enabled = true
akka {
loglevel = "DEBUG"
loggers = ["akka.event.slf4j.Slf4jLogger"]
actor {
provider = "akka.cluster.ClusterActorRefProvider"
serializers {
java = "akka.serialization.JavaSerializer"
proto = "akka.remote.serialization.ProtobufSerializer"
}
serialization-bindings {
"com.google.protobuf.Message" = proto
}
}
remote {
log-remote-lifecycle-events = off
netty.tcp {
hostname = "10.194.189.96"
port = 2550
maximum-frame-size = 419430400
send-buffer-size = 52428800
receive-buffer-size = 52428800
}
}
cluster {
seed-nodes = ["akka.tcp://opendaylight-cluster-data@10.194.189.96:2550",
"akka.tcp://opendaylight-cluster-data@10.194.189.98:2550",
"akka.tcp://opendaylight-cluster-data@10.194.189.101:2550"]
auto-down-unreachable-after = 10s
roles = [
"member-2"
]
}
}
}
odl-cluster-rpc {
bounded-mailbox {
mailbox-type = "org.opendaylight.controller.cluster.common.actor.MeteredBoundedMailbox"
mailbox-capacity = 1000
mailbox-push-timeout-time = 100ms
}
metric-capture-enabled = true
akka {
loglevel = "INFO"
loggers = ["akka.event.slf4j.Slf4jLogger"]
actor {
provider = "akka.cluster.ClusterActorRefProvider"
}
remote {
log-remote-lifecycle-events = off
netty.tcp {
hostname = "10.194.189.96"
port = 2551
}
}
cluster {
seed-nodes = ["akka.tcp://opendaylight-cluster-rpc@10.194.189.96:2551"]
auto-down-unreachable-after = 10s
}
}
}
Sample module-shards.conf
file:
module-shards = [
{
name = "default"
shards = [
{
name="default"
replicas = [
"member-1",
"member-2",
"member-3"
]
}
]
},
{
name = "topology"
shards = [
{
name="topology"
replicas = [
"member-1",
"member-2",
"member-3"
]
}
]
},
{
name = "inventory"
shards = [
{
name="inventory"
replicas = [
"member-1",
"member-2",
"member-3"
]
}
]
},
{
name = "toaster"
shards = [
{
name="toaster"
replicas = [
"member-1",
"member-2",
"member-3"
]
}
]
}
]
Cluster Monitoring¶
OpenDaylight exposes shard information via MBeans, which can be explored with
JConsole, VisualVM, or other JMX clients, or exposed via a REST API using
Jolokia, provided by the
odl-jolokia
Karaf feature. This is convenient, due to a significant focus
on REST in OpenDaylight.
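If the feature is not already present, it can be installed from the Karaf console like any other feature:

feature:install odl-jolokia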
The basic URI, which lists a schema of all available MBeans but not their content, is:
GET /jolokia/list
To read the information about the shards local to the queried OpenDaylight instance use the following REST calls. For the config datastore:
GET /jolokia/read/org.opendaylight.controller:type=DistributedConfigDatastore,Category=ShardManager,name=shard-manager-config
For the operational datastore:
GET /jolokia/read/org.opendaylight.controller:type=DistributedOperationalDatastore,Category=ShardManager,name=shard-manager-operational
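As a concrete sketch, assuming the default web port 8181 and the default admin:admin credentials, the operational shard manager can be queried with curl:

curl -u admin:admin http://localhost:8181/jolokia/read/org.opendaylight.controller:type=DistributedOperationalDatastore,Category=ShardManager,name=shard-manager-operational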
The output contains information on shards present on the node:
{
"request": {
"mbean": "org.opendaylight.controller:Category=ShardManager,name=shard-manager-operational,type=DistributedOperationalDatastore",
"type": "read"
},
"value": {
"LocalShards": [
"member-1-shard-default-operational",
"member-1-shard-entity-ownership-operational",
"member-1-shard-topology-operational",
"member-1-shard-inventory-operational",
"member-1-shard-toaster-operational"
],
"SyncStatus": true,
"MemberName": "member-1"
},
"timestamp": 1483738005,
"status": 200
}
The exact names from the “LocalShards” lists are needed for further
exploration, as they will be used as part of the URI to look up detailed info
on a particular shard. An example output for the
member-1-shard-default-operational
looks like this:
{
"request": {
"mbean": "org.opendaylight.controller:Category=Shards,name=member-1-shard-default-operational,type=DistributedOperationalDatastore",
"type": "read"
},
"value": {
"ReadWriteTransactionCount": 0,
"SnapshotIndex": 4,
"InMemoryJournalLogSize": 1,
"ReplicatedToAllIndex": 4,
"Leader": "member-1-shard-default-operational",
"LastIndex": 5,
"RaftState": "Leader",
"LastCommittedTransactionTime": "2017-01-06 13:19:00.135",
"LastApplied": 5,
"LastLeadershipChangeTime": "2017-01-06 13:18:37.605",
"LastLogIndex": 5,
"PeerAddresses": "member-3-shard-default-operational: akka.tcp://opendaylight-cluster-data@192.168.16.3:2550/user/shardmanager-operational/member-3-shard-default-operational, member-2-shard-default-operational: akka.tcp://opendaylight-cluster-data@192.168.16.2:2550/user/shardmanager-operational/member-2-shard-default-operational",
"WriteOnlyTransactionCount": 0,
"FollowerInitialSyncStatus": false,
"FollowerInfo": [
{
"timeSinceLastActivity": "00:00:00.320",
"active": true,
"matchIndex": 5,
"voting": true,
"id": "member-3-shard-default-operational",
"nextIndex": 6
},
{
"timeSinceLastActivity": "00:00:00.320",
"active": true,
"matchIndex": 5,
"voting": true,
"id": "member-2-shard-default-operational",
"nextIndex": 6
}
],
"FailedReadTransactionsCount": 0,
"StatRetrievalTime": "810.5 μs",
"Voting": true,
"CurrentTerm": 1,
"LastTerm": 1,
"FailedTransactionsCount": 0,
"PendingTxCommitQueueSize": 0,
"VotedFor": "member-1-shard-default-operational",
"SnapshotCaptureInitiated": false,
"CommittedTransactionsCount": 6,
"TxCohortCacheSize": 0,
"PeerVotingStates": "member-3-shard-default-operational: true, member-2-shard-default-operational: true",
"LastLogTerm": 1,
"StatRetrievalError": null,
"CommitIndex": 5,
"SnapshotTerm": 1,
"AbortTransactionsCount": 0,
"ReadOnlyTransactionCount": 0,
"ShardName": "member-1-shard-default-operational",
"LeadershipChangeCount": 1,
"InMemoryJournalDataSize": 450
},
"timestamp": 1483740350,
"status": 200
}
The output helps identify shard state (leader/follower, voting/non-voting), peers, follower details if the shard is a leader, and other statistics/counters.
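The read URI for a particular shard can be derived from the mbean field in the sample output above, for example:

GET /jolokia/read/org.opendaylight.controller:Category=Shards,name=member-1-shard-default-operational,type=DistributedOperationalDatastore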
The ODLTools team maintains a Python-based tool that takes advantage of the above MBeans exposed via Jolokia.
Geo-distributed Active/Backup Setup¶
An OpenDaylight cluster works best when the latency between the nodes is very small, which practically means they should be in the same datacenter. It is, however, desirable to be able to fail over to a different datacenter in case all nodes become unreachable. To achieve that, the cluster can be expanded with nodes in a different datacenter, but in a way that does not affect the latency of the primary nodes. To do that, shards in the backup nodes must be in the "non-voting" state.
The API to manipulate voting states on shards is defined as RPCs in the cluster-admin.yang file in the controller project, which is well documented. A summary is provided below.
Note
Unless otherwise indicated, the below POST requests are to be sent to any single cluster node.
To create an active/backup setup with a 6 node cluster (3 active and 3 backup nodes in two locations) there is an RPC to set voting states of all shards on a list of nodes to a given state:
POST /restconf/operations/cluster-admin:change-member-voting-states-for-all-shards
This RPC needs the list of nodes and the desired voting state as input. For creating the backup nodes, this example input can be used:
{
"input": {
"member-voting-state": [
{
"member-name": "member-4",
"voting": false
},
{
"member-name": "member-5",
"voting": false
},
{
"member-name": "member-6",
"voting": false
}
]
}
}
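Putting the RPC and its input together, a hedged curl invocation (assuming the default port 8181, admin:admin credentials, and the input above saved to a hypothetical file backup-nodes.json) might look like:

curl -u admin:admin -H "Content-Type: application/json" -X POST -d @backup-nodes.json http://localhost:8181/restconf/operations/cluster-admin:change-member-voting-states-for-all-shards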
When an active/backup deployment already exists, with shards on the backup nodes in non-voting state, all that is needed for a fail-over from the active “sub-cluster” to backup “sub-cluster” is to flip the voting state of each shard (on each node, active AND backup). That can be easily achieved with the following RPC call (no parameters needed):
POST /restconf/operations/cluster-admin:flip-member-voting-states-for-all-shards
If it is an unplanned outage where the primary voting nodes are down, the "flip" RPC must be sent to a backup non-voting node. In this case there are no shard leaders to carry out the voting changes. However, there is a special case: if the node that receives the RPC is non-voting, is to be changed to voting, and there is no leader, it will apply the voting changes locally and attempt to become the leader. If successful, it persists the voting changes and replicates them to the remaining nodes.
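A sketch of the corresponding call, under the same port and credential assumptions as above (no request body is needed):

curl -u admin:admin -X POST http://localhost:8181/restconf/operations/cluster-admin:flip-member-voting-states-for-all-shards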
When the primary site is fixed and you want to fail back to it, care must be
taken when bringing the site back up. Because it was down when the voting
states were flipped on the secondary, its persisted database won’t contain
those changes. If brought back up in that state, the nodes will think they’re
still voting. If the nodes have connectivity to the secondary site, they
should follow the leader in the secondary site and sync with it. However if
this does not happen then the primary site may elect its own leader thereby
partitioning the 2 clusters, which can lead to undesirable results. Therefore
it is recommended to either clean the databases (i.e., journal
and
snapshots
directory) on the primary nodes before bringing them back up or
restore them from a recent backup of the secondary site (see section
Backing Up and Restoring the Datastore).
It is also possible to gracefully remove a node from a cluster, with the following RPC:
POST /restconf/operations/cluster-admin:remove-all-shard-replicas
and example input:
{
"input": {
"member-name": "member-1"
}
}
or just one particular shard:
POST /restconf/operations/cluster-admin:remove-shard-replica
with example input:
{
"input": {
"shard-name": "default",
"member-name": "member-2",
"data-store-type": "config"
}
}
Now that a (potentially dead/unrecoverable) node has been removed, another one can be added at runtime, without changing the configuration files of the healthy nodes (and without requiring a reboot):
POST /restconf/operations/cluster-admin:add-replicas-for-all-shards
No input required, but this RPC needs to be sent to the new node, to instruct it to replicate all shards from the cluster.
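For example, if the new node is reachable at 10.0.0.4 (an illustrative address), the call targets that node directly:

curl -u admin:admin -X POST http://10.0.0.4:8181/restconf/operations/cluster-admin:add-replicas-for-all-shards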
Note
While the cluster admin API allows adding and removing shards dynamically, the module-shards.conf and modules.conf files are still used on startup to define the initial configuration of shards. Modifications from the use of the API are not stored to those static files, but to the journal.
Extra Configuration Options¶
Name | Type | Default | Description
---|---|---|---
max-shard-data-change-executor-queue-size | uint32 (1..max) | 1000 | The maximum queue size for each shard's data store data change notification executor.
max-shard-data-change-executor-pool-size | uint32 (1..max) | 20 | The maximum thread pool size for each shard's data store data change notification executor.
max-shard-data-change-listener-queue-size | uint32 (1..max) | 1000 | The maximum queue size for each shard's data store data change listener.
max-shard-data-store-executor-queue-size | uint32 (1..max) | 5000 | The maximum queue size for each shard's data store executor.
shard-transaction-idle-timeout-in-minutes | uint32 (1..max) | 10 | The maximum amount of time a shard transaction can be idle without receiving any messages before it self-destructs.
shard-snapshot-batch-count | uint32 (1..max) | 20000 | The minimum number of entries to be present in the in-memory journal log before a snapshot is to be taken.
shard-snapshot-data-threshold-percentage | uint8 (1..100) | 12 | The percentage of Runtime.totalMemory() used by the in-memory journal log before a snapshot is to be taken.
shard-heartbeat-interval-in-millis | uint16 (100..max) | 500 | The interval at which a shard will send a heartbeat message to its remote shard.
operation-timeout-in-seconds | uint16 (5..max) | 5 | The maximum amount of time for akka operations (remote or local) to complete before failing.
shard-journal-recovery-log-batch-size | uint32 (1..max) | 5000 | The maximum number of journal log entries to batch on recovery for a shard before committing to the data store.
shard-transaction-commit-timeout-in-seconds | uint32 (1..max) | 30 | The maximum amount of time a shard transaction three-phase commit can be idle without receiving the next message before it aborts the transaction.
shard-transaction-commit-queue-capacity | uint32 (1..max) | 20000 | The maximum allowed capacity for each shard's transaction commit queue.
shard-initialization-timeout-in-seconds | uint32 (1..max) | 300 | The maximum amount of time to wait for a shard to initialize from persistence on startup before failing an operation (e.g. transaction create and change listener registration).
shard-leader-election-timeout-in-seconds | uint32 (1..max) | 30 | The maximum amount of time to wait for a shard to elect a leader before failing an operation (e.g. transaction create).
enable-metric-capture | boolean | false | Enable or disable metric capture.
bounded-mailbox-capacity | uint32 (1..max) | 1000 | The maximum queue size that an actor's mailbox can reach.
persistent | boolean | true | Enable or disable data persistence.
shard-isolated-leader-check-interval-in-millis | uint32 (1..max) | 5000 | The interval at which the shard leader checks whether a majority of its followers are active, terming itself isolated if not.
These configuration options are included in the etc/org.opendaylight.controller.cluster.datastore.cfg configuration file.
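A minimal sketch of that file, using the documented defaults from the table above (shown for illustration only, not as tuning advice):

# etc/org.opendaylight.controller.cluster.datastore.cfg (excerpt)
persistent=true
shard-heartbeat-interval-in-millis=500
operation-timeout-in-seconds=5
shard-transaction-commit-timeout-in-seconds=30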
Persistence and Backup¶
Set Persistence Script¶
This script is used to enable or disable the config datastore persistence. The default state is enabled but there are cases where persistence may not be required or even desired. The user should restart the node to apply the changes.
Note
The script can be used at any time, even before the controller is started for the first time.
Usage:
bin/set_persistence.sh <on/off>
Example:
bin/set_persistence.sh off
The above command will disable the config datastore persistence.
Backing Up and Restoring the Datastore¶
The same cluster-admin API described in the cluster guide for managing shard voting states has an RPC allowing backup of the datastore in a single node, taking only the file name as a parameter:
POST /restconf/operations/cluster-admin:backup-datastore
RPC input JSON:
{
"input": {
"file-path": "/tmp/datastore_backup"
}
}
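Assuming the default port 8181 and admin:admin credentials, the backup can be triggered with curl:

curl -u admin:admin -H "Content-Type: application/json" -X POST -d '{"input": {"file-path": "/tmp/datastore_backup"}}' http://localhost:8181/restconf/operations/cluster-admin:backup-datastore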
Note
This backup can only be restored if the YANG models of the backed-up data are identical in the backup OpenDaylight instance and restore target instance.
To restore the backup on the target node the file needs to be placed into the
$KARAF_HOME/clustered-datastore-restore
directory, and then the node
restarted. If the directory does not exist (which is quite likely if this is a
first-time restore) it needs to be created. On startup, ODL checks if the
journal
and snapshots
directories in $KARAF_HOME
are empty, and
only then tries to read the contents of the clustered-datastore-restore
directory, if it exists. So for a successful restore, those two directories
should be empty. The backup file name itself does not matter, and the startup
process will delete it after a successful restore.
The backup is node independent, so when restoring a 3 node cluster, it is best to restore it on each node for consistency. For example, if restoring on one node only, it can happen that the other two empty nodes form a majority and the cluster comes up with no data.
Security Considerations¶
This document discusses the various security issues that might affect OpenDaylight. The document also lists specific recommendations to mitigate security risks.
This document also contains information about the corrective steps you can take if you discover a security issue with OpenDaylight, and if necessary, contact the Security Response Team, which is tasked with identifying and resolving security threats.
Overview of OpenDaylight Security¶
There are many different kinds of security vulnerabilities that could affect an OpenDaylight deployment, but this guide focuses on those where (a) the servers, virtual machines or other devices running OpenDaylight have been properly physically (or virtually in the case of VMs) secured against untrusted individuals and (b) individuals who have access, either via remote logins or physically, will not attempt to attack or subvert the deployment intentionally or otherwise.
While those attack vectors are real, they are out of the scope of this document.
What remains in scope is attacks launched from a server, virtual machine, or device other than the one running OpenDaylight where the attack does not have valid credentials to access the OpenDaylight deployment.
The rest of this document gives specific recommendations for deploying OpenDaylight in a secure manner, but first we highlight some high-level security advantages of OpenDaylight.
Separating the control and management planes from the data plane (both logically and, in many cases, physically) allows possible security threats to be forced into a smaller attack surface.
Having centralized information and network control gives network administrators more visibility and control over the entire network, enabling them to make better decisions faster. At the same time, centralization of network control can be an advantage only if access to that control is secure.
Note
While both previous advantages improve security, they also make an OpenDaylight deployment an attractive target for attack, making it even more important to understand these security considerations.
The ability to more rapidly evolve southbound protocols and how they are used provides more and faster mechanisms to enact appropriate security mitigations and remediations.
OpenDaylight is built from OSGi bundles and the Karaf Java container. Both Karaf and OSGi provide some level of isolation with explicit code boundaries, package imports, package exports, and other security-related features.
OpenDaylight has a history of rapidly addressing known vulnerabilities and a well-defined process for reporting and dealing with them.
OpenDaylight Security Resources¶
If you have any security issues, you can send a mail to security@lists.opendaylight.org.
For the list of current OpenDaylight security issues that are either being fixed or resolved, refer to https://wiki-archive.opendaylight.org/view/Security:Advisories.
To learn more about the OpenDaylight security issues policies and procedure, refer to https://wiki-archive.opendaylight.org/view/Security:Main
Deployment Recommendations¶
We recommend that you follow the deployment guidelines in setting up OpenDaylight to minimize security threats.
The default credentials should be changed before deploying OpenDaylight.
OpenDaylight should be deployed in a private network that cannot be accessed from the internet.
Separate the data network (that connects devices using the network) from the management network (that connects the network devices to OpenDaylight).
Note
Deploying OpenDaylight on a separate, private management network does not eliminate threats, but only mitigates them. By construction, some messages must flow from the data network to the management network, e.g., OpenFlow packet_in messages, and these create an attack surface even if it is a small one.
Implement an authentication policy for devices that connect to both the data and management network. These are the devices which bridge, likely untrusted, traffic from the data network to the management network.
Securing OSGi bundles¶
OSGi is a Java-specific framework that improves the way that Java classes interact within a single JVM. It provides an enhanced version of the java.lang.SecurityManager (ConditionalPermissionAdmin) in terms of security.
Java provides a security framework that allows a security policy to grant permissions, such as reading a file or opening a network connection, to specific code. The code may be classes from a JAR file loaded from a specific URL, or a class signed by a specific key. OSGi builds on the standard Java security model to add the following features:
A set of OSGi-specific permission types, such as one that grants the right to register an OSGi service or get an OSGi service from the service registry.
The ability to dynamically modify permissions at runtime. This includes the ability to specify permissions by using code rather than a text configuration file.
A flexible predicate-based approach to determining which rules are applicable to which ProtectionDomain. This approach is much more powerful than the standard Java security policy which can only grant rights based on a jarfile URL or class signature. A few standard predicates are provided, including selecting rules based upon bundle symbolic-name.
Support for bundle local permissions policies with optional further constraints such as DENY operations.

Most of this functionality is accessed by using the OSGi ConditionalPermissionAdmin service, which is part of the OSGi core and can be obtained from the OSGi service registry. The ConditionalPermissionAdmin API replaces the earlier PermissionAdmin API.
For more information, refer to https://www.osgi.org
Securing the Karaf container¶
Apache Karaf is an OSGi-based runtime platform which provides a lightweight container for OpenDaylight and applications. Apache Karaf uses either the Apache Felix or Eclipse Equinox OSGi framework, and provides additional features on top of the framework.
Apache Karaf provides a security framework based on Java Authentication and Authorization Service (JAAS) in compliance with OSGi recommendations, while providing RBAC (Role-Based Access Control) mechanism for the console and Java Management Extensions (JMX).
The Apache Karaf security framework is used internally to control the access to the following components:
OSGi services
console commands
JMX layer
WebConsole
Remote management capabilities are present in Apache Karaf by default; however, they can be disabled through various configuration alterations. These configuration options may be applied to the OpenDaylight Karaf distribution.
Note
Refer to the following list of publications for more information on implementing security for the Karaf container.
For role-based JMX administration, refer to https://karaf.apache.org/manual/latest/#_monitoring
For remote SSH access configuration, refer to https://karaf.apache.org/manual/latest/#_remote
For WebConsole access, refer to https://karaf.apache.org/manual/latest/#_webconsole
For Karaf security features, refer to https://karaf.apache.org/manual/latest/#_security_framework
Disabling the remote shutdown port¶
You can lock down your deployment post installation. Set

karaf.shutdown.port=-1

in etc/custom.properties or etc/config.properties to disable the remote shutdown port.
Securing Southbound Plugins¶
Many individual southbound plugins provide mechanisms to secure their communication with network devices. For example, the OpenFlow plugin supports TLS connections with bi-directional authentication and the NETCONF plugin supports connecting over SSH. Meanwhile, the Unified Secure Channel plugin provides a way to form secure, remote connections for supported devices.
When deploying OpenDaylight, you should carefully investigate the secure mechanisms to connect to devices using the relevant plugins.
Securing OpenDaylight using AAA¶
AAA stands for Authentication, Authorization, and Accounting. All three of these services can help improve the security posture of an OpenDaylight deployment.
The vast majority of OpenDaylight's northbound APIs (and all RESTCONF APIs) are protected by AAA by default when installing the odl-restconf feature. In the cases that APIs are not protected by AAA, this will be noted in the per-project release notes.
By default, OpenDaylight has only one user account with the username and password admin. This should be changed before deploying OpenDaylight.
Securing RESTCONF using HTTPS¶
To secure Jetty RESTful services, including RESTCONF, you must configure the Jetty server to utilize SSL by performing the following steps.
Issue the following command sequence to create a self-signed certificate for use by the ODL deployment.
keytool -keystore .keystore -alias jetty -genkey -keyalg RSA
Enter keystore password: 123456
What is your first and last name?
  [Unknown]: odl
What is the name of your organizational unit?
  [Unknown]: odl
What is the name of your organization?
  [Unknown]: odl
What is the name of your City or Locality?
  [Unknown]:
What is the name of your State or Province?
  [Unknown]:
What is the two-letter country code for this unit?
  [Unknown]:
Is CN=odl, OU=odl, O=odl, L=Unknown, ST=Unknown, C=Unknown correct?
  [no]: yes
After the key has been obtained, make the following changes to the etc/custom.properties file to set a few default properties:

org.osgi.service.http.secure.enabled=true
org.osgi.service.http.port.secure=8443
org.ops4j.pax.web.ssl.keystore=./etc/.keystore
org.ops4j.pax.web.ssl.password=123456
org.ops4j.pax.web.ssl.keypassword=123456
Then edit the etc/jetty.xml file with the appropriate HTTP connectors. For example:
<?xml version="1.0"?>
<!--
 Licensed to the Apache Software Foundation (ASF) under one or more
 contributor license agreements. See the NOTICE file distributed with
 this work for additional information regarding copyright ownership.
 The ASF licenses this file to you under the Apache License, Version 2.0
 (the "License"); you may not use this file except in compliance with
 the License. You may obtain a copy of the License at

 http://www.apache.org/licenses/LICENSE-2.0

 Unless required by applicable law or agreed to in writing, software
 distributed under the License is distributed on an "AS IS" BASIS,
 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 See the License for the specific language governing permissions and
 limitations under the License.
-->
<!DOCTYPE Configure PUBLIC "-//Mort Bay Consulting//DTD Configure//EN"
 "http://jetty.mortbay.org/configure.dtd">
<Configure id="Server" class="org.eclipse.jetty.server.Server">
  <!-- Use this connector for many frequently idle connections and for
       threadless continuations. -->
  <New id="http-default" class="org.eclipse.jetty.server.HttpConfiguration">
    <Set name="secureScheme">https</Set>
    <Set name="securePort">
      <Property name="jetty.secure.port" default="8443" />
    </Set>
    <Set name="outputBufferSize">32768</Set>
    <Set name="requestHeaderSize">8192</Set>
    <Set name="responseHeaderSize">8192</Set>
    <!-- Default security setting: do not leak our version -->
    <Set name="sendServerVersion">false</Set>
    <Set name="sendDateHeader">false</Set>
    <Set name="headerCacheSize">512</Set>
  </New>
  <Call name="addConnector">
    <Arg>
      <New class="org.eclipse.jetty.server.ServerConnector">
        <Arg name="server">
          <Ref refid="Server" />
        </Arg>
        <Arg name="factories">
          <Array type="org.eclipse.jetty.server.ConnectionFactory">
            <Item>
              <New class="org.eclipse.jetty.server.HttpConnectionFactory">
                <Arg name="config">
                  <Ref refid="http-default"/>
                </Arg>
              </New>
            </Item>
          </Array>
        </Arg>
        <Set name="host">
          <Property name="jetty.host"/>
        </Set>
        <Set name="port">
          <Property name="jetty.port" default="8181"/>
        </Set>
        <Set name="idleTimeout">
          <Property name="http.timeout" default="300000"/>
        </Set>
        <Set name="name">jetty-default</Set>
      </New>
    </Arg>
  </Call>
  <!-- =========================================================== -->
  <!-- Configure Authentication Realms                             -->
  <!-- Realms may be configured for the entire server here, or     -->
  <!-- they can be configured for a specific web app in a context  -->
  <!-- configuration (see $(jetty.home)/contexts/test.xml for an   -->
  <!-- example).                                                   -->
  <!-- =========================================================== -->
  <Call name="addBean">
    <Arg>
      <New class="org.eclipse.jetty.jaas.JAASLoginService">
        <Set name="name">karaf</Set>
        <Set name="loginModuleName">karaf</Set>
        <Set name="roleClassNames">
          <Array type="java.lang.String">
            <Item>org.apache.karaf.jaas.boot.principal.RolePrincipal</Item>
          </Array>
        </Set>
      </New>
    </Arg>
  </Call>
  <Call name="addBean">
    <Arg>
      <New class="org.eclipse.jetty.jaas.JAASLoginService">
        <Set name="name">default</Set>
        <Set name="loginModuleName">karaf</Set>
        <Set name="roleClassNames">
          <Array type="java.lang.String">
            <Item>org.apache.karaf.jaas.boot.principal.RolePrincipal</Item>
          </Array>
        </Set>
      </New>
    </Arg>
  </Call>
</Configure>
The configuration snippet above adds a connector that is protected by SSL on
port 8443. You can test that the changes have succeeded by restarting Karaf,
issuing the following curl
command, and ensuring that a 2XX HTTP status
code appears in the returned message.
curl -u admin:admin -v -k https://localhost:8443/restconf/modules
Security Considerations for Clustering¶
While OpenDaylight clustering provides many benefits including high availability, scale-out performance, and data durability, it also opens a new attack surface in the form of the messages exchanged between the various instances of OpenDaylight in the cluster. In the current OpenDaylight release, these messages are neither encrypted nor authenticated, meaning that anyone with access to the management network where OpenDaylight exchanges these clustering messages can forge and/or read them. Therefore, if clustering is enabled, it is even more important that the management network be kept secure from any untrusted entities.
What to Do with OpenDaylight¶
OpenDaylight (ODL) is a modular open platform for customizing and automating networks of any size and scale.
The following section provides links to documentation with examples of OpenDaylight deployment use cases.
Note
If you are an OpenDaylight contributor, we encourage you to add links to documentation with examples of interesting OpenDaylight deployment use cases in this section.
How to Get Help¶
Users and developers can get support from the OpenDaylight community through the mailing lists, IRC and forums.
Post your question on Server Fault or Stack Overflow with the tag opendaylight.
Note
It is important to tag questions correctly to ensure that the questions reach individuals subscribed to the tag.
Mail discuss@lists.opendaylight.org or dev@lists.opendaylight.org.
Directly mail the PTL as indicated on the specific projects page.
IRC: Connect to the #opendaylight or #opendaylight-meeting channel on freenode. The Linux Foundation’s IRC guide may be helpful. You will need an IRC client, or you can use the freenode webchat; you may also like IRCCloud.
For infrastructure and release engineering queries, mail helpdesk@opendaylight.org. IRC: Connect to #lf-releng channel on freenode.
Developing Apps on the OpenDaylight controller¶
This section provides information that is required to develop apps on the OpenDaylight controller.
You can either develop apps within the controller using the model-driven SAL (MD-SAL) archetype or develop external apps and use the RESTCONF to communicate with the controller.
Overview¶
This section enables you to get started with app development within the OpenDaylight controller. In this example, you perform the following steps to develop an app.
Create a local repository for the code using a simple build process.
Start the OpenDaylight controller.
Test a simple “hello world” remote procedure call (RPC) that you create.
Prerequisites¶
This example requires the following:
A development environment with the following set up and working correctly from the shell:
Maven 3.5.2 or later
Java 8-compliant JDK
An appropriate Maven settings.xml file. A simple way to get the default OpenDaylight settings.xml file is:
cp -n ~/.m2/settings.xml{,.orig} ; wget -q -O - https://raw.githubusercontent.com/opendaylight/odlparent/master/settings.xml > ~/.m2/settings.xml
Note
If you are using Linux or Mac OS X as your development OS, your local repository is ~/.m2/repository. For other platforms the local repository location will vary.
Building an example module¶
To develop an app, perform the following steps.
Create an Example project using Maven and an archetype called the opendaylight-startup-archetype. If you are downloading this project for the first time, it will take some time to pull all the code from the remote repository.
mvn archetype:generate -DarchetypeGroupId=org.opendaylight.archetypes \
    -DarchetypeArtifactId=opendaylight-startup-archetype \
    -DarchetypeCatalog=remote -DarchetypeVersion=<VERSION>
The correct VERSION depends on desired Simultaneous Release:
Archetype versions¶

OpenDaylight Simultaneous Release      opendaylight-startup-archetype version
Sodium                                 1.2.0
Sodium SR1                             1.2.1
Sodium SR2                             1.2.2
Sodium SR3 Development                 1.2.3-SNAPSHOT
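For example, to generate a project against the Sodium SR2 release of the archetype, substitute the version from the table above:

mvn archetype:generate -DarchetypeGroupId=org.opendaylight.archetypes \
    -DarchetypeArtifactId=opendaylight-startup-archetype \
    -DarchetypeCatalog=remote -DarchetypeVersion=1.2.2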
Update the property values as follows. Ensure that the values for the groupId and the artifactId are in lower case.
Define value for property 'groupId': : org.opendaylight.example
Define value for property 'artifactId': : example
Define value for property 'version': 1.0-SNAPSHOT: : 1.0.0-SNAPSHOT
Define value for property 'package': org.opendaylight.example: :
Define value for property 'classPrefix': ${artifactId.substring(0,1).toUpperCase()}${artifactId.substring(1)}
Define value for property 'copyright': : Copyright (c) 2015 Yoyodyne, Inc.
Accept the default value of classPrefix, that is,
(${artifactId.substring(0,1).toUpperCase()}${artifactId.substring(1)}).
The classPrefix creates a Java class prefix by capitalizing the first character of the artifactId.

Note

In this scenario, the classPrefix used is “Example”.

The archetype creates a top-level directory named after the artifactId, ${artifactId}/, in this case example/. Change into it and inspect its layout:

cd example/
api/  artifacts/  features/  impl/  karaf/  pom.xml
Build the example project.
Note
Depending on your development machine’s specification, this might take a little while. Ensure that you are in the project’s root directory, example/, and then issue the build command shown below.

mvn clean install
Start the example project for the first time.
cd karaf/target/assembly/bin
ls
./karaf
Wait for the Karaf CLI prompt, which appears as follows. Then wait for OpenDaylight to fully load all of its components; this can take a minute or two after the prompt appears. Checking the CPU usage of the Java process on your development machine is a good way to tell when it settles down.
opendaylight-user@root>
Verify that the “example” module is built by searching the log for an entry that includes ExampleProvider Session Initiated.
log:display | grep Example
Shut down OpenDaylight through the console by using the following command.
shutdown -f
Defining a Simple Hello World RPC¶
Build a hello example from the Maven archetype opendaylight-startup-archetype, as described above, using hello as the artifactId.
Now view the entry point to understand where the log line came from. The entry point is in the impl project:
impl/src/main/java/org/opendaylight/hello/impl/HelloProvider.java
Add any initialization your implementation needs to the HelloProvider.init method; it is analogous to an Activator.

/**
 * Method called when the blueprint container is created.
 */
public void init() {
    LOG.info("HelloProvider Session Initiated");
}
Add a simple HelloWorld RPC API¶
Navigate to the file.
api/src/main/yang/hello.yang
Edit this file as follows. In the following example, we are adding the code in a YANG module to define the hello-world RPC:
module hello {
    yang-version 1;
    namespace "urn:opendaylight:params:xml:ns:yang:hello";
    prefix "hello";

    revision "2019-11-27" {
        description "Initial revision of hello model";
    }

    rpc hello-world {
        input {
            leaf name {
                type string;
            }
        }
        output {
            leaf greeting {
                type string;
            }
        }
    }
}
Return to the hello/api directory and build your API as follows.
cd ../../../
mvn clean install
Implement the HelloWorld RPC API¶
Define the HelloService, which is invoked through the hello-world API.
cd ../impl/src/main/java/org/opendaylight/hello/impl/
Create a new file called HelloWorldImpl.java and add in the code below.

package org.opendaylight.hello.impl;

import com.google.common.util.concurrent.ListenableFuture;
import org.opendaylight.yang.gen.v1.urn.opendaylight.params.xml.ns.yang.hello.rev191127.HelloService;
import org.opendaylight.yang.gen.v1.urn.opendaylight.params.xml.ns.yang.hello.rev191127.HelloWorldInput;
import org.opendaylight.yang.gen.v1.urn.opendaylight.params.xml.ns.yang.hello.rev191127.HelloWorldOutput;
import org.opendaylight.yang.gen.v1.urn.opendaylight.params.xml.ns.yang.hello.rev191127.HelloWorldOutputBuilder;
import org.opendaylight.yangtools.yang.common.RpcResult;
import org.opendaylight.yangtools.yang.common.RpcResultBuilder;

public class HelloWorldImpl implements HelloService {

    @Override
    public ListenableFuture<RpcResult<HelloWorldOutput>> helloWorld(HelloWorldInput input) {
        HelloWorldOutputBuilder helloBuilder = new HelloWorldOutputBuilder();
        helloBuilder.setGreeting("Hello " + input.getName());
        return RpcResultBuilder.success(helloBuilder.build()).buildFuture();
    }
}
The HelloProvider.java file is in the current directory. Register the RPC that you created in the hello.yang file in the HelloProvider.java file. You can either edit HelloProvider.java to match what is below or simply replace it with the code below.

/*
 * Copyright(c) Yoyodyne, Inc. and others. All rights reserved.
 *
 * This program and the accompanying materials are made available under the
 * terms of the Eclipse Public License v1.0 which accompanies this distribution,
 * and is available at http://www.eclipse.org/legal/epl-v10.html
 */
package org.opendaylight.hello.impl;

import org.opendaylight.mdsal.binding.api.DataBroker;
import org.opendaylight.mdsal.binding.api.RpcProviderService;
import org.opendaylight.yang.gen.v1.urn.opendaylight.params.xml.ns.yang.hello.rev191127.HelloService;
import org.opendaylight.yangtools.concepts.ObjectRegistration;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class HelloProvider {

    private static final Logger LOG = LoggerFactory.getLogger(HelloProvider.class);

    private final DataBroker dataBroker;
    private ObjectRegistration<HelloService> helloService;
    private RpcProviderService rpcProviderService;

    public HelloProvider(final DataBroker dataBroker, final RpcProviderService rpcProviderService) {
        this.dataBroker = dataBroker;
        this.rpcProviderService = rpcProviderService;
    }

    /**
     * Method called when the blueprint container is created.
     */
    public void init() {
        LOG.info("HelloProvider Session Initiated");
        helloService = rpcProviderService.registerRpcImplementation(HelloService.class, new HelloWorldImpl());
    }

    /**
     * Method called when the blueprint container is destroyed.
     */
    public void close() {
        LOG.info("HelloProvider Closed");
        if (helloService != null) {
            helloService.close();
        }
    }
}
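For reference, the archetype wires HelloProvider together through an OSGi Blueprint XML file under impl/src/main/resources/OSGI-INF/blueprint/. The following is only an illustrative sketch of what that wiring might look like; the file generated by the archetype may differ in names and details.

<?xml version="1.0" encoding="UTF-8"?>
<!-- Illustrative sketch only; the archetype-generated blueprint may differ. -->
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
           xmlns:odl="http://opendaylight.org/xmlns/blueprint/v1.0.0"
           odl:use-default-for-reference-types="true">

    <!-- Obtain MD-SAL services from the OSGi service registry -->
    <reference id="dataBroker"
               interface="org.opendaylight.mdsal.binding.api.DataBroker"/>
    <reference id="rpcProviderService"
               interface="org.opendaylight.mdsal.binding.api.RpcProviderService"/>

    <!-- Instantiate HelloProvider; init-method and destroy-method match the
         lifecycle methods shown in the code above -->
    <bean id="provider" class="org.opendaylight.hello.impl.HelloProvider"
          init-method="init" destroy-method="close">
        <argument ref="dataBroker"/>
        <argument ref="rpcProviderService"/>
    </bean>
</blueprint>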
Optionally, you can also build the Java classes which will register the new RPC. This is useful to test the edits you have made to HelloProvider.java and HelloWorldImpl.java.
cd ../../../../../../../
mvn clean install
Return to the top level directory
cd ../
Build the entire hello project again, which will pick up the changes you have made and build them into your project:
mvn clean install
Execute the hello project for the first time¶
Run karaf
cd ../karaf/target/assembly/bin
./karaf
Wait for the project to load completely. Then view the log to see the loaded Hello Module:
log:display | grep Hello
Test the hello-world RPC via REST¶
There are many ways to test your RPC. The following are some examples:
Using the API Explorer through HTTP
Using a browser REST client
Using the API Explorer through HTTP¶
Navigate to the apidoc UI with your web browser.

NOTE: In the apidoc URL, change localhost to the IP address or host name of your development machine if you are not browsing locally.
Select hello(2019-11-27)
Select POST /operations/hello:hello-world
Provide the required value.
{"hello:input": { "name":"Your Name"}}
Click the button to submit the request.
Enter the username and password. By default, the credentials are admin/admin.
In the response body, you should see:

{
    "output": {
        "greeting": "Hello Your Name"
    }
}
Using a browser REST client¶
POST: http://localhost:8181/restconf/operations/hello:hello-world
Header:
Accept: application/json
Content-Type: application/json
Authorization: Basic YWRtaW46YWRtaW4= (the base64 encoding of admin:admin)
Body:
{"input": {
"name": "Andrew"
}
}
In the response body, you should see:

{
    "output": {
        "greeting": "Hello Andrew"
    }
}
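If you prefer the command line, a roughly equivalent request can be sent with curl, assuming the default admin/admin credentials and a controller listening on localhost:8181:

curl -u admin:admin -X POST \
    -H "Content-Type: application/json" -H "Accept: application/json" \
    -d '{"input": {"name": "Andrew"}}' \
    http://localhost:8181/restconf/operations/hello:hello-world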
Troubleshooting¶
If you get a 501 response code while attempting to POST /operations/hello:hello-world, check the HelloProvider.java file and make sure the helloService member is being set. If registerRpcImplementation() is never invoked in init(), the REST API will be unable to map the /operations/hello:hello-world URL to HelloWorldImpl.
OpenDaylight Contributor Guides¶
Documentation Guide¶
This guide provides details on how to contribute to the OpenDaylight documentation. OpenDaylight currently uses reStructuredText for documentation and Sphinx to build it. These documentation tools are widely used in open source communities to produce both HTML and PDF documentation and can be easily versioned alongside the code. reStructuredText also offers similar syntax to Markdown, which is familiar to many developers.
Contents
Style Guide¶
This section serves two purposes:
A guide for those writing documentation.
A guide for those reviewing documentation.
Note
When reviewing content, assuming that the content is usable, the documentation team is biased toward merging the content rather than blocking it due to relatively minor editorial issues.
Formatting Preferences¶
In general, when reviewing content, the documentation team ensures that it is comprehensible but tries not to be overly pedantic. Along those lines, while it is preferred that the following formatting preferences are followed, they are generally not an exclusive reason to give a “-1” reply to a patch in Gerrit:
No trailing whitespace
Line wrapping at something reasonable, that is, 72–100 characters
Key terms¶
Functionality: something useful a project provides abstractly
Feature: a Karaf feature that somebody could install
Project: a project within OpenDaylight; projects ship features to provide functionality
OpenDaylight: this refers to the software we release; use this in place of “OpenDaylight controller” or “the OpenDaylight controller”, and do not use ODL or ODC.
Because there is a controller project within OpenDaylight, using other terms is confusing.
Common writing style mistakes¶
In per-project user documentation, you should never say git clone, but should assume people have downloaded and installed the controller per the getting started guide and start with
feature:install <something>
Avoid statements which are true about part of OpenDaylight, but not generally true.
For example: “OpenDaylight is a NETCONF controller.” It is, but that is not all it is.
In general, developer documentation should target developers external to your project, so it should describe what APIs you have and how they can be used. It should not document how to contribute to your project.
Grammar Preferences¶
Avoid contractions: Use “cannot” instead of “can’t”, “it is” instead of “it’s”, and so on.
Word Choice¶
Note
The following word choice guidelines apply when using these terms in text. If these terms are used as part of a URL, class name, or any instance where modifying the case would create issues, use the exact capitalization and spacing associated with the URL or class name.
ACL: not Acl or acl
API: not api
ARP: not Arp or arp
datastore: not data store, Data Store, or DataStore (unless it is a class/object name)
IPsec, not IPSEC or ipsec
IPv4 or IPv6: not Ipv4, Ipv6, ipv4, ipv6, IPV4, or IPV6
Karaf: not karaf
Linux: not LINUX or linux
NETCONF: not Netconf or netconf
Neutron: not neutron
OSGi: not osgi or OSGI
Open vSwitch: not OpenvSwitch, OpenVSwitch, or Open V Switch.
OpenDaylight: not Opendaylight, Open Daylight, or OpenDayLight.
Note
Also, avoid Opendaylight abbreviations like ODL and ODC.
OpenFlow: not Openflow, Open Flow, or openflow.
OpenStack: not Open Stack or Openstack
QoS: not Qos, QOS, or qos
RESTCONF: not Restconf or restconf
RPC: not Rpc or rpc
URL: not Url or url
VM: not Vm or vm
YANG: not Yang or yang
reStructuredText-based Documentation¶
When using reStructuredText, follow the Python documentation style guidelines. See: https://devguide.python.org/documenting/
One of the best references for reStructuredText syntax is the Sphinx Primer on reStructuredText.
To build and review the reStructuredText documentation locally, you must have the following packages installed locally:
python
python-tox
Note
Both packages should be available in most distribution package managers.
Then simply run tox and open the produced HTML in your favorite web browser as follows:
git clone https://git.opendaylight.org/gerrit/docs
cd docs
git submodule update --init
tox
firefox docs/_build/html/index.html
Directory Structure¶
The directory structure for the reStructuredText documentation is
rooted in the docs
directory inside the docs
git
repository.
Note
There are guides hosted directly in the docs
git
repository and there are guides hosted in remote git
repositories.
Documentation hosted in remote git
repositories is generally
provided for project-specific information.
For example, here is the directory layout on June 28th, 2016:
$ tree -L 2
.
├── Makefile
├── conf.py
├── documentation.rst
├── getting-started-guide
│ ├── api.rst
│ ├── concepts_and_tools.rst
│ ├── experimental_features.rst
│ ├── index.rst
│ ├── installing_opendaylight.rst
│ ├── introduction.rst
│ ├── karaf_features.rst
│ ├── other_features.rst
│ ├── overview.rst
│ └── who_should_use.rst
├── index.rst
├── make.bat
├── opendaylight-with-openstack
│ ├── images
│ ├── index.rst
│ ├── openstack-with-gbp.rst
│ ├── openstack-with-ovsdb.rst
│ └── openstack-with-vtn.rst
└── submodules
└── releng
└── builder
The getting-started-guide
and opendaylight-with-openstack
directories correspond to two guides hosted in the docs
repository,
while the submodules/releng/builder
directory houses documentation
for the RelEng/Builder project.
Each guide includes an index.rst
file, which uses a toctree
directive that includes the other files associated with the guide. For example:
.. toctree::
:maxdepth: 1
getting-started-guide/index
opendaylight-with-openstack/index
submodules/releng/builder/docs/index
This example creates a table of contents on that page where each heading of the table of contents is the root of the files that are included.
Note
When including .rst
files using the toctree
directive, omit
the .rst
file extension at the end of the file name.
Adding a submodule¶
If you want to import a project underneath the documentation project so
that the docs can be kept in the separate repo, you can do it by using the
git submodule add
command as follows:
git submodule add -b master ../integration/packaging docs/submodules/integration/packaging
git commit -s
Note
Most projects will not want to use -b master
, but instead
use the branch .
, which tracks whatever branch
of the documentation project you happen to be on.
Unfortunately, -b .
does not work, so you have to manually
edit the .gitmodules
file to add branch = .
and then
commit it. For example:
<edit the .gitmodules file>
git add .gitmodules
git commit --amend
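After the manual edit, the resulting .gitmodules entry should look something like the following (using the integration/packaging example from above):

[submodule "docs/submodules/integration/packaging"]
    path = docs/submodules/integration/packaging
    url = ../integration/packaging
    branch = .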
When you’re done you should have a git commit something like:
$ git show
commit 7943ce2cb41cd9d36ce93ee9003510ce3edd7fa9
Author: Daniel Farrell <dfarrell@redhat.com>
Date: Fri Dec 23 14:45:44 2016 -0500
Add Int/Pack to git submodules for RTD generation
Change-Id: I64cd36ca044b8303cb7fc465b2d91470819a9fe6
Signed-off-by: Daniel Farrell <dfarrell@redhat.com>
diff --git a/.gitmodules b/.gitmodules
index 91201bf6..b56e11c8 100644
--- a/.gitmodules
+++ b/.gitmodules
@@ -38,3 +38,7 @@
path = docs/submodules/ovsdb
url = ../ovsdb
branch = .
+[submodule "docs/submodules/integration/packaging"]
+ path = docs/submodules/integration/packaging
+ url = ../integration/packaging
+ branch = master
diff --git a/docs/submodules/integration/packaging b/docs/submodules/integration/packaging
new file mode 160000
index 00000000..fd5a8185
--- /dev/null
+++ b/docs/submodules/integration/packaging
@@ -0,0 +1 @@
+Subproject commit fd5a81853e71d45945471d0f91bbdac1a1444386
As usual, you can push it to Gerrit with git review
.
Important
It is critical that the Gerrit patch be merged before the git commit hash of the submodule changes. Otherwise, Gerrit is not able to automatically keep it up-to-date for you.
Documentation Layout and Style¶
As mentioned previously, OpenDaylight aims to follow the Python documentation style guidelines, which defines a few types of sections:
# with overline, for parts
* with overline, for chapters
=, for sections
-, for subsections
^, for subsubsections
", for paragraphs
OpenDaylight documentation is organized around the following structure based on that recommendation:
docs/index.rst -> entry point
docs/____-guide/index.rst -> part
docs/____-guide/<chapter>.rst -> chapter
In the ____-guide/index.rst we use the #
with overline at the very top
of the file to determine that it is a part and then within each chapter
file we start the document with a section using *
with overline to
denote that it is the chapter heading and then everything in the rest of
the chapter should use:
=, for sections
-, for subsections
^, for subsubsections
", for paragraphs
Referencing Sections¶
This section provides a quick primer for creating references in OpenDaylight documentation. For more information, refer to Cross-referencing documents.
Within a single document, you can reference another section simply by:
This is a reference to `The title of a section`_
Assuming that somewhere else in the same file, there is a section title something like:
The title of a section
^^^^^^^^^^^^^^^^^^^^^^
It is typically better to use :ref:
syntax and labels to provide
links as they work across files and are resilient to sections being
renamed. First, you need to create a label something like:
.. _a-label:
The title of a section
^^^^^^^^^^^^^^^^^^^^^^
Note
The underscore (_) before the label is required.
Then you can reference the section anywhere by simply doing:
This is a reference to :ref:`a-label`
or:
This is a reference to :ref:`a section I really liked <a-label>`
Note
When using :ref:
-style links, you don’t need a trailing
underscore (_).
Because the labels have to be unique, a best practice is to prefix
the labels with the project name to help share the label space; for example,
use sfc-user-guide
instead of just user-guide
.
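For example, a project-prefixed label for a hypothetical SFC user guide section would look like:

.. _sfc-user-guide:

SFC User Guide
^^^^^^^^^^^^^^

which any file can then reference with :ref:`sfc-user-guide`.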
Troubleshooting¶
Nested formatting does not work¶
As stated in the reStructuredText guide, inline markup for bold, italic, and fixed-width font cannot be nested. Furthermore, inline markup cannot be mixed with hyperlinks, so you cannot have a link with bold text.
This is tracked in a Docutils FAQ question, but there is no clear current plan to fix this.
Make sure you have cloned submodules¶
If you see an error like this:
./build-integration-robot-libdoc.sh: line 6: cd: submodules/integration/test/csit/libraries: No such file or directory
Resource file '*.robot' does not exist.
It means that you have not pulled down the git submodule for the integration/test project. The fastest way to do that is:
git submodule update --init
In some cases, you might wind up with submodules which are somehow out-of-sync. In that case, the easiest way to fix them is to delete the submodules directory and then re-clone the submodules:
rm -rf docs/submodules/
git submodule update --init
Warning
These steps delete any local changes or information you made in the submodules, which would only occur if you manually edited files in that directory.
Clear your tox directory and try again¶
Sometimes, tox will not detect when your requirements.txt
file has
changed and so will try to run things without the correct dependencies.
This issue usually manifests as No module named X
errors or
an ExtensionError
and can be fixed by deleting the .tox
directory and building again:
rm -rf .tox
tox
Builds on Read the Docs¶
Read the Docs builds do not automatically clear the file structure between builds and clones. The result is that you may have to clean up the state of old runs of the build script.
As an example, refer to the following patch: https://git.opendaylight.org/gerrit/c/docs/+/41679/
This patch fixed an issue that caused builds to fail because they took too long while removing directories of generated javadoc files left over from previous runs.
Errors from Coala¶
As part of running tox
, two environments run: coala
which does a variety
of reStructuredText (and other) linting, and docs
, which runs Sphinx to
build HTML and PDF documentation. You can run them independently by doing
tox -ecoala
or tox -edocs
.
The coala
linter for reStructuredText is not always the most helpful in
explaining why it failed. So, here are some common ones. There should also be
Jenkins Failure Cause Management rules that will highlight these for you.
Coala checks that git commit messages adhere to the following rules:
Shortlog (1st line of commit message) is less than 50 characters
Shortlog (1st line of commit message) is in the imperative mood. For example, “Add foo unit test” is good, but “Adding foo unit test” is bad.
Body lines (all lines but the 1st line of the commit message) are less than 72 characters. Some exceptions seem to exist, such as for long URLs.
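For example, a commit message satisfying these rules might look like the following (contents purely illustrative):

Add foo unit test

Extend the foo test suite with a regression test for the parser.
Every line of this body stays under the 72-character limit.

Signed-off-by: Jane Developer <jane@example.org>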
Some examples of those being logged are:
Project wide:
|    | [NORMAL] GitCommitBear:
|    | Shortlog of HEAD commit isn't in imperative mood! Bad words are 'Adding'

Project wide:
|    | [NORMAL] GitCommitBear:
|    | Body of HEAD commit contains too long lines. Commit body lines should not exceed 72 characters.

If you see an error like this:

docs/gerrit.rst
| 89| ···..·code-block::·bash
|    | [MAJOR] RSTcheckBear:
|    | (ERROR/3) Error in "code-block" directive:
It means that the relevant code-block is not valid for the
language specified, in this case bash
.
Note
If you do not specify a language, the default language is Python. If
you want the code-block to not be in any particular language, instead
use the :: directive. For example:

::

   This is a code block that will not be parsed in any particular language
Project Documentation Requirements¶
Submitting Documentation Outlines (M2)¶
Determine the features your project will have and which ones will be “user-facing”.
In general, a feature is user-facing if it creates functionality that a user would directly interact with.
For example, odl-openflowplugin-flow-services-ui is likely user-facing since it installs user-facing OpenFlow features, while odl-openflowplugin-flow-services is not because it provides only developer-facing features.
Determine pieces of documentation that you need to provide based on the features your project will have and which ones will be user-facing.
The kinds of required documentation can be found below in the Requirements for projects section.
Note
You might need to create multiple documents for the same kind of documentation. For example, the controller project will likely want to have a developer section for the config subsystem as well as for the MD-SAL.
Clone the docs repo:
git clone https://git.opendaylight.org/gerrit/docs
For each piece of documentation find the corresponding template in the docs repo.
For user documentation:
docs.git/docs/templates/template-user-guide.rst
For developer documentation:
docs.git/docs/templates/template-developer-guide.rst
For installation documentation (if any):
docs.git/docs/templates/template-install-guide.rst
Note
You can find the rendered templates below:
<Feature> User Guide¶
Refer to this template to identify the required sections and information that you should provide for a User Guide. The user guide should contain configuration, administration, management, using, and troubleshooting sections for the feature.
Overview¶

Provide an overview of the feature and the use case. Also include the audience who will use the feature. For example, the audience can be network administrators, cloud administrators, network engineers, system administrators, and so on.

<Feature> Architecture¶

Provide information about feature components and how they work together. Also include information about how the feature integrates with OpenDaylight. An architecture diagram could help.
Note
Please do not include detailed internals that somebody using the feature wouldn’t care about. For example, the fact that there are four layers of APIs between a user command and a message being sent to a device is probably not useful to know unless they have some way to influence how those layers work and a reason to do so.
Configuring <feature>¶

Describe how to configure the feature or the project after installation. Configuration information could include day-one activities for a project, such as configuring users, configuring clients/servers, and so on.

Administering or Managing <feature>¶

Include related command references or operations that you could perform using the feature. For example: viewing network statistics, monitoring the network, generating reports, and so on.
For example:
To configure L2switch components perform the following steps.
Step 1:
Step 2:
Step 3:
Tutorials¶ (optional)

If there is only one tutorial, you can skip the “Tutorials” section and instead just lead with the single tutorial’s name. If you do, also increase the header level by one, that is, replace the carets (^^^) with dashes (---) and the dashes with equals signs (===).

<Tutorial Name>¶

Ensure that the title starts with a gerund. For example: using, monitoring, creating, and so on.

Overview¶

An overview of the use case.

Prerequisites¶

Provide any prerequisite information, assumed knowledge, or environment required to execute the use case.

Target Environment¶

Include any topology requirements for the use case. Ideally, provide a visual (abstract) layout of network diagrams and any other useful visual aids.

Instructions¶

A use case could be a set of configuration procedures. Including screenshots to help demonstrate what is happening is especially useful. Ensure that you specify them separately. For example:

Configuring the environment¶

To configure the system, perform the following steps.
Step 1
Step 2
Step 3
<Feature> Developer Guide¶
Overview¶

Provide an overview of the feature, what logical functionality it provides, and why you might use it as a developer. To be clear, the target audience for this guide is a developer who will be using the feature to build something separate, not somebody who will be developing code for the feature itself.
Note
More so than with user guides, the guide may cover more than one feature. If that is the case, be sure to list all of the features this covers.
<Feature> Architecture¶

Provide information about feature components and how they work together. Also include information about how the feature integrates with OpenDaylight. An architecture diagram could help. This may be the same as the diagram used in the user guide, but it should likely be less abstract and provide more information that would be applicable to a developer.

Key APIs and Interfaces¶

Document the key things a user would want to use. For some features, there will only be one logical grouping of APIs. For others, there may be more than one grouping.
Assuming the API is MD-SAL- and YANG-based, the APIs will be available both via RESTCONF and via Java APIs. Giving a few examples using each is likely a good idea.
API Group 1¶

Provide a description of what the API does and some examples of how to use it.

API Group 2¶

Provide a description of what the API does and some examples of how to use it.

API Reference Documentation¶

Provide links to JavaDoc, REST API documentation, etc.
<Feature> Installation Guide¶
Note
Only use this template if installation is more complicated than simply installing a feature in the Karaf distribution. Otherwise simply provide the names of all user-facing features in your M3 readout.
This is a template for installing a feature or a project developed in the ODL project. The feature could be interfaces, protocol plug-ins, or applications.
Overview¶

Add an overview of the feature. Include an architecture diagram and the positioning of this feature in the overall controller architecture; highlighting the feature in a different color within the overall architecture can help. Include information describing whether the project is part of the ODL installation package or must be installed separately.

Prerequisites for Installing <Feature>¶

Hardware Requirements

Software Requirements

Preparing for Installation¶

Include any pre-configuration, database, or other software downloads required to install <feature>.

Installing <Feature>¶

Include these if you have separate procedures for Windows and Linux.

Verifying your Installation¶

Describe how to verify the installation.

Post Installation Configuration¶

The Post Installation Configuration section must include any basic (must-do) procedures required to get started:
Mandatory instructions to get started with the product.
Logging in
Getting Started
Integration points with controller
Upgrading From a Previous Release¶

Text goes here.

Uninstalling <Feature>¶

Text goes here.
Copy the template into the appropriate directory for your project.
For user documentation:
docs.git/docs/user-guide/${feature-name}-user-guide.rst
For developer documentation:
docs.git/docs/developer-guide/${feature-name}-developer-guide.rst
For installation documentation (if any):
docs.git/docs/getting-started-guide/project-specific-guides/${project-name}.rst
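For instance, from the root of the cloned docs repo, copying the user-guide template for a hypothetical feature named foo would look like:

cp docs/templates/template-user-guide.rst docs/user-guide/foo-user-guide.rst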
Note
These naming conventions are not set in stone, but are used to maintain a consistent document taxonomy. If these conventions are not appropriate or do not make sense for a document in development, use the convention that you think is more appropriate and the documentation team will review it and give feedback on the gerrit patch.
Edit the template to fill in the outline of what you will provide using the suggestions in the template. If you feel like a section is not needed, feel free to omit it.
Link the template into the appropriate core .rst file.

For user documentation:
docs.git/docs/user-guide/index.rst
For developer documentation:
docs.git/docs/developer-guide/index.rst
For installation documentation (if any):
docs.git/docs/getting-started-guide/project-specific-guides/index.rst
In each file, it should be pretty clear what line you need to add. In general, if you have an .rst file called project-name.rst, you include it by adding a new line project-name without the .rst at the end.
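For example, adding a hypothetical project-name-user-guide to the toctree in docs/user-guide/index.rst might look like:

.. toctree::
   :maxdepth: 1

   project-name-user-guide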
Make sure the documentation project still builds.
Run tox from the root of the cloned docs repo. After that, you should be able to find the HTML version of the docs at docs.git/docs/_build/html/index.html.

See reStructuredText-based Documentation for more details about building the docs.
The reStructuredText Troubleshooting section provides common errors and solutions.
If you still have problems, e-mail the documentation group at documentation@lists.opendaylight.org.
Commit and submit the patch.
Commit using:
git add --all && git commit -sm "Documentation outline for ${project-shortname}"
Submit using:
git review
See the Git-review Workflow page if you don’t have git-review installed.
Wait for the patch to be merged or to get feedback.
If you get feedback, make the requested changes and resubmit the patch.
When you resubmit the patch, it is helpful if you also post a “+0” reply to the patch in Gerrit, stating what patch set you just submitted and what you fixed in the patch set.
Expected Output From Documentation Project¶
The expected output is (at least) 3 PDFs and equivalent web-based documentation:
User/Operator Guide
Developer Guide
Installation Guide
These guides will consist of “front matter” produced by the documentation group and the per-project/per-feature documentation provided by the projects.
Note
This requirement is intended for the person responsible for the documentation and should not be interpreted as preventing people not normally in the documentation group from helping with front matter nor preventing people from the documentation group from helping with per-project/per-feature documentation.
Project Documentation Requirements¶
Content Types¶
These are the expected kinds of documentation and target audiences for each kind.
User/Operator: for people looking to use the feature without writing code
Should include an overview of the project/feature
Should include a description of available configuration options and what they do
Developer: for people looking to use the feature in code without modifying it
Should include API documentation, such as Enunciate for REST, Javadoc for Java, ??? for RESTCONF/models
Contributor: for people looking to extend or modify the feature’s source code
Note
You can find this information on the wiki.
Installation: for people looking for instructions to install the feature after they have downloaded the ODL release
Note
The audience for this content is the same as User/Operator docs
For most projects, this will be just a list of top-level features and options
As an example, l2switch-switch as the top-level feature with the -rest and -ui options
Features should also note if the options should be checkboxes (that is, they can each be turned on/off independently) or a drop down (that is, at most one can be selected)
What other top-level features in the release are incompatible with each feature
This will likely be presented as a table in the documentation and the data will likely also be consumed by automated installers/configurators/downloaders
For some projects, there are extra installation instructions (for external components) and/or configuration
In that case, there will be a (sub)section in the documentation describing this process.
HowTo/Tutorial: walkthroughs and examples that are not general-purpose documentation
Generally, these should be done as a (sub)section of either user/operator or developer documentation.
If they are especially long or complex, they may belong on their own
Release Notes:
Release notes are required as part of each project’s release review. They must also be translated into reStructuredText for inclusion in the formal documentation.
Requirements for projects¶
Projects must provide reStructuredText documentation including:
Developer documentation for every feature
Most projects will want to logically nest the documentation for individual features under a single project-wide chapter or section
The feature documentation can be provided as a single .rst file or multiple .rst files if the features fall into different groups.

Feature documentation should start with an approximately 300-word overview of the project and include references to any automatically-generated API documentation as well as more general developer information (see Content Types).
User/Operator documentation for every user-facing feature (if any)
This documentation should be per-feature, not per-project. Users should not have to know which project a feature came from.
Intimately related features can be documented together. For example, l2switch-switch, l2switch-switch-rest, and l2switch-switch-ui, can be documented as one noting the differences.
This documentation can be provided as a single .rst file or multiple .rst files if the features fall into different groups.
Installation documentation
Most projects will simply provide a list of user-facing features and options. See Content Types above.
Release Notes (both on the wiki and reStructuredText) as part of the release review.
Documentation must be contributed to the docs repo (or possibly imported from the project’s own repo with tooling that is under development)
Projects may be encouraged to instead provide this from their own repository if the tooling is developed
Projects choosing to meet the requirement in this way must provide a patch to docs repo to import the project’s documentation
Projects must cooperate with the documentation group on edits and enhancements to documentation
Timeline for Deliverables from Projects¶
M2: Documentation Started
The following tasks for documentation deliverables must be completed for the M2 readout:
The kinds of documentation that will be provided and for what features must be identified.
Note
Release Notes are not required until release reviews at RC2
The appropriate .rst files must be created in the docs repository (or their own repository if the tooling is available).

An outline for the expected documentation must be completed in those .rst files, including the relevant (sub)sections and a sentence or two explaining what will be contained in these sections.

Note
If an outline is not provided, delivering actual documentation in the (sub)sections meets this requirement.
M2 readouts should include:

the list of kinds of documentation

the list of corresponding .rst files and their location, including repo and path

the list of commits creating those .rst files

the current word counts of those .rst files
M3: Documentation Continues
The readout at M3 should include the word counts of all .rst files with links to commits.

The goal is to have draft documentation complete at the M3 readout so that the documentation group can comment on it.
M4: Documentation Complete
All (sub)sections in all .rst files have complete, readable, usable content.

Ideally, there should have been some interaction with the documentation group about any suggested edits and enhancements.
RC2: Release notes
Projects must provide release notes in .rst format pushed to integration (or locally in the project’s repository if the tooling is developed).
OpenDaylight Release Process Guide¶
Overview¶
This guide provides details on the various release processes related to OpenDaylight. It documents the steps used by OpenDaylight release engineers when performing release operations.
Release Planning¶
Managed Release¶
Managed Release Summary¶
The Managed Release Process will facilitate timely, stable OpenDaylight releases by allowing the release team to focus on closely managing a small set of core OpenDaylight projects while not imposing undue requirements on projects that prefer more autonomy.
Managed Release Goals¶
The Managed Release Model will allow the release team to focus their efforts on a smaller set of more stable, more responsive projects.
The Managed Release Model will reduce the overhead both on projects taking part in the Managed Release and Self-Managed Projects.
Managed Projects will have fewer, smaller checkpoints consisting of only information that is maximally helpful for driving the release process. Much of the information collected at checkpoints will be automatically scraped, requiring minimal to no effort from projects. Additionally, Managed Release projects should have a more stable development environment, as the projects that can break the jobs they depend on will be a smaller set, more stable and more responsive.
Projects that are Self-Managed will have less overhead and reporting. They will be free to develop in their own way, providing their artifacts to include in the final release or choosing to release on their own schedule. They will not be required to submit any checkpoints and will not be expected to work closely with the rest of the OpenDaylight community to resolve breakages.
The Managed Release Process will reduce the set of projects that must simultaneously become stable at release time. The release and test teams will be able to focus on orchestrating a quality release for a smaller set of more stable, more responsive projects. The release team will also have greater latitude to step in and help projects that are required for dependency reasons but may not have a large set of active contributors.
Managed Projects¶
Managed Projects Summary¶
Managed Projects are either required by most applications for dependency reasons or are mature, stable, responsive projects that are consistently able to take part in releases without jeopardizing them. Managed Projects will receive additional support from the test and release teams to further their stability and make sure OpenDaylight releases go out on-time. To enable this extra support, Managed Projects will be given less autonomy than OpenDaylight projects have historically been granted.
Some projects are required by almost all other OpenDaylight projects. These projects must be in the Managed Release for it to support almost every OpenDaylight use case. Such projects will not have a choice about being in the Managed Release, the TSC will decide they are critical to the OpenDaylight platform and include them. They may not always meet the requirements that would normally be imposed on projects that wish to join the Managed Release. Since they cannot be kicked out of the release, the TSC, test and release teams will do their best to help them meet the Managed Release Requirements. This may involve giving Linux Foundation staff temporary committer rights to merge patches on behalf of unresponsive projects, or appointing committers if projects continue to remain unresponsive. The TSC will strive to work with projects and member companies to proactively keep projects healthy and find active contributors who can become committers in the normal way without the need to appoint them in times of crisis.
Some Managed Projects may decide to release on their own, not as a part of the Simultaneous Release with Snapshot Integrated Projects. Such Release Integrated projects will still be subject to Managed Release Requirements, but will need to follow a different release process.
For implementation reasons, the projects that are able to release independently must depend only on other projects that release independently. Therefore the Release Integrated Projects will form a tree starting from odlparent. Currently odlparent, yangtools and mdsal are the only Release Integrated Projects, but others may join them in the future.
Managed Projects should strive to have a healthy community.
Managed Projects should be responsive over email, IRC, Gerrit, Jira and in regular meetings.
All committers should be subscribed to their project’s mailing list and the release mailing list.
For the following particularly time-sensitive events, an appropriate response is expected within two business days.
RC or SR candidate feedback.
Major disruptions to other projects where a Jira weather item was not present and the pending breakage was not reported to the release mailing list.
If anyone feels that a Managed Project is not responsive, a grievance process is in place to clearly handle the situation and keep a record for future consideration by the TSC.
Managed Projects should have sufficient active committers to review contributions in a timely manner, support potential contributors, keep CSIT healthy and generally effectively drive the project.
If a project that the TSC deems is critical to the Managed Release is shown to not have sufficient active committers the TSC may step in and appoint additional committers. Projects that can be dropped from the Managed Release will be dropped instead of having additional committers appointed.
Managed Projects should regularly prune their committer list to remove inactive committers, following the Committer Removal Process.
Managed Projects are required to send a representative to attend TSC meetings.
To facilitate quickly acting on problems identified during TSC meetings, representatives must be a committer to the project they are representing. A single person can represent any number of projects.
Representatives will make the following entry into the meeting minutes to record their presence:
#project <project ID>
TSC minutes will be scraped per-release to gather attendance statistics. If a project does not provide a representative for at least half of TSC meetings a grievance will be filed for future consideration.
Managed Projects must submit information required for checkpoints on-time. Submissions must be correct and adequate, as judged by the release team and the TSC. Inadequate or missing submissions will result in a grievance.
Managed Projects are required to have the following jobs running and healthy.
Distribution check job (voting)
Validate autorelease job (voting)
Merge job (non-voting)
Sonar job (non-voting)
CLM job (non-voting)
Managed Projects should only depend on other Managed Projects.
If a project wants to be Managed but depends on Self-Managed Projects, they should work with their dependencies to become Managed at the same time or drop any Self-Managed dependencies.
Managed Projects are required to produce a user guide, developer guide and release notes for each release.
Managed Projects are required to handle CLM (Component Lifecycle Management) violations in a timely manner.
Checkpoints are designed to be mostly automated, to be maximally effective at driving the release process and to impose as little overhead on projects as possible.
There will be an initial checkpoint two weeks after the start of the release, a midway checkpoint one month before code freeze, and a final checkpoint at code freeze.
An initial checkpoint will be collected two weeks after the start of each release. The release team will review the information collected and report it to the TSC at the next TSC meeting.
Projects will need to create the following artifacts:
High-level, human-readable description of what the project plans to do in this release. This should be submitted as a Jira Project Plan issue against the TSC project.
Select your project in the ODL Project field
Select the release version in the ODL Release field
Select the appropriate value in the ODL Participation field: SNAPSHOT_Integrated (Managed) or RELEASE_Integrated (Managed)

Select the value Initial in the ODL Checkpoint field

In the Summary field, put something like:
Project-X Fluorine Release Plan
In the Description field, fill in the details of your plan:
This should list a high-level, human-readable summary of what a project plans to do in a release. It should cover the project's planned major accomplishments during the release, such as features, bugfixes, scale, stability or longevity improvements, additional test coverage, better documentation or other improvements. It may cover challenges the project is facing and needs help with from other projects, the TSC or the LFN umbrella. It should be written in a way that makes it amenable to use for external communication, such as marketing to users or a report to other LFN projects or the LFN Board.
If a project is transitioning from Self-Managed to Managed or applying for the first time into a Managed release, raise a Jira Project Plan issue against the TSC project highlighting the request.
Select your project in the ODL Project field
Select the release version in the ODL Release field
Select the NOT_Integrated (Self-Managed) value in the ODL Participation field

Select the appropriate value in the ODL New Participation field: SNAPSHOT_Integrated (Managed) or RELEASE_Integrated (Managed)
In the Summary field, put something like:
Project-X joining/moving to Managed Release for Fluorine
In the Description field, fill in the details using the template below:
Summary

This is an example of a request for a project to move from Self-Managed to Managed. It should be submitted no later than the start of the release. The request should make it clear that the requesting project meets all of the Managed Release Requirements.

Healthy Community

The request should make it clear that the requesting project has a healthy community. The request may also highlight a history of having a healthy community.

Responsiveness

The request should make it clear that the requesting project is responsive over email, IRC, Jira and in regular meetings. All committers should be subscribed to the project's mailing list and the release mailing list. The request may also highlight a history of responsiveness.

Active Committers

The request should make it clear that the requesting project has a sufficient number of active committers to review contributions in a timely manner, support potential contributors, keep CSIT healthy and generally effectively drive the project. The requesting project should also make it clear that they have pruned any inactive committers. The request may also highlight a history of having sufficient active committers and few inactive committers.

TSC Attendance

The request should acknowledge that the requesting project is required to send a committer to represent the project to at least 50% of TSC meetings. The request may also highlight a history of sending representatives to attend TSC meetings.

Checkpoints Submitted On-Time

The request should acknowledge that the requesting project is required to submit checkpoints on time. The request may also highlight a history of providing deliverables on time.

Jobs Required for Managed Projects Running

The request should show that the requesting project has the required jobs for Managed Projects running and healthy. Links should be provided.

Depend only on Managed Projects

The request should show that the requesting project only depends on Managed Projects.

Documentation

The request should acknowledge that the requesting project is required to produce a user guide, developer guide and release notes for each release. The request may also highlight a history of providing quality documentation.

CLM

The request should acknowledge that the requesting project is required to handle Component Lifecycle Violations in a timely manner. The request should show that the project's CLM job is currently healthy. The request may also show that the project has a history of dealing with CLM violations in a timely manner.
If a project is transitioning from Managed to Self-Managed, raise a Jira Project Plan issue against the TSC project highlighting the request.
Select your project in the ODL Project field
Select the release version in the ODL Release field
Select the appropriate value in the ODL Participation field: SNAPSHOT_Integrated (Managed) or RELEASE_Integrated (Managed)

Select the NOT_Integrated (Self-Managed) value in the ODL New Participation field

In the Summary field, put something like:
Project-X Joining/Moving to Self-Managed for Fluorine
In the Description field, fill in the details:
This is a request for a project to move from Managed to Self-Managed. It should be submitted no later than the start of the release. The request does not require any additional information, but it may be helpful for the requesting project to provide some background and their reasoning.
Weather items that may impact other projects should be submitted as Jira issues. For a weather item, raise a Jira Weather Item issue against the TSC project highlighting the details.
Select your project in the ODL Project field
Select the release version in the ODL Release field
For the ODL Impacted Projects field, fill in the impacted projects using label values; each label value should correspond to the respective project prefix in Jira, e.g. netvirt is NETVIRT. If all projects are impacted, use the label value ALL.

Fill in the expected date of the weather event in the ODL Expected Date field
Select the appropriate value for ODL Checkpoint (may skip)
In the Summary field, summarize the weather event
In the Description field, provide the details of the weather event. Provide as much relevant information as possible.
The remaining artifacts will be automatically scraped:
Blocker bugs that were raised between the previous code freeze and release.
Grievances raised against the project during the last release.
One month before code freeze, a midway checkpoint will be collected. The release team will review the information collected and report it to the TSC at the next TSC meeting. All information for midway checkpoint will be automatically collected.
Open Jira bugs marked as blockers.
Open Jira issues tracking weather items.
Statistics about jobs:
* Autorelease failures per-project.
* CLM violations.
Grievances raised against the project since the last checkpoint.
Since the midway checkpoint is fully automated, the release team may collect this information more frequently, to provide trends over time.
Two weeks after code freeze, a final checkpoint will be collected by the release team and presented to the TSC at the next TSC meeting.
Projects will need to create the following artifacts:
High-level, human-readable description of what the project did in this release. This should be submitted as a Jira Project Plan issue against the TSC project. This will be reused for external communication/marketing for the release.
Select your project in the ODL Project field
Select the release version in the ODL Release field
Select the appropriate value in the ODL Participation field: SNAPSHOT_Integrated (Managed) or RELEASE_Integrated (Managed)

Select the value Final in the ODL Checkpoint field

In the Summary field, put something like:
Project-X Fluorine Release details
In the Description field, fill in the details of your accomplishments:
This should be a high-level, human-readable summary of what a project did during a release. It should cover the project's major accomplishments, such as features, bugfixes, scale, stability or longevity improvements, additional test coverage, better documentation or other improvements. It may cover challenges the project has faced and needs help in the future from other projects, the TSC or the LFN umbrella. It should be written in a way that makes it amenable to use for external communication, such as marketing to users or a report to other LFN projects or the LFN Board.
In the ODL Gerrit Patch field, fill in the Gerrit patch URL to your project release notes
Release notes, user guide, developer guide submitted to the docs project.
The remaining artifacts will be automatically scraped:
Open Jira bugs marked as blockers.
Open Jira issues tracking weather items.
Statistics about jobs:
Autorelease failures per-project.
Statistics about patches:
Number of patches submitted during the release.
Number of patches merged during the release.
Number of reviews per-reviewer.
Grievances raised against the project since the start of the release.
Managed Projects that release independently (Release Integrated Projects), not as a part of the Simultaneous Release with Snapshot Integrated Projects, will need to follow a different release process.
Managed Release Integrated (MRI) Projects will provide the releases they want the Managed Snapshot Integrated (MSI) Projects to consume no later than two weeks after the start of the Managed Release. The TSC will decide by a majority vote whether to bump MSI versions to consume the new MRI releases. This should happen as early in the release as possible to get integration woes out of the way and allow projects to focus on developing against a stable base. If the TSC decides to consume the proposed MRI releases, all MSI Projects are required to bump to the new versions within a two-day window. If some projects fail to merge version bump patches in time, the TSC will instruct Linux Foundation staff to temporarily wield committer rights and merge version bump patches. The TSC may vote at any time to back out of a version bump if the new releases are found to be unsuitable.
MRI Projects are expected to provide bugfixes via minor or patch version updates during the release, but should strive to not expect MSI Projects to consume another major version update during the release.
MRI Projects are free to follow their own release cadence as they develop new features during the Managed Release. They need only have a stable version ready for the MSI Projects to consume by the next integration point.
The MRI Projects will follow similar checkpoints as the MSI Projects, but the timing will be different. At the time MRI Projects provide the releases they wish MSI Projects to consume for the next release, they will also provide their final checkpoints. Their midway checkpoints will be scraped one month before the deadline for them to deliver their artifacts to the MSI Projects. Their initial checkpoints will be due no later than two weeks following the deadline for their delivery of artifacts to the MSI Projects. Their initial checkpoints will cover everything they expect to do in the next Managed Release, which may encompass any number of major version bumps for the MRI Projects.
Self-Managed Projects can request to become Managed by submitting a Project_Plan issue to the TSC project in Jira. See details as described under the Initial Checkpoint section above. Requests should be submitted before the start of a release. The requesting project should make it clear that they meet the Managed Release Project Requirements.
The TSC will evaluate requests to become Managed and inform projects of the result and the TSC’s reasoning no later than the start of the release or one week after the request was submitted, whichever comes last.
For the first release, the TSC will bootstrap the Managed Release with projects that are critical to the OpenDaylight platform. Other projects will need to follow the normal application process defined above.
The following projects are deemed critical to the OpenDaylight platform:
aaa
controller
infrautils
mdsal
netconf
odlparent
yangtools
Self-Managed Projects¶
In general there are two types of Self-Managed (SM) projects:
Self-Managed projects that want to participate in the formal (major or service) OpenDaylight release distribution. This section includes the requirements and release process for these projects.
Self-Managed projects that want to manage their own release schedule or provide their release distribution and installation instructions by the time of the release. There are no specific requirements for these projects.
Self-Managed Projects can consume whichever version of their upstream dependencies they want during most of the release cycle, but if they want to be included in the formal (major or service) release distribution they must have their upstream versions bumped to SNAPSHOT and build successfully no later than one week before the first Managed release candidate (RC) is created. Since bumping and integrating with upstream takes time, it is strongly recommended that Self-Managed projects start this work early: no later than the middle checkpoint if they want to be in a major release, or by the previous release if they want to be in a service release (e.g. by the major release date if they want to be in SR1).
Note
To help with the integration effort, the Weather Page includes API and other important changes during the release cycle. After the formal release, the release notes also include this information.
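As an illustration of the upstream SNAPSHOT bump described above, here is a minimal sketch using the Maven versions plugin. It assumes the upstream being bumped is the project's odlparent parent, and the version number is hypothetical; real bumps usually also touch controller/mdsal dependency versions declared in the poms.

# Point the project's parent at the upstream SNAPSHOT under test (illustrative version)
mvn versions:update-parent -DparentVersion=5.0.0-SNAPSHOT -DallowSnapshots=true -DgenerateBackupPoms=false
# Verify the project still builds against the bumped upstream
mvn clean install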
In order to be included in the formal (major or service) release distribution, Self-Managed Projects must be in the common distribution pom.xml file and the distribution sanity test (see Add Projects to distribution) no later than one week before the first Managed release candidate (RC) is created. Projects should only be added to the final distribution pom.xml after they have successfully published artifacts using upstream SNAPSHOTs. See Use of SNAPSHOT versions.
Note
It is very important that Self-Managed projects do not miss the deadlines for upstream integration and the final distribution check; otherwise they are very likely to miss the formal release distribution. See Release the project artifacts.
Self-Managed projects wanting to use the existing release job to release their artifacts (see Release the project artifacts) must have a stable branch in the major release (fluorine, neon, etc.) they are targeting. It is highly recommended to cut the stable branch before the first Managed release candidate (RC) is created.
After creating the stable branch Self-Managed projects should:
Bump the master branch version to X.Y+1.0-SNAPSHOT so that any new merge in master will not interfere with the newly created stable branch artifacts (a sketch of these steps follows this list).
Update .gitreview for the stable branch: change defaultbranch=master to the stable branch. This way folks running "git review" will get the right branch.
Update their Jenkins jobs: the current release should point to the newly created stable branch and the next release should point to the master branch. If you do not know how to do this, please open a ticket with the OpenDaylight helpdesk.
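A minimal sketch of these post-branch-cut steps, assuming a hypothetical project currently at version 0.7.0 that has just cut stable/fluorine (the Jenkins job update itself happens in releng/builder or via helpdesk):

# On master: bump to the next minor -SNAPSHOT so new merges do not clash with the branch
git checkout master
mvn versions:set -DnewVersion=0.8.0-SNAPSHOT -DgenerateBackupPoms=false
git commit -asm "Bump master to 0.8.0-SNAPSHOT"
git review

# On the stable branch: point .gitreview at stable/fluorine
git checkout stable/fluorine
sed -i -e 's#defaultbranch=master#defaultbranch=stable/fluorine#' .gitreview
git commit -asm "Update .gitreview to stable/fluorine"
git review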
Self-Managed projects wanting to participate in the formal (major or service) release distribution must release and publish their artifacts to Nexus in the week after the Managed release is published to Nexus.
Self-Managed projects having a stable branch with the latest upstream SNAPSHOTs (see the previous requirements) can use the release job in Project Standalone Release to release their artifacts.
After creating the release, Self-Managed projects should bump the stable branch version to X.Y.Z+1-SNAPSHOT so that any new merge in the stable branch will not interfere with pre-release artifacts.
Note
Self-Managed Projects will not have any leeway for missing deadlines. If projects are not in the final distribution in the allocated time (normally one week) after the Managed projects release, they will not be included in the release distribution.
There are no checkpoints for Self-Managed Projects.
Managed Projects that are not required for dependency reasons can submit a Project_Plan issue to be Self-Managed to the TSC project in Jira. See details in the Initial Checkpoint section above. Requests should be submitted before the start of a release. Requests will be evaluated by the TSC.
The TSC may withdraw a project from the Managed Release at any time.
Self-Managed Projects will have their artifacts included in the final release if they are available on-time, but they will not be available to be installed until the user does a repo:add.
To install a Self-Managed Project feature, find the feature description in the system directory. For example, NetVirt’s main feature:
system/org/opendaylight/netvirt/odl-netvirt-openstack/0.6.0-SNAPSHOT/
Then use the Karaf shell to repo:add the feature:
feature:repo-add mvn:org.opendaylight.netvirt/odl-netvirt-openstack/0.6.0-SNAPSHOT/xml/features
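Once the repository has been added, the feature can be installed from the same Karaf shell (the feature name must match one defined in the repository that was just added):

feature:install odl-netvirt-openstack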
Grievances¶
For requirements where it is difficult to automatically ascertain whether a Managed Project is compliant, there is a clear reporting process.
Grievance reports should be filed against the TSC project in Jira. Very urgent grievances can additionally be brought to the TSC’s attention via the TSC’s mailing list.
If a Managed Project does not meet the Responsiveness Requirements, a Grievance issue should be filed against the TSC project in Jira.
Unresponsive project reports should include (at least):
Select the project being reported in the ODL_Project field
Select the release version in the ODL_Release field
In the Summary field, put something like:
Grievance against Project-X
In the Description field, fill in the details:
Document the details that show ExampleProject was slow to review a change. The report should include as much relevant information as possible, including a description of the situation, relevant Gerrit change IDs and relevant public email list threads.
In the ODL_Gerrit_Patch field, put a URL to a Gerrit patch, if applicable
Vocabulary Reference¶
Managed Release Process: The release process described in this document.
Managed Project: A project taking part in the Managed Release Process.
Self-Managed Project: A project not taking part in the Managed Release Process.
Simultaneous Release: Event wherein all Snapshot Integrated Project versions are rewritten to release versions and release artifacts are produced.
Snapshot Integrated Project: Project that integrates with OpenDaylight projects using snapshot version numbers. These projects release together in the Simultaneous Release.
Release Integrated Project: Project that releases independently of the Simultaneous Release. These projects are consumed by Snapshot Integrated Projects based on release version numbers, not snapshot versions.
Release Schedule¶
OpenDaylight releases twice per year. The six-month cadence is designed to synchronize OpenDaylight releases with OpenStack and OPNFV releases. Dates are adjusted to match current resources and requirements from the current OpenDaylight users. Dates are also adjusted when they conflict with holidays, overlap with other releases or are otherwise problematic. Dates include the release of both managed and self-managed projects.
| Event | Sodium Dates | Relative Dates | Start-Relative Dates | Description |
|---|---|---|---|---|
| Release Start | 2019-03-07 | Start Date | Start Date +0 | Declare Intention: Submit Project_Plan Jira item in TSC project |
| Initial Checkpoint | 2019-03-21 | Start Date + 2 weeks | Start Date +2 weeks | Initial Checkpoint. All Managed Projects must have completed Project_Plan Jira items in TSC project. |
| Release Integrated Deadline | 2019-04-11 | Initial Checkpoint + 2 weeks | Start Date +4 weeks | Deadline for Release Integrated Projects (currently, ODLPARENT, YANGTOOLS and MDSAL) to provide the desired version deliverables for downstream Snapshot Integrated Projects to consume. For Sodium, this is +1 more week to resolve conflict with ONS NA 2019. |
| Version Bump | 2019-04-12 | Release Integrated Deadline + 1 day | Start Date +4 weeks 1 day | Prepare version bump patches and merge them in (RelEng team). Spend the next 2 weeks to get green build for all MSI Projects and a healthy distribution. |
| Version Bump Checkpoint | 2019-04-25 | Release Integrated Deadline + 2 weeks | Start Date +6 weeks | Check status of MSI Projects to see if we have green builds and a healthy distribution. Revert the MRI deliverables if deemed necessary. |
| CSIT Checkpoint | 2019-05-09 | Version Bump Checkpoint + 2 weeks | Start Date +8 weeks | All Managed Release CSIT should be in good shape - get all MSI Projects’ CSIT results as they were before the version bump. This is the final opportunity to revert the MRI deliverables if deemed necessary. |
| Middle Checkpoint | 2019-07-04 | CSIT Checkpoint + 8 weeks (sometimes +2 weeks to avoid December holidays) | Start Date +16 weeks (sometimes +2 weeks to avoid December holidays) | Checkpoint for status of Managed Projects - especially Snapshot Integrated Projects. |
| Code Freeze | 2019-08-01 | Middle Checkpoint + 4 weeks | Start Date +20 weeks | Code freeze for all Managed Projects - cut and lock release branch. Only allow blocker bugfixes in release branch. |
| Final Checkpoint | 2019-08-15 | Code Freeze + 2 weeks | Start Date +22 weeks | Final Checkpoint for all Managed Projects. |
| Formal Release | 2019-09-24 | 6 months after Start Date | Start Date +6 months | Formal release |
| Service Release 1 | 2019-11-12 | 1.5 months after Formal Release | Start Date +7.5 months | Service Release 1 (SR1) |
| Service Release 2 | 2020-02-17 | 3 months after SR1 | Start Date +10.5 months | Service Release 2 (SR2) |
| Service Release 3 | 2020-05-28 (actual: 2020-06-03) | 4 months after SR2 | Start Date +14 months | Service Release 3 (SR3) |
| Service Release 4 | 2020-08-28 | Not Applicable | Not Applicable | Service Release 4 (SR4) - Final Service Release |
| Release End of Life | 2020-09-05 | 4 months after SR3 | Start Date +18 months | End of Life - coincides with the Formal Release of the current release+2 versions and the start of the current release+3 versions |
Fluorine Release Goals¶
Purpose¶
This document outlines OpenDaylight’s project-level goals for Fluorine. It is meant for consumption by fellow LFN projects, the LFN TAC and the LFN Board.
Goals¶
OpenDaylight has major infrastructure requirements that can’t be mitigated due to the large number of tests the community has developed over time. The Integration/Test and RelEng/Builder projects have always striven to use resources efficiently, to make OpenDaylight’s increasingly large test suite fit in the same resource allocation. However, OpenDaylight’s recent move to LFN and the Managed Release model may have unlocked new opportunities to achieve equally good or better test coverage at a lower cost.
A few ideas are outlined below, although it’s expected others will emerge.
Getting feedback about the impact of efficiency efforts is critical. OpenDaylight has requested that LFN start sending out infrastructure spending reports. These will allow the community to make data-driven decisions about which changes have substantial impacts and which aren’t a good work-vs-reward trade-off.
Reports should be provided as frequently as possible and should include all available data, like per-flavor usage, to help target efforts.
Other LFN projects may find it helpful to request similar reports.
OpenDaylight currently spends significant infrastructure and developer resources maintaining our own Devstack-based OpenStack deployment logic. OPNFV installer projects already produce VM images with master branch versions of OpenStack and OpenDaylight installed via production tooling. OpenDaylight would like to move to doing our OpenStack testing using these images, updating the version of OpenDaylight to the build under test. Using a pre-baked OpenStack deployment vs deploying it ourselves in every job would result in substantial cost savings, and not having to maintain Devstack deployment logic would make our jobs much more stable and save developer time.
This change wasn’t possible in our previous Rackspace-hosted infrastructure, but we hope it will be enabled by our recent move to Vexxhost or by running jobs that require OpenStack on LFN-managed hardware.
As part of OpenDaylight’s move to the Managed Release model, the Test team will have greater freedom to step in and directly manage project’s tests. This may enable the Test team to disable tests that are not actively watched and make other jobs run less frequently.
OpenDaylight pioneered Cross-Project CI/CD (XCI) in LFN with OPNFV shortly after that project’s creation. Since then, both projects and others that have followed have realized major benefits from continuously integrating recent pre-release versions. OpenDaylight would like to continue and expand this work in Fluorine.
OpenDaylight’s cloud infrastructure runs on OpenStack. We would like to start using a released version of OpenDaylight NetVirt as the Neutron backend in this infrastructure. This “eating our own dogfood” exercise would make for a good production-level test and good marketing.
This change wasn’t possible in our previous Rackspace-hosted infrastructure, but we hope it will be enabled by our recent move to Vexxhost or by running jobs that require OpenStack on LFN-managed hardware.
OpenDaylight would like to continue bringing new contributors to the community.
For Fluorine, OpenDaylight would like to focus on getting downstream consumers involved in upstream development. In an ideal Open Source world, the users of an Open Source project would contribute back to the projects they consume. OpenDaylight would like to facilitate this by building special relationships between key downstream consumers and the upstream developer community. These downstreams could be companies, universities or Open Source projects. We hope for contributions in the form of code, documentation and bug reports.
OpenDaylight would like to work with the LFN MAC and TAC to identify a small set of downstream users to pilot the program with. The users would provide developers with dedicated cycles and a commitment to stick around for the long term. In exchange, the OpenDaylight developer community would prioritize training these developers, answering their questions and generally facilitating their bootstrapping into the upstream community.
Companies allocating contributors to OpenDaylight tend to distribute resources to projects that are directly related to the use cases they are interested in, but neglect to give sufficient resources to the kernel projects that support them. OpenDaylight’s kernel developers are doing a heroic job of keeping the platform healthy, but for the long-term health of the project special attention needs to be paid to sufficiently staffing these key projects.
OpenDaylight requests that LFN member companies that consume OpenDaylight consider contributing developer resources to kernel projects. The new developers should be allocated for the long term, to avoid spending cycles on training that are not repaid by contributions.
OpenDaylight has a tremendous amount of documentation, but much of it is written by experienced developers for experienced developers. As with most Open Source projects, the experienced developers typically don’t look at documentation targeted at inexperienced potential contributors. This type of general documentation is also typically not maintained by individual projects, who are focused on making sure their project-specific docs are in good shape.
To facilitate expanding OpenDaylight’s user and contributor base, we would like to focus on improving this “first impression” documentation for Fluorine. Since it’s not realistic to hope for a major improvement from the existing contributor base, OpenDaylight requests the LFN Board create an LF staff position focused on auditing and working with LFN project communities to improve this general, “first impression” documentation. This resource would be shared across all LFN projects.
The OpenDaylight community has developed a new release model for Fluorine. The Managed Release Model will facilitate timely releases, provide a more stable development environment for the most active OpenDaylight projects, reduce process overhead for all projects, give more autonomy to Unmanaged Projects and allow the Release and Test teams to give more support to Managed Projects.
See the Managed Release Process for additional details.
OpenDaylight’s release dates need to synchronize with a number of related Open Source projects. The OpenDaylight TSC will work with those projects, perhaps making use of the LFN TAC, to understand the best time for our releases. The TSC will adjust OpenDaylight’s release schedule accordingly and ensure it’s met. We anticipate that the new Managed Release Process will make it easier for OpenDaylight to consistently meet release date targets going forward.
OpenDaylight would like to continue having a face-to-face Developer Design Forum to plan each release. The community has expressed many times that these events are extremely valuable, that they need to continue happening and that they can’t be replaced by remote DDFs.
OpenDaylight requests that the LFN Board allocate resources for at least one, ideally two, days of DDF for each OpenDaylight six-month release cycle. It has worked well to host these events in conjunction with other large, relevant events like ONS.
Processes¶
Project Standalone Release¶
This page explains how a project can release independently outside of the OpenDaylight simultaneous release.
Preparing your project for release¶
A project can produce a staging repository by using one of the following methods against the {project-name}-maven-stage-{stream} job:
Leave a comment stage-release against any patch for the stream to build
Click Build with Parameters in the Jenkins Web UI for the job
This job performs the following duties:
Removes -SNAPSHOT from all pom files
Produces a taglist.log, project.patch, and project.bundle files
Runs mvn clean deploy to a local staging repo
Pushes the staging repo to a Nexus staging repo https://nexus.opendaylight.org/content/repositories/<REPO_ID> (REPO_ID is saved to staging-repo.txt on the log server)
Archives taglist.log, project.patch, and project.bundle files to log server
The files taglist.log and project.bundle can be used later, at release time, to reproduce a byte-exact copy of the commit that was built by the Jenkins job. This can be used to tag the release.
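For example, a sketch of reproducing the staged commit from the archived files (file names and branch are illustrative):

# Validate the archived bundle, then recover the exact commit that was built
git bundle verify project.bundle
git clone -b <branch> project.bundle project
cd project
git log -1   # the commit SHA should match the one recorded in taglist.log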
Releasing your project¶
Once testing against the staging repo has been completed and the project has determined that the staged repo is ready for release, a release can then be performed using the self-serve release process: https://docs.releng.linuxfoundation.org/projects/global-jjb/en/latest/jjb/lf-release-jobs.html
Ask helpdesk for the necessary rights on Jenkins if you do not have them
Choose your project dashboard
Check your release branch has been successfully staged and note the corresponding log folder
Go back to the dashboard and choose the release-merge job
Click on build with parameters
Fill in the form:
GERRIT_BRANCH must be changed to the branch name you want to release (e.g. stable/sodium)
VERSION with your corresponding project version (e.g. 0.4.1)
LOG_DIR with the relative path of the log from the stage release job (e.g. project-maven-stage-master/17/)
Choose maven as the DISTRIBUTION_TYPE in the select box
Uncheck the USE_RELEASE_FILE box
Launch the Jenkins job
This job performs the following duties:
Download and patch your project repository
Build the project
Publish the artifacts on Nexus
Tag and sign the release on Gerrit
Autorelease¶
The Release Engineering - Autorelease project is targeted at building the artifacts that are used in the release candidates and final full release.
Cloning Autorelease¶
To clone the autorelease repo including all of its submodules, simply run the clone command with the --recursive parameter.
git clone --recursive https://git.opendaylight.org/gerrit/releng/autorelease
If you forgot to add the --recursive parameter to your git clone, you can pull the submodules afterwards with the following commands.
git submodule init
git submodule update
Creating Autorelease - Release and RC build¶
An autorelease release build comes from the autorelease-release-<branch> job which can be found on the autorelease tab in the releng master:
For example, to create a Boron release candidate build, launch a build from the autorelease-release-boron job by clicking the Build with Parameters button on the left-hand menu:
Note
The only field that needs to be filled in is RELEASE_TAG; leave all other fields at their default settings. Set this to Boron, Boron-RC0, Boron-RC1, etc., depending on the build you’d like to create.
Adding Autorelease staging repo to settings.xml¶
If you are building or testing this release in such a way that requires pulling some of the artifacts from the Nexus repo, you may need to modify your settings.xml to include the staging repo URL, as this URL is not part of ODL Nexus’ public or snapshot groups. If you’ve already cloned the recommended settings.xml for building ODL, you will need to add an additional profile and activate it by adding these sections to the “<profiles>” and “<activeProfiles>” sections (please adjust accordingly).
Note
This is an example; add these example sections to your settings.xml, do not delete your existing sections.
The URLs in the <repository> and <pluginRepository> sections will also need to be updated with the staging repo you want to test.
<profiles>
<profile>
<id>opendaylight-staging</id>
<repositories>
<repository>
<id>opendaylight-staging</id>
<name>opendaylight-staging</name>
<url>https://nexus.opendaylight.org/content/repositories/automatedweeklyreleases-1062</url>
<releases>
<enabled>true</enabled>
<updatePolicy>never</updatePolicy>
</releases>
<snapshots>
<enabled>false</enabled>
</snapshots>
</repository>
</repositories>
<pluginRepositories>
<pluginRepository>
<id>opendaylight-staging</id>
<name>opendaylight-staging</name>
<url>https://nexus.opendaylight.org/content/repositories/automatedweeklyreleases-1062</url>
<releases>
<enabled>true</enabled>
<updatePolicy>never</updatePolicy>
</releases>
<snapshots>
<enabled>false</enabled>
</snapshots>
</pluginRepository>
</pluginRepositories>
</profile>
</profiles>
<activeProfiles>
<activeProfile>opendaylight-staging</activeProfile>
</activeProfiles>
Project lifecycle¶
This page documents the current rules to follow when adding and removing a particular project to Simultaneous Release (SR).
List of states for projects in autorelease¶
The state names are short negative phrases describing what is missing to progress to the following state.
non-existent The project is not recognized by Technical Steering Committee (TSC) to be part of OpenDaylight (ODL).
non-participating The project is recognized by the TSC to be an ODL project, but the project has not confirmed participation in SR for the given release cycle.
non-building The recognized project is willing to participate, but its current codebase is not passing its own merge job, or the project artifacts are otherwise unavailable in Nexus.
not-in-autorelease Project merge job passes, but the project is not added to autorelease (git submodule, maven module, validate-autorelease job passes).
failing-autorelease The project is added to autorelease (git submodule, maven module, validate-autorelease job passes), but autorelease build fails when building project’s artifact. Temporary state, timing out into not-in-autorelease.
repo-not-in-integration Project is successfully built within autorelease, but integration/distribution:features-index is not listing all its public feature repositories.
feature-not-in-integration Feature repositories are referenced, distribution-check job is passing, but some user-facing features are absent from integration/distribution:features-test (possibly because adding them does not pass distribution SingleFeatureTest).
distribution-check-not-passing Features are in distribution, but distribution-check job is either not running, or it is failing for any reason. Temporary state, timing out into feature-not-in-integration.
feature-is-experimental All user-facing features are in features-test, but at least one of the corresponding functional CSIT jobs does not meet Integration/Test requirements.
feature-is-not-stable The feature meets Integration/Test requirements, but it does not meet all requirements for stable features.
feature-is-stable
Note
A project may change its state in both directions; this list is to make sure a project is not left in an invalid state, for example the distribution referencing feature repositories without passing the distribution-check job.
Note
Projects can participate in Simultaneous Release even if they are not included in autorelease. Nitrogen example: Odlparent. FIXME: Clarify states for such projects (per version, if they released multiple times within the same cycle).
Branch Cutting¶
This page documents the current branch cutting tasks that need to be performed at RC0; the team with the necessary permissions to perform each task is noted in parentheses.
JJB (releng/builder)¶
Export ${NEXT_RELEASE} and ${CURR_RELEASE} with the new and current release names. (releng/builder committers)

export CURR_RELEASE="Silicon"
export NEXT_RELEASE="Phosphorus"
Run the script cut-branch-jobs.py to generate next release jobs. (releng/builder committers)

python scripts/cut-branch-jobs.py $CURR_RELEASE $NEXT_RELEASE jjb/
pre-commit run --all-files
Note
pre-commit is necessary to adjust the formatting of the generated YAML.
This script changes JJB yaml files to insert the next release configuration by updating streams and branches where relevant. For example, if master is currently Silicon, the result of this script will update config blocks as follows:
Update multi-streams:
stream:
  - Phosphorus:
      branch: master
  - Silicon:
      branch: stable/silicon
Insert new project blocks:

- project:
    name: aaa-phosphorus
    jobs:
      - '{project-name}-verify-{stream}-{maven}-{jdks}'
    stream: phosphorus
    branch: master

- project:
    name: aaa-silicon
    jobs:
      - '{project-name}-verify-{stream}-{maven}-{jdks}'
    stream: silicon
    branch: stable/silicon
Review and submit the changes to releng/builder project. (releng/builder committers)
Autorelease¶
Block submit permissions for registered users and elevate RE’s committer rights on gerrit. (Helpdesk)
Note
Enable the Exclusive checkbox for the submit button to override any existing permissions.
Enable create reference permissions on gerrit for RE’s to submit .gitreview patches. (Helpdesk)
Note
Enable the Exclusive checkbox to override any existing permissions.
Start the branch cut job or use the manual steps below for branch cutting autorelease. (Release Engineering Team)
Start the version bump job or use the manual steps below for version bump autorelease. (Release Engineering Team)
Merge all .gitreview patches submitted through the job or manually. (Release Engineering Team)
Remove create reference permissions set on gerrit for RE’s. (Helpdesk)
Merge all version bump patches in the order of dependencies. (Release Engineering Team)
Re-enable submit permissions for registered users and disable elevated RE committer rights on gerrit. (Helpdesk)
Notify release list on branch cutting work completion. (Release Engineering Team)
Branch cut job (Autorelease)¶
Branch cutting can be performed either through the job or manually.
Start the autorelease-branch-cut job (Release Engineering Team)
Manual steps to branch cut (Autorelease)¶
Setup releng/autorelease repository. (Release Engineering Team)
git review -s
git submodule foreach 'git review -s'
git checkout master
git submodule foreach 'git checkout master'
git pull --rebase
git submodule foreach 'git pull --rebase'
Enable create reference permissions on gerrit for RE’s to submit .gitreview patches. (Helpdesk)
Note
Enable the Exclusive checkbox to override any existing permissions.
Create stable/${CURR_RELEASE} branches based on HEAD master. (Release Engineering Team)
git checkout -b stable/${CURR_RELEASE,,} origin/master
git submodule foreach 'git checkout -b stable/${CURR_RELEASE,,} origin/master'
git push gerrit stable/${CURR_RELEASE,,}
git submodule foreach 'git push gerrit stable/${CURR_RELEASE,,}'
Contribute .gitreview updates to stable/${CURR_RELEASE,,}. (Release Engineering Team)
git submodule foreach sed -i -e "s#defaultbranch=master#defaultbranch=stable/${CURR_RELEASE,,}#" .gitreview
git submodule foreach git commit -asm "Update .gitreview to stable/${CURR_RELEASE,,}"
git submodule foreach 'git review -t ${CURR_RELEASE,,}-branch-cut'
sed -i -e "s#defaultbranch=master#defaultbranch=stable/${CURR_RELEASE,,}#" .gitreview
git add .gitreview
git commit -s -v -m "Update .gitreview to stable/${CURR_RELEASE,,}"
git review -t ${CURR_RELEASE,,}-branch-cut
Version bump job (Autorelease)¶
Version bump can be performed either through the job or manually.
Start the autorelease-version-bump-${NEXT_RELEASE,,} job (Release Engineering Team)
Note
Enable BRANCH_CUT and disable DRY_RUN to run the job for the branch cut work-flow. The version bump job can be run only on the master branch.
Manual steps to version bump (Autorelease)¶
Version bump master by x.(y+1).z. (Release Engineering Team)
git checkout master
git submodule foreach 'git checkout master'
pip install lftools
lftools version bump ${CURR_RELEASE}
Make sure the version bump changes do not modify anything under scripts or pom.xml. (Release Engineering Team)
git checkout pom.xml scripts/
Push version bump master changes to gerrit. (Release Engineering Team)
git submodule foreach 'git commit -asm "Bump versions by x.(y+1).z for next dev cycle"'
git submodule foreach 'git review -t ${CURR_RELEASE,,}-branch-cut'
Merge the patches in order according to the merge-order.log file found in autorelease jobs. (Release Engineering Team)
Note
The version bump patches can be merged more quickly by performing a local build with
mvn clean deploy -DskipTests
to prime Nexus with the new version updates.
Documentation post branch tasks¶
Git remove all files/directories from the docs/release-notes/* directory. (Release Engineering Team)

git checkout master
git rm -rf docs/release-notes/<project file and/or folder>
git commit -sm "Reset release notes for next dev cycle"
git review
Simultaneous Release¶
This page explains how the OpenDaylight release process works once the TSC has approved a release.
Code Freeze¶
At the first Release Candidate (RC) the Submit
button is disabled on the
stable branch to prevent projects from merging non-blocking patches
into the release.
Disable Submit for Registered Users and allow permission to the Release Engineering Team (Helpdesk)
Important
DO NOT enable Code-Review+2 and Verified+1 for the Release Engineering Team during code freeze.
Note
Enable the Exclusive checkbox for the submit button to override any existing permissions. Code-Review and Verify permissions are only needed during version bumping.
Release Preparations¶
After the release candidate is built, GPG-sign the artifacts using the lftools sign command.
STAGING_REPO=autorelease-1903
STAGING_PROFILE_ID=abc123def456 # This Profile ID is listed in Nexus > Staging Profiles
lftools sign deploy-nexus https://nexus.opendaylight.org $STAGING_REPO $STAGING_PROFILE_ID
Verify the distribution-karaf file with the signature.
gpg2 --verify karaf-x.y.z-${RELEASE}.tar.gz.asc karaf-x.y.z-${RELEASE}.tar.gz
Note
Projects such as OpFlex participate in the Simultaneous Release but are not part of the autorelease build. Ping those projects and prep their staging repos as well.
Releasing OpenDaylight¶
The following describes the Simultaneous Release process for shipping out the binary and source code on release day.
Bulleted actions can be performed in parallel while numbered actions should be done in sequence.
Release the Nexus Staging repos (Helpdesk)
Select both the artifacts and signature repos (created previously) and click Release. Enter Release OpenDaylight $RELEASE for the description and click Confirm.
Perform this step for any additional projects that are participating in the Simultaneous Release but are not part of the autorelease build.
Tip
This task takes hours to run so kicking it off early is a good idea.
Version bump for next dev cycle (Release Engineering Team)
Run the autorelease-version-bump-${STREAM} job
Tip
This task takes hours to run so kicking it off early is a good idea.
Enable Code-Review+2 and Verify+1 voting permissions for the Release Engineering Team (Helpdesk)
Note
Enable the Exclusive checkbox for the submit button to override any existing permissions. Code-Review and Verify permissions are only needed during version bumping. DO NOT enable them during code freeze.
Merge all patches generated by the job
Restore Gerrit permissions for Registered Users and disable elevated Release Engineering Team permissions (Helpdesk)
Tag the release (Release Engineering Team)
Install lftools
lftools contains the version bumping scripts we need to version bump and tag the dev branches. We recommend using a virtualenv for this.
# Skip mkvirtualenv if you already have an lftools virtualenv
mkvirtualenv lftools
workon lftools
pip install --upgrade lftools
Pull latest autorelease repository
export RELEASE=Nitrogen-SR1
export STREAM=${RELEASE//-*}
export BRANCH=origin/stable/${STREAM,,}

# No need to clean if you have already done it.
git clone --recursive https://git.opendaylight.org/gerrit/releng/autorelease
cd autorelease
git fetch origin

# Ensure we are on the right branch. Note that we are wiping out all
# modifications in the repo so backup unsaved changes before doing this.
git checkout -f
git checkout ${BRANCH,,}
git clean -xdff
git submodule foreach git checkout -f
git submodule foreach git clean -xdff
git submodule update --init

# Ensure git review is setup
git review -s
git submodule foreach 'git review -s'
Publish release tags
export BUILD_NUM=55
export OPENJDKVER="openjdk8"
export PATCH_URL="https://logs.opendaylight.org/releng/vex-yul-odl-jenkins-1/autorelease-release-${STREAM,,}-mvn35-${OPENJDKVER}/${BUILD_NUM}/patches.tar.gz"
./scripts/release-tags.sh "${RELEASE}" /tmp/patches "$PATCH_URL"
Notify Community and Website teams
Update downloads page
Submit a patch to the ODL docs project to update the downloads page with the latest binaries and packages (Release Engineering Team)
Email dev/release/tsc mailing lists announcing release binaries location (Release Engineering Team)
Email dev/release/tsc mailing lists to notify of tagging and version bump completion (Release Engineering Team)
Note
This step is performed after Version Bump and Tagging steps are complete.
Generate Service Release notes
Warning
If this is a major release (e.g. Sodium) as opposed to a Service Release (e.g. Sodium-SR1), skip this step.
For major releases the notes come from the projects themselves in the docs repo via the docs/release-notes/projects directory.
For service releases (SRs) we need to generate service release notes. This can be performed by running the autorelease-generate-release-notes-$STREAM job.
Run the autorelease-generate-release-notes-${STREAM} job (Release Engineering Team)
Trigger this job by leaving a Gerrit comment
generate-release-notes Carbon-SR2
Release notes can also be manually generated with the script:
git checkout stable/${BRANCH,,}
./scripts/release-notes-generator.sh ${RELEASE}
A release-notes.rst will be generated in the working directory. Submit this file as release-notes-sr1.rst (update the SR number as necessary) to the docs project.
Super Committers¶
Super committers are a group of TSC-approved individuals within the OpenDaylight community with the power to merge patches on behalf of projects during approved Release Activities.
Super Committer Activities¶
Super committer powers are granted ONLY during TSC-approved activities and are not active on a regular basis. Once one of the TSC-approved activities is triggered, helpdesk will enable the permissions listed for the respective activity for the duration of that activity.
Note
This activity has been pre-approved by the TSC and does not require a TSC vote. Helpdesk should be notified to enable the permissions and again to disable the permissions once activities are complete.
Super committers are granted powers to merge blocking patches for the duration of code freeze, until a release is approved and code freeze is lifted. This permission is only granted for the specific branch that is frozen.
The following powers are granted:
Submit button access
During this time Super Committers can ONLY merge patches that have a +2 Code-Review from a project committer approving the merge and that pass the Jenkins Verify check. If either of these conditions is not met, DO NOT merge the patch.
Note
This activity has been pre-approved by the TSC and does not require a TSC vote. Helpdesk should be notified to enable the permissions and again to disable the permissions once activities are complete.
Super committers are granted powers to merge version bump related patches for the duration of version bumping activities.
The following powers are granted:
Vote Code-Review +2
Vote Verified +1
Remove Reviewer
Submit button access
These permissions are granted to allow super committers to push through version bump patches with haste. The Remove Reviewer permission is to be used only for removing a Jenkins vote caused by a failed distribution-check job, if that failure is caused by a temporary version inconsistency present while the bump activity is being performed.
Version bumping activities come in 2 forms.
Post-release Autorelease version bumping
MRI project version bumping
In case 1, the TSC has approved an official OpenDaylight release, and after the binaries are released to the world all Autorelease-managed projects are version bumped appropriately to the next development release number.
In case 2, during the Release Integrated Deadline of the release schedule, MRI projects submit desired version updates. Once approved by the TSC, Super Committers can merge these patches across the projects.
Ideally the version bumping activities should not include code modifications; if they do, a +2 Code-Review vote should be completed by a committer on the project to indicate that they approve the code changes.
Once version bump patches are merged these permissions are removed.
Any activities not in the list above fall under the exceptional case, which requires TSC approval before Super Committers can merge changes. These cases should be brought to the TSC for a vote.
Super Committers¶
| Name | IRC |
|---|---|
| Anil Belur | abelur |
| Daniel Farrell | dfarrell07 |
| Jamo Luhrsen | jamoluhrsen |
| Luis Gomez | LuisGomez |
| Michael Vorburger | vorburger |
| Sam Hague | shague |
| Stephen Kitt | skitt |
| Robert Varga | rovarga |
| Thanh Ha | zxiiro |
Supporting Documentation¶
Identifying Managed Projects in an OpenDaylight Version¶
What are Managed Projects?¶
Managed Projects are simply projects that take part in the Managed Release Process. Managed Projects are either core components of OpenDaylight or have demonstrated their maturity and ability to successfully take part in the Managed Release.
For more information, see the full description of Managed Projects.
What is a Managed Distribution?¶
Managed Projects are aggregated together by a POM file that defines a Managed Distribution. The Managed Distribution is the focus of OpenDaylight development. It’s continuously built, tested, packaged and released into Continuous Delivery pipelines. As prescribed by the Managed Release Process, Managed Distributions are eventually blessed as formal OpenDaylight releases.
NB: OpenDaylight’s Fluorine release actually included Managed and Self-Managed Projects, but the community is working towards the formal release being exactly the Managed Distribution, with an option for Self-Managed Projects to release independently on top of the Managed Distribution later.
Finding the Managed Projects given a Managed Distribution¶
Given a Managed Distribution (tar.gz, .zip, RPM, Deb), the Managed Projects that constitute it can be found in the taglist.log file in the root of the archive.
taglist.log files are of the format:
<Managed Project> <Git SHA of built commit> <Codename of release>
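For example, given a downloaded distribution archive (the archive name below is illustrative), the constituent Managed Projects can be listed like this:

tar -xzf opendaylight-0.11.0.tar.gz
cat opendaylight-0.11.0/taglist.log
# prints lines such as: controller <git-sha> Sodium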
Finding the Managed Projects Given a Branch¶
To find the current set of Managed Projects in a given OpenDaylight branch, examine the integration/distribution/features/repos/index/pom.xml file that defines the Managed Distribution.
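A sketch of inspecting that file for a given branch (stable/sodium is illustrative):

git clone https://git.opendaylight.org/gerrit/integration/distribution
cd distribution
git checkout stable/sodium
# The Managed Projects appear as feature repository entries in this pom
less features/repos/index/pom.xml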
The release management team maintains several documents in Google Drive to track releases. These documents can be found at the following link:
https://drive.google.com/drive/folders/0ByPlysxjHHJaUXdfRkJqRGo4aDg
OpenDaylight User Guide¶
Overview¶
This first part of the user guide covers the basic user operations of the OpenDaylight Release using the generic base functionality.
OpenDaylight Controller Overview¶
The OpenDaylight controller is JVM software and can be run from any operating system and hardware as long as it supports Java. The controller is an implementation of the Software Defined Network (SDN) concept and makes use of the following tools:
Maven: OpenDaylight uses Maven for easier build automation. Maven uses pom.xml (Project Object Model) to script the dependencies between bundles and also to describe which bundles to load and start.
OSGi: This framework is the back-end of OpenDaylight as it allows dynamically loading bundles and packages JAR files, and binding bundles together for exchanging information.
Java interfaces: Java interfaces are used for event listening, specifications, and forming patterns. This is the main way in which specific bundles implement call-back functions for events and indicate awareness of specific state.
REST APIs: These are northbound APIs such as topology manager, host tracker, flow programmer, static routing, and so on.
The controller exposes open northbound APIs which are used by applications. The OSGi framework and bidirectional REST are supported for the northbound APIs. The OSGi framework is used for applications that run in the same address space as the controller while the REST (web-based) API is used for applications that do not run in the same address space (or even the same system) as the controller. The business logic and algorithms reside in the applications. These applications use the controller to gather network intelligence, run its algorithm to do analytics, and then orchestrate the new rules throughout the network. On the southbound, multiple protocols are supported as plugins, e.g. OpenFlow 1.0, OpenFlow 1.3, BGP-LS, and so on. The OpenDaylight controller starts with an OpenFlow 1.0 southbound plugin. Other OpenDaylight contributors begin adding to the controller code. These modules are linked dynamically into a Service Abstraction Layer (SAL).
The SAL exposes services to which the modules north of it are written. The SAL figures out how to fulfill the requested service irrespective of the underlying protocol used between the controller and the network devices. This provides investment protection to the applications as OpenFlow and other protocols evolve over time. For the controller to control devices in its domain, it needs to know about the devices, their capabilities, reachability, and so on. This information is stored and managed by the Topology Manager. The other components like ARP handler, Host Tracker, Device Manager, and Switch Manager help in generating the topology database for the Topology Manager.
For a more detailed overview of the OpenDaylight controller, see the OpenDaylight Developer Guide.
Project-specific User Guides¶
Distribution Version reporting¶
Overview¶
This section provides an overview of odl-distribution-version feature.
A remote user of OpenDaylight usually has access to RESTCONF and NETCONF northbound interfaces, but does not have access to the system OpenDaylight is running on. OpenDaylight has released multiple versions including Service Releases, and there are incompatible changes between them. In order to know which YANG modules to use, which bugs to expect and which workarounds to apply, such a user would need to know the exact version of at least one OpenDaylight component.
There are indirect ways to deduce such a version, but the direct way is enabled by the odl-distribution-version feature. An administrator can specify version strings, which would be available to users via NETCONF, or via RESTCONF if OpenDaylight is configured to initiate a NETCONF connection to its config subsystem northbound interface.
By default, users have write access to the config subsystem, so they can add, modify or delete any version strings present there. Administrators can only influence whether the feature is installed and what the initial values are.
Config subsystem is local only, not cluster aware, so each member reports versions independently. This is suitable for heterogeneous clusters.
Initial version values are set via config file odl-version.xml
which is created in
$KARAF_HOME/etc/opendaylight/karaf/
upon installation of odl-distribution-version
feature.
If the admin wants to use different content, the file with the desired content has to be created
there before the feature installation happens.
By default, the config file defines two config modules, named odl-distribution-version
and odl-odlparent-version
.
Opendaylight config subsystem NETCONF northbound is not made available just by installing
odl-distribution-version
, but most other feature installations would enable it.
RESTCONF interfaces are enabled by installing odl-restconf
feature,
but that does not allow access to the config subsystem by itself.
On single node deployments, installation of odl-netconf-connector-ssh
is recommended,
which would configure controller-config
device and its MD-SAL mount point.
For cluster deployments, installing odl-netconf-clustered-topology
is recommended.
See documentation for clustering on how to create similar devices for each member,
as controller-config
name is not unique in that context.
Assuming single node deployment and user located on the same system,
here is an example curl
command accessing odl-odlparent-version
config module:
curl 127.0.0.1:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/controller-config/yang-ext:mount/config:modules/module/odl-distribution-version:odl-version/odl-odlparent-version
Neutron Service User Guide¶
Overview¶
This Karaf feature (odl-neutron-service
) provides integration
support for OpenStack Neutron via the OpenDaylight ML2 mechanism driver.
The Neutron Service is only one of the components necessary for OpenStack integration. For those related components, please refer to the documentation of each component:
If you want OpenStack integration with OpenDaylight, you will need this feature with an OpenDaylight provider feature like netvirt, group based policy, VTN, or lisp mapper. For provider configuration, please refer to each individual provider’s documentation. The Neutron service only provides the northbound API for the OpenStack Neutron ML2 mechanism driver; without those provider features, the Neutron service itself isn’t useful.
Neutron Service feature Architecture¶
The Neutron service provides northbound API for OpenStack Neutron via RESTCONF and also its dedicated REST API. It communicates through its YANG model with providers.

Neutron Service Architecture¶
Configuring Neutron Service feature¶
As the Karaf feature includes everything necessary for communicating northbound, no special configuration is needed. Usually this feature is used with an OpenDaylight southbound plugin that implements actual network virtualization functionality, together with OpenStack Neutron. The user needs to set up those configurations; refer to the related documentation for each.
Administering or Managing odl-neutron-service¶
There is no specific configuration regarding to Neutron service itself. For related configuration, please refer to OpenStack Neutron configuration and OpenDaylight related services which are providers for OpenStack.
Installing odl-neutron-service while the controller is running¶
While OpenDaylight is running, at the Karaf prompt, type:
feature:install odl-neutron-service
Wait a while until the initialization is done and the controller stabilizes.
odl-neutron-service
provides only a unified interface for OpenStack
Neutron. It doesn’t provide actual functionality for network
virtualization. Refer to each OpenDaylight project documentation for
actual configuration with OpenStack Neutron.
Neutron Logger¶
Another service, the Neutron Logger, is provided for debugging/logging purposes. It logs changes on Neutron YANG models.
feature:install odl-neutron-logger
Service Function Chaining¶
OpenDaylight Service Function Chaining (SFC) Overview¶
OpenDaylight Service Function Chaining (SFC) provides the ability to define an ordered list of network services (e.g. firewalls, load balancers). These services are then “stitched” together in the network to create a service chain. This project provides the infrastructure (chaining logic, APIs) needed for ODL to provision a service chain in the network and an end-user application for defining such chains.
ACE - Access Control Entry
ACL - Access Control List
SCF - Service Classifier Function
SF - Service Function
SFC - Service Function Chain
SFF - Service Function Forwarder
SFG - Service Function Group
SFP - Service Function Path
RSP - Rendered Service Path
NSH - Network Service Header
SFC User Interface¶
The SFC User interface comes with a Command Line Interface (CLI): it provides several Karaf console commands to show the SFC model (SF, SFFs, etc.) provisioned in the datastore.
Run ODL distribution (run karaf)
In Karaf console execute:
feature:install odl-sfc-ui
Visit SFC-UI on:
http://<odl_ip_address>:8181/sfc/index.html
The Karaf Container offers a complete Unix-like console that allows managing the container. This console can be extended with custom commands to manage the features deployed on it. This feature will add some basic commands to show the provisioned SFC entities.
The SFC-CLI implements commands to show some of the provisioned SFC entities: Service Functions, Service Function Forwarders, Service Function Chains, Service Function Paths, Service Function Classifiers, Service Nodes and Service Function Types:
List one/all provisioned Service Functions:
sfc:sf-list [--name <name>]
List one/all provisioned Service Function Forwarders:
sfc:sff-list [--name <name>]
List one/all provisioned Service Function Chains:
sfc:sfc-list [--name <name>]
List one/all provisioned Service Function Paths:
sfc:sfp-list [--name <name>]
List one/all provisioned Service Function Classifiers:
sfc:sc-list [--name <name>]
List one/all provisioned Service Nodes:
sfc:sn-list [--name <name>]
List one/all provisioned Service Function Types:
sfc:sft-list [--name <name>]
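For example, to show a single provisioned Service Function by name (firewall-1 is a hypothetical name):

sfc:sf-list --name firewall-1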
SFC Southbound REST Plug-in¶
The Southbound REST Plug-in is used to send configuration from datastore down to network devices supporting a REST API (i.e. they have a configured REST URI). It supports POST/PUT/DELETE operations, which are triggered accordingly by changes in the SFC data stores.
Access Control List (ACL)
Service Classifier Function (SCF)
Service Function (SF)
Service Function Group (SFG)
Service Function Schedule Type (SFST)
Service Function Forwarder (SFF)
Rendered Service Path (RSP)
From the user perspective, the REST plug-in is another SFC Southbound plug-in used to communicate with network devices.

Southbound REST Plug-in integration into ODL¶
Run ODL distribution (run karaf)
In Karaf console execute:
feature:install odl-sfc-sb-rest
Configure REST URIs for SF/SFF through the SFC User Interface or RESTCONF (required configuration steps can be found in the tutorial linked below)
Comprehensive tutorial on how to use the Southbound REST Plug-in and how to control network devices with it can be found on: https://wiki-archive.opendaylight.org/view/Service_Function_Chaining:Main
SFC-OVS integration¶
SFC-OVS provides integration of SFC with Open vSwitch (OVS) devices. Integration is realized through the mapping of SFC objects (like SF, SFF, Classifier, etc.) to OVS objects (like Bridge, TerminationPoint=Port/Interface). The mapping takes care of the automatic instantiation (setup) of the corresponding object whenever its counterpart is created. For example, when a new SFF is created, the SFC-OVS plug-in will create a new OVS bridge.
The feature is intended for SFC users willing to use Open vSwitch as an underlying network infrastructure for deploying RSPs (Rendered Service Paths).
SFC-OVS uses the OVSDB MD-SAL Southbound API for getting/writing information from/to OVS devices. From the user perspective SFC-OVS acts as a layer between SFC datastore and OVSDB.

SFC-OVS integration into ODL¶
Run ODL distribution (run karaf)
In Karaf console execute:
feature:install odl-sfc-ovs
Configure Open vSwitch to use ODL as a manager, using following command:
ovs-vsctl set-manager tcp:<odl_ip_address>:6640
This tutorial shows the usual workflow during creation of an OVS Bridge with use of the SFC APIs.
Open vSwitch installed (ovs-vsctl command available in shell)
SFC-OVS feature configured as stated above
In a shell execute:
ovs-vsctl set-manager tcp:<odl_ip_address>:6640
Send POST request to URL:
http://<odl_ip_address>:8181/restconf/operations/service-function-forwarder-ovs:create-ovs-bridge
Use Basic auth with credentials "admin"/"admin" and set Content-Type: application/json. The content of the POST request should be the following:
{
"input":
{
"name": "br-test",
"ovs-node": {
"ip": "<Open_vSwitch_ip_address>"
}
}
}
Open_vSwitch_ip_address is the IP address of the machine where Open vSwitch is installed.
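For reference, the request can be sent with curl, following the same conventions used in the rest of this guide (adjust both IP addresses):

curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" --data '{"input":{"name":"br-test","ovs-node":{"ip":"<Open_vSwitch_ip_address>"}}}' -X POST --user admin:admin http://<odl_ip_address>:8181/restconf/operations/service-function-forwarder-ovs:create-ovs-bridge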
In a shell execute: ovs-vsctl show. There should be a Bridge with the name br-test and one port/interface called br-test.
Also, the corresponding SFF for this OVS Bridge should be configured, which can be verified through the SFC User Interface or RESTCONF as follows.
Visit the SFC User Interface:
http://<odl_ip_address>:8181/sfc/index.html#/sfc/serviceforwarder
Use pure RESTCONF and send a GET request to URL:
http://<odl_ip_address>:8181/restconf/config/service-function-forwarder:service-function-forwarders
There should be an SFF whose name ends with br1, and the SFF should contain two data plane locators: br1 and testPort.
SFC Classifier User Guide¶
A description of the classifier can be found at: https://datatracker.ietf.org/doc/draft-ietf-sfc-architecture/
There are two types of classifier:
OpenFlow Classifier
Iptables Classifier
The OpenFlow Classifier implements the classification criteria based on OpenFlow rules deployed into an OpenFlow switch. An Open vSwitch takes the role of the classifier and performs various encapsulations such as NSH, VLAN, MPLS, etc. In the existing implementation, the classifier supports NSH encapsulation. Matching information is based on ACLs for MAC addresses, ports, protocol, IPv4 and IPv6. Supported protocols are TCP, UDP and SCTP. The action information in the OF rules is the forwarding of the encapsulated packets with specific information related to the RSP.
The OVSDB Southbound interface is used to create an instance of a bridge in a specific location (via IP address). This bridge contains the OpenFlow rules that perform the classification of the packets and react accordingly. The OpenFlow Southbound interface is used to translate the ACL information into OF rules within the Open vSwitch.
Note
In order to create the instance of the bridge that takes the role of a classifier, an "empty" SFF must be created.
An empty SFF must be created in order to host the ACL that contains the classification information.
SFF data plane locator must be configured
Classifier interface must be manually added to SFF bridge.
Classification information is based on MAC addresses, protocol, ports and IP. The ACL gathers this information and is assigned to an RSP, which corresponds to a specific path for a Service Chain.
The classifier manages everything from starting the packet listener to the creation (and removal) of appropriate ip(6)tables rules and marking received packets accordingly. Its functionality is available only on Linux, as it leverages NetfilterQueue, which provides access to packets matched by an iptables rule. The classifier requires root privileges to be able to operate.
So far it is capable of processing ACL for MAC addresses, ports, IPv4 and IPv6. Supported protocols are TCP and UDP.
The Python code is located in the project repository at sfc-py/common/classifier.py.
Note
The classifier assumes that the Rendered Service Path (RSP) already exists in ODL when an ACL referencing it is obtained.
sfc_agent receives an ACL and passes it to the classifier for processing
the RSP (its SFF locator) referenced by the ACL is requested from ODL
if the RSP exists in ODL, ACL-based iptables rules for it are applied
After this process is over, every packet successfully matched to an iptables rule (i.e. successfully classified) will be NSH encapsulated and forwarded to a related SFF, which knows how to traverse the RSP.
Rules are created using the appropriate iptables commands. If the Access Control Entry (ACE) rule is MAC-address related, both iptables and ip6tables rules are issued. If the ACE rule is IPv4-address related, only iptables rules are issued; likewise, only ip6tables rules are issued for IPv6.
Note
The iptables raw table contains all created rules.
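As an illustrative sketch only (the exact rules are generated by the classifier, not taken from its source), an iptables rule of this kind would steer matched TCP traffic into an NFQUEUE queue, where NetfilterQueue hands the packets to the classifier for NSH encapsulation:

# hypothetical example: classify TCP port 80 traffic via NFQUEUE queue 2
sudo iptables -t raw -A PREROUTING -p tcp --dport 80 -j NFQUEUE --queue-num 2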
Classifier runs alongside sfc_agent, therefore the command for starting it locally is:
sudo python3.4 sfc-py/sfc_agent.py --rest --odl-ip-port localhost:8181
--auto-sff-name --nfq-class
SFC OpenFlow Renderer User Guide¶
The Service Function Chaining (SFC) OpenFlow Renderer (SFC OF Renderer) implements Service Chaining on OpenFlow switches. It listens for the creation of a Rendered Service Path (RSP) in the operational data store, and once received it programs Service Function Forwarders (SFF) that are hosted on OpenFlow capable switches to forward packets through the service chain. Currently the only tested OpenFlow capable switch is OVS 2.9.
Common acronyms used in the following sections:
SF - Service Function
SFF - Service Function Forwarder
SFC - Service Function Chain
SFP - Service Function Path
RSP - Rendered Service Path
The SFC OF Renderer is invoked after an RSP is created in the operational data store, using an MD-SAL listener called SfcOfRspDataListener. Upon SFC OF Renderer initialization, the SfcOfRspDataListener registers itself to listen for RSP changes. When invoked, the SfcOfRspDataListener processes the RSP and calls the SfcOfFlowProgrammerImpl to create the necessary flows in the Service Function Forwarders configured in the RSP. Refer to the following diagram for more details.

SFC OpenFlow Renderer High Level Architecture¶
The SFC OpenFlow Renderer uses the following tables for its Flow pipeline:
Table 0, Classifier
Table 1, Transport Ingress
Table 2, Path Mapper
Table 3, Path Mapper ACL
Table 4, Next Hop
Table 10, Transport Egress
The OpenFlow Table Pipeline is intended to be generic to work for all of the different encapsulations supported by SFC.
All of the tables are explained in detail in the following section.
The SFFs (SFF1 and SFF2), SFs (SF1), and topology used for the flow tables in the following sections are as described in the following diagram.

SFC OpenFlow Renderer Typical Network Topology¶
It is possible for the SFF to also act as a classifier. This table maps subscriber traffic to RSPs, and is explained in detail in the classifier documentation.
If the SFF is not a classifier, then this table will just have a simple Goto Table 1 flow.
The Transport Ingress table has an entry per expected tunnel transport type to be received in a particular SFF, as established in the SFC configuration.
Here are two examples on SFF1: one where the RSP ingress tunnel is MPLS (assuming VLAN is used for the SFF-SF hop), and the other where the RSP ingress tunnel is either Eth+NSH or just NSH with no Ethernet.
Priority | Match | Action
---|---|---
256 | EtherType==0x8847 (MPLS unicast) | Goto Table 2
256 | EtherType==0x8100 (VLAN) | Goto Table 2
250 | EtherType==0x894f (Eth+NSH) | Goto Table 2
250 | PacketType==0x894f (NSH no Eth) | Goto Table 2
5 | Match Any | Drop

Table: Table Transport Ingress
The Path Mapper table has an entry per expected tunnel transport info to be received in a particular SFF, as established in the SFC configuration. The tunnel transport info is used to determine the RSP Path ID, and is stored in the OpenFlow Metadata. This table is not used for NSH, since the RSP Path ID is stored in the NSH header.
For SF nodes that do not support NSH tunneling, the IP header DSCP field is used to store the RSP Path Id. The RSP Path Id is written to the DSCP field in the Transport Egress table for those packets sent to an SF.
Here is an example on SFF1, assuming the following details:
VLAN ID 1000 is used for the SFF-SF
The RSP Path 1 tunnel uses MPLS label 100 for ingress and 101 for egress
The RSP Path 2 (symmetric downlink path) uses MPLS label 101 for ingress and 100 for egress
Priority | Match | Action
---|---|---
256 | MPLS Label==100 | RSP Path=1, Pop MPLS, Goto Table 4
256 | MPLS Label==101 | RSP Path=2, Pop MPLS, Goto Table 4
256 | VLAN ID==1000, IP DSCP==1 | RSP Path=1, Pop VLAN, Goto Table 4
256 | VLAN ID==1000, IP DSCP==2 | RSP Path=2, Pop VLAN, Goto Table 4
5 | Match Any | Goto Table 3

Table: Table Path Mapper
The Path Mapper ACL table is only populated when PacketIn packets are received from the switch for TcpProxy type SFs. These flows are created with an inactivity timer of 60 seconds and will be automatically deleted upon expiration.
The Next Hop table uses the RSP Path Id and appropriate packet fields to determine where to send the packet next. For NSH, only the NSP (Network Services Path, RSP ID) and NSI (Network Services Index, next hop) fields from the NSH header are needed to determine the VXLAN tunnel destination IP. For VLAN or MPLS, the source MAC address is used to determine the destination MAC address.
Here are two examples on SFF1, assuming SFF1 is connected to SFF2. RSP Paths 1 and 2 are symmetric VLAN paths. RSP Paths 3 and 4 are symmetric NSH paths. RSP Path 1 ingress packets come from external to SFC, for which we don’t have the source MAC address (MacSrc).
Priority | Match | Action
---|---|---
256 | RSP Path==1, MacSrc==SF1 | MacDst=SFF2, Goto Table 10
256 | RSP Path==2, MacSrc==SF1 | Goto Table 10
256 | RSP Path==2, MacSrc==SFF2 | MacDst=SF1, Goto Table 10
246 | RSP Path==1 | MacDst=SF1, Goto Table 10
550 | dl_type=0x894f, nsh_spi=3, nsh_si=255 (NSH, SFF Ingress RSP 3, hop 1) | load:0xa000002→NXM_NX_TUN_IPV4_DST[], Goto Table 10
550 | dl_type=0x894f, nsh_spi=3, nsh_si=254 (NSH, SFF Ingress from SF, RSP 3, hop 2) | load:0xa00000a→NXM_NX_TUN_IPV4_DST[], Goto Table 10
550 | dl_type=0x894f, nsh_spi=4, nsh_si=254 (NSH, SFF1 Ingress from SFF2) | load:0xa00000a→NXM_NX_TUN_IPV4_DST[], Goto Table 10
5 | Match Any | Drop

Table: Table Next Hop
The Transport Egress table prepares egress tunnel information and sends the packets out.
Here are two examples on SFF1. RSP Paths 1 and 2 are symmetric MPLS paths that use VLAN for the SFF-SF. RSP Paths 3 and 4 are symmetric NSH paths. Since it is assumed that switches used for NSH will only have one VXLAN port, the NSH packets are just sent back where they came from.
Priority | Match | Action
---|---|---
256 | RSP Path==1, MacDst==SF1 | Push VLAN ID 1000, Port=SF1
256 | RSP Path==1, MacDst==SFF2 | Push MPLS Label 101, Port=SFF2
256 | RSP Path==2, MacDst==SF1 | Push VLAN ID 1000, Port=SF1
246 | RSP Path==2 | Push MPLS Label 100, Port=Ingress
256 | in_port=1, dl_type=0x894f, nsh_spi=0x3, nsh_si=255 (NSH, SFF Ingress RSP 3) | IN_PORT
256 | in_port=1, dl_type=0x894f, nsh_spi=0x3, nsh_si=254 (NSH, SFF Ingress from SF, RSP 3) | IN_PORT
 | | IN_PORT
5 | Match Any | Drop

Table: Table Transport Egress
To use the SFC OpenFlow Renderer, at least the following Karaf features must be installed:
odl-openflowplugin-nxm-extensions
odl-openflowplugin-flow-services
odl-sfc-provider
odl-sfc-model
odl-sfc-openflow-renderer
odl-sfc-ui (optional)
Since OpenDaylight Karaf features internally install dependent features, all of the above features can be installed by simply installing the odl-sfc-openflow-renderer feature.
The following command can be used to view all of the currently installed Karaf features:
opendaylight-user@root>feature:list -i
Or, pipe the command to a grep to see a subset of the currently installed Karaf features:
opendaylight-user@root>feature:list -i | grep sfc
To install a particular feature, use the Karaf feature:install command.
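For example, to install the renderer together with its dependencies:

opendaylight-user@root>feature:install odl-sfc-openflow-renderer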
In this tutorial, the VXLAN-GPE NSH encapsulation will be shown. The following Network Topology diagram is a logical view of the SFFs and SFs involved in creating the Service Chains.

SFC OpenFlow Renderer Typical Network Topology¶
To use this example, SFF OpenFlow switches must be created and connected as illustrated above. Additionally, the SFs must be created and connected.
Note that RSP symmetry depends on the Service Function Path symmetric field, if present. If it is not present, the RSP will be symmetric if any of the SFs involved in the chain has the bidirectional field set to true.
The target environment is not important, but this use-case was created and tested on Linux.
The steps to use this tutorial are as follows. The referenced configuration in the steps is listed in the following sections.
There are numerous ways to send the configuration. In the following
configuration chapters, the appropriate curl
command is shown for
each configuration to be sent, including the URL.
Steps to configure the SFC OF Renderer tutorial:
Send the SF RESTCONF configuration
Send the SFF RESTCONF configuration
Send the SFC RESTCONF configuration
Send the SFP RESTCONF configuration

The RSP will be created internally when the SFP is created.
Once the configuration has been successfully created, query the Rendered Service Paths with either the SFC UI or RESTCONF. Notice that the RSP is symmetric, so the following two RSPs will be created:
sfc-path1-Path-<RSP-ID>
sfc-path1-Path-<RSP-ID>-Reverse
At this point the Service Chains have been created, and the OpenFlow Switches are programmed to steer traffic through the Service Chain. Traffic can now be injected from a client into the Service Chain. To debug problems, the OpenFlow tables can be dumped with the following commands, assuming SFF1 is called s1 and SFF2 is called s2.
sudo ovs-ofctl -O OpenFlow13 dump-flows s1
sudo ovs-ofctl -O OpenFlow13 dump-flows s2
In all the following configuration sections, replace the ${JSON} string with the appropriate JSON configuration. Also, change the localhost destination in the URL accordingly.
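As a convenience sketch, each configuration can be kept in a file and expanded by the shell (sf.json is a hypothetical file name):

JSON=$(cat sf.json)
curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" --data "$JSON" -X PUT --user admin:admin http://localhost:8181/restconf/config/service-function:service-functions/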
The following configuration sections show how to create the different elements using NSH encapsulation.
The Service Function configuration can be sent with the following command:
curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache"
--data '${JSON}' -X PUT --user
admin:admin http://localhost:8181/restconf/config/service-function:service-functions/
SF configuration JSON.
{
"service-functions": {
"service-function": [
{
"name": "sf1",
"type": "http-header-enrichment",
"ip-mgmt-address": "10.0.0.2",
"sf-data-plane-locator": [
{
"name": "sf1dpl",
"ip": "10.0.0.10",
"port": 4789,
"transport": "service-locator:vxlan-gpe",
"service-function-forwarder": "sff1"
}
]
},
{
"name": "sf2",
"type": "firewall",
"ip-mgmt-address": "10.0.0.3",
"sf-data-plane-locator": [
{
"name": "sf2dpl",
"ip": "10.0.0.20",
"port": 4789,
"transport": "service-locator:vxlan-gpe",
"service-function-forwarder": "sff2"
}
]
}
]
}
}
The Service Function Forwarder configuration can be sent with the following command:
curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" --data '${JSON}' -X PUT --user admin:admin http://localhost:8181/restconf/config/service-function-forwarder:service-function-forwarders/
SFF configuration JSON.
{
"service-function-forwarders": {
"service-function-forwarder": [
{
"name": "sff1",
"service-node": "openflow:2",
"sff-data-plane-locator": [
{
"name": "sff1dpl",
"data-plane-locator":
{
"ip": "10.0.0.1",
"port": 4789,
"transport": "service-locator:vxlan-gpe"
}
}
],
"service-function-dictionary": [
{
"name": "sf1",
"sff-sf-data-plane-locator":
{
"sf-dpl-name": "sf1dpl",
"sff-dpl-name": "sff1dpl"
}
}
]
},
{
"name": "sff2",
"service-node": "openflow:3",
"sff-data-plane-locator": [
{
"name": "sff2dpl",
"data-plane-locator":
{
"ip": "10.0.0.2",
"port": 4789,
"transport": "service-locator:vxlan-gpe"
}
}
],
"service-function-dictionary": [
{
"name": "sf2",
"sff-sf-data-plane-locator":
{
"sf-dpl-name": "sf2dpl",
"sff-dpl-name": "sff2dpl"
}
}
]
}
]
}
}
The Service Function Chain configuration can be sent with the following command:
curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache"
--data '${JSON}' -X PUT --user
admin:admin http://localhost:8181/restconf/config/service-function-chain:service-function-chains/
SFC configuration JSON.
{
"service-function-chains": {
"service-function-chain": [
{
"name": "sfc-chain1",
"sfc-service-function": [
{
"name": "hdr-enrich-abstract1",
"type": "http-header-enrichment"
},
{
"name": "firewall-abstract1",
"type": "firewall"
}
]
}
]
}
}
The Service Function Path configuration can be sent with the following command:
curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" --data '${JSON}' -X PUT --user admin:admin http://localhost:8181/restconf/config/service-function-path:service-function-paths/
SFP configuration JSON.
{
"service-function-paths": {
"service-function-path": [
{
"name": "sfc-path1",
"service-chain-name": "sfc-chain1",
"transport-type": "service-locator:vxlan-gpe",
"symmetric": true
}
]
}
}
The following command can be used to query all of the created Rendered Service Paths:
curl -H "Content-Type: application/json" -H "Cache-Control: no-cache" -X GET --user admin:admin http://localhost:8181/restconf/operational/rendered-service-path:rendered-service-paths/
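If python3 is available, the response can be pretty-printed for easier reading (a convenience sketch, not part of SFC):

curl -s -H "Cache-Control: no-cache" -X GET --user admin:admin http://localhost:8181/restconf/operational/rendered-service-path:rendered-service-paths/ | python3 -m json.tool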
The following configuration sections show how to create the different elements using MPLS encapsulation.
The Service Function configuration can be sent with the following command:
curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache"
--data '${JSON}' -X PUT --user
admin:admin http://localhost:8181/restconf/config/service-function:service-functions/
SF configuration JSON.
{
"service-functions": {
"service-function": [
{
"name": "sf1",
"type": "http-header-enrichment",
"ip-mgmt-address": "10.0.0.2",
"sf-data-plane-locator": [
{
"name": "sf1-sff1",
"mac": "00:00:08:01:02:01",
"vlan-id": 1000,
"transport": "service-locator:mac",
"service-function-forwarder": "sff1"
}
]
},
{
"name": "sf2",
"type": "firewall",
"ip-mgmt-address": "10.0.0.3",
"sf-data-plane-locator": [
{
"name": "sf2-sff2",
"mac": "00:00:08:01:03:01",
"vlan-id": 2000,
"transport": "service-locator:mac",
"service-function-forwarder": "sff2"
}
]
}
]
}
}
The Service Function Forwarder configuration can be sent with the following command:
curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" --data '${JSON}' -X PUT --user admin:admin http://localhost:8181/restconf/config/service-function-forwarder:service-function-forwarders/
SFF configuration JSON.
{
"service-function-forwarders": {
"service-function-forwarder": [
{
"name": "sff1",
"service-node": "openflow:2",
"sff-data-plane-locator": [
{
"name": "ulSff1Ingress",
"data-plane-locator":
{
"mpls-label": 100,
"transport": "service-locator:mpls"
},
"service-function-forwarder-ofs:ofs-port":
{
"mac": "11:11:11:11:11:11",
"port-id" : "1"
}
},
{
"name": "ulSff1ToSff2",
"data-plane-locator":
{
"mpls-label": 101,
"transport": "service-locator:mpls"
},
"service-function-forwarder-ofs:ofs-port":
{
"mac": "33:33:33:33:33:33",
"port-id" : "2"
}
},
{
"name": "toSf1",
"data-plane-locator":
{
"mac": "22:22:22:22:22:22",
"vlan-id": 1000,
"transport": "service-locator:mac"
},
"service-function-forwarder-ofs:ofs-port":
{
"mac": "33:33:33:33:33:33",
"port-id" : "3"
}
}
],
"service-function-dictionary": [
{
"name": "sf1",
"sff-sf-data-plane-locator":
{
"sf-dpl-name": "sf1-sff1",
"sff-dpl-name": "toSf1"
}
}
]
},
{
"name": "sff2",
"service-node": "openflow:3",
"sff-data-plane-locator": [
{
"name": "ulSff2Ingress",
"data-plane-locator":
{
"mpls-label": 101,
"transport": "service-locator:mpls"
},
"service-function-forwarder-ofs:ofs-port":
{
"mac": "44:44:44:44:44:44",
"port-id" : "1"
}
},
{
"name": "ulSff2Egress",
"data-plane-locator":
{
"mpls-label": 102,
"transport": "service-locator:mpls"
},
"service-function-forwarder-ofs:ofs-port":
{
"mac": "66:66:66:66:66:66",
"port-id" : "2"
}
},
{
"name": "toSf2",
"data-plane-locator":
{
"mac": "55:55:55:55:55:55",
"vlan-id": 2000,
"transport": "service-locator:mac"
},
"service-function-forwarder-ofs:ofs-port":
{
"port-id" : "3"
}
}
],
"service-function-dictionary": [
{
"name": "sf2",
"sff-sf-data-plane-locator":
{
"sf-dpl-name": "sf2-sff2",
"sff-dpl-name": "toSf2"
},
"service-function-forwarder-ofs:ofs-port":
{
"port-id" : "3"
}
}
]
}
]
}
}
The Service Function Chain configuration can be sent with the following command:
curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache"
--data '${JSON}' -X PUT --user admin:admin
http://localhost:8181/restconf/config/service-function-chain:service-function-chains/
SFC configuration JSON.
{
"service-function-chains": {
"service-function-chain": [
{
"name": "sfc-chain1",
"sfc-service-function": [
{
"name": "hdr-enrich-abstract1",
"type": "http-header-enrichment"
},
{
"name": "firewall-abstract1",
"type": "firewall"
}
]
}
]
}
}
The Service Function Path configuration can be sent with the following command. This will internally trigger the Rendered Service Paths to be created.
curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache"
--data '${JSON}' -X PUT --user admin:admin
http://localhost:8181/restconf/config/service-function-path:service-function-paths/
SFP configuration JSON.
{
"service-function-paths": {
"service-function-path": [
{
"name": "sfc-path1",
"service-chain-name": "sfc-chain1",
"transport-type": "service-locator:mpls",
"symmetric": true
}
]
}
}
The following command can be used to query all of the Rendered Service Paths that were created when the Service Function Path was created:
curl -H "Content-Type: application/json" -H "Cache-Control: no-cache" -X GET
--user admin:admin http://localhost:8181/restconf/operational/rendered-service-path:rendered-service-paths/
SFC IOS XE Renderer User Guide¶
The early Service Function Chaining (SFC) renderer for IOS-XE devices (SFC IOS-XE renderer) implements Service Chaining functionality on IOS-XE capable switches. It listens for the creation of a Rendered Service Path (RSP) and sets up Service Function Forwarders (SFF) that are hosted on IOS-XE switches to steer traffic through the service chain.
Common acronyms used in the following sections:
SF - Service Function
SFF - Service Function Forwarder
SFC - Service Function Chain
SP - Service Path
SFP - Service Function Path
RSP - Rendered Service Path
LSF - Local Service Forwarder
RSF - Remote Service Forwarder
When the SFC IOS-XE renderer is initialized, all required listeners are registered to handle incoming data. This involves the CSR/IOS-XE NodeListener, which stores data about all configurable devices including their mountpoints (used here as data brokers), the ServiceFunctionListener, the ServiceForwarderListener (see mapping) and the RenderedPathListener, used to listen for RSP changes. When the SFC IOS-XE renderer is invoked, the RenderedPathListener calls the IosXeRspProcessor, which processes the RSP change and creates all necessary Service Paths and Remote Service Forwarders (if necessary) on the IOS-XE devices.
Each Service Path is defined by an index (represented by the NSP) and contains service path entries. Each entry has an appropriate service index (NSI) and a definition of the next hop. The next hop can be a Service Function, a different Service Function Forwarder, or the end of the chain (terminate). After terminating, the packet is sent to its destination. If an SFF is defined as the next hop, it has to be present on the device in the form of a Remote Service Forwarder. RSFs are also created during RSP processing.
Example of Service Path:
service-chain service-path 200
service-index 255 service-function firewall-1
service-index 254 service-function dpi-1
service-index 253 terminate
The renderer contains mappers for SFs and SFFs. An IOS-XE capable device uses its own definition of Service Functions and Service Function Forwarders according to the appropriate .yang file. The ServiceFunctionListener serves as a listener for SF changes. If an SF appears in the datastore, the listener extracts its management IP address and looks into the cached IOS-XE nodes. If one of the available nodes matches, the Service Function is mapped in the IosXeServiceFunctionMapper to be understandable by the IOS-XE device, and it is written into the device's config. The ServiceForwarderListener is used in a similar way: all SFFs with a suitable management IP address are mapped in the IosXeServiceForwarderMapper. Remapped SFFs are configured as Local Service Forwarders. It is not possible to directly create a Remote Service Forwarder using the IOS-XE renderer; an RSF is created only during RSP processing.
To use the SFC IOS-XE Renderer, at least the following Karaf features must be installed:
odl-aaa-shiro
odl-sfc-model
odl-sfc-provider
odl-restconf
odl-netconf-topology
odl-sfc-ios-xe-renderer
This tutorial is a simple example of how to create a Service Path on an IOS-XE capable device using the IOS-XE renderer.
To connect to an IOS-XE device, it is necessary to use several modified YANG models that override the device's own. All .yang files are in the Yang/netconf folder of the sfc-ios-xe-renderer module in the SFC project. These files have to be copied to the cache/schema directory before Karaf is started. After that, custom capabilities have to be sent to network-topology:
PUT ./config/network-topology:network-topology/topology/topology-netconf/node/<device-name>
<node xmlns="urn:TBD:params:xml:ns:yang:network-topology">
    <node-id>device-name</node-id>
    <host xmlns="urn:opendaylight:netconf-node-topology">device-ip</host>
    <port xmlns="urn:opendaylight:netconf-node-topology">2022</port>
    <username xmlns="urn:opendaylight:netconf-node-topology">login</username>
    <password xmlns="urn:opendaylight:netconf-node-topology">password</password>
    <tcp-only xmlns="urn:opendaylight:netconf-node-topology">false</tcp-only>
    <keepalive-delay xmlns="urn:opendaylight:netconf-node-topology">0</keepalive-delay>
    <yang-module-capabilities xmlns="urn:opendaylight:netconf-node-topology">
        <override>true</override>
        <capability xmlns="urn:opendaylight:netconf-node-topology">
            urn:ietf:params:xml:ns:yang:ietf-inet-types?module=ietf-inet-types&revision=2013-07-15
        </capability>
        <capability xmlns="urn:opendaylight:netconf-node-topology">
            urn:ietf:params:xml:ns:yang:ietf-yang-types?module=ietf-yang-types&revision=2013-07-15
        </capability>
        <capability xmlns="urn:opendaylight:netconf-node-topology">
            urn:ios?module=ned&revision=2016-03-08
        </capability>
        <capability xmlns="urn:opendaylight:netconf-node-topology">
            http://tail-f.com/yang/common?module=tailf-common&revision=2015-05-22
        </capability>
        <capability xmlns="urn:opendaylight:netconf-node-topology">
            http://tail-f.com/yang/common?module=tailf-meta-extensions&revision=2013-11-07
        </capability>
        <capability xmlns="urn:opendaylight:netconf-node-topology">
            http://tail-f.com/yang/common?module=tailf-cli-extensions&revision=2015-03-19
        </capability>
    </yang-module-capabilities>
</node>
Note
The device name in the URL and in the XML must match.
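As a sketch, assuming the XML above is saved to a file named node.xml, it could be sent as follows; the <device-name> at the end of the URL must equal the <node-id> in the payload:

curl -i -H "Content-Type: application/xml" --data @node.xml -X PUT --user admin:admin http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/device-name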
When the IOS-XE renderer is installed, all NETCONF nodes in topology-netconf are processed and all capable nodes with accessible mountpoints are cached. The first step is to create an LSF on the node.
Service Function Forwarder configuration
PUT ./config/service-function-forwarder:service-function-forwarders
{
    "service-function-forwarders": {
        "service-function-forwarder": [
            {
                "name": "CSR1Kv-2",
                "ip-mgmt-address": "172.25.73.23",
                "sff-data-plane-locator": [
                    {
                        "name": "CSR1Kv-2-dpl",
                        "data-plane-locator": {
                            "transport": "service-locator:vxlan-gpe",
                            "port": 6633,
                            "ip": "10.99.150.10"
                        }
                    }
                ]
            }
        ]
    }
}
If an IOS-XE node with the appropriate management IP exists, this configuration is mapped and an LSF is created on the device. The same approach is used for Service Functions.
PUT ./config/service-function:service-functions
{
    "service-functions": {
        "service-function": [
            {
                "name": "Firewall",
                "ip-mgmt-address": "172.25.73.23",
                "type": "firewall",
                "sf-data-plane-locator": [
                    {
                        "name": "firewall-dpl",
                        "port": 6633,
                        "ip": "12.1.1.2",
                        "transport": "service-locator:gre",
                        "service-function-forwarder": "CSR1Kv-2"
                    }
                ]
            },
            {
                "name": "Dpi",
                "ip-mgmt-address": "172.25.73.23",
                "type": "dpi",
                "sf-data-plane-locator": [
                    {
                        "name": "dpi-dpl",
                        "port": 6633,
                        "ip": "12.1.1.1",
                        "transport": "service-locator:gre",
                        "service-function-forwarder": "CSR1Kv-2"
                    }
                ]
            },
            {
                "name": "Qos",
                "ip-mgmt-address": "172.25.73.23",
                "type": "qos",
                "sf-data-plane-locator": [
                    {
                        "name": "qos-dpl",
                        "port": 6633,
                        "ip": "12.1.1.4",
                        "transport": "service-locator:gre",
                        "service-function-forwarder": "CSR1Kv-2"
                    }
                ]
            }
        ]
    }
}
All these SFs are configured on the same device as the LSF. The next step is to prepare a Service Function Chain.
PUT ./config/service-function-chain:service-function-chains/
{
    "service-function-chains": {
        "service-function-chain": [
            {
                "name": "CSR3XSF",
                "sfc-service-function": [
                    {
                        "name": "Firewall",
                        "type": "firewall"
                    },
                    {
                        "name": "Dpi",
                        "type": "dpi"
                    },
                    {
                        "name": "Qos",
                        "type": "qos"
                    }
                ]
            }
        ]
    }
}
Service Function Path:
PUT ./config/service-function-path:service-function-paths/
{
    "service-function-paths": {
        "service-function-path": [
            {
                "name": "CSR3XSF-Path",
                "service-chain-name": "CSR3XSF",
                "starting-index": 255,
                "symmetric": "true"
            }
        ]
    }
}
Without a classifier, it is possible to POST the RSP directly.
POST ./operations/rendered-service-path:create-rendered-path
{
    "input": {
        "name": "CSR3XSF-Path-RSP",
        "parent-service-function-path": "CSR3XSF-Path"
    }
}
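Following the curl conventions used earlier in this guide, this RPC could be invoked as (adjust the controller address as needed):

curl -i -H "Content-Type: application/json" --data '{"input":{"name":"CSR3XSF-Path-RSP","parent-service-function-path":"CSR3XSF-Path"}}' -X POST --user admin:admin http://localhost:8181/restconf/operations/rendered-service-path:create-rendered-path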
The resulting configuration:
!
service-chain service-function-forwarder local
ip address 10.99.150.10
!
service-chain service-function firewall
ip address 12.1.1.2
encapsulation gre enhanced divert
!
service-chain service-function dpi
ip address 12.1.1.1
encapsulation gre enhanced divert
!
service-chain service-function qos
ip address 12.1.1.4
encapsulation gre enhanced divert
!
service-chain service-path 1
service-index 255 service-function firewall
service-index 254 service-function dpi
service-index 253 service-function qos
service-index 252 terminate
!
service-chain service-path 2
service-index 255 service-function qos
service-index 254 service-function dpi
service-index 253 service-function firewall
service-index 252 terminate
!
Service Path 1 is direct, Service Path 2 is reversed. Path numbers may vary.
Service Function Scheduling Algorithms¶
When creating the Rendered Service Path, the SFC controller originally chose the first available service function from a list of service function names. This may result in many issues, such as overloaded service functions and a longer service path, as SFC has no means to understand the status of service functions and the network topology. The service function selection framework supports at least four algorithms (Random, Round Robin, Load Balancing and Shortest Path) to select the most appropriate service function when instantiating the Rendered Service Path. In addition, it is an extensible framework that allows third-party selection algorithms to be plugged in.
The following figure illustrates the service function selection framework and algorithms.

SF Selection Architecture¶
A user has three different ways to select one service function selection algorithm:
Integrated RESTCONF calls. OpenStack and/or other administration systems could provide plugins to call the APIs to select one scheduling algorithm.
Command line tools. Command line tools such as curl, or browser plugins such as POSTMAN (for Google Chrome) and RESTClient (for Mozilla Firefox), can select a scheduling algorithm by making RESTCONF calls.
SFC-UI. Now the SFC-UI provides an option for choosing a selection algorithm when creating a Rendered Service Path.
The RESTCONF northbound SFC API provides GUI/RESTCONF interactions for choosing the service function selection algorithm. MD-SAL data store provides all supported service function selection algorithms, and provides APIs to enable one of the provided service function selection algorithms. Once a service function selection algorithm is enabled, the service function selection algorithm will work when creating a Rendered Service Path.
Administrators can use either of the following ways to select one of the selection algorithms when creating a Rendered Service Path.
Command line tools. Command line tools include the Linux curl command and browser plugins such as POSTMAN (for Google Chrome) or RESTClient (for Mozilla Firefox). In this case, the following JSON content is needed at the moment: Service_function_schudule_type.json
{
    "service-function-scheduler-types": {
        "service-function-scheduler-type": [
            {
                "name": "random",
                "type": "service-function-scheduler-type:random",
                "enabled": false
            },
            {
                "name": "roundrobin",
                "type": "service-function-scheduler-type:round-robin",
                "enabled": true
            },
            {
                "name": "loadbalance",
                "type": "service-function-scheduler-type:load-balance",
                "enabled": false
            },
            {
                "name": "shortestpath",
                "type": "service-function-scheduler-type:shortest-path",
                "enabled": false
            }
        ]
    }
}
If using the Linux curl command, it could be:
curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" --data '$${Service_function_schudule_type.json}' -X PUT --user admin:admin http://localhost:8181/restconf/config/service-function-scheduler-type:service-function-scheduler-types/
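To verify which algorithm is currently enabled, the same resource can be read back:

curl -H "Cache-Control: no-cache" -X GET --user admin:admin http://localhost:8181/restconf/config/service-function-scheduler-type:service-function-scheduler-types/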
Here is also a snapshot for using the RESTClient plugin:

Mozilla Firefox RESTClient¶
SFC-UI. The SFC-UI provides a drop-down menu for the service function selection algorithm. Here is a snapshot of the user interaction from the SFC-UI when creating a Rendered Service Path.

Karaf Web UI¶
Note
Some service function selection algorithms in the drop-down list are not implemented yet. Only the first three algorithms are committed at the moment.
Select a Service Function from the name list randomly.
The Random algorithm selects one Service Function randomly from the name list that it gets from the Service Function Type.
Service Function information is stored in the datastore.
Either no algorithm or the Random algorithm is selected.
The Random algorithm will work when either no algorithm type is selected or the Random algorithm is selected.
Once the plugins are installed into Karaf successfully, a user can use their preferred method to select the Random scheduling algorithm type. There are no special instructions for using the Random algorithm.
Select a Service Function from the name list in a Round Robin manner.
The Round Robin algorithm selects one Service Function from the name list that it gets from the Service Function Type in a Round Robin manner, which balances the workload across all Service Functions. However, this method cannot ensure that all Service Functions carry the same workload, because the Round Robin scheduling is flow-based.
Service Function information is stored in the datastore.
The Round Robin algorithm is selected.
The Round Robin algorithm will work once the Round Robin algorithm is selected.
Once the plugins are installed into Karaf successfully, a user can use their preferred method to select the Round Robin scheduling algorithm type. There are no special instructions for using the Round Robin algorithm.
Select the appropriate Service Function by actual CPU utilization.
The Load Balance algorithm selects the appropriate Service Function by the actual CPU utilization of the service functions. The CPU utilization of a service function is obtained from monitoring information reported via NETCONF.
CPU utilization for the Service Function.
NETCONF server.
NETCONF client.
Each VM has a NETCONF server that works with the NETCONF client.
Set up VMs as Service Functions and enable the NETCONF server in the VMs. Ensure that you specify them separately. For example:
Set up four VMs: two SFs of type Firewall and two of type Napt44. Name them firewall-1, firewall-2, napt44-1 and napt44-2 as Service Functions. The four VMs can run on the same server or on different servers.
Install NETCONF server on every VM and enable it. More information on NETCONF can be found on the OpenDaylight wiki here: https://wiki-archive.opendaylight.org/view/OpenDaylight_Controller:Config:Examples:Netconf:Manual_netopeer_installation
Get monitoring data from the NETCONF server. The monitoring data should be obtained from the NETCONF server running in the VMs. The following static XML data is an example:
<?xml version="1.0" encoding="UTF-8"?>
<service-function-description-monitor-report>
<SF-description>
<number-of-dataports>2</number-of-dataports>
<capabilities>
<supported-packet-rate>5</supported-packet-rate>
<supported-bandwidth>10</supported-bandwidth>
<supported-ACL-number>2000</supported-ACL-number>
<RIB-size>200</RIB-size>
<FIB-size>100</FIB-size>
<ports-bandwidth>
<port-bandwidth>
<port-id>1</port-id>
<ipaddress>10.0.0.1</ipaddress>
<macaddress>00:1e:67:a2:5f:f4</macaddress>
<supported-bandwidth>20</supported-bandwidth>
</port-bandwidth>
<port-bandwidth>
<port-id>2</port-id>
<ipaddress>10.0.0.2</ipaddress>
<macaddress>01:1e:67:a2:5f:f6</macaddress>
<supported-bandwidth>10</supported-bandwidth>
</port-bandwidth>
</ports-bandwidth>
</capabilities>
</SF-description>
<SF-monitoring-info>
<liveness>true</liveness>
<resource-utilization>
<packet-rate-utilization>10</packet-rate-utilization>
<bandwidth-utilization>15</bandwidth-utilization>
<CPU-utilization>12</CPU-utilization>
<memory-utilization>17</memory-utilization>
<available-memory>8</available-memory>
<RIB-utilization>20</RIB-utilization>
<FIB-utilization>25</FIB-utilization>
<power-utilization>30</power-utilization>
<SF-ports-bandwidth-utilization>
<port-bandwidth-utilization>
<port-id>1</port-id>
<bandwidth-utilization>20</bandwidth-utilization>
</port-bandwidth-utilization>
<port-bandwidth-utilization>
<port-id>2</port-id>
<bandwidth-utilization>30</bandwidth-utilization>
</port-bandwidth-utilization>
</SF-ports-bandwidth-utilization>
</resource-utilization>
</SF-monitoring-info>
</service-function-description-monitor-report>
Unzip the SFC release tarball.
Run SFC: ${sfc}/bin/karaf. More information on Service Function Chaining can be found on the OpenDaylight SFC’s wiki page: https://wiki-archive.opendaylight.org/view/Service_Function_Chaining:Main
Deploy the SFC2 (firewall-abstract2⇒napt44-abstract2) and click the Create Rendered Service Path button in the SFC UI (http://localhost:8181/sfc/index.html).
Verify the Rendered Service Path to ensure that the CPU utilization of the selected hop is the minimum among all the service functions of the same type. The correct RSP is firewall-1⇒napt44-2.
Select the appropriate Service Function using Dijkstra's algorithm. Dijkstra's algorithm is an algorithm for finding the shortest paths between nodes in a graph.
The Shortest Path algorithm selects the appropriate Service Function based on the actual topology.
Deployed topology (including SFFs, SFs and their links).
Dijkstra’s algorithm. More information on Dijkstra’s algorithm can be found on the wiki here: http://en.wikipedia.org/wiki/Dijkstra%27s_algorithm
Unzip the SFC release tarball.
Run SFC: ${sfc}/bin/karaf.
Deploy SFFs and SFs: import the service-function-forwarders.json and service-functions.json in the UI (http://localhost:8181/sfc/index.html#/sfc/config).
service-function-forwarders.json:
{
"service-function-forwarders": {
"service-function-forwarder": [
{
"name": "SFF-br1",
"service-node": "OVSDB-test01",
"rest-uri": "http://localhost:5001",
"sff-data-plane-locator": [
{
"name": "eth0",
"service-function-forwarder-ovs:ovs-bridge": {
"uuid": "4c3778e4-840d-47f4-b45e-0988e514d26c",
"bridge-name": "br-tun"
},
"data-plane-locator": {
"port": 5000,
"ip": "192.168.1.1",
"transport": "service-locator:vxlan-gpe"
}
}
],
"service-function-dictionary": [
{
"sff-sf-data-plane-locator": {
"sf-dpl-name": "sf1dpl",
"sff-dpl-name": "sff1dpl"
},
"name": "napt44-1",
"type": "napt44"
},
{
"sff-sf-data-plane-locator": {
"sf-dpl-name": "sf2dpl",
"sff-dpl-name": "sff2dpl"
},
"name": "firewall-1",
"type": "firewall"
}
],
"connected-sff-dictionary": [
{
"name": "SFF-br3"
}
]
},
{
"name": "SFF-br2",
"service-node": "OVSDB-test01",
"rest-uri": "http://localhost:5002",
"sff-data-plane-locator": [
{
"name": "eth0",
"service-function-forwarder-ovs:ovs-bridge": {
"uuid": "fd4d849f-5140-48cd-bc60-6ad1f5fc0a1",
"bridge-name": "br-tun"
},
"data-plane-locator": {
"port": 5000,
"ip": "192.168.1.2",
"transport": "service-locator:vxlan-gpe"
}
}
],
"service-function-dictionary": [
{
"sff-sf-data-plane-locator": {
"sf-dpl-name": "sf1dpl",
"sff-dpl-name": "sff1dpl"
},
"name": "napt44-2",
"type": "napt44"
},
{
"sff-sf-data-plane-locator": {
"sf-dpl-name": "sf2dpl",
"sff-dpl-name": "sff2dpl"
},
"name": "firewall-2",
"type": "firewall"
}
],
"connected-sff-dictionary": [
{
"name": "SFF-br3"
}
]
},
{
"name": "SFF-br3",
"service-node": "OVSDB-test01",
"rest-uri": "http://localhost:5005",
"sff-data-plane-locator": [
{
"name": "eth0",
"service-function-forwarder-ovs:ovs-bridge": {
"uuid": "fd4d849f-5140-48cd-bc60-6ad1f5fc0a4",
"bridge-name": "br-tun"
},
"data-plane-locator": {
"port": 5000,
"ip": "192.168.1.2",
"transport": "service-locator:vxlan-gpe"
}
}
],
"service-function-dictionary": [
{
"sff-sf-data-plane-locator": {
"sf-dpl-name": "sf1dpl",
"sff-dpl-name": "sff1dpl"
},
"name": "test-server",
"type": "dpi"
},
{
"sff-sf-data-plane-locator": {
"sf-dpl-name": "sf2dpl",
"sff-dpl-name": "sff2dpl"
},
"name": "test-client",
"type": "dpi"
}
],
"connected-sff-dictionary": [
{
"name": "SFF-br1"
},
{
"name": "SFF-br2"
}
]
}
]
}
}
service-functions.json:
{
"service-functions": {
"service-function": [
{
"rest-uri": "http://localhost:10001",
"ip-mgmt-address": "10.3.1.103",
"sf-data-plane-locator": [
{
"name": "preferred",
"port": 10001,
"ip": "10.3.1.103",
"service-function-forwarder": "SFF-br1"
}
],
"name": "napt44-1",
"type": "napt44"
},
{
"rest-uri": "http://localhost:10002",
"ip-mgmt-address": "10.3.1.103",
"sf-data-plane-locator": [
{
"name": "master",
"port": 10002,
"ip": "10.3.1.103",
"service-function-forwarder": "SFF-br2"
}
],
"name": "napt44-2",
"type": "napt44"
},
{
"rest-uri": "http://localhost:10003",
"ip-mgmt-address": "10.3.1.103",
"sf-data-plane-locator": [
{
"name": "1",
"port": 10003,
"ip": "10.3.1.102",
"service-function-forwarder": "SFF-br1"
}
],
"name": "firewall-1",
"type": "firewall"
},
{
"rest-uri": "http://localhost:10004",
"ip-mgmt-address": "10.3.1.103",
"sf-data-plane-locator": [
{
"name": "2",
"port": 10004,
"ip": "10.3.1.101",
"service-function-forwarder": "SFF-br2"
}
],
"name": "firewall-2",
"type": "firewall"
},
{
"rest-uri": "http://localhost:10005",
"ip-mgmt-address": "10.3.1.103",
"sf-data-plane-locator": [
{
"name": "3",
"port": 10005,
"ip": "10.3.1.104",
"service-function-forwarder": "SFF-br3"
}
],
"name": "test-server",
"type": "dpi"
},
{
"rest-uri": "http://localhost:10006",
"ip-mgmt-address": "10.3.1.103",
"sf-data-plane-locator": [
{
"name": "4",
"port": 10006,
"ip": "10.3.1.102",
"service-function-forwarder": "SFF-br3"
}
],
"name": "test-client",
"type": "dpi"
}
]
}
}
The deployed topology looks like this:
+----+ +----+ +----+
|sff1|+----------|sff3|---------+|sff2|
+----+ +----+ +----+
| |
+--------------+ +--------------+
| | | |
+----------+ +--------+ +----------+ +--------+
|firewall-1| |napt44-1| |firewall-2| |napt44-2|
+----------+ +--------+ +----------+ +--------+
Deploy the SFC2 (firewall-abstract2⇒napt44-abstract2), select "Shortest Path" as the schedule type, and click the Create Rendered Service Path button in the SFC UI (http://localhost:8181/sfc/index.html).

select schedule type¶
Verify the Rendered Service Path to ensure the selected hops are linked to one SFF. The correct RSP is firewall-1⇒napt44-1 or firewall-2⇒napt44-2. The first SF type in the Service Function Chain is Firewall, so the algorithm selects the first hop randomly among all the SFs of type Firewall. Assume the first selected SF is firewall-2. All the paths from firewall-2 to an SF of type Napt44 are listed:
Path1: firewall-2 → sff2 → napt44-2
Path2: firewall-2 → sff2 → sff3 → sff1 → napt44-1
The shortest path is Path1, so the selected next hop is napt44-2.

rendered service path¶
Service Function Load Balancing User Guide¶
The SFC Load-Balancing feature implements load balancing of Service Functions, rather than a one-to-one mapping between a Service Function Forwarder and a Service Function.
Service Function Groups (SFG) can replace Service Functions (SF) in the Rendered Path model. A Service Path can only be defined using SFGs or SFs, but not a combination of both.
Relevant objects in the YANG model are as follows:
Service-Function-Group-Algorithm:
Service-Function-Group-Algorithms {
    Service-Function-Group-Algorithm {
        String name
        String type
    }
}
Available types: ALL, SELECT, INDIRECT, FAST_FAILURE
Service-Function-Group:
Service-Function-Groups {
    Service-Function-Group {
        String name
        String serviceFunctionGroupAlgorithmName
        String type
        String groupId
        Service-Function-Group-Element {
            String service-function-name
            int index
        }
    }
}
ServiceFunctionHop: holds a reference to a name of SFG (or SF)
This tutorial will explain how to create a simple SFC configuration, with an SFG instead of an SF. In this example, the SFG will include two existing SFs.
For general SFC setup and scenarios, please see the SFC wiki page: https://wiki-archive.opendaylight.org/view/Service_Function_Chaining:Main
POST - http://127.0.0.1:8181/restconf/config/service-function-group-algorithm:service-function-group-algorithms

{
    "service-function-group-algorithm": [
        {
            "name": "alg1",
            "type": "ALL"
        }
    ]
}
(Header “content-type”: application/json)
In order to delete all algorithms: DELETE - http://127.0.0.1:8181/restconf/config/service-function-group-algorithm:service-function-group-algorithms
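As a sketch using the URLs above (credentials admin/admin, as elsewhere in this guide):

curl -i -H "Content-Type: application/json" --data '{"service-function-group-algorithm":[{"name":"alg1","type":"ALL"}]}' -X POST --user admin:admin http://127.0.0.1:8181/restconf/config/service-function-group-algorithm:service-function-group-algorithms

curl -X DELETE --user admin:admin http://127.0.0.1:8181/restconf/config/service-function-group-algorithm:service-function-group-algorithms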
POST - http://127.0.0.1:8181/restconf/config/service-function-group:service-function-groups
{
"service-function-group": [
{
"rest-uri": "http://localhost:10002",
"ip-mgmt-address": "10.3.1.103",
"algorithm": "alg1",
"name": "SFG1",
"type": "napt44",
"sfc-service-function": [
{
"name":"napt44-104"
},
{
"name":"napt44-103-1"
}
]
}
]
}
GET - http://127.0.0.1:8181/restconf/config/service-function-group:service-function-groups
SFC Proof of Transit User Guide¶
Several deployments use traffic engineering, policy routing, segment routing or service function chaining (SFC) to steer packets through a specific set of nodes. In certain cases, regulatory obligations or a compliance policy require proof that all packets that are supposed to follow a specific path are indeed being forwarded across the exact set of nodes specified. That is, if a packet flow is supposed to go through a series of service functions or network nodes, it has to be proven that all packets of the flow actually went through the service chain or collection of nodes specified by the policy. In case the packets of a flow were not appropriately processed, a proof of transit egress device would be required to identify the policy violation and take corresponding actions (e.g. drop or redirect the packet, send an alert, etc.) according to the policy.
Service Function Chaining (SFC) Proof of Transit (SFC PoT) implements Service Chaining Proof of Transit functionality on capable network devices. Proof of Transit defines mechanisms to securely prove that traffic transited the defined path. After the creation of a Rendered Service Path (RSP), a user can enable SFC Proof of Transit on the selected RSP to effect the proof of transit.
To ensure that the data traffic follows a specified path or a function chain, meta-data is added to user traffic in the form of a header. The meta-data is based on a 'share of a secret' and is provisioned by the SFC PoT configuration from ODL over a secure channel to each of the nodes in the SFC. This meta-data is updated at each service hop, while a designated node called the verifier checks whether the collected meta-data allows the retrieval of the secret.
The following diagram shows the overview. The scheme essentially utilizes Shamir's secret sharing algorithm: each service is given a point on a curve; when the packet travels through each service, it collects these points (meta-data); and a verifier node tries to reconstruct the curve using the collected points, thus verifying that the packet traversed all the service functions along the chain.

SFC Proof of Transit overview¶
Transport options for different protocols include a new TLV in the SR header for Segment Routing, NSH Type-2 meta-data, IPv6 extension headers, IPv4 variants, and VXLAN-GPE. More details are captured in the following link.
In-situ OAM: https://github.com/CiscoDevNet/iOAM
Common acronyms used in the following sections:
SF - Service Function
SFF - Service Function Forwarder
SFC - Service Function Chain
SFP - Service Function Path
RSP - Rendered Service Path
SFC PoT - Service Function Chain Proof of Transit
The SFC PoT feature is implemented in two parts: a north-bound handler that augments the RSP, and a south-bound renderer that auto-generates the required parameters and passes them on to the nodes that belong to the SFC.
The north-bound feature is enabled via the odl-sfc-pot feature, while the south-bound renderer is enabled via the odl-sfc-pot-netconf-renderer feature. For the purposes of SFC PoT handling, both features must be installed.
RPC handlers to augment the RSP are part of SfcPotRpc, while the RSP augmentation to enable or disable the SFC PoT feature is done via SfcPotRspProcessor.
In order to implement SFC Proof of Transit for a service function chain, an RSP is a prerequisite to identify the SFC on which to enable SFC PoT. SFC Proof of Transit for a particular RSP is enabled by an RPC request to the controller, along with the necessary parameters to control some of the aspects of the SFC Proof of Transit process.
The RPC handler identifies the RSP and adds PoT feature meta-data (enable/disable, number of PoT profiles, profile refresh parameters, etc.) that directs the south-bound renderer appropriately when RSP changes are noticed via callbacks in the renderer handlers.
To use SFC Proof of Transit, at least the following Karaf features must be installed:
odl-sfc-model
odl-sfc-provider
odl-sfc-netconf
odl-restconf
odl-netconf-topology
odl-netconf-connector-all
odl-sfc-pot
Please note that the odl-sfc-pot-netconf-renderer (or other renderers in the future) must be installed for the feature to take full effect. The details of the renderer features are described in other parts of this document.
This tutorial is a simple example of how to configure Service Function Chain Proof of Transit using the SFC PoT feature.
To enable a device to handle SFC Proof of Transit, the NETCONF device is expected to advertise the capability defined in ioam-sb-pot.yang, present under the sfc-model/src/main/yang folder. It is also expected that base NETCONF support is enabled and its support capability advertised as capabilities.
NETCONF support:urn:ietf:params:netconf:base:1.0
PoT support: (urn:cisco:params:xml:ns:yang:sfc-ioam-sb-pot?revision=2017-01-12)sfc-ioam-sb-pot
It is also expected that the devices are NETCONF-mounted and available in the topology-netconf store.
When SFC Proof of Transit is installed, all netconf nodes in topology-netconf are processed and all capable nodes with accessible mountpoints are cached.
The first step is to create the required RSP, as usually done using the RSP creation steps in SFC main.
Once the RSP name is available, it is used to send a POST RPC to the controller, similar to the one below:
POST - http://127.0.0.1:8181/restconf/operations/sfc-ioam-nb-pot:enable-sfc-ioam-pot-rendered-path/
{
"input":
{
"sfc-ioam-pot-rsp-name": "sfc-path-3sf3sff",
"ioam-pot-enable":true,
"ioam-pot-num-profiles":2,
"ioam-pot-bit-mask":"bits32",
"refresh-period-time-units":"milliseconds",
"refresh-period-value":5000
}
}
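This RPC can be sent with curl in the usual way (a sketch; adjust the controller address):

curl -i -H "Content-Type: application/json" --data '{"input":{"sfc-ioam-pot-rsp-name":"sfc-path-3sf3sff","ioam-pot-enable":true,"ioam-pot-num-profiles":2,"ioam-pot-bit-mask":"bits32","refresh-period-time-units":"milliseconds","refresh-period-value":5000}}' -X POST --user admin:admin http://127.0.0.1:8181/restconf/operations/sfc-ioam-nb-pot:enable-sfc-ioam-pot-rendered-path/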
The following can be used to disable SFC Proof of Transit on an RSP:
POST - http://127.0.0.1:8181/restconf/operations/sfc-ioam-nb-pot:disable-sfc-ioam-pot-rendered-path/
{
"input":
{
"sfc-ioam-pot-rsp-name": "sfc-path-3sf3sff"
}
}
SFC PoT NETCONF Renderer User Guide¶
The SFC Proof of Transit (PoT) NETCONF renderer implements SFC Proof of Transit functionality on NETCONF-capable devices that have advertised in-situ OAM (iOAM) support.
It listens for updates to existing RSPs that enable or disable proof of transit, and adds the auto-generated SFC PoT configuration parameters to all the SFC hop nodes. The last node in the SFC is configured as a verifier node, allowing the SFC PoT process to be completed.
Common acronyms are used as below:
SF - Service Function
SFC - Service Function Chain
RSP - Rendered Service Path
SFF - Service Function Forwarder
The renderer module listens to RSP updates in SfcPotNetconfRSPListener and triggers configuration generation in the SfcPotNetconfIoam class. Node arrival and departure are managed via SfcPotNetconfNodeManager and SfcPotNetconfNodeListener. In addition, a timer thread runs to generate configuration periodically, to refresh the profiles in the nodes that are part of the SFC.
To use the SFC PoT NETCONF renderer, the following Karaf features must be installed:
odl-sfc-model
odl-sfc-provider
odl-sfc-netconf
odl-restconf-all
odl-netconf-topology
odl-netconf-connector-all
odl-sfc-pot
odl-sfc-pot-netconf-renderer
This tutorial is a simple example of how to enable SFC PoT on NETCONF-capable devices.
The NETCONF-capable device must support the sfc-ioam-sb-pot.yang model.
It is expected that a NETCONF-capable VPP device has the Honeycomb (Hc2vpp) Java-based agent, which helps to translate between NETCONF and VPP internal APIs.
More details are here: In-situ OAM: https://github.com/CiscoDevNet/iOAM
When the SFC PoT NETCONF renderer module is installed, all NETCONF nodes in topology-netconf are processed, and all sfc-ioam-sb-pot.yang capable nodes with accessible mountpoints are cached.
The first step is to create an RSP for the SFC, as per the SFC guidelines above.
SFC PoT is enabled on the RSP via RESTCONF to ODL, as outlined above.
Internally, the NETCONF renderer will act on the callback for a modified RSP that has PoT enabled.
SFC PoT parameters are auto-generated using in-situ OAM algorithms and sent to the nodes via NETCONF.
Logical Service Function Forwarder¶
When the current SFC is deployed in a cloud environment, it is assumed that each switch connected to a Service Function is configured as a Service Function Forwarder and each Service Function is connected to its Service Function Forwarder depending on the Compute Node where the Virtual Machine is located.

As shown in the picture above, this solution allows the basic cloud use cases to be fulfilled, for example the ones required in OPNFV Brahmaputra. However, some advanced use cases, like the transparent migration of VMs, cannot be implemented. The Logical Service Function Forwarder enables the following advanced use cases:
Service Function mobility without service disruption
Service Functions load balancing and failover
As shown in the picture below, the Logical Service Function Forwarder concept extends the current SFC northbound API to provide an abstraction of the underlying Data Center infrastructure. The Data Center underlying network can be abstracted by a single SFF. This single SFF uses the logical port UUID as the data plane locator to connect SFs globally and in a location-transparent manner. SFC makes use of the Genius project to track the location of the SFs' logical ports.

The SFC internally distributes the necessary flow state over the relevant switches based on the internal Data Center topology and the deployment of SFs.
The Logical SFF simplifies the configuration of the current SFC data model by reducing the number of parameters to be configured in every SFF, since the controller will discover those parameters by interacting with the services offered by the Genius project.
The following picture shows the Logical SFF data model. The model is simplified, as most of the configuration parameters of the current SFC data model are discovered at runtime. The complete YANG model can be found here: logical SFF model.

The following are examples to configure the Logical SFF:
curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache"
--data '${JSON}' -X PUT --user
admin:admin http://localhost:8181/restconf/config/service-function:service-functions/
Service Functions JSON.
{
"service-functions": {
"service-function": [
{
"name": "firewall-1",
"type": "firewall",
"sf-data-plane-locator": [
{
"name": "firewall-dpl",
"interface-name": "eccb57ae-5a2e-467f-823e-45d7bb2a6a9a",
"transport": "service-locator:eth-nsh",
"service-function-forwarder": "sfflogical1"
}
]
},
{
"name": "dpi-1",
"type": "dpi",
"sf-data-plane-locator": [
{
"name": "dpi-dpl",
"interface-name": "df15ac52-e8ef-4e9a-8340-ae0738aba0c0",
"transport": "service-locator:eth-nsh",
"service-function-forwarder": "sfflogical1"
}
]
}
]
}
}
curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache"
--data '${JSON}' -X PUT --user
admin:admin http://localhost:8181/restconf/config/service-function-forwarder:service-function-forwarders/
Service Function Forwarders JSON.
{
"service-function-forwarders": {
"service-function-forwarder": [
{
"name": "sfflogical1"
}
]
}
}
curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache"
--data '${JSON}' -X PUT --user
admin:admin http://localhost:8181/restconf/config/service-function-chain:service-function-chains/
Service Function Chains JSON.
{
"service-function-chains": {
"service-function-chain": [
{
"name": "SFC1",
"sfc-service-function": [
{
"name": "dpi-abstract1",
"type": "dpi"
},
{
"name": "firewall-abstract1",
"type": "firewall"
}
]
},
{
"name": "SFC2",
"sfc-service-function": [
{
"name": "dpi-abstract1",
"type": "dpi"
}
]
}
]
}
}
curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache"
--data '${JSON}' -X PUT --user
admin:admin http://localhost:8181/restconf/config/service-function-path:service-function-paths/
Service Function Paths JSON.
{
"service-function-paths": {
"service-function-path": [
{
"name": "SFP1",
"service-chain-name": "SFC1",
"starting-index": 255,
"symmetric": "true",
"context-metadata": "NSH1",
"transport-type": "service-locator:vxlan-gpe"
}
]
}
}
As a result of the above configuration, OpenDaylight renders the needed flows in all involved SFFs. Those flows implement:
Two Rendered Service Paths:
dpi-1 (SF1), firewall-1 (SF2)
firewall-1 (SF2), dpi-1 (SF1)
The communication between SFFs and SFs based on eth-nsh
The communication between SFFs based on vxlan-gpe
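To verify the result, the Rendered Service Paths can be read back from the operational datastore; for example:
curl -i -H "Content-Type: application/json" -X GET --user
admin:admin http://localhost:8181/restconf/operational/rendered-service-path:rendered-service-paths/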
The following picture shows a topology and traffic flow (in green) which corresponds to the above configuration.

Logical SFF Example¶
The Logical SFF functionality allows OpenDaylight to find out which SFFs hold the SFs involved in a path. In this example the SFFs affected are Node3 and Node4, thus the controller renders the flows containing NSH parameters just in those SFFs.
Below are the new flows rendered in Node3 and Node4 which implement the NSH protocol. Every Rendered Service Path is represented by an NSP value. We provisioned a symmetric RSP, so we get two NSPs: 8388613 and 5. Node3 holds the first SF of NSP 8388613 and the last SF of NSP 5. Node4 holds the first SF of NSP 5 and the last SF of NSP 8388613. Both Node3 and Node4 will pop the NSH header when the received packet has gone through the last SF of its path.
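The flow dumps below were taken directly from the OVS instances; a hedged example of the command used to obtain them, assuming the integration bridge is named br-int:
sudo ovs-ofctl -O OpenFlow13 dump-flows br-int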
Rendered flows Node 3
cookie=0x14, duration=59.264s, table=83, n_packets=0, n_bytes=0, priority=250,nsp=5 actions=goto_table:86
cookie=0x14, duration=59.194s, table=83, n_packets=0, n_bytes=0, priority=250,nsp=8388613 actions=goto_table:86
cookie=0x14, duration=59.257s, table=86, n_packets=0, n_bytes=0, priority=550,nsi=254,nsp=5 actions=load:0x8e0a37cc9094->NXM_NX_ENCAP_ETH_SRC[],load:0x6ee006b4c51e->NXM_NX_ENCAP_ETH_DST[],goto_table:87
cookie=0x14, duration=59.189s, table=86, n_packets=0, n_bytes=0, priority=550,nsi=255,nsp=8388613 actions=load:0x8e0a37cc9094->NXM_NX_ENCAP_ETH_SRC[],load:0x6ee006b4c51e->NXM_NX_ENCAP_ETH_DST[],goto_table:87
cookie=0xba5eba1100000203, duration=59.213s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=253,nsp=5 actions=pop_nsh,set_field:6e:e0:06:b4:c5:1e->eth_src,resubmit(,17)
cookie=0xba5eba1100000201, duration=59.213s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=254,nsp=5 actions=load:0x800->NXM_NX_REG6[],resubmit(,220)
cookie=0xba5eba1100000201, duration=59.188s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=255,nsp=8388613 actions=load:0x800->NXM_NX_REG6[],resubmit(,220)
cookie=0xba5eba1100000201, duration=59.182s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=254,nsp=8388613 actions=set_field:0->tun_id,output:6
Rendered Flows Node 4
cookie=0x14, duration=69.040s, table=83, n_packets=0, n_bytes=0, priority=250,nsp=5 actions=goto_table:86
cookie=0x14, duration=69.008s, table=83, n_packets=0, n_bytes=0, priority=250,nsp=8388613 actions=goto_table:86
cookie=0x14, duration=69.040s, table=86, n_packets=0, n_bytes=0, priority=550,nsi=255,nsp=5 actions=load:0xbea93873f4fa->NXM_NX_ENCAP_ETH_SRC[],load:0x214845ea85d->NXM_NX_ENCAP_ETH_DST[],goto_table:87
cookie=0x14, duration=69.005s, table=86, n_packets=0, n_bytes=0, priority=550,nsi=254,nsp=8388613 actions=load:0xbea93873f4fa->NXM_NX_ENCAP_ETH_SRC[],load:0x214845ea85d->NXM_NX_ENCAP_ETH_DST[],goto_table:87
cookie=0xba5eba1100000201, duration=69.029s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=255,nsp=5 actions=load:0x1100->NXM_NX_REG6[],resubmit(,220)
cookie=0xba5eba1100000201, duration=69.029s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=254,nsp=5 actions=set_field:0->tun_id,output:1
cookie=0xba5eba1100000201, duration=68.999s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=254,nsp=8388613 actions=load:0x1100->NXM_NX_REG6[],resubmit(,220)
cookie=0xba5eba1100000203, duration=68.996s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=253,nsp=8388613 actions=pop_nsh,set_field:02:14:84:5e:a8:5d->eth_src,resubmit(,17)
An interesting scenario that shows the strength of the Logical SFF is the migration of a SF from one compute node to another. OpenDaylight will learn the new topology by itself, then re-render the new flows to the newly affected SFFs.

Logical SFF - SF Migration Example¶
In our example, SF2 is moved from Node4 to Node2; OpenDaylight then removes the NSH-specific flows from Node4 and installs them in Node2. Check the flows below showing this effect. Node3 keeps holding the first SF of NSP 8388613 and the last SF of NSP 5, but Node2 becomes the new holder of the first SF of NSP 5 and the last SF of NSP 8388613.
Rendered Flows Node 3 After Migration
cookie=0x14, duration=64.044s, table=83, n_packets=0, n_bytes=0, priority=250,nsp=5 actions=goto_table:86
cookie=0x14, duration=63.947s, table=83, n_packets=0, n_bytes=0, priority=250,nsp=8388613 actions=goto_table:86
cookie=0x14, duration=64.044s, table=86, n_packets=0, n_bytes=0, priority=550,nsi=254,nsp=5 actions=load:0x8e0a37cc9094->NXM_NX_ENCAP_ETH_SRC[],load:0x6ee006b4c51e->NXM_NX_ENCAP_ETH_DST[],goto_table:87
cookie=0x14, duration=63.947s, table=86, n_packets=0, n_bytes=0, priority=550,nsi=255,nsp=8388613 actions=load:0x8e0a37cc9094->NXM_NX_ENCAP_ETH_SRC[],load:0x6ee006b4c51e->NXM_NX_ENCAP_ETH_DST[],goto_table:87
cookie=0xba5eba1100000201, duration=64.034s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=254,nsp=5 actions=load:0x800->NXM_NX_REG6[],resubmit(,220)
cookie=0xba5eba1100000203, duration=64.034s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=253,nsp=5 actions=pop_nsh,set_field:6e:e0:06:b4:c5:1e->eth_src,resubmit(,17)
cookie=0xba5eba1100000201, duration=63.947s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=255,nsp=8388613 actions=load:0x800->NXM_NX_REG6[],resubmit(,220)
cookie=0xba5eba1100000201, duration=63.942s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=254,nsp=8388613 actions=set_field:0->tun_id,output:2
Rendered Flows Node 2 After Migration
cookie=0x14, duration=56.856s, table=83, n_packets=0, n_bytes=0, priority=250,nsp=5 actions=goto_table:86
cookie=0x14, duration=56.755s, table=83, n_packets=0, n_bytes=0, priority=250,nsp=8388613 actions=goto_table:86
cookie=0x14, duration=56.847s, table=86, n_packets=0, n_bytes=0, priority=550,nsi=255,nsp=5 actions=load:0xbea93873f4fa->NXM_NX_ENCAP_ETH_SRC[],load:0x214845ea85d->NXM_NX_ENCAP_ETH_DST[],goto_table:87
cookie=0x14, duration=56.755s, table=86, n_packets=0, n_bytes=0, priority=550,nsi=254,nsp=8388613 actions=load:0xbea93873f4fa->NXM_NX_ENCAP_ETH_SRC[],load:0x214845ea85d->NXM_NX_ENCAP_ETH_DST[],goto_table:87
cookie=0xba5eba1100000201, duration=56.823s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=255,nsp=5 actions=load:0x1100->NXM_NX_REG6[],resubmit(,220)
cookie=0xba5eba1100000201, duration=56.823s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=254,nsp=5 actions=set_field:0->tun_id,output:4
cookie=0xba5eba1100000201, duration=56.755s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=254,nsp=8388613 actions=load:0x1100->NXM_NX_REG6[],resubmit(,220)
cookie=0xba5eba1100000203, duration=56.750s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=253,nsp=8388613 actions=pop_nsh,set_field:02:14:84:5e:a8:5d->eth_src,resubmit(,17)
Rendered Flows Node 4 After Migration
-- No flows for NSH processing --
As previously mentioned in the Logical SFF rationale, the Logical SFF feature relies on Genius to get the data plane IDs of the OpenFlow switches, in order to properly steer the traffic through the chain.
Since one of the classifier's objectives is to steer the packets into the SFC domain, the classifier has to be aware of where the first Service Function is located - if it migrates somewhere else, the classifier table has to be updated accordingly, thus enabling the seamless migration of Service Functions.
For this feature, mobility of the client VM is out of scope, and should be managed by its high-availability module, or VNF manager.
Keep in mind that classification always occurs in the compute node where the client VM (i.e. the traffic origin) is running.
In order to leverage this functionality, the classifier has to be configured using a Logical SFF as an attachment-point, specifying within it the neutron port to classify.
The following examples show how to configure an ACL, and a classifier having a Logical SFF as an attachment-point:
Configure an ACL
The following ACL enables traffic intended for port 80 within the subnetwork 192.168.2.0/24, for RSP1 and RSP1-Reverse.
{
"access-lists": {
"acl": [
{
"acl-name": "ACL1",
"acl-type": "ietf-access-control-list:ipv4-acl",
"access-list-entries": {
"ace": [
{
"rule-name": "ACE1",
"actions": {
"service-function-acl:rendered-service-path": "RSP1"
},
"matches": {
"destination-ipv4-network": "192.168.2.0/24",
"source-ipv4-network": "192.168.2.0/24",
"protocol": "6",
"source-port-range": {
"lower-port": 0
},
"destination-port-range": {
"lower-port": 80
}
}
}
]
}
},
{
"acl-name": "ACL2",
"acl-type": "ietf-access-control-list:ipv4-acl",
"access-list-entries": {
"ace": [
{
"rule-name": "ACE2",
"actions": {
"service-function-acl:rendered-service-path": "RSP1-Reverse"
},
"matches": {
"destination-ipv4-network": "192.168.2.0/24",
"source-ipv4-network": "192.168.2.0/24",
"protocol": "6",
"source-port-range": {
"lower-port": 80
},
"destination-port-range": {
"lower-port": 0
}
}
}
]
}
}
]
}
}
curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache"
--data '${JSON}' -X PUT --user
admin:admin http://localhost:8181/restconf/config/ietf-access-control-list:access-lists/
Configure a classifier JSON
The following JSON provisions a classifier, having a Logical SFF as an attachment point. The value of the 'interface' field is where you indicate the neutron ports of the VMs you want to classify.
{
"service-function-classifiers": {
"service-function-classifier": [
{
"name": "Classifier1",
"scl-service-function-forwarder": [
{
"name": "sfflogical1",
"interface": "09a78ba3-78ba-40f5-a3ea-1ce708367f2b"
}
],
"acl": {
"name": "ACL1",
"type": "ietf-access-control-list:ipv4-acl"
}
}
]
}
}
curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache"
--data '${JSON}' -X PUT --user
admin:admin http://localhost:8181/restconf/config/service-function-classifier:service-function-classifiers/
After binding the SFC service with a particular interface by means of Genius, as explained in the Genius User Guide, the entry point in the SFC pipeline will be table 82 (SFC_TRANSPORT_CLASSIFIER_TABLE). From that point on, packet processing is similar to the SFC OpenFlow pipeline, just with another set of specific tables for the SFC service.
This picture shows the SFC pipeline after service integration with Genius:

SFC Logical SFF OpenFlow pipeline¶
Directional data plane locators for symmetric paths¶
A symmetric path results from a Service Function Path with the symmetric field set, or when any of the constituent Service Functions is set as bidirectional. Such a path is defined by two Rendered Service Paths, where one of them steers the traffic through the same Service Functions as the other but in the opposite order. These two Rendered Service Paths are said to be symmetric to each other, and this gives each path a sense of direction: the Rendered Service Path that corresponds to the same order of Service Functions as that defined on the Service Function Chain is tagged as the forward or up-link path, while the Rendered Service Path that corresponds to the opposite order is tagged as the reverse or down-link path.
Directional data plane locators allow the use of different interfaces or interface details between the Service Function Forwarder and the Service Function depending on the direction of the path for which they are being used. This function is relevant for Service Functions that would have no other way of discerning the direction of the traffic, such as legacy bump-in-the-wire network devices.
+-----------------------------------------------+
| |
| |
| SF |
| |
| sf-forward-dpl sf-reverse-dpl |
+--------+-----------------------------+--------+
| |
^ | + + | ^
| | | | | |
| | | | | |
+ | + + | +
Forward Path | Reverse Path Forward Path | Reverse Path
+ | + + | +
| | | | | |
| | | | | |
| | | | | |
+ | v v | +
| |
+-----------+-----------------------------------------+
Forward Path | sff-forward-dpl sff-reverse-dpl | Forward Path
+--------------> | | +-------------->
| |
| SFF |
| |
<--------------+ | | <--------------+
Reverse Path | | Reverse Path
+-----------------------------------------------------+
As shown in the previous figure, the forward path egress from the Service Function Forwarder towards the Service Function is defined by the sff-forward-dpl and sf-forward-dpl data plane locators. The forward path ingress from the Service Function to the Service Function Forwarder is defined by the sf-reverse-dpl and sff-reverse-dpl data plane locators. For the reverse path, it’s the opposite: the sff-reverse-dpl and sf-reverse-dpl define the egress from the Service Function Forwarder to the Service Function, and the sf-forward-dpl and sff-forward-dpl define the ingress into the Service Function Forwarder from the Service Function.
Note
Directional data plane locators are only supported in combination with the SFC OF Renderer at this time.
Directional data plane locators are configured within the service-function-forwarder in the service-function-dictionary entity, which describes the association between a Service Function Forwarder and Service Functions:
list service-function-dictionary {
key "name";
leaf name {
type sfc-common:sf-name;
description
"The name of the service function.";
}
container sff-sf-data-plane-locator {
description
"SFF and SF data plane locators to use when sending
packets from this SFF to the associated SF";
leaf sf-dpl-name {
type sfc-common:sf-data-plane-locator-name;
description
"The SF data plane locator to use when sending
packets to the associated service function.
Used both as forward and reverse locators for
paths of a symmetric chain.";
}
leaf sff-dpl-name {
type sfc-common:sff-data-plane-locator-name;
description
"The SFF data plane locator to use when sending
packets to the associated service function.
Used both as forward and reverse locators for
paths of a symmetric chain.";
}
leaf sf-forward-dpl-name {
type sfc-common:sf-data-plane-locator-name;
description
"The SF data plane locator to use when sending
packets to the associated service function
on the forward path of a symmetric chain";
}
leaf sf-reverse-dpl-name {
type sfc-common:sf-data-plane-locator-name;
description
"The SF data plane locator to use when sending
packets to the associated service function
on the reverse path of a symmetric chain";
}
leaf sff-forward-dpl-name {
type sfc-common:sff-data-plane-locator-name;
description
"The SFF data plane locator to use when sending
packets to the associated service function
on the forward path of a symmetric chain.";
}
leaf sff-reverse-dpl-name {
type sfc-common:sff-data-plane-locator-name;
description
"The SFF data plane locator to use when sending
packets to the associated service function
on the reverse path of a symmetric chain.";
}
}
}
The following configuration example is based on the previous Logical SFF configuration example. Only the Service Function and Service Function Forwarder configuration changes with respect to that example:
curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache"
--data '${JSON}' -X PUT --user
admin:admin http://localhost:8181/restconf/config/service-function:service-functions/
Service Functions JSON.
{
"service-functions": {
"service-function": [
{
"name": "firewall-1",
"type": "firewall",
"sf-data-plane-locator": [
{
"name": "sf-firewall-net-A-dpl",
"interface-name": "eccb57ae-5a2e-467f-823e-45d7bb2a6a9a",
"transport": "service-locator:mac",
"service-function-forwarder": "sfflogical1"
},
{
"name": "sf-firewall-net-B-dpl",
"interface-name": "7764b6f1-a5cd-46be-9201-78f917ddee1d",
"transport": "service-locator:mac",
"service-function-forwarder": "sfflogical1"
}
]
},
{
"name": "dpi-1",
"type": "dpi",
"sf-data-plane-locator": [
{
"name": "sf-dpi-net-A-dpl",
"interface-name": "df15ac52-e8ef-4e9a-8340-ae0738aba0c0",
"transport": "service-locator:mac",
"service-function-forwarder": "sfflogical1"
},
{
"name": "sf-dpi-net-B-dpl",
"interface-name": "1bb09b01-422d-4ccf-8d7a-9ebf00d1a1a5",
"transport": "service-locator:mac",
"service-function-forwarder": "sfflogical1"
}
]
}
]
}
}
curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache"
--data '${JSON}' -X PUT --user
admin:admin http://localhost:8181/restconf/config/service-function-forwarder:service-function-forwarders/
Service Function Forwarders JSON.
{
"service-function-forwarders": {
"service-function-forwarder": [
{
"name": "sfflogical1"
"sff-data-plane-locator": [
{
"name": "sff-firewall-net-A-dpl",
"data-plane-locator": {
"interface-name": "eccb57ae-5a2e-467f-823e-45d7bb2a6a9a",
"transport": "service-locator:mac"
}
},
{
"name": "sff-firewall-net-B-dpl",
"data-plane-locator": {
"interface-name": "7764b6f1-a5cd-46be-9201-78f917ddee1d",
"transport": "service-locator:mac"
}
},
{
"name": "sff-dpi-net-A-dpl",
"data-plane-locator": {
"interface-name": "df15ac52-e8ef-4e9a-8340-ae0738aba0c0",
"transport": "service-locator:mac"
}
},
{
"name": "sff-dpi-net-B-dpl",
"data-plane-locator": {
"interface-name": "1bb09b01-422d-4ccf-8d7a-9ebf00d1a1a5",
"transport": "service-locator:mac"
}
}
],
"service-function-dictionary": [
{
"name": "firewall-1",
"sff-sf-data-plane-locator": {
"sf-forward-dpl-name": "sf-firewall-net-A-dpl",
"sf-reverse-dpl-name": "sf-firewall-net-B-dpl",
"sff-forward-dpl-name": "sff-firewall-net-A-dpl",
"sff-reverse-dpl-name": "sff-firewall-net-B-dpl",
}
},
{
"name": "dpi-1",
"sff-sf-data-plane-locator": {
"sf-forward-dpl-name": "sf-dpi-net-A-dpl",
"sf-reverse-dpl-name": "sf-dpi-net-B-dpl",
"sff-forward-dpl-name": "sff-dpi-net-A-dpl",
"sff-reverse-dpl-name": "sff-dpi-net-B-dpl",
}
}
]
}
]
}
}
In comparison with the Logical SFF example, notice that each Service Function is configured with two data plane locators instead of one, so that each can be used in a different direction of the path. To specify which locator is used in which direction, the Service Function Forwarder configuration is also more extensive compared to the previous example.
When comparing this example with the Logical SFF one, note that the Service Function Forwarder is configured with data plane locators and that they hold the same interface name values as the corresponding Service Function interfaces. This is because in the Logical SFF particular case, a single logical interface fully describes an attachment of a Service Function Forwarder to a Service Function on both the Service Function and Service Function Forwarder sides. For non-Logical SFF scenarios, the data plane locators would be expected to have different values, as we have seen in other examples throughout this user guide. For example, if MAC addresses are to be specified in the locators, the Service Function would have a different MAC address than the Service Function Forwarder.
As a result of the overall configuration, two Rendered Service Paths are implemented. The forward path:
+------------+ +-------+
| firewall-1 |            | dpi-1 |
+---+---+----+ +--+--+-+
^ | ^ |
net-A-dpl| |net-B-dpl net-A-dpl| |net-B-dpl
| | | |
+----------+ | | | | +----------+
| client A +--------------+ +------------------------+ +------------>+ server B |
+----------+ +----------+
And the reverse path:
+------------+ +-------+
| firewall-1 |            | dpi-1 |
+---+---+----+ +--+--+-+
| ^ | ^
net-A-dpl| |net-B-dpl net-A-dpl| |net-B-dpl
| | | |
+----------+ | | | | +----------+
| client A +<-------------+ +------------------------+ +-------------+ server B |
+----------+ +----------+
Consider the following notes to put the example in context:
The classification function is omitted from the illustration.
The forward path is up-link traffic from a client in network A to a server in network B.
The reverse path is down-link traffic from a server in network B to a client in network A.
The service functions might be legacy bump-in-the-wire network devices that need to use different interfaces for each network.
SFC Statistics User Guide¶
Statistics can be queried for Rendered Service Paths created on OVS bridges. Future support will be added for Service Function Forwarders and Service Functions, as well as for VPP and IOS-XE devices.
To use SFC statistics, the 'odl-sfc-statistics' Karaf feature needs to be installed.
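For example, from the Karaf console:
feature:install odl-sfc-statistics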
Statistics are queried by sending an RPC RESTCONF message to ODL. For RSPs, it is possible to either query statistics for one individual RSP or for all RSPs, as follows:
Querying statistics for a specific RSP:
curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache"
--data '{ "input": { "name" : "path1-Path-42" } }' -X POST --user admin:admin
http://localhost:8181/restconf/operations/sfc-statistics-operations:get-rsp-statistics
Querying statistics for all RSPs:
curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache"
--data '{ "input": { } }' -X POST --user admin:admin
http://localhost:8181/restconf/operations/sfc-statistics-operations:get-rsp-statistics
The following is the sort of output that can be expected for each RSP.
{
"output": {
"statistics": [
{
"name": "sfc-path-1sf1sff-Path-34",
"statistic-by-timestamp": [
{
"service-statistic": {
"bytes-in": 0,
"bytes-out": 0,
"packets-in": 0,
"packets-out": 0
},
"timestamp": 1518561500480
}
]
}
]
}
}
Developer Guide¶
Overview¶
Integrating Animal Sniffer with OpenDaylight projects¶
This section provides the information required to set up OpenDaylight projects with Maven's Animal Sniffer plugin for testing API compatibility with OpenJDK.
Steps to set up the Animal Sniffer plugin with your project¶
Clone odlparent and check out the required branch. The example below uses the branch 'origin/master/2.0.x':
git clone https://git.opendaylight.org/gerrit/odlparent
cd odlparent
git checkout origin/master/2.0.x
Modify the file odlparent/pom.xml to install the Animal Sniffer plugin as shown in the example below, or refer to the odlparent gerrit patch:
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>animal-sniffer-maven-plugin</artifactId>
<version>1.16</version>
<configuration>
<signature>
<groupId>org.codehaus.mojo.signature</groupId>
<artifactId>java18</artifactId>
<version>1.0</version>
</signature>
</configuration>
<executions>
<execution>
<id>animal-sniffer</id>
<phase>verify</phase>
<goals>
<goal>check</goal>
</goals>
</execution>
<execution>
<id>check-java-version</id>
<phase>package</phase>
<goals>
<goal>build</goal>
</goals>
<configuration>
<signature>
<groupId>org.codehaus.mojo.signature</groupId>
<artifactId>java18</artifactId>
<version>1.0</version>
</signature>
</configuration>
</execution>
</executions>
</plugin>
Run mvn clean install in odlparent.
mvn clean install
Clone the respective project to be tested with the plugin. As shown in the example in the yangtools gerrit patch, modify the relevant pom.xml files to reference the version of odlparent which is checked out. As shown in the example below, change the version to 2.0.6-SNAPSHOT, or to whichever 2.0.x-SNAPSHOT version of odlparent is checked out.
<parent>
<groupId>org.opendaylight.odlparent</groupId>
<artifactId>odlparent</artifactId>
<version>2.0.6-SNAPSHOT</version>
<relativePath/>
</parent>
Run mvn clean install in your project.
mvn clean install
Run mvn animal-sniffer:check on your project and fix any relevant issues.
mvn animal-sniffer:check
Project-specific Developer Guides¶
Distribution Version reporting¶
Overview¶
This section provides an overview of the odl-distribution-version feature.
A remote user of OpenDaylight usually has access to the RESTCONF and NETCONF northbound interfaces, but does not have access to the system OpenDaylight is running on. OpenDaylight has released multiple versions including Service Releases, and there are incompatible changes between them. In order to know which YANG modules to use, which bugs to expect and which workarounds to apply, such a user would need to know the exact version of at least one OpenDaylight component.
There are indirect ways to deduce such a version, but the direct way is enabled by the odl-distribution-version feature. An administrator can specify version strings, which would be available to users via NETCONF, or via RESTCONF if OpenDaylight is configured to initiate a NETCONF connection to its config subsystem northbound interface.
By default, users have write access to the config subsystem, so they can add, modify or delete any version strings present there. Administrators can only influence whether the feature is installed, and what the initial values are.
The config subsystem is local only, not cluster aware, so each member reports versions independently. This is suitable for heterogeneous clusters. On homogeneous clusters, make sure you set and check every member.
Key APIs and Interfaces¶
The current implementation relies heavily on the config-parent parent POM file from the Controller project.
Throughout this chapter, model denotes YANG module, and module denotes item in config subsystem module list.
Version functionality relies on the config subsystem and its config YANG model. The YANG model odl-distribution-version adds an identity odl-version and augments /config:modules/module/configuration, adding a new case for the odl-version type. This case contains a single leaf, version, which holds the version string.
The config subsystem can hold multiple modules; the version string should contain the version of the OpenDaylight component corresponding to the module name. As this is pure metadata with no consequence on OpenDaylight behavior, there is no prescribed scheme for choosing config module names. But see the default configuration file for examples.
Each config module needs to come with Java classes which override customValidation() and createInstance(). Version related modules have no impact on OpenDaylight internal behavior, so the methods return void and a dummy closeable respectively, without any side effect.
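A minimal sketch of what such overrides could look like (the enclosing module class is generated by the config subsystem; the surrounding code and exact signatures are assumed here):
@Override
protected void customValidation() {
    // no-op: version strings are pure metadata, nothing to validate
}

@Override
public AutoCloseable createInstance() {
    // dummy closeable: version modules have no runtime behavior
    return () -> { };
}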
Initial version values are set via the config file odl-version.xml, which is created in $KARAF_HOME/etc/opendaylight/karaf/ upon installation of the odl-distribution-version feature. If an admin wants to use different content, the file with the desired content has to be created there before the feature installation happens.
By default, the config file defines two config modules, named odl-distribution-version and odl-odlparent-version.
Currently the default version values are set to Maven property strings (as opposed to valid values), as the needed new functionality did not make it into the Controller project in Boron. See Bug 6003.
The odl-distribution-version feature is currently the only feature defined in the feature repository of artifactId features-distribution, which is available (transitively) in the OpenDaylight Karaf distribution.
The OpenDaylight config subsystem NETCONF northbound is not made available just by installing odl-distribution-version, but most other feature installations would enable it. RESTCONF interfaces are enabled by installing the odl-restconf feature, but that does not allow access to the config subsystem by itself.
On single node deployments, installation of odl-netconf-connector-ssh is recommended, which would configure the controller-config device and its MD-SAL mount point. See the documentation for clustering on how to create similar devices for member nodes, as the controller-config name is not unique in that context.
Assuming a single node deployment and a user located on the same system, here is an example curl command accessing the odl-odlparent-version config module:
curl 127.0.0.1:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/controller-config/yang-ext:mount/config:modules/module/odl-distribution-version:odl-version/odl-odlparent-version
Distribution features¶
Overview¶
This section provides an overview of the odl-integration-compatible-with-all and odl-integration-all features.
The Integration/Distribution project produces a Karaf 4 distribution which gives users access to many Karaf features provided by upstream OpenDaylight projects. Users are free to install an arbitrary subset of those features, but not every feature combination is expected to work properly.
Some features are pro-active, which means OpenDaylight, in contact with other network elements, starts driving changes in the network even without prompting by users, in order to satisfy the initial conditions their use case expects. Such activity from one feature may in turn affect the behavior of another feature.
In some cases, there exist features which offer different implementations of the same service; they may fail to initialize properly (e.g. failing to bind a port already bound by the other feature).
The Integration/Test project maintains system test (CSIT) jobs. Aside from testing scenarios with only a minimal set of features installed (-only- jobs), the scenarios are also tested with a large set of features installed (-all- jobs).
In order to define a proper set of features to test with, the Integration/Distribution project defines two "aggregate" features. Note that these features are not intended for production use, so the feature repository which defines them is not enabled by default.
The content of these features is determined by upstream OpenDaylight contributions, with Integration/Test providing insight on observed compatibility relations. The Integration/Distribution team is focused only on making sure the build process is reliable.
Feature repositories¶
This feature repository is enabled by default. It does not refer to any new features directly; instead, it refers to upstream feature repositories, enabling any feature contained therein to be available for installation.
This feature repository defines the two aggregate features. To enable this repository, change the featuresRepositories line of the org.apache.karaf.features.cfg file by copy-pasting the feature-index value and editing the name.
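A hypothetical sketch of such an edit (the Maven coordinates below are illustrative only; use the actual index artifacts shipped with your distribution):
featuresRepositories = mvn:org.opendaylight.integration/features-index/<version>/xml/features, \
    mvn:org.opendaylight.integration/features-test/<version>/xml/features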
Karaf features¶
The two aggregate features define sets of user-facing features based on compatibility requirements. Note that if the compatibility relation differs between single node and cluster deployments, the single node point of view takes precedence.
odl-integration-all¶
This feature contains the largest set of user-facing features which may affect each other's operation, but the set does not affect usability of the Karaf infrastructure.
Note that port binding conflicts and the "server is unhealthy" status of the config subsystem are considered to affect usability, as is a failure of RESTCONF to respond to a GET on /restconf/modules with HTTP status 200.
This feature is used in the verification process for Integration/Distribution contributions.
odl-integration-compatible-with-all¶
This feature contains the largest set of user-facing features which are not pro-active and do not affect each other's operation.
Installing this set together with just one feature from odl-integration-all should still result in a fully operational installation, as one pro-active feature should not lead to any conflicts. This should also hold if the single added feature is outside odl-integration-all, even if it is one of the conflicting implementations (and no such implementation is in odl-integration-all).
This feature is used in the aforementioned -all- CSIT jobs.
Neutron Service Developer Guide¶
Overview¶
This Karaf feature (odl-neutron-service) provides integration support for OpenStack Neutron via the OpenDaylight ML2 mechanism driver. The Neutron Service is only one of the components necessary for OpenStack integration. It defines YANG models for OpenStack Neutron data models and a northbound API via REST API and YANG model RESTCONF.
Developers who want to add a new provider for new OpenStack Neutron extensions/services (Neutron constantly adds new extensions/services, and OpenDaylight will keep up with them) need to communicate with this Neutron Service or add models to it. If you want to add new extensions/services themselves to the Neutron Service, new YANG data models need to be added; that is out of scope of this document, because this guide is for developers who will be using the feature to build something separate, not for somebody developing code for this feature itself.
Neutron Service Architecture¶

Neutron Service Architecture¶
The Neutron Service defines YANG models for OpenStack Neutron integration. When OpenStack admins/users request changes (creation/update/deletion) of Neutron resources, e.g. Neutron network, Neutron subnet, Neutron port, the corresponding YANG model within OpenDaylight will be modified. The OpenDaylight OpenStack provider subscribes to changes on those models and is notified of those modifications through MD-SAL when changes are made. Then the provider does the necessary tasks to realize OpenStack integration. How to realize it (or even not realize it) is up to each provider. The Neutron Service itself does not take care of it.
How to Write a SB Neutron Consumer¶
In Boron, there is only one option for SB Neutron Consumers:
Listening for changes via the Neutron YANG model
Until Beryllium there was another way, with the legacy I*Aware interface. As of Boron, that interface has been eliminated, so all SB Neutron Consumers have to use the Neutron YANG model.
Neutron YANG models¶
Neutron service defines YANG models for Neutron. The details can be found at
Basically those models are based on OpenStack Neutron API definitions. For exact definitions, the OpenStack Neutron source code needs to be referred to, as the above documentation doesn't always cover the necessary details. There is nothing special about utilizing those Neutron YANG models. The basic procedure will be:
subscribe for changes made to the model
respond to the data change notification for each model
Note
Currently there is no way to refuse the requested configuration at this point. That is left to future work.
public class NeutronNetworkChangeListener implements DataChangeListener, AutoCloseable {
    private static final Logger LOG = LoggerFactory.getLogger(NeutronNetworkChangeListener.class);
    private ListenerRegistration<DataChangeListener> registration;
    private DataBroker db;

    public NeutronNetworkChangeListener(DataBroker db) {
        this.db = db;
        // create identity path to register on service startup
        InstanceIdentifier<Network> path = InstanceIdentifier
            .create(Neutron.class)
            .child(Networks.class)
            .child(Network.class);
        LOG.debug("Register listener for Neutron Network model data changes");
        // register for Data Change Notification on the configuration datastore
        registration =
            this.db.registerDataChangeListener(LogicalDatastoreType.CONFIGURATION, path, this, DataChangeScope.ONE);
    }

    @Override
    public void onDataChanged(
            AsyncDataChangeEvent<InstanceIdentifier<?>, DataObject> changes) {
        LOG.trace("Data changes : {}", changes);
        // handle data change notification
        Object[] subscribers = NeutronIAwareUtil.getInstances(INeutronNetworkAware.class, this);
        createNetwork(changes, subscribers);
        updateNetwork(changes, subscribers);
        deleteNetwork(changes, subscribers);
    }

    @Override
    public void close() throws Exception {
        // unregister the listener on shutdown
        registration.close();
    }
}
Neutron configuration¶
Boron introduces new models of configuration for OpenDaylight to tell OpenStack neutron/networking-odl its configuration/capability.
This is for OpenDaylight to tell per-node configuration to Neutron. In particular, this is used heavily by pseudo agent port binding.
The model definition can be found at
How to populate this for pseudo agent port binding is documented at
In Boron this is experimental. The model definition can be found at
Each Neutron Service provider has its own feature set. Some support the full features of OpenStack, but others support only a subset. Even with the same supported Neutron API, some functionality may or may not be supported. So there is a need for a way for OpenDaylight to tell networking-odl its capability; thus networking-odl can initialize Neutron properly based on the reported capability.
Neutron Logger¶
There is another small Karaf feature, odl-neutron-logger, which logs changes of the Neutron YANG models and can be used for debugging/auditing. It also helps in understanding how to listen for changes.
API Reference Documentation¶
The OpenStack Neutron API references
Neutron Northbound¶
How to add new API support¶
OpenStack Neutron is a moving target. It is continuously adding new features as new REST APIs. Here are the basic steps to add new API support:
In the Neutron Northbound project:
Add new YANG model for it under neutron/model/src/main/yang and update neutron.yang
Add northbound API for it, and neutron-spi
Implement Neutron<New API>Request.java and Neutron<New API>Northbound.java under neutron/northbound-api/src/main/java/org/opendaylight/neutron/northbound/api/
Implement INeutron<New API>CRUD.java and new data structure if any under neutron/neutron-spi/src/main/java/org/opendaylight/neutron/spi/
Update neutron/neutron-spi/src/main/java/org/opendaylight/neutron/spi/NeutronCRUDInterfaces.java to wire the new CRUD interface
Add unit tests, Neutron<New structure>JAXBTest.java, under neutron/neutron-spi/src/test/java/org/opendaylight/neutron/spi/
Update neutron/northbound-api/src/main/java/org/opendaylight/neutron/northbound/api/NeutronNorthboundRSApplication.java to wire the new northbound API to RSApplication
Add a transcriber, Neutron<New API>Interface.java, under transcriber/src/main/java/org/opendaylight/neutron/transcriber/
Update transcriber/src/main/java/org/opendaylight/neutron/transcriber/NeutronTranscriberProvider.java to wire the new transcriber
Add integration tests, Neutron<New API>Tests.java, under integration/test/src/test/java/org/opendaylight/neutron/e2etest/
Update integration/test/src/test/java/org/opendaylight/neutron/e2etest/ITNeutronE2E.java to run the newly added tests.
In OpenStack networking-odl
Add a new driver (or plugin) for the new API, with tests.
In a southbound Neutron Provider
Implement the actual backend to realize the new API by listening to the related YANG models.
How to write transcriber¶
For each Neutron data object, there is a Neutron*Interface defined within the transcriber artifact that will write that object to the MD-SAL configuration datastore.
All Neutron*Interface classes extend AbstractNeutronInterface, in which two methods are defined:
one takes the Neutron object as input, and will create a data object from it.
one takes an uuid as input, and will create a data object containing the uuid.
protected abstract T toMd(S neutronObject);
protected abstract T toMd(String uuid);
In addition, the AbstractNeutronInterface class provides several other helper methods (addMd, updateMd, removeMd), which handle the actual writing to the configuration datastore.
toMD() methods¶
Each of the Neutron YANG models defines structures containing data. Further, each YANG-modeled structure has its own builder. A particular toMD() method instantiates an instance of the correct builder, fills in the properties of the builder from the corresponding values of the Neutron object and then creates the YANG-modeled structure via the build() method.
As an example, the toMd code for Neutron Networks is presented below:
protected Network toMd(NeutronNetwork network) {
    NetworkBuilder networkBuilder = new NetworkBuilder();
    networkBuilder.setAdminStateUp(network.getAdminStateUp());
    if (network.getNetworkName() != null) {
        networkBuilder.setName(network.getNetworkName());
    }
    if (network.getShared() != null) {
        networkBuilder.setShared(network.getShared());
    }
    if (network.getStatus() != null) {
        networkBuilder.setStatus(network.getStatus());
    }
    if (network.getSubnets() != null) {
        List<Uuid> subnets = new ArrayList<Uuid>();
        for (String subnet : network.getSubnets()) {
            subnets.add(toUuid(subnet));
        }
        networkBuilder.setSubnets(subnets);
    }
    if (network.getTenantID() != null) {
        networkBuilder.setTenantId(toUuid(network.getTenantID()));
    }
    if (network.getNetworkUUID() != null) {
        networkBuilder.setUuid(toUuid(network.getNetworkUUID()));
    } else {
        logger.warn("Attempting to write neutron network without UUID");
    }
    return networkBuilder.build();
}
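The uuid-based variant is typically much simpler; a minimal sketch for Neutron Networks, assuming the same NetworkBuilder and toUuid helper as above:
protected Network toMd(String uuid) {
    // build a data object carrying only the uuid, e.g. to address an existing network
    NetworkBuilder networkBuilder = new NetworkBuilder();
    networkBuilder.setUuid(toUuid(uuid));
    return networkBuilder.build();
}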
ODL Parent Developer Guide¶
Parent POMs¶
The ODL Parent component for OpenDaylight provides a number of Maven parent POMs which allow Maven projects to be easily integrated in the OpenDaylight ecosystem. Technically, the aim of projects in OpenDaylight is to produce Karaf features, and these parent projects provide common support for the different types of projects involved.
These parent projects are:
odlparent-lite — the basic parent POM for Maven modules which don't produce artifacts (e.g. aggregator POMs)
odlparent — the common parent POM for Maven modules containing Java code
bundle-parent — the parent POM for Maven modules producing OSGi bundles
The following parent projects are deprecated, but still used in Carbon:
feature-parent — the parent POM for Maven modules producing Karaf 3 feature repositories
karaf-parent — the parent POM for Maven modules producing Karaf 3 distributions
The following parent projects are new in Carbon, for Karaf 4 support (which won't be complete until Nitrogen):
single-feature-parent — the parent POM for Maven modules producing a single Karaf 4 feature
feature-repo-parent — the parent POM for Maven modules producing Karaf 4 feature repositories
karaf4-parent — the parent POM for Maven modules producing Karaf 4 distributions
odlparent-lite¶
This is the base parent for all OpenDaylight Maven projects and modules. It provides the following, notably to allow publishing artifacts to Maven Central:
license information;
organization information;
issue management information (a link to our Bugzilla);
continuous integration information (a link to our Jenkins setup);
default Maven plugins (maven-clean-plugin, maven-deploy-plugin, maven-install-plugin, maven-javadoc-plugin with HelpMojo support, maven-project-info-reports-plugin, maven-site-plugin with Asciidoc support, jdepend-maven-plugin);
distribution management information.
It also defines two profiles which help during development:
q (-Pq), the quick profile, which disables tests, code coverage, Javadoc generation, code analysis, etc. — anything which isn't necessary to build the bundles and features (see this blog post for details);
addInstallRepositoryPath (-DaddInstallRepositoryPath=…/karaf/system) which can be used to drop a bundle in the appropriate Karaf location, to enable hot-reloading of bundles during development (see this blog post for details).
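For example, a quick build which skips tests and analysis:
mvn clean install -Pq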
For modules which don’t produce any useful artifacts (e.g. aggregator POMs), you should add the following to avoid processing artifacts:
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-deploy-plugin</artifactId>
<configuration>
<skip>true</skip>
</configuration>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-install-plugin</artifactId>
<configuration>
<skip>true</skip>
</configuration>
</plugin>
</plugins>
</build>
odlparent¶
This inherits from odlparent-lite and mainly provides dependency and plugin management for OpenDaylight projects.
If you use any of the following libraries, you should rely on odlparent to provide the appropriate versions:
Akka (and Scala)
Apache Commons:
commons-codec
commons-fileupload
commons-io
commons-lang
commons-lang3
commons-net
Apache Shiro
Guava
JAX-RS with Jersey
JSON processing:
GSON
Jackson
Logging:
Logback
SLF4J
Netty
OSGi:
Apache Felix
core OSGi dependencies (core, compendium…)
Testing:
Hamcrest
JSON assert
JUnit
Mockito
Pax Exam
PowerMock
XML/XSL:
Xerces
XML APIs
Note
This list isn’t exhaustive. It’s also not cast in stone; if you’d like to add a new dependency (or migrate a dependency), please contact the mailing list.
odlparent also enforces some Checkstyle verification rules. In particular, it enforces the common license header used in all OpenDaylight code:
/*
* Copyright © ${year} ${holder} and others. All rights reserved.
*
* This program and the accompanying materials are made available under the
* terms of the Eclipse Public License v1.0 which accompanies this distribution,
* and is available at http://www.eclipse.org/legal/epl-v10.html
*/
where “${year}” is initially the first year of publication, then (after a year has passed) the first and latest years of publication, separated by commas (e.g. “2014, 2016”), and “${holder}” is the initial copyright holder (typically, the first author's employer). “All rights reserved” is optional.
If you need to disable this license check, e.g. for files imported under another license (EPL-compatible of course), you can override the maven-checkstyle-plugin configuration. features-test does this for its CustomBundleUrlStreamHandlerFactory class, which is ASL-licensed:
<plugin>
<artifactId>maven-checkstyle-plugin</artifactId>
<executions>
<execution>
<id>check-license</id>
<goals>
<goal>check</goal>
</goals>
<phase>process-sources</phase>
<configuration>
<configLocation>check-license.xml</configLocation>
<headerLocation>EPL-LICENSE.regexp.txt</headerLocation>
<includeResources>false</includeResources>
<includeTestResources>false</includeTestResources>
<sourceDirectory>${project.build.sourceDirectory}</sourceDirectory>
<excludes>
<!-- Skip Apache Licensed files -->
org/opendaylight/odlparent/featuretest/CustomBundleUrlStreamHandlerFactory.java
</excludes>
<failsOnError>false</failsOnError>
<consoleOutput>true</consoleOutput>
</configuration>
</execution>
</executions>
</plugin>
bundle-parent¶
This inherits from odlparent and enables functionality useful for OSGi bundles:
maven-javadoc-plugin is activated, to build the Javadoc JAR;
maven-source-plugin is activated, to build the source JAR;
maven-bundle-plugin is activated (including extensions), to build OSGi bundles (using the “bundle” packaging).
In addition to this, JUnit is included as a default dependency in “test” scope.
feature-parent¶
This inherits from odlparent and enables functionality useful for Karaf features:
karaf-maven-plugin is activated, to build Karaf features — but for OpenDaylight, projects need to use “jar” packaging (not “feature” or “kar”);
features.xml files are processed from templates stored in src/main/features/features.xml;
Karaf features are tested after build to ensure they can be activated in a Karaf container.
The features.xml processing allows versions to be omitted from certain feature dependencies, and replaced with “{{VERSION}}”. For example:
<features name="odl-mdsal-${project.version}" xmlns="http://karaf.apache.org/xmlns/features/v1.2.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://karaf.apache.org/xmlns/features/v1.2.0 http://karaf.apache.org/xmlns/features/v1.2.0">
<repository>mvn:org.opendaylight.odlparent/features-odlparent/{{VERSION}}/xml/features</repository>
[...]
<feature name='odl-mdsal-broker-local' version='${project.version}' description="OpenDaylight :: MDSAL :: Broker">
<feature version='${yangtools.version}'>odl-yangtools-common</feature>
<feature version='${mdsal.version}'>odl-mdsal-binding-dom-adapter</feature>
<feature version='${mdsal.model.version}'>odl-mdsal-models</feature>
<feature version='${project.version}'>odl-mdsal-common</feature>
<feature version='${config.version}'>odl-config-startup</feature>
<feature version='${config.version}'>odl-config-netty</feature>
<feature version='[3.3.0,4.0.0)'>odl-lmax</feature>
[...]
<bundle>mvn:org.opendaylight.controller/sal-dom-broker-config/{{VERSION}}</bundle>
<bundle start-level="40">mvn:org.opendaylight.controller/blueprint/{{VERSION}}</bundle>
<configfile finalname="${config.configfile.directory}/${config.mdsal.configfile}">mvn:org.opendaylight.controller/md-sal-config/{{VERSION}}/xml/config</configfile>
</feature>
As illustrated, versions can be omitted in this way for repository dependencies, bundle dependencies and configuration files. They must be specified traditionally (either hard-coded, or using Maven properties) for feature dependencies.
karaf-parent¶
This allows building a Karaf 3 distribution, typically for local testing purposes. Any runtime-scoped feature dependencies will be included in the distribution, and the karaf.localFeature property can be used to specify the boot feature (in addition to standard).
single-feature-parent¶
This inherits from odlparent and enables functionality useful for Karaf 4 features:
karaf-maven-plugin is activated, to build Karaf features, typically with “feature” packaging (“kar” is also supported);
feature.xml files are generated based on the compile-scope dependencies defined in the POM, optionally initialised from a stub in src/main/feature/feature.xml;
Karaf features are tested after build to ensure they can be activated in a Karaf container.
The feature.xml processing adds transitive dependencies by default, which allows features to be defined using only the most significant dependencies (those that define the feature); other requirements are determined automatically as long as they exist as Maven dependencies.
“configfiles” need to be defined both as Maven dependencies (with the appropriate type and classifier) and as <configfile> elements in the feature.xml stub.
Other features which a feature depends on need to be defined as Maven dependencies with type “xml” and classifier “features” (note the plural here).
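For instance, such a dependency could look like this (the artifact is chosen purely for illustration):
<dependency>
    <groupId>org.opendaylight.controller</groupId>
    <artifactId>odl-mdsal-broker</artifactId>
    <classifier>features</classifier>
    <type>xml</type>
</dependency>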
feature-repo-parent¶
This inherits from odlparent and enables functionality useful for Karaf 4 feature repositories. It follows the same principles as single-feature-parent, but is designed specifically for repositories and should be used only for this type of artifact.
It builds a feature repository referencing all the (feature) dependencies listed in the POM.
karaf4-parent¶
This allows building a Karaf 4 distribution, typically for local testing purposes. Any runtime-scoped feature dependencies will be included in the distribution, and the karaf.localFeature property can be used to specify the boot feature (in addition to standard).
Features (for Karaf 3)¶
The ODL Parent component for OpenDaylight provides a number of Karaf 3 features which can be used by other Karaf 3 features to use certain third-party upstream dependencies.
These features are:
Akka features (in the features-akka repository):
odl-akka-all — all Akka bundles;
odl-akka-scala-2.11 — Scala runtime for OpenDaylight;
odl-akka-system-2.4 — Akka actor framework bundles;
odl-akka-clustering-2.4 — Akka clustering bundles and dependencies;
odl-akka-leveldb-0.7 — LevelDB;
odl-akka-persistence-2.4 — Akka persistence;
general third-party features (in the features-odlparent repository):
odl-netty-4 — all Netty bundles;
odl-guava-18 — Guava 18;
odl-guava-21 — Guava 21 (not intended for use in Carbon);
odl-lmax-3 — LMAX Disruptor;
odl-triemap-0.2 — Concurrent Trie HashMap.
To use these, you need to declare a dependency on the appropriate repository in your features.xml file:
<repository>mvn:org.opendaylight.odlparent/features-odlparent/{{VERSION}}/xml/features</repository>
and then include the feature, e.g.:
<feature name='odl-mdsal-broker-local' version='${project.version}' description="OpenDaylight :: MDSAL :: Broker">
[...]
<feature version='[3.3.0,4.0.0)'>odl-lmax</feature>
[...]
</feature>
You also need to depend on the features repository in your POM:
<dependency>
<groupId>org.opendaylight.odlparent</groupId>
<artifactId>features-odlparent</artifactId>
<classifier>features</classifier>
<type>xml</type>
</dependency>
assuming the appropriate dependency management:
<dependencyManagement>
<dependencies>
<dependency>
<groupId>org.opendaylight.odlparent</groupId>
<artifactId>odlparent-artifacts</artifactId>
<version>1.8.0-SNAPSHOT</version>
<scope>import</scope>
<type>pom</type>
</dependency>
</dependencies>
</dependencyManagement>
(the version number there is appropriate for Carbon). For the time being you also need to depend separately on the individual JARs as compile-time dependencies to build your dependent code; the relevant dependencies are managed in odlparent's dependency management:
odl-netty: [4.0.37,4.1.0) or [4.0.37,5.0.0);
odl-guava: [18,19) (if your code is ready for it, [19,20) is also available, but the current default version of Guava in OpenDaylight is 18);
odl-lmax: [3.3.4,4.0.0)
Features (for Karaf 4)¶
There are equivalent features to all the Karaf 3 features, for Karaf 4. The repositories use “features4” instead of “features”, and the features use “odl4” instead of “odl”.
The following new features are specific to Karaf 4:
Karaf wrapper features (also in the features4-odlparent repository) — these can be used to pull in a Karaf feature using a Maven dependency in a POM:
odl-karaf-feat-feature — the Karaf feature feature;
odl-karaf-feat-jdbc — the Karaf jdbc feature;
odl-karaf-feat-jetty — the Karaf jetty feature;
odl-karaf-feat-war — the Karaf war feature.
To use these, all you need to do now is add the appropriate dependency in your feature POM; for example:
<dependency>
<groupId>org.opendaylight.odlparent</groupId>
<artifactId>odl4-guava-18</artifactId>
<classifier>features</classifier>
<type>xml</type>
</dependency>
assuming the appropriate dependency management:
<dependencyManagement>
<dependencies>
<dependency>
<groupId>org.opendaylight.odlparent</groupId>
<artifactId>odlparent-artifacts</artifactId>
<version>1.8.0-SNAPSHOT</version>
<scope>import</scope>
<type>pom</type>
</dependency>
</dependencies>
</dependencyManagement>
(the version number there is appropriate for Carbon). We no longer use version ranges; the feature dependencies all use the odlparent version (but you should rely on the artifacts POM).
Service Function Chaining¶
OpenDaylight Service Function Chaining (SFC) Overview¶
OpenDaylight Service Function Chaining (SFC) provides the ability to define an ordered list of network services (e.g. firewalls, load balancers). These services are then "stitched" together in the network to create a service chain. This project provides the infrastructure (chaining logic, APIs) needed for ODL to provision a service chain in the network and an end-user application for defining such chains.
ACE - Access Control Entry
ACL - Access Control List
SCF - Service Classifier Function
SF - Service Function
SFC - Service Function Chain
SFF - Service Function Forwarder
SFG - Service Function Group
SFP - Service Function Path
RSP - Rendered Service Path
NSH - Network Service Header
SFC Classifier Control and Data plane Developer guide¶
Description of classifier can be found in: https://datatracker.ietf.org/doc/draft-ietf-sfc-architecture/
Classifier manages everything from starting the packet listener to creation (and removal) of appropriate ip(6)tables rules and marking received packets accordingly. Its functionality is available only on Linux as it leverages NetfilterQueue, which provides access to packets matched by an iptables rule. Classifier requires root privileges to be able to operate.
So far it is capable of processing ACL for MAC addresses, ports, IPv4 and IPv6. Supported protocols are TCP and UDP.
The Python code is located in the project repository at sfc-py/common/classifier.py.
Note
classifier assumes that Rendered Service Path (RSP) already exists in ODL when an ACL referencing it is obtained
sfc_agent receives an ACL and passes it for processing to the classifier
the RSP (its SFF locator) referenced by ACL is requested from ODL
if the RSP exists in the ODL then ACL based iptables rules for it are applied
After this process is over, every packet successfully matched to an iptables rule (i.e. successfully classified) will be NSH encapsulated and forwarded to a related SFF, which knows how to traverse the RSP.
Rules are created using the appropriate iptables command. If the Access Control Entry (ACE) rule is MAC address related, both iptables and ip6tables rules are issued. If the ACE rule is IPv4 address related, only iptables rules are issued; likewise, IPv6 related rules are issued only to ip6tables.
Note
iptables raw table contains all created rules
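For illustration only, a rule of roughly this shape in the raw table would hand matched packets over to NetfilterQueue for NSH encapsulation (the chain name and queue number here are hypothetical, not actual classifier output):
iptables -t raw -A SFC-RSP1 -p tcp --dport 80 -j NFQUEUE --queue-num 2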
Information regarding already registered RSP(s) is stored in an internal data store, which is represented as a dictionary:
{rsp_id: {'name': <rsp_name>,
'chains': {'chain_name': (<ipv>,),
...
},
'sff': {'ip': <ip>,
'port': <port>,
'starting-index': <starting-index>,
'transport-type': <transport-type>
},
},
...
}
name: name of the RSP
chains: dictionary of iptables chains related to the RSP, with information about the IP version for which the chain exists
sff: SFF forwarding parameters
ip: SFF IP address
port: SFF port
starting-index: index given to the packet at the first RSP hop
transport-type: encapsulation protocol
This feature exposes an API to configure the classifier (corresponds to service-function-classifier.yang).
See: sfc-model/src/main/yang/service-function-classifier.yang
SFC-OVS Plug-in¶
SFC-OVS provides integration of SFC with Open vSwitch (OVS) devices. Integration is realized through mapping of SFC objects (like SF, SFF, Classifier, etc.) to OVS objects (like Bridge, TerminationPoint=Port/Interface). The mapping takes care of automatic instantiation (setup) of the corresponding object whenever its counterpart is created. For example, when a new SFF is created, the SFC-OVS plug-in will create a new OVS bridge, and when a new OVS Bridge is created, the SFC-OVS plug-in will create a new SFF.
SFC-OVS uses the OVSDB MD-SAL Southbound API for getting/writing information from/to OVS devices. The core functionality consists of two types of mapping:
mapping from OVS to SFC
OVS Bridge is mapped to SFF
OVS TerminationPoints are mapped to SFF DataPlane locators
mapping from SFC to OVS
SFF is mapped to OVS Bridge
SFF DataPlane locators are mapped to OVS TerminationPoints

SFC < — > OVS mapping flow diagram¶
SFF to OVS mapping API (methods to convert SFF object to OVS Bridge and OVS TerminationPoints)
OVS to SFF mapping API (methods to convert OVS Bridge and OVS TerminationPoints to SFF object)
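To illustrate the direction of these mappings, here is a minimal, self-contained sketch. The Sff and OvsBridge types below are simplified stand-ins for the real MD-SAL binding classes, which are not reproduced here; only the shape of the conversion is shown.
import java.util.ArrayList;
import java.util.List;

public class SffOvsMappingSketch {

    // Simplified stand-ins for the real SFC and OVSDB binding classes.
    static class Sff {
        String name;
        List<String> dataPlaneLocators = new ArrayList<>();
    }

    static class OvsBridge {
        String name;
        List<String> terminationPoints = new ArrayList<>();
    }

    // SFF -> OVS: the bridge mirrors the SFF, one TerminationPoint per locator.
    static OvsBridge toOvsBridge(Sff sff) {
        OvsBridge bridge = new OvsBridge();
        bridge.name = sff.name;
        bridge.terminationPoints.addAll(sff.dataPlaneLocators);
        return bridge;
    }

    // OVS -> SFF: the SFF mirrors the bridge, one DataPlane locator per port.
    static Sff toSff(OvsBridge bridge) {
        Sff sff = new Sff();
        sff.name = bridge.name;
        sff.dataPlaneLocators.addAll(bridge.terminationPoints);
        return sff;
    }
}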
SFC Southbound REST Plug-in¶
The Southbound REST Plug-in is used to send configuration from datastore down to network devices supporting a REST API (i.e. they have a configured REST URI). It supports POST/PUT/DELETE operations, which are triggered accordingly by changes in the SFC data stores.
Access Control List (ACL)
Service Classifier Function (SCF)
Service Function (SF)
Service Function Group (SFG)
Service Function Schedule Type (SFST)
Service Function Forwarder (SFF)
Rendered Service Path (RSP)
listeners - used to listen on changes in the SFC data stores
JSON exporters - used to export JSON-encoded data from binding-aware data store objects
tasks - used to collect REST URIs of network devices and to send JSON-encoded data down to these devices

Southbound REST Plug-in Architecture diagram¶
The plug-in provides a Southbound REST API designated for listening REST devices. It supports POST/PUT/DELETE operations. Each operation (with the corresponding JSON-encoded data) is sent to a unique REST URL belonging to a certain data type.
Access Control List (ACL):
http://<host>:<port>/config/ietf-acl:access-lists/access-list/
Service Function (SF):
http://<host>:<port>/config/service-function:service-functions/service-function/
Service Function Group (SFG):
http://<host>:<port>/config/service-function:service-function-groups/service-function-group/
Service Function Schedule Type (SFST):
http://<host>:<port>/config/service-function-scheduler-type:service-function-scheduler-types/service-function-scheduler-type/
Service Function Forwarder (SFF):
http://<host>:<port>/config/service-function-forwarder:service-function-forwarders/service-function-forwarder/
Rendered Service Path (RSP):
http://<host>:<port>/operational/rendered-service-path:rendered-service-paths/rendered-service-path/
Therefore, network devices willing to receive REST messages must listen on these REST URLs.
Note
A Service Classifier Function (SCF) URL does not exist, because the SCF is considered one of the network devices willing to receive REST messages. However, there is a listener hooked on the SCF data store which triggers POST/PUT/DELETE operations on the ACL object, because the ACL is referenced in service-function-classifier.yang.
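For illustration, the following minimal sketch shows the kind of request the plug-in issues: a JSON-encoded Service Function PUT to one of the device URLs listed above. The host name and payload here are hypothetical, not part of the plug-in itself.
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class SbRestPutExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical listening device; real devices expose their own configured REST URIs.
        URL url = new URL("http://device.example.org:8080/config/service-function:service-functions/service-function/");
        String json = "{\"service-function\":[{\"name\":\"firewall-1\"}]}"; // illustrative payload

        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setRequestMethod("PUT");
        connection.setRequestProperty("Content-Type", "application/json");
        connection.setDoOutput(true);
        try (OutputStream body = connection.getOutputStream()) {
            body.write(json.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("Device responded with HTTP " + connection.getResponseCode());
    }
}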
Service Function Load Balancing Developer Guide¶
SFC Load-Balancing feature implements load balancing of Service Functions, rather than a one-to-one mapping between Service Function Forwarder and Service Function.
Service Function Groups (SFG) can replace Service Functions (SF) in the Rendered Path model. A Service Path can only be defined using SFGs or SFs, but not a combination of both.
Relevant objects in the YANG model are as follows:
Service-Function-Group-Algorithm:
Service-Function-Group-Algorithms {
    Service-Function-Group-Algorithm {
        String name
        String type
    }
}
Available types: ALL, SELECT, INDIRECT, FAST_FAILURE
Service-Function-Group:
Service-Function-Groups {
    Service-Function-Group {
        String name
        String serviceFunctionGroupAlgorithmName
        String type
        String groupId
        Service-Function-Group-Element {
            String service-function-name
            int index
        }
    }
}
ServiceFunctionHop: holds a reference to the name of an SFG (or SF)
This feature enhances the existing SFC API.
REST API commands include:
For Service Function Group (SFG): read existing SFG, write new SFG, delete existing SFG, add Service Function (SF) to SFG, and delete SF from SFG
For Service Function Group Algorithm (SFG-Alg): read, write, delete
Bundle providing the REST API: sfc-sb-rest
Service Function Groups and Algorithms are defined in: sfc-sfg and sfc-sfg-alg
Relevant Java API: SfcProviderServiceFunctionGroupAPI, SfcProviderServiceFunctionGroupAlgAPI
Service Function Scheduling Algorithms¶
When creating the Rendered Service Path (RSP), earlier releases of SFC chose the first available service function from a list of service function names. Now a new API is introduced to allow developers to develop their own scheduling algorithms when creating the RSP. Four scheduling algorithms (Random, Round Robin, Load Balance and Shortest Path) are provided as examples for the API definition. This guide gives a simple introduction to developing service function scheduling algorithms based on the current extensible framework.
The following figure illustrates the service function selection framework and algorithms.

SF Scheduling Algorithm framework Architecture¶
The YANG Model defines the Service Function Scheduling Algorithm type identities and how they are stored in the MD-SAL data store for the scheduling algorithms.
The MD-SAL data store stores all information for the scheduling algorithms, including their types, names, and status.
The API provides basic operations to manage the information stored in the MD-SAL data store, such as putting new items into it, getting all scheduling algorithms, etc.
The RESTCONF API provides APIs to manage the information stored in the MD-SAL data store through RESTful calls.
The Service Function Chain Renderer gets the enabled scheduling algorithm type and schedules the service functions with the scheduling algorithm implementation.
When developing a new Service Function Scheduling Algorithm, a new class should be added that extends the base scheduler class SfcServiceFunctionSchedulerAPI and implements the abstract method:
public List<String> scheduleServiceFuntions(ServiceFunctionChain chain, int serviceIndex)
ServiceFunctionChain chain: the chain which will be rendered
int serviceIndex: the initial service index for this rendered service path
List<String>: a list of service function names scheduled by the Service Function Scheduling Algorithm
Please refer to the API docs generated in mdsal-apidocs.
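To make the contract above concrete, here is a minimal, self-contained sketch of a round-robin selection that mirrors the abstract method's intent. It deliberately does not use the real SFC classes (whose accessors are not reproduced here); the input shape and names are assumptions for illustration only.
import java.util.ArrayList;
import java.util.List;

public class RoundRobinSchedulerSketch {

    private int next = 0;

    // Mirrors the contract of scheduleServiceFuntions(chain, serviceIndex):
    // given the candidate SF names available for each hop of the chain,
    // return one scheduled SF name per hop.
    public List<String> schedule(List<List<String>> candidatesPerHop) {
        List<String> scheduled = new ArrayList<>();
        for (List<String> candidates : candidatesPerHop) {
            scheduled.add(candidates.get(next++ % candidates.size()));
        }
        return scheduled;
    }
}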
SFC Proof of Transit Developer Guide¶
SFC Proof of Transit implements the in-situ OAM (iOAM) Proof of Transit verification for SFCs and other paths. The implementation is broadly divided into the North-bound (NB) and the South-bound (SB) side of the application. The NB side is primarily charged with augmenting the RSP with user inputs for enabling PoT on the RSP, while the SB side is dedicated to auto-generating the SFC PoT parameters, periodically refreshing them, and delivering them to the NETCONF- and iOAM-capable nodes (e.g. VPP instances).
The following diagram gives the high level overview of the different parts.

SFC Proof of Transit Internal Architecture¶
The Proof of Transit feature is enabled by two sub-features:
ODL SFC PoT:
feature:install odl-sfc-pot
ODL SFC PoT NETCONF Renderer:
feature:install odl-sfc-pot-netconf-renderer
The following classes and handlers are involved.
The SfcPotRpc class sets up RPC handlers for enabling the feature.
There are handlers for two new RPCs (EnableSfcIoamPotRenderedPath and DisableSfcIoamPotRenderedPath), effected via the SfcPotRspProcessor class.
When a user enables Proof of Transit on a particular SFC (via the Rendered Service Path) through a POST RPC call, the configuration drives the creation of the necessary augmentations to the RSP (modifying the RSP) to effect the Proof of Transit configuration.
The augmentation meta-data added to the RSP are defined in the sfc-ioam-nb-pot.yang file.
Note
No auto-generated configuration parameters are added to the RSP, to avoid RSP bloat.
Adding SFC Proof of Transit meta-data to the RSP is done in the SfcPotRspProcessor class.
Once the RSP is updated, the RSP data listeners in the SB renderer modules (odl-sfc-pot-netconf-renderer) will listen to the RSP changes and send out configurations to the necessary network nodes that are part of the SFC.
The configurations are handled mainly in the SfcPotAPI, SfcPotConfigGenerator, SfcPotPolyAPI, SfcPotPolyClass and SfcPotPolyClassAPI classes.
There is a sfc-ioam-sb-pot.yang file that shows the format of the iOAM PoT configuration data sent to each node of the SFC.
A timer is started based on the “ioam-pot-refresh-period” value in the SB renderer module that handles configuration refresh periodically.
The SB and timer handling are done in the odl-sfc-pot-netconf-renderer module. Note: This is NOT done in the NB odl-sfc-pot module to avoid periodic updates to the RSP itself.
ODL creates a new profile of a set of keys and secrets at a constant rate and updates an internal data store with the configuration. The controller labels the configurations per RSP as “even” or “odd”, and cycles between the “even” and “odd” labeled profiles. The rate at which these profiles are communicated to the nodes is configurable and, in the future, could be automatic based on profile usage. Once a profile has been successfully communicated to all nodes (all NETCONF transactions completed), the controller sends an “enable pot-profile” request to the ingress node.
The nodes are required to maintain two profiles (an even and an odd pot-profile): one profile is currently active and in use, and the other is about to be used. A flag in the packet indicates whether the odd or even pot-profile is to be used by a node. This ensures that the service is not disrupted during a profile change. That is, if the “odd” profile is active, the controller can communicate the “even” profile to all nodes, and only once all the nodes have received it will the controller tell the ingress node to switch to the “even” profile. Given that the indicator travels within the packet, all nodes will switch to the “even” profile. The “even” profile then becomes active on all nodes, and the nodes are ready to receive a new “odd” profile.
A HashedWheelTimer implementation is used to support the periodic configuration refresh. The default refresh period is 5 seconds.
Depending on which profile was last updated, the odd or the even profile is updated on each timer pop and the configurations are sent down appropriately.
SfcPotTimerQueue, SfcPotTimerWheel, SfcPotTimerTask, SfcPotTimerData and SfcPotTimerThread are the classes that handle the Proof of Transit protocol profile refresh implementation.
The RSP data store is NOT changed periodically; the timer and configuration refresh modules are part of the SB renderer module handler, so there are no scale or RSP churn issues affecting the design.
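As a minimal sketch of the refresh cycle, the following shows how an even/odd alternation could be driven by Netty's HashedWheelTimer, with the profile push reduced to a placeholder. Class and method names here are illustrative, not the renderer's actual code.
import io.netty.util.HashedWheelTimer;
import io.netty.util.Timeout;
import io.netty.util.Timer;
import java.util.concurrent.TimeUnit;

public class PotProfileRefreshSketch {

    private static final long REFRESH_PERIOD_SECONDS = 5; // mirrors the default refresh period
    private final Timer wheel = new HashedWheelTimer();
    private boolean evenActive = true; // which profile is currently active

    public void start() {
        wheel.newTimeout(this::refresh, REFRESH_PERIOD_SECONDS, TimeUnit.SECONDS);
    }

    private void refresh(Timeout timeout) {
        // Regenerate the currently inactive profile and push it to the nodes
        // (placeholder for the NETCONF configuration step).
        String updated = evenActive ? "odd" : "even";
        System.out.println("pushing fresh " + updated + " pot-profile to SFC nodes");
        evenActive = !evenActive; // the ingress node is then told to switch
        // Re-arm the timer for the next refresh cycle.
        wheel.newTimeout(this::refresh, REFRESH_PERIOD_SECONDS, TimeUnit.SECONDS);
    }
}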
The following diagram gives the overall sequence diagram of the interactions between the different classes.

SFC Proof of Transit Sequence Diagram¶
Logical Service Function Forwarder¶
When the current SFC is deployed in a cloud environment, it is assumed that each switch connected to a Service Function is configured as a Service Function Forwarder and that each Service Function is connected to its Service Function Forwarder depending on the compute node where the virtual machine is located. This solution fulfills the basic cloud use cases, for example those required in OPNFV Brahmaputra; however, some advanced use cases, like the transparent migration of VMs, cannot be implemented. The Logical Service Function Forwarder enables the following advanced use cases:
Service Function mobility without service disruption
Service Functions load balancing and failover
As shown in the picture below, the Logical Service Function Forwarder concept extends the current SFC northbound API to provide an abstraction of the underlying Data Center infrastructure. The Data Center underlying network can be abstracted by a single SFF. This single SFF uses the logical port UUID as the data plane locator to connect SFs globally and in a location-transparent manner. SFC makes use of the Genius project to track the location of the SF’s logical ports.

The SFC internally distributes the necessary flow state over the relevant switches based on the internal Data Center topology and the deployment of SFs.
The Logical SFF simplifies the configuration of the current SFC data model by reducing the number of parameters to be configured in every SFF, since the controller discovers those parameters by interacting with the services offered by the Genius project.
The following picture shows the Logical SFF data model. The model is simplified, as most of the configuration parameters of the current SFC data model are discovered at runtime. The complete YANG model can be found here: logical SFF model.

There are other minor changes in the data model; the SFC encapsulation type has been added or moved in several model files.
The sfc-genius feature functionally enables SFC integration with Genius. It allows configuring a Logical SFF, with SFs attached to this Logical SFF via logical interfaces (i.e. Neutron ports) that are registered with Genius.
As shown in the following picture, SFC will interact with Genius project’s services to provide the Logical SFF functionality.

The following are the main Genius services used by SFC:
Interaction with Interface Tunnel Manager (ITM)
Interaction with the Interface Manager
Interaction with Resource Manager
Genius handles the coexistence of different network services. As such, the SFC service is registered with Genius, performing the following actions:
- SFC Service Binding
As soon as a Service Function associated to the Logical SFF is involved in a Rendered Service Path, SFC service is bound to its logical interface via Genius Interface Manager. This has the effect of forwarding every incoming packet from the Service Function to the SFC pipeline of the attached switch, as long as it is not consumed by a different bound service with higher priority.
- SFC Service Terminating Action
As soon as SFC service is bound to the interface of a Service Function for the first time on a specific switch, a terminating service action is configured on that switch via Genius Interface Tunnel Manager. This has the effect of forwarding every incoming packet from a different switch to the SFC pipeline as long as the traffic is VXLAN encapsulated on VNI 0.
The following sequence diagrams depict how the overall process takes place:

SFC genius module interaction with Genius at RSP creation.¶

SFC genius module interaction with Genius at RSP removal.¶
For more information on how Genius allows different services to coexist, see the Genius User Guide.
During path rendering, Genius is queried to obtain needed information, such as:
Location of a logical interface on the data-plane.
Tunnel interface for a specific pair of source and destination switches.
Egress OpenFlow actions to output packets to a specific interface.
See RSP Rendering section for more information.
Upon VM migration, it’s logical interface is first unregistered and then registered with Genius, possibly at a new physical location. sfc-genius reacts to this by re-rendering all the RSPs on which the associated SF participates, if any.
The following picture illustrates the process:

SFC genius module at VM migration.¶
Construction of the auxiliary rendering graph
When starting the rendering of a RSP, the SFC renderer builds an auxiliary graph with information about the required hops for traffic traversing the path. RSP processing is achieved by iteratively evaluating each of the entries in the graph, writing the required flows in the proper switch for each hop.
It is important to note that the graph includes both traffic ingress (i.e. traffic entering into the first SF) and traffic egress (i.e. traffic leaving the chain from the last SF) as hops. Therefore, the number of entries in the graph equals the number of SFs in the chain plus one.
The process of rendering a chain when the switches involved are part of the Logical SFF also starts with the construction of the hop graph. The difference is that when the SFs used in the chain are using a logical interface, the SFC renderer will also retrieve from Genius the DPIDs for the switches, storing them in the graph. In this context, those switches are the ones in the compute nodes each SF is hosted on at the time the chain is rendered.
New transport processor
Transport processors are classes which calculate and write the correct flows for a chain. Each transport processor specializes on writing the flows for a given combination of transport type and SFC encapsulation.
A specific transport processor has been created for paths using a Logical SFF. A particularity of this transport processor is that its use is determined not only by the transport / SFC encapsulation combination, but also by whether the chain uses a Logical SFF. The actual condition evaluated for selecting the Logical SFF transport processor is that the SFs in the chain use logical interface locators, and that the DPIDs for those locators can be successfully retrieved from Genius.
The main differences between the Logical SFF transport processor and other processors are the following:
Instead of the srcSff and dstSff fields in the hops graph (which are all equal in a path using a Logical SFF), the Logical SFF transport processor uses the previously stored srcDpnId and dstDpnId fields in order to know whether an actual hop between compute nodes must be performed or not (it is possible that two consecutive SFs are collocated in the same compute node); see the sketch after this list.
When a hop between switches really has to be performed, it relies on Genius to obtain the actions to perform that hop. The retrieval of those actions involves two steps:
First, Genius’ Overlay Tunnel Manager module is used in order to retrieve the target interface for a jump between the source and the destination DPIDs.
Then, egress instructions for that interface are retrieved from Genius’s Interface Manager.
There are no next hop rules between compute nodes, only egress instructions (the transport zone tunnels have all the required routing information).
Next-hop information towards SFs uses MAC addresses, which are also retrieved from the Genius datastore.
The Logical SFF transport processor performs NSH decapsulation in the last switch of the chain.
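The sketch below shows an illustrative shape for one entry of the auxiliary hops graph described above. The field names follow the prose, but the actual SFC renderer classes differ; this is an assumption-labeled illustration, not the real data structure.
import java.math.BigInteger;

public class LogicalSffHopSketch {
    String sfName;          // SF handled at this hop (the egress hop has none)
    BigInteger srcDpnId;    // DPID of the source compute node, retrieved from Genius
    BigInteger dstDpnId;    // DPID of the destination compute node, retrieved from Genius

    boolean needsInterNodeHop() {
        // Transport flows are only written when two consecutive SFs are hosted
        // on different compute nodes.
        return srcDpnId != null && !srcDpnId.equals(dstDpnId);
    }
}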
Post-rendering update of the operational data model
When the rendering of a chain finishes successfully, the Logical SFF transport processor performs two operational datastore modifications in order to provide relevant runtime information about the chain. The exposed information is the following:
Rendered Service Path state: when the chain uses a Logical SFF, DPIDs for the switches in the compute nodes on which the SFs participating in the chain are hosted are added to the hop information.
SFF state: a new list of all RSPs which use each DPID has been added. It is updated on each RSP addition / deletion.
This section explains the changes made to the SFC classifier, enabling it to be attached to Logical SFFs.
Refer to the following image to better understand the concept, and the required steps to implement the feature.

SFC classifier integration with Genius.¶
As stated in the SFC User Guide, the classifier needs to be provisioned using logical interfaces as attachment points.
When that happens, MDSAL will trigger an event in the odl-sfc-scf-openflow feature (i.e. the sfc-classifier), which is responsible for installing the classifier flows in the classifier switches.
The first step of the process is to bind the interfaces to classify in Genius, in order for the desired traffic (originating from the VMs having the provisioned attachment points) to enter the SFC pipeline. This makes traffic reach table 82 (the SFC classifier table), coming from table 0 (the table managed by Genius, shared by all applications).
The next step is deciding which flows to install in the SFC classifier table. A table-miss flow is installed with a MatchAny clause, whose action is to jump to Genius’s egress dispatcher table. This enables traffic intended for other applications to still be processed.
The flow that allows the SFC pipeline to continue is added next, having a higher match priority than the table-miss flow. This flow has two responsibilities:
Push the NSH header, along with its metadata (required within the SFC pipeline)
The flow features the specified ACL matches as match criteria, and pushes NSH along with its metadata into the Action list.
Advance the SFC pipeline
The flow forwards the traffic to the first Service Function in the RSP. This steers packets into the SFC domain; how it is done depends on whether the classifier is co-located with the first Service Function of the specified RSP.
Should the classifier be co-located (i.e. in the same compute node), a new instruction is appended to the flow, telling all matches to jump to the transport ingress table.
If not, Genius’s tunnel manager service is queried to get the tunnel interface connecting the classifier node with the compute node where the first Service Function is located, and finally, Genius’s interface manager service is queried asking for instructions on how to reach that tunnel interface.
These actions are then appended to the Action list already containing push NSH and push NSH metadata Actions, and written in an Apply-Actions Instruction into the datastore.
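The following minimal sketch summarizes the co-location decision just described. It is a schematic, not OpenFlowPlugin code: the table id, action strings, and the Genius lookups are placeholders standing in for the real tunnel manager and interface manager queries.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ClassifierSteeringSketch {

    // Stand-ins for the Genius tunnel manager / interface manager queries.
    static String getTunnelInterface(long srcDpn, long dstDpn) {
        return "tun-" + srcDpn + "-" + dstDpn;
    }

    static List<String> getEgressActions(String tunnelInterface) {
        return Arrays.asList("output:" + tunnelInterface);
    }

    // Builds the action list for the classifier flow described above.
    static List<String> steeringActions(long classifierDpn, long firstSfDpn) {
        // Every classified packet first gets NSH and its metadata pushed.
        List<String> actions = new ArrayList<>(Arrays.asList("push_nsh", "set_nsh_metadata"));
        if (classifierDpn == firstSfDpn) {
            // Co-located: jump straight to the local transport ingress table.
            actions.add("goto_table:TRANSPORT_INGRESS");
        } else {
            // Remote: egress through the tunnel towards the first SF's compute node.
            actions.addAll(getEgressActions(getTunnelInterface(classifierDpn, firstSfDpn)));
        }
        return actions;
    }
}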
YANG Tools Developer Guide¶
Overview¶
YANG Tools is a set of libraries and tooling providing support for the use of YANG in Java (or other JVM-based language) projects and applications.
YANG Tools provides the following features in OpenDaylight:
parsing of YANG sources and semantic inference of relationships across YANG models as defined in RFC6020
representation of YANG-modeled data in Java
Normalized Node representation - a DOM-like tree model which uses a conceptual meta-model more tailored to YANG and OpenDaylight use cases than a standard XML DOM model allows for.
serialization / deserialization of YANG-modeled data driven by YANG models
XML - as defined in RFC6020
JSON - as defined in draft-lhotka-netmod-yang-json-01
support for third-party generators processing YANG models.
The YANG Tools project consists of the following logical subsystems:
Commons - a set of general-purpose code which is not specific to YANG but is also useful outside the YANG Tools implementation.
YANG Model and Parser - the YANG semantic model and a lexical and semantic parser of YANG models, which creates an in-memory cross-referenced representation of YANG models that is used by other components to determine their behaviour based on the model.
YANG Data - the definition of the Normalized Node APIs and Data Tree APIs, reference implementations of these APIs, and implementations of XML and JSON codecs for Normalized Nodes.
YANG Maven Plugin - a Maven plugin which integrates the YANG parser into the Maven build lifecycle and provides a code-generation framework for components that want to generate code or other artefacts based on a YANG model.
The project defines base concepts and helper classes which are project-agnostic and could be used outside of the YANG Tools project scope.
yang-common
yang-data-api
yang-data-codec-gson
yang-data-codec-xml
yang-data-impl
yang-data-jaxen
yang-data-transform
yang-data-util
yang-maven-plugin
yang-maven-plugin-it
yang-maven-plugin-spi
yang-model-api
yang-model-export
yang-model-util
yang-parser-api
yang-parser-impl
The YANG Statement Parser works on the idea of statement concepts as defined in RFC6020, section 6.3. The basic abstractions are ModelStatement and StatementDefinition, following the RFC6020 idea of having a sequence of statements, where every statement contains a keyword and zero or one argument. ModelStatement is extended by DeclaredStatement (as it comes from the source, e.g. a YANG source) and EffectiveStatement, which contains other substatements and represents the result of semantic processing of other statements (uses, augment for YANG). IdentifierNamespace represents the common superclass for YANG model namespaces.
The input of the YANG Statement Parser is a collection of StatementStreamSource objects. The StatementStreamSource interface is used for inference of the effective model and is required to emit its statements using the supplied StatementWriter. Each source (e.g. a YANG source) has to be processed in three steps in order to emit different statements in each step. This package provides support for the various namespaces used across the statement parser in order to map relations during the declaration phase.
Currently, there are two implementations of StatementStreamSource in YANG Tools:
YangStatementSourceImpl - intended for YANG sources
YinStatementSourceImpl - intended for YIN sources
Codecs which enable serialization of NormalizedNodes into YANG-modeled data in XML or JSON format and deserialization of YANG-modeled data in XML or JSON format into NormalizedNodes.
The YANG Maven Plugin integrates the YANG parser into the Maven build lifecycle and provides a code-generation framework for components that want to generate code or other artefacts based on a YANG model.
How to / Tutorials¶
The first thing you need to do if you want to work with YANG models is to instantiate a SchemaContext object. This object type describes one or more parsed YANG modules.
In order to create it, you need to utilize the YANG statement parser, which takes one or more StatementStreamSource objects as input and produces the SchemaContext object.
StatementStreamSource object contains the source file information. It has two implementations, one for YANG sources - YangStatementSourceImpl, and one for YIN sources - YinStatementSourceImpl.
Here is an example of creating StatementStreamSource objects for YANG files, providing them to the YANG statement parser and building the SchemaContext:
StatementStreamSource yangModuleSource = new YangStatementSourceImpl("/example.yang", false);
StatementStreamSource yangModuleSource2 = new YangStatementSourceImpl("/example2.yang", false);
CrossSourceStatementReactor.BuildAction reactor = YangInferencePipeline.RFC6020_REACTOR.newBuild();
reactor.addSources(yangModuleSource, yangModuleSource2);
SchemaContext schemaContext = reactor.buildEffective();
First, StatementStreamSource objects should be instantiated with two constructor arguments: the path to the YANG source file (a regular String object) and a boolean which determines whether the path is absolute or relative.
Next comes the initiation of a new YANG parsing cycle, represented by a CrossSourceStatementReactor.BuildAction object. You can obtain it by calling the method newBuild() on the CrossSourceStatementReactor object (RFC6020_REACTOR) in the YangInferencePipeline class.
Then you feed YANG sources to it by calling the method addSources(), which takes one or more StatementStreamSource objects as arguments.
Finally, you call the method buildEffective() on the reactor object, which returns an EffectiveSchemaContext (a concrete implementation of SchemaContext). Now you are ready to work with the contents of the added YANG sources.
Let us explain how to work with models contained in the newly created SchemaContext. If you want to get all the modules in the schemaContext, call the method getModules(), which returns a Set of modules. If you want to get all the data definitions in the schemaContext, call the method getDataDefinitions(), etc.
Set<Module> modules = schemaContext.getModules();
Set<DataSchemaNode> dataSchemaNodes = schemaContext.getDataDefinitions();
Usually you want to access specific modules. Getting a concrete module from SchemaContext is a matter of calling one of these methods:
findModuleByName(),
findModuleByNamespace(),
findModuleByNamespaceAndRevision().
In the first case, you need to provide the module name as it is defined in the YANG source file, and the module revision date if it is specified in the YANG source file (if it is not defined, you can just pass a null value). In order to provide the revision date in the proper format, you can use a utility class named SimpleDateFormatUtil.
Module exampleModule = schemaContext.findModuleByName("example-module", null);
// or
Date revisionDate = SimpleDateFormatUtil.getRevisionFormat().parse("2015-09-02");
Module exampleModule = schemaContext.findModuleByName("example-module", revisionDate);
In the second case, you have to provide the module namespace in the form of a URI object.
Module exampleModule = schema.findModuleByNamespace(new URI("opendaylight.org/example-module"));
In the third case, you provide both module namespace and revision date as arguments.
Once you have a Module object, you can access its contents as they are defined in YANG Model API. One way to do this is to use method like getIdentities() or getRpcs() which will give you a Set of objects. Otherwise you can access a DataSchemaNode directly via the method getDataChildByName() which takes a QName object as its only argument. Here are a few examples.
Set<AugmentationSchema> augmentationSchemas = exampleModule.getAugmentations();
Set<ModuleImport> moduleImports = exampleModule.getImports();
ChoiceSchemaNode choiceSchemaNode = (ChoiceSchemaNode) exampleModule.getDataChildByName(QName.create(exampleModule.getQNameModule(), "example-choice"));
ContainerSchemaNode containerSchemaNode = (ContainerSchemaNode) exampleModule.getDataChildByName(QName.create(exampleModule.getQNameModule(), "example-container"));
The YANG statement parser can work in three modes:
default mode
mode with active resolution of if-feature statements
mode with active semantic version processing
The default mode is active when you initialize the parsing cycle as usual, by calling the method newBuild() without passing any arguments to it. The second and third modes can be activated by invoking newBuild() with a special argument. You can activate just one of them, or both, by passing the proper arguments. Let us explain how these modes work.
Mode with active resolution of if-features makes YANG statements containing an if-feature statement conditional based on the supported features. These features are provided in the form of a QName-based java.util.Set object. In the example below, only two features are supported: example-feature-1 and example-feature-2. The Set which contains this information is passed to the method newBuild() and the mode is activated.
Set<QName> supportedFeatures = ImmutableSet.of(
QName.create("example-namespace", "2016-08-31", "example-feature-1"),
QName.create("example-namespace", "2016-08-31", "example-feature-2"));
CrossSourceStatementReactor.BuildAction reactor = YangInferencePipeline.RFC6020_REACTOR.newBuild(supportedFeatures);
In case no features should be supported, you should provide an empty Set<QName> object.
Set<QName> supportedFeatures = ImmutableSet.of();
CrossSourceStatementReactor.BuildAction reactor = YangInferencePipeline.RFC6020_REACTOR.newBuild(supportedFeatures);
When this mode is not activated, all features in the processed YANG sources are supported.
Mode with active semantic version processing changes the way YANG import statements work: each module import is processed based on the specified semantic version statement, and the revision-date statement is ignored. In order to activate this mode, you have to provide the StatementParserMode.SEMVER_MODE enum constant as an argument to the method newBuild().
CrossSourceStatementReactor.BuildAction reactor = YangInferencePipeline.RFC6020_REACTOR.newBuild(StatementParserMode.SEMVER_MODE);
Before you use a semantic version statement in a YANG module, you need to define an extension for it so that the YANG statement parser can recognize it.
module semantic-version {
    namespace "urn:opendaylight:yang:extension:semantic-version";
    prefix sv;
    yang-version 1;

    revision 2016-02-02 {
        description "Initial version";
    }
    sv:semantic-version "0.0.1";

    extension semantic-version {
        argument "semantic-version" {
            yin-element false;
        }
    }
}
In the example above, you see a YANG module which defines semantic version as an extension. This extension can be imported to other modules in which we want to utilize the semantic versioning concept.
Below is a simple example of the semantic versioning usage. With semantic version processing mode being active, the foo module imports the bar module based on its semantic version. Notice how both modules import the module with the semantic-version extension.
module foo {
    namespace foo;
    prefix foo;
    yang-version 1;

    import semantic-version { prefix sv; revision-date 2016-02-02; sv:semantic-version "0.0.1"; }
    import bar { prefix bar; sv:semantic-version "0.1.2"; }

    revision "2016-02-01" {
        description "Initial version";
    }
    sv:semantic-version "0.1.1";
    ...
}

module bar {
    namespace bar;
    prefix bar;
    yang-version 1;

    import semantic-version { prefix sv; revision-date 2016-02-02; sv:semantic-version "0.0.1"; }

    revision "2016-01-01" {
        description "Initial version";
    }
    sv:semantic-version "0.1.2";
    ...
}
Every semantic version must have the following form: x.y.z. The x corresponds to a major version, the y corresponds to a minor version and the z corresponds to a patch version. If no semantic version is specified in a module or an import statement, then the default one is used - 0.0.0.
A major version number of 0 indicates that the model is still in development and is subject to change.
Following a release of major version 1, all modules will increment major version number when backwards incompatible changes to the model are made.
The minor version is changed when features are added to the model that do not impact current clients use of the model.
The patch version is incremented when non-feature changes (such as bugfixes or clarifications of human-readable descriptions that do not impact model functionality) are made that maintain backwards compatibility.
When importing a module with activated semantic version processing mode, only the module with the newest (highest) compatible semantic version is imported. Two semantic versions are compatible when all of the following conditions are met:
the major version in the import statement and major version in the imported module are equal. For instance, 1.5.3 is compatible with 1.5.3, 1.5.4, 1.7.2, etc., but it is not compatible with 0.5.2 or 2.4.8, etc.
the combination of minor version and patch version in the import statement is not higher than the one in the imported module. For instance, 1.5.2 is compatible with 1.5.2, 1.5.4, 1.6.8, etc. In fact, 1.5.2 shares an equal major version with versions like 1.5.1, 1.4.9 or 1.3.7; however, those will not be imported because their minor and patch versions are lower (older).
If the import statement does not specify a semantic version, then the default one is chosen - 0.0.0. Thus, the module is imported only if it has a semantic version compatible with the default one, for example 0.0.0, 0.1.3, 0.3.5 and so on.
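The compatibility rule above can be captured in a few lines. The following is a minimal sketch, not the parser's actual implementation; versions are represented here as {major, minor, patch} arrays, and "compatible but older" versions are treated as not importable, matching the selection behavior described above.
public class SemVerCompatSketch {

    static boolean isCompatible(int[] imported, int[] module) {
        if (imported[0] != module[0]) {
            return false; // major versions must be equal
        }
        // The import's minor.patch must not be higher than the module's.
        if (imported[1] != module[1]) {
            return imported[1] < module[1];
        }
        return imported[2] <= module[2];
    }

    public static void main(String[] args) {
        System.out.println(isCompatible(new int[]{1, 5, 3}, new int[]{1, 7, 2})); // true
        System.out.println(isCompatible(new int[]{1, 5, 3}, new int[]{2, 4, 8})); // false: major differs
        System.out.println(isCompatible(new int[]{1, 5, 2}, new int[]{1, 4, 9})); // false: module is older
    }
}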
If you want to work with YANG Data, you are going to need NormalizedNode objects, which are specified in the YANG Data API. NormalizedNode is an interface at the top of the YANG Data hierarchy. It is extended through sub-interfaces which define the behaviour of specific NormalizedNode types like AnyXmlNode, ChoiceNode, LeafNode, ContainerNode, etc. Concrete implementations of these interfaces are defined in the yang-data-impl module. Once you have one or more NormalizedNode instances, you can perform CRUD operations on the YANG data tree, which is an in-memory database designed to store normalized nodes in a tree-like structure.
In some cases it is clear which NormalizedNode type belongs to which YANG statement (e.g. AnyXmlNode, ChoiceNode, LeafNode). However, there are some normalized nodes which are named differently from their YANG counterparts. They are listed below:
LeafSetNode - leaf-list
OrderedLeafSetNode - leaf-list that is ordered-by user
LeafSetEntryNode - concrete entry in a leaf-list
MapNode - keyed list
OrderedMapNode - keyed list that is ordered-by user
MapEntryNode - concrete entry in a keyed list
UnkeyedListNode - unkeyed list
UnkeyedListEntryNode - concrete entry in an unkeyed list
In order to create a concrete NormalizedNode object you can use the utility class Builders or ImmutableNodes. These classes can be found in yang-data-impl module and they provide methods for building each type of normalized node. Here is a simple example of building a normalized node:
// example 1
ContainerNode containerNode = Builders.containerBuilder().withNodeIdentifier(new YangInstanceIdentifier.NodeIdentifier(QName.create(moduleQName, "example-container"))).build();
// example 2
ContainerNode containerNode2 = Builders.containerBuilder(containerSchemaNode).build();
Both examples produce the same result. NodeIdentifier is one of the four types of YangInstanceIdentifier (these types are described in the javadoc of YangInstanceIdentifier). The purpose of YangInstanceIdentifier is to uniquely identify a particular node in the data tree. In the first example, you have to add NodeIdentifier before building the resulting node. In the second example it is also added using the provided ContainerSchemaNode object.
ImmutableNodes class offers similar builder methods and also adds an overloaded method called fromInstanceId() which allows you to create a NormalizedNode object based on YangInstanceIdentifier and SchemaContext. Below is an example which shows the use of this method.
YangInstanceIdentifier.NodeIdentifier contId = new YangInstanceIdentifier.NodeIdentifier(QName.create(moduleQName, "example-container"));
NormalizedNode<?, ?> contNode = ImmutableNodes.fromInstanceId(schemaContext, YangInstanceIdentifier.create(contId));
Let us show a more complex example of creating a NormalizedNode. First, consider the following YANG module:
module example-module {
    namespace "opendaylight.org/example-module";
    prefix "example";

    container parent-container {
        container child-container {
            list parent-ordered-list {
                ordered-by user;
                key "parent-key-leaf";

                leaf parent-key-leaf {
                    type string;
                }
                leaf parent-ordinary-leaf {
                    type string;
                }

                list child-ordered-list {
                    ordered-by user;
                    key "child-key-leaf";

                    leaf child-key-leaf {
                        type string;
                    }
                    leaf child-ordinary-leaf {
                        type string;
                    }
                }
            }
        }
    }
}
In the following example, two normalized nodes based on the module above are written to and read from the data tree.
TipProducingDataTree inMemoryDataTree = InMemoryDataTreeFactory.getInstance().create(TreeType.OPERATIONAL);
inMemoryDataTree.setSchemaContext(schemaContext);

// first data tree modification
MapEntryNode parentOrderedListEntryNode = Builders.mapEntryBuilder().withNodeIdentifier(
    new YangInstanceIdentifier.NodeIdentifierWithPredicates(
        parentOrderedListQName, parentKeyLeafQName, "pkval1"))
    .withChild(Builders.leafBuilder().withNodeIdentifier(
        new YangInstanceIdentifier.NodeIdentifier(parentOrdinaryLeafQName))
        .withValue("plfval1").build()).build();

OrderedMapNode parentOrderedListNode = Builders.orderedMapBuilder().withNodeIdentifier(
    new YangInstanceIdentifier.NodeIdentifier(parentOrderedListQName))
    .withChild(parentOrderedListEntryNode).build();

ContainerNode parentContainerNode = Builders.containerBuilder().withNodeIdentifier(
    new YangInstanceIdentifier.NodeIdentifier(parentContainerQName))
    .withChild(Builders.containerBuilder().withNodeIdentifier(
        new NodeIdentifier(childContainerQName)).withChild(parentOrderedListNode).build()).build();

YangInstanceIdentifier path1 = YangInstanceIdentifier.of(parentContainerQName);

DataTreeModification treeModification = inMemoryDataTree.takeSnapshot().newModification();
treeModification.write(path1, parentContainerNode);

// second data tree modification
MapEntryNode childOrderedListEntryNode = Builders.mapEntryBuilder().withNodeIdentifier(
    new YangInstanceIdentifier.NodeIdentifierWithPredicates(
        childOrderedListQName, childKeyLeafQName, "chkval1"))
    .withChild(Builders.leafBuilder().withNodeIdentifier(
        new YangInstanceIdentifier.NodeIdentifier(childOrdinaryLeafQName))
        .withValue("chlfval1").build()).build();

OrderedMapNode childOrderedListNode = Builders.orderedMapBuilder().withNodeIdentifier(
    new YangInstanceIdentifier.NodeIdentifier(childOrderedListQName))
    .withChild(childOrderedListEntryNode).build();

ImmutableMap.Builder<QName, Object> builder = ImmutableMap.builder();
ImmutableMap<QName, Object> keys = builder.put(parentKeyLeafQName, "pkval1").build();

YangInstanceIdentifier path2 = YangInstanceIdentifier.of(parentContainerQName).node(childContainerQName)
    .node(parentOrderedListQName).node(new NodeIdentifierWithPredicates(parentOrderedListQName, keys)).node(childOrderedListQName);

treeModification.write(path2, childOrderedListNode);
treeModification.ready();
inMemoryDataTree.validate(treeModification);
inMemoryDataTree.commit(inMemoryDataTree.prepare(treeModification));

DataTreeSnapshot snapshotAfterCommits = inMemoryDataTree.takeSnapshot();
Optional<NormalizedNode<?, ?>> readNode = snapshotAfterCommits.readNode(path1);
Optional<NormalizedNode<?, ?>> readNode2 = snapshotAfterCommits.readNode(path2);
First comes the creation of in-memory data tree instance. The schema context (containing the model mentioned above) of this tree is set. After that, two normalized nodes are built. The first one consists of a parent container, a child container and a parent ordered list which contains a key leaf and an ordinary leaf. The second normalized node is a child ordered list that also contains a key leaf and an ordinary leaf.
In order to add a child node to a node, method withChild() is used. It takes a NormalizedNode as argument. When creating a list entry, YangInstanceIdentifier.NodeIdentifierWithPredicates should be used as its identifier. Its arguments are the QName of the list, QName of the list key and the value of the key. Method withValue() specifies a value for the ordinary leaf in the list.
Before writing a node to the data tree, a path (YangInstanceIdentifier) which determines its place in the data tree needs to be defined. The path of the first normalized node starts at the parent container. The path of the second normalized node points to the child ordered list contained in the parent ordered list entry specified by the key value “pkval1”.
Write operation is performed with both normalized nodes mentioned earlier. It consist of several steps. The first step is to instantiate a DataTreeModification object based on a DataTreeSnapshot. DataTreeSnapshot gives you the current state of the data tree. Then comes the write operation which writes a normalized node at the provided path in the data tree. After doing both write operations, method ready() has to be called, marking the modification as ready for application to the data tree. No further operations within the modification are allowed. The modification is then validated - checked whether it can be applied to the data tree. Finally we commit it to the data tree.
Now you can access the written nodes. In order to do this, you have to create a new DataTreeSnapshot instance and call the method readNode() with path argument pointing to a particular node in the tree.
If you want to deserialize YANG-modeled data which have the form of an XML document, you can use the XML parser found in the module yang-data-codec-xml. The parser walks through the XML document containing YANG-modeled data based on the provided SchemaContext and emits node events into a NormalizedNodeStreamWriter. The parser disallows multiple instances of the same element except for leaf-list and list entries. The parser also expects that the YANG-modeled data in the XML source are wrapped in a root element. Otherwise it will not work correctly.
Here is an example of using the XML parser.
InputStream resourceAsStream = ExampleClass.class.getResourceAsStream("/example-module.xml");

XMLInputFactory factory = XMLInputFactory.newInstance();
XMLStreamReader reader = factory.createXMLStreamReader(resourceAsStream);

NormalizedNodeResult result = new NormalizedNodeResult();
NormalizedNodeStreamWriter streamWriter = ImmutableNormalizedNodeStreamWriter.from(result);

XmlParserStream xmlParser = XmlParserStream.create(streamWriter, schemaContext);
xmlParser.parse(reader);

NormalizedNode<?, ?> transformedInput = result.getResult();
The XML parser utilizes javax.xml.stream.XMLStreamReader to parse the XML document. First, you should create an instance of this reader using XMLInputFactory and then load the XML document (in the form of an InputStream object) into it.
In order to emit node events while parsing the data you need to instantiate a NormalizedNodeStreamWriter. This writer is actually an interface and therefore you need to use a concrete implementation of it. In this example it is the ImmutableNormalizedNodeStreamWriter, which constructs immutable instances of NormalizedNodes.
There are two ways to create an instance of this writer, using the static overloaded method from(). One version of this method takes a NormalizedNodeResult as an argument. This object type is a result holder in which the resulting NormalizedNode will be stored. The other version takes a NormalizedNodeContainerBuilder as an argument; all created nodes will be written to this builder.
The next step is to create an instance of the XML parser. The parser itself is represented by a class named XmlParserStream. You can use one of two versions of the static overloaded method create() to construct this object. One version accepts a NormalizedNodeStreamWriter and a SchemaContext as arguments; the other version takes the same arguments plus a SchemaNode. Node events are emitted to the writer. The SchemaContext is used to check whether the YANG data in the XML source comply with the provided YANG model(s). The last argument, a SchemaNode object, describes the node that is the parent of the nodes defined in the XML data. If you do not provide this argument, the parser sets the SchemaContext as the parent node.
The parser is now ready to walk through the XML. Parsing is initiated by calling the method parse() on the XmlParserStream object with XMLStreamReader as its argument.
Finally, you can access the result of the parsing (a tree of NormalizedNodes containing the data as they are defined in the parsed XML document) by calling the method getResult() on the NormalizedNodeResult object.