The OpenDaylight project is an open source platform for Software Defined Networking (SDN) that uses open protocols to provide centralized, programmatic control and network device monitoring. Like many other SDN controllers, OpenDaylight supports OpenFlow, as well as offering ready-to-install network solutions as part of its platform.
Much as your operating system provides an interface for the devices that comprise your computer, OpenDaylight provides an interface that allows you to connect network devices quickly and intelligently for optimal network performance.
It’s important to understand that setting up your networking environment with OpenDaylight is not a single software installation: your first step is to install OpenDaylight itself, and you then install additional functionality packaged as Karaf features to suit your specific needs.
Before walking you through the initial OpenDaylight installation, this guide presents a fuller picture of OpenDaylight’s framework and functionality so you understand how to set up your networking environment. The guide then takes you through the installation process.
Major distinctions of OpenDaylight’s SDN compared to traditional SDN options are the following:
Note
A thorough understanding of the microservices architecture is important for experienced network developers who want to create new solutions in OpenDaylight. If you are new to networking and OpenDaylight, you most likely won’t design solutions, but you should comprehend the microservices concept to understand how OpenDaylight works and how it differs from other SDN programs.
To set up your environment, you first install OpenDaylight followed by the Apache Karaf features that offer the functionality you require. The OpenDaylight Getting Started Guide covers feature descriptions, OpenDaylight installation procedures, and feature installation.
The Getting Started Guide also includes other helpful information, with the following organization:
OpenDaylight performs the following functions:
Common use cases for SDN are as follows:
OpenDaylight is for users considering open options in network programming. This guide provides information for the following types of users:
Note
If you develop code to build new functionality for OpenDaylight and push it upstream (not required), it can become part of the OpenDaylight release. Users can then install the features to implement the solution you’ve created.
In this section we discuss some of the concepts and tools you encounter with basic use of OpenDaylight. The guide walks you through the installation process in a subsequent section, but for now familiarize yourself with the information below.
To date, OpenDaylight developers have formed more than 50 projects to address ways to extend network functionality. The projects are a formal structure for developers from the community to meet, document release plans, code, and release the functionality they create in an OpenDaylight release.
The typical OpenDaylight user will not join a project team, but you should know what projects are as we refer to their activities and the functionality they create. The Karaf features to install that functionality often share the project team’s name.
Apache Karaf provides a lightweight runtime to install the Karaf features you want to implement and is included in the OpenDaylight platform software. By default, OpenDaylight has no pre-installed features.
After installing OpenDaylight, you install your selected features using the Karaf console to expand networking capabilities. In the Karaf feature list below are the ones you’re most likely to use when creating your network environment.
As a short example of installing a Karaf feature, OpenDaylight Beryllium offers Application Layer Traffic Optimization (ALTO). The Karaf feature to install ALTO is odl-alto-all. On the Karaf console, the command to install it is:
feature:install odl-alto-all
DLUX is a web-based interface that OpenDaylight provides for you to manage your network. Its Karaf feature installation name is “odl-dlux-core”.
DLUX draws information from OpenDaylight’s topology and host databases to display the following information:
To enable the DLUX UI after installing OpenDaylight, run:
feature:install odl-dlux-core
on the Karaf console.
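Once the feature is installed, the DLUX login page is typically served on the controller’s web port; the defaults below (port 8181, credentials admin/admin) are an assumption based on an unmodified Beryllium configuration. Open http://localhost:8181/index.html in a browser, or check from a shell that the page is being served:
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8181/index.html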
Network embedded Experience (NeXt) is a developer toolkit that provides tools to draw network-centric topology UI elements that offer visualizations of the following:
NeXt can work with DLUX to build OpenDaylight applications. Check out the NeXt_demo for more information on the interface.
Model-Driven Service Abstraction Layer (MD-SAL) is the OpenDaylight framework that allows developers to create new Karaf features in the form of services and protocol drivers and connects them to one another. You can think of the MD-SAL as having the following two components:
- The Config Datastore, which maintains a representation of the desired network state.
- The Operational Datastore, which is a representation of the actual network state based on data from the managed network elements.
Whether you interact with OpenDaylight through DLUX or the REST APIs, the microservices architecture allows you to select the services, protocols, and REST APIs that are available to you.
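To make the two datastores concrete, here is a minimal sketch using RESTCONF (assuming the odl-restconf feature is installed, the default port 8181, and the default admin/admin credentials; the network-topology model is only an example and depends on which features you have installed). The Config Datastore is exposed under /restconf/config and the Operational Datastore under /restconf/operational:
curl -u admin:admin http://localhost:8181/restconf/config/network-topology:network-topology
curl -u admin:admin http://localhost:8181/restconf/operational/network-topology:network-topology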
This section provides brief descriptions of the most commonly used Karaf features developed by OpenDaylight project teams. They are presented in alphabetical order. OpenDaylight installation instructions and a feature table that lists installation commands and compatibility follow.
AAA delivers standards-compliant Authentication, Authorization and Accounting services. RESTCONF is the most common consumer of AAA and installs the AAA features automatically. AAA provides:
The Beryllium release of AAA includes experimental support for having the database of users and credentials stored in the cluster-aware MD-SAL datastore.
The ALTO feature implements the Application-Layer Traffic Optimization (ALTO) base IETF protocol to provide network information to applications. It defines abstractions and services to enable simplified network views and network services that guide application usage of network resources, and it includes five services:
BGP is a southbound plugin that provides support for Border Gateway Protocol (including Link-State Distribution) as a source of L3 topology information.
BMP is a southbound plugin that provides support for the BGP Monitoring Protocol as a monitoring station.
CAPWAP enables OpenDaylight to manage CAPWAP-compliant wireless termination point (WTP) network devices. Intelligent applications, e.g., radio planning, can be developed by tapping into the operational states of WTP network devices made available via REST APIs.
Controller Shield creates a repository called the Unified-Security Plugin (USecPlugin) to provide controller security information to northbound applications, such as the following:
Information collected at the plugin may also be used to configure firewalls and create IP blacklists for the network.
Device Identification and Driver Management (DIDM) provides device-specific functionality, which means that code enabling a feature understands the capability and limitations of the device it runs on. For example, configuring VLANs and adjusting FlowMods are features, and there may be different implementations for different device types. Device-specific functionality is implemented as Device Drivers.
DLUX is a web-based OpenDaylight user interface that includes:
Fabric as a Service (FaaS) creates a common abstraction layer on top of a physical network so northbound APIs or services can be more easily mapped onto the physical network as a concrete device configuration.
Group Based Policy (GBP) defines an application-centric policy model for OpenDaylight that separates information about application connectivity requirements from information about the underlying details of the network infrastructure. It provides support for:
IoT Data Management (IoTDM) is developing a data-centric middleware to act as a oneM2M-compliant IoT Data Broker (IoTDB) and enable authorized applications to retrieve IoT data uploaded by any device.
LACP can auto-discover and aggregate multiple links between an OpenDaylight-controlled network and LACP-enabled endpoints or switches.
LISP (RFC6830) enables separation of Endpoint Identity (EID) from Routing Location (RLOC) by defining an overlay in the EID space, which is mapped to the underlying network in the RLOC space.
LISP Mapping Service provides the EID-to-RLOC mapping information, including forwarding policy (load balancing, traffic engineering, and so on) to LISP routers for tunneling and forwarding purposes. The LISP Mapping Service can serve the mapping data to data plane nodes as well as to OpenDaylight applications.
To leverage this service, a northbound API allows OpenDaylight applications and services to define the mappings and policies in the LISP Mapping Service. A southbound LISP plugin enables LISP data plane devices to interact with OpenDaylight via the LISP protocol.
NEMO is a Domain Specific Language (DSL) for the abstraction of network models and identification of operation patterns. NEMO enables network users/applications to describe their demands for network resources, services, and logical operations in an intuitive way that can be explained and executed by a language engine.
Offers four features:
NetIDE enables portability and cooperation inside a single network by using a client/server multi-controller architecture. It provides an interoperability layer that allows SDN applications written for other SDN controllers to run on OpenDaylight. NetIDE details:
Several services and plugins in OpenDaylight work together to provide simplified integration with the OpenStack Neutron framework. These services enable OpenStack to offload network processing to OpenDaylight while enabling OpenDaylight to provide enhanced network services to OpenStack.
OVSDB Services are at parity with the Neutron Reference Implementation in OpenStack, including support for:
OF-CONFIG provides a process for an Operation Context containing an OpenFlow Switch that uses OF-CONFIG to communicate with an OpenFlow Configuration Point, enabling remote configuration of OpenFlow datapaths.
The OpenFlow plugin supports connecting to OpenFlow-enabled network devices via the OpenFlow specification. It currently supports OpenFlow versions 1.0 and 1.3.2.
In addition to support for the core OpenFlow specification, OpenDaylight Beryllium also includes preliminary support for the Table Type Patterns and OF-CONFIG specifications.
PCEP is a southbound plugin that provides support for performing Create, Read, Update, and Delete (CRUD) operations on Multiprotocol Label Switching (MPLS) tunnels in the underlying network.
SNBi leverages manufacturer-installed IEEE 802.1AR certificates to secure initial communications for a zero-touch approach to bootstrapping using Docker. SNBi devices and controllers automatically do the following:
SNBi creates a basic infrastructure to host, run, and lifecycle-manage multiple network functions within a network device, including individual network element services, such as:
SNBi also provides a Linux-side abstraction layer to forwarding elements, as well as enhancements to the abstraction and bootstrapping infrastructure. You can also use the device type and domain information to initiate controller federation processes.
Service Function Chaining (SFC) provides the ability to define an ordered list of network services (e.g., firewalls, load balancers) that are then “stitched” together in the network to create a service chain. SFC provides the chaining logic and APIs necessary for OpenDaylight to provision a service chain in the network, as well as an end-user application for defining such chains. It includes:
The SNMP southbound plugin allows applications acting as an SNMP Manager to interact with devices that support an SNMP agent. The SNMP plugin implements a general SNMP implementation, which differs from the SNMP4SDN as that project leverages only select SNMP features to implement the specific use case of making an SNMP-enabled device emulate some features of an OpenFlow-enabled device.
SNMP4SDN provides a southbound SNMP plugin to optimize delivery of SDN controller benefits to traditional/legacy Ethernet switches through the SNMP interface. It offers support for flow configuration on ACLs and enables flow configuration via REST API, with multi-vendor support.
SXP enables creation of a tag that allows you to filter traffic instead of using protocol-specific information like addresses and ports. Via SXP, an external entity creates the tags, assigns them to traffic appropriately, and publishes information about the tags to network devices so they can enforce the tags appropriately.
More specifically, SXP is an IETF-published control protocol designed to propagate the binding between an IP address and a source group, which has a unique source group tag (SGT). Within the SXP protocol, source groups with common network policies are endpoints connecting to the network. SXP updates the firewall with SGTs, enabling the firewalls to create topology-independent Access Control Lists (ACLs) and provide ACL automation.
SXP source groups have the same meaning as endpoint groups in OpenDaylight’s Group Based Policy (GBP), which is used to manipulate policy groups, so you can use OpenDaylight GBP with SXP SGTs. The SXP topology-independent policy definition and automation can be extended through OpenDaylight for other services and networking devices.
The Topology Processing Framework provides a framework for simplified aggregation and topology data query to enable a unified topology view, including multi-protocol, underlay, and overlay resources.
The Time Series Data Repository (TSDR) creates a framework for collecting, storing, querying, and maintaining time series data in OpenDaylight. You can leverage various data-driven applications built on top of TSDR when you install a data store and at least one collector.
Functionality of TSDR includes:
TSDR has multiple features to enable the functionality above. To begin, select one of these data stores:
Then select any “collectors” you want to use:
See these TSDR_Directions for more information.
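For example, a minimal TSDR setup (using feature names from the tables later in this guide) pairs the HSQLDB data store with the OpenFlow statistics collector; on the Karaf console:
feature:install odl-tsdr-hsqldb-all
feature:install odl-tsdr-openflow-statistics-collector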
Unified Secure Channel (USC) provides a central server to coordinate encrypted communications between endpoints. Its client-side agent informs the controller about its encryption capabilities and can be instructed to encrypt select flows based on business policies.
A possible use case is encrypting controller-to-controller communications; however, the framework is very flexible, and client side software is available for multiple platforms and device types, enabling USC and OpenDaylight to centralize the coordination of encryption across a wide array of endpoint and device types.
The VPN Service implements the infrastructure services required to support L3 VPN service. It initially leverages open source routing applications as pluggable components. L3 services include:
The VPN Service offers:
Virtual Tenant Network (VTN) provides multi-tenant virtual networks on an SDN controller, allowing you to define the network with the look and feel of a conventional L2/L3 network. Once the network is designed on VTN, it automatically maps onto the underlying physical network and is then configured on the individual switches, leveraging the SDN control protocol.
By defining a logical plane with VTN, you can conceal the complexity of the underlying network and better manage network resources to reduce network configuration time and errors.
Messaging4Transport adds AMQP bindings to the MD-SAL, which makes all MD-SAL APIs available via that mechanism. When installed, the AMQP bindings expose the MD-SAL data tree, RPCs, and notifications via AMQP.
Network Intent Composition (NIC) offers an interface with an abstraction layer for you to communicate “intentions,” i.e., what you expect from the network. The Intent model, which is part of NIC’s core architecture, describes your networking services requirements and transforms the details of the desired state to OpenDaylight. NIC has four features:
UNI Manager was formed to initiate the development of data models and APIs that facilitate the ability of OpenDaylight software applications and/or service orchestrators to configure and provision connectivity services.
YANG-PUBSUB is an experimental plugin that allows subscriptions to be placed on targeted subtrees of YANG datastores residing on remote devices. Changes in YANG objects within the remote subtree can be pushed to OpenDaylight as specified and don’t require OpenDaylight to make continuous fetch requests. YANG-PUBSUB is developed as a Java project; development requires Maven version 3.1.1 or later.
OpFlex provides the OpenDaylight OpFlex Agent, a policy agent that works with Open vSwitch (OVS) to enforce network policy, e.g., from Group-Based Policy, for locally attached virtual machines or containers.
NeXt provides a network-centric topology UI that offers visualizations of the following:
NeXt can work with DLUX to build OpenDaylight applications. NeXt does not support Internet Explorer. Check out the NeXt_demo for more information on the interface.
We are in the process of creating automatically generated API documentation for all of OpenDaylight. The following are links to the preliminary documentation that you can reference. We will continue to add more API documentation as it becomes available.
You complete the following steps to install your networking environment, with specific instructions provided in the subsections below.
Before detailing the instructions for these, we address the following:
- Java Runtime Environment (JRE) and operating system information
- Target environment
- Known issues and limitations
The default distribution can be found on the OpenDaylight software download page: http://www.opendaylight.org/software/downloads
The Karaf distribution has no features enabled by default. However, all of the features are available to be installed.
Note
For compatibility reasons, you cannot enable all the features simultaneously. We try to document known incompatibilities in the Install the Karaf features section below.
To run the Karaf distribution:
./bin/karaf
For example:
$ ls distribution-karaf-0.4.0-Beryllium.zip
distribution-karaf-0.4.0-Beryllium.zip
$ unzip distribution-karaf-0.4.0-Beryllium.zip
Archive: distribution-karaf-0.4.0-Beryllium.zip
creating: distribution-karaf-0.4.0-Beryllium/
creating: distribution-karaf-0.4.0-Beryllium/configuration/
creating: distribution-karaf-0.4.0-Beryllium/data/
creating: distribution-karaf-0.4.0-Beryllium/data/tmp/
creating: distribution-karaf-0.4.0-Beryllium/deploy/
creating: distribution-karaf-0.4.0-Beryllium/etc/
creating: distribution-karaf-0.4.0-Beryllium/externalapps/
...
inflating: distribution-karaf-0.4.0-Beryllium/bin/start.bat
inflating: distribution-karaf-0.4.0-Beryllium/bin/status.bat
inflating: distribution-karaf-0.4.0-Beryllium/bin/stop.bat
$ cd distribution-karaf-0.4.0-Beryllium
$ ./bin/karaf
(The OpenDaylight ASCII-art banner and the Karaf console prompt are displayed.)
Press tab for a list of available commands. Typing [cmd] --help will show help for a specific command. Press ctrl-d or type system:shutdown or logout to shut down OpenDaylight.
To install a feature, use the following command, where feature1 is the feature name listed in the table below:
feature:install <feature1>
You can install multiple features using the following command:
feature:install <feature1> <feature2> ... <featureN-name>
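For example, a common starting set (feature names taken from the table below) installs REST API access, L2 switching, and the DLUX UI in one command:
feature:install odl-restconf odl-l2switch-switch-ui odl-dlux-core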
Note
For compatibility reasons, you cannot enable all Karaf features simultaneously. The table below documents feature installation names and known incompatibilities. Compatibility values indicate the following:
To uninstall a feature, you must shut down OpenDaylight, delete the data directory, and start OpenDaylight up again.
Important
Uninstalling a feature using the Karaf feature:uninstall command is not supported and can cause unexpected and undesirable behavior.
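A minimal sketch of that procedure, run from the distribution directory and assuming the default data directory location:
# From the Karaf console, shut down OpenDaylight:
# system:shutdown
# Then remove the persisted state and restart:
rm -rf data
./bin/karaf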
To find the complete list of Karaf features, run the following command:
feature:list
To list the installed Karaf features, run the following command:
feature:list -i
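Because the full list is long, it often helps to filter it; the Karaf console supports piping to grep, for example:
feature:list | grep tsdr
feature:list -i | grep odl-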
Features to implement networking functionality provide release notes, which you can find in the Project-specific Release Notes section.
Windows 10 cannot be identified by Karaf (Equinox). The issue occurs during installation of Karaf features, e.g.:
opendaylight-user@root>feature:install odl-restconf
Error executing command: Can't install feature odl-restconf/0.0.0:
Could not start bundle mvn:org.fusesource.leveldbjni/leveldbjni-all/1.8-odl in feature(s) odl-akka-leveldb-0.7: The bundle "org.fusesource.leveldbjni.leveldbjni-all_1.8.0 [300]" could not be resolved. Reason: No match found for native code: META-INF/native/windows32/leveldbjni.dll; processor=x86; osname=Win32, META-INF/native/windows64/leveldbjni.dll; processor=x86-64; osname=Win32, META-INF/native/osx/libleveldbjni.jnilib; processor=x86; osname=macosx, META-INF/native/osx/libleveldbjni.jnilib; processor=x86-64; osname=macosx, META-INF/native/linux32/libleveldbjni.so; processor=x86; osname=Linux, META-INF/native/linux64/libleveldbjni.so; processor=x86-64; osname=Linux, META-INF/native/sunos64/amd64/libleveldbjni.so; processor=x86-64; osname=SunOS, META-INF/native/sunos64/sparcv9/libleveldbjni.so; processor=sparcv9; osname=SunOS
The workaround is to add the following line to the Karaf file etc/system.properties:
org.osgi.framework.os.name = Win32
The workaround and further info are in this thread: http://stackoverflow.com/questions/35679852/karaf-exception-is-thrown-while-installing-org-fusesource-leveldbjni
Feature Name | Feature Description | Karaf feature name | Compatibility |
---|---|---|---|
Authentication | Enables authentication with support for federation using Apache Shiro | odl-aaa-shiro | all |
BGP | Provides support for Border Gateway Protocol (including Link-State Distribution) as a source of L3 topology information | odl-bgpcep-bgp | all |
BMP | Provides support for BGP Monitoring Protocol as a monitoring station | odl-bgpcep-bmp | all |
DIDM | Device Identification and Driver Management | odl-didm-all | all |
Centinel | Provides interfaces for streaming analytics | odl-centinel-all | all |
DLUX | Provides an intuitive graphical user interface for OpenDaylight | odl-dlux-all | all |
Fabric as a Service (FaaS) | Creates a common abstraction layer on top of a physical network so northbound APIs or services can be more easily mapped onto the physical network as a concrete device configuration | odl-faas-all | all |
Group Based Policy | Enables Endpoint Registry and Policy Repository REST APIs and associated functionality for Group Based Policy with the default renderer for OpenFlow renderers | odl-groupbasedpolicy-ofoverlay | all |
GBP User Interface | Enables a web-based user interface for Group Based Policy | odl-groupbasedpolicy-ui | all |
GBP FaaS renderer | Enables the Fabric as a Service renderer for Group Based Policy | odl-groupbasedpolicy-faas | self+all |
GBP Neutron Support | Provides OpenStack Neutron support using Group Based Policy | odl-groupbasedpolicy-neutronmapper | all |
L2 Switch | Provides L2 (Ethernet) forwarding across connected OpenFlow switches and support for host tracking | odl-l2switch-switch-ui | self+all |
LACP | Enables support for the Link Aggregation Control Protocol | odl-lacp-ui | self+all |
LISP Flow Mapping | Enables LISP control plane services including the mapping system services REST API and LISP protocol SB plugin | odl-lispflowmapping-msmr | all |
NEMO CLI | Provides intent mappings and implementation with CLI for legacy devices | odl-nemo-cli-renderer | all |
NEMO OpenFlow | Provides intent mapping and implementation for OpenFlow devices | odl-nemo-openflow-renderer | self+all |
NetIDE | Enables portability and cooperation inside a single network by using a client/server multi-controller architecture | odl-netide-rest | all |
NETCONF over SSH | Provides support to manage NETCONF-enabled devices over SSH | odl-netconf-connector-ssh | all |
OF-CONFIG | Enables remote configuration of OpenFlow datapaths | odl-of-config-rest | all |
OVSDB OpenStack Neutron | OpenStack Network Virtualization using OpenDaylight’s OVSDB support | odl-ovsdb-openstack | all |
OVSDB Southbound | OVSDB MDSAL southbound plugin for Open_vSwitch schema | odl-ovsdb-southbound-impl-ui | all |
OVSDB HWVTEP Southbound | OVSDB MDSAL hwvtep southbound plugin for the hw_vtep schema | odl-ovsdb-hwvtepsouthbound-ui | all |
OVSDB NetVirt SFC | OVSDB NetVirt support for SFC | odl-ovsdb-sfc-ui | all |
OpenFlow Flow Programming | Enables discovery and control of OpenFlow switches and the topology between them | odl-openflowplugin-flow-services-ui | all |
OpenFlow Table Type Patterns | Allows OpenFlow Table Type Patterns to be manually associated with network elements | odl-ttp-all | all |
Packetcable PCMM | Enables flow-based dynamic QoS management of CMTS use in the DOCSIS infrastructure and a policy server | odl-packetcable-policy-server | self+all |
PCEP | Enables support for PCEP | odl-bgpcep-pcep | all |
RESTCONF API Support | Enables REST API access to the MD-SAL including the data store | odl-restconf | all |
SDNinterface | Provides support for interaction and sharing of state between (non-clustered) OpenDaylight instances | odl-sdninterfaceapp-all | all |
SFC over L2 | Supports implementing Service Function Chaining using Layer 2 forwarding | odl-sfcofl2 | self+all |
SFC over LISP | Supports implementing Service Function Chaining using LISP | odl-sfclisp | all |
SFC over REST | Supports implementing Service Function Chaining using REST CRUD operations on network elements | odl-sfc-sb-rest | all |
SFC over VXLAN | Supports implementing Service Function Chaining using VXLAN tunnels | odl-sfc-ovs | self+all |
SNMP Plugin | Enables monitoring and control of network elements via SNMP | odl-snmp-plugin | all |
SNMP4SDN | Enables OpenFlow-like control of network elements via SNMP | odl-snmp4sdn-all | all |
SSSD Federated Authentication | Enables support for federated authentication using SSSD | odl-aaa-sssd-plugin | all |
Secure tag eXchange Protocol (SXP) | Enables distribution of shared tags to network devices | odl-sxp-controller | all |
Time Series Data Repository (TSDR) | Enables support for storing and querying time series data, with the default data collector for OpenFlow statistics and HSQLDB as the default data store | odl-tsdr-hsqldb-all | all |
TSDR Data Collectors | Enables support for various TSDR data sources (collectors) including OpenFlow statistics, NetFlow statistics, SNMP data, Syslog, and OpenDaylight (controller) metrics | odl-tsdr-openflow-statistics-collector, odl-tsdr-netflow-statistics-collector, odl-tsdr-snmp-data-collector, odl-tsdr-syslog-collector, odl-tsdr-controller-metrics-collector | all |
TSDR Data Stores | Enables support for TSDR data stores including HSQLDB, HBase, and Cassandra | odl-tsdr-hsqldb, odl-tsdr-hbase, or odl-tsdr-cassandra | all |
Topology Processing Framework | Enables merged and filtered views of network topologies | odl-topoprocessing-framework | all |
Unified Secure Channel (USC) | Enables support for secure, remote connections to network devices | odl-usc-channel-ui | all |
VPN Service | Enables support for OpenStack VPNaaS | odl-vpnservice-core | all |
VTN Manager | Enables Virtual Tenant Network support | odl-vtn-manager-rest | self+all |
VTN Manager Neutron | Enables OpenStack Neutron support of VTN Manager | odl-vtn-manager-neutron | self+all |
Feature Name | Feature Description | Karaf feature name | Compatibility |
---|---|---|---|
OpFlex | Provides OpFlex agent for Open vSwitch to enforce network policy, such as GBP, for locally-attached virtual machines or containers | n/a | all |
NeXt | Provides a developer toolkit for designing network-centric topology user interfaces | n/a | all |
The following functionality is labeled as experimental in OpenDaylight Beryllium and should be used accordingly. In general, it is not supposed to be used in production unless its limitations are well understood by those deploying it.
Feature Name | Feature Description | Karaf feature name | Compatibility |
---|---|---|---|
Authorization | Enables configurable role-based authorization | odl-aaa-authz | all |
ALTO | Enables support for Application-Layer Traffic Optimization | odl-alto-core | self+all |
CAPWAP | Enables control of supported wireless APs | odl-capwap-ac-rest | all |
Clustered Authentication | Enables the use of the MD-SAL clustered data store for the authentication database | odl-aaa-authn-mdsal-cluster | all |
Controller Shield | Provides controller security information to northbound applications | odl-usecplugin | all |
GBP IO Visor Renderer | Provides support for rendering Group Based Policy to IO Visor | odl-groupbasedpolicy-iovisor | all |
Internet of Things Data Management | Enables support for the oneM2M specification | odl-iotdm-onem2m | all |
LISP Flow Mapping OpenStack Network Virtualization | Experimental support for OpenStack Neutron virtualization | odl-lispflowmapping-neutron | self+all |
Messaging4Transport | Introduces an AMQP Northbound to MD-SAL | odl-messaging4transport | all |
Network Intent Composition (NIC) | Provides abstraction layer for communicating network intents (including a distributed intent mapping service REST API) using either Hazelcast or the MD-SAL as the backing data store for intents | odl-nic-core-hazelcast or odl-nic-core-mdsal | all |
NIC Console | Provides a Karaf CLI extension for intent CRUD operations and mapping service operations | odl-nic-console | all |
NIC VTN renderer | Virtual Tenant Network renderer for Network Intent Composition | odl-nic-renderer-vtn | self+all |
NIC GBP renderer | Group Based Policy renderer for Network Intent Composition | odl-nic-renderer-gbp | self+all |
NIC OpenFlow renderer | OpenFlow renderer for Network Intent Composition | odl-nic-renderer-of | self+all |
NIC NEMO renderer | NEtwork MOdeling renderer for Network Intent Composition | odl-nic-renderer-nemo | self+all |
OVSDB NetVirt UI | OVSDB DLUX UI | odl-ovsdb-ui | all |
Secure Networking Bootstrap | Defines a SNBi domain and associated white lists of devices to be accommodated to the domain | odl-snbi-all | self+all |
UNI Manager | Initiates the development of data models and APIs to facilitate configuration and provisioning connectivity services for OpenDaylight applications and services | odl-unimgr | all |
YANG PUBSUB | Allows subscriptions to be placed on targeted subtrees of YANG datastores residing on remote devices to obviate the need for OpenDaylight to make continuous fetch requests | odl-yangpush-rest | all |
Most components that offer REST APIs will automatically load the RESTCONF API Support component, but if for whatever reason it seems to be missing, install the “odl-restconf” feature to activate this support.
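A quick way to confirm that RESTCONF is active (assuming the default port 8181 and the default admin/admin credentials) is to request the module list:
curl -u admin:admin http://localhost:8181/restconf/modules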
The OpenDaylight Karaf container, OSGi bundles, and Java class files are portable and should run on any Java 7- or Java 8-compliant JVM. Certain projects and certain features of some projects may have additional requirements. Those are noted in the project-specific release notes.
Projects and features which have known additional requirements are:
OpenDaylight is written primarily in Java and uses Maven as its build tool. Consequently, the two main requirements for developing projects within OpenDaylight are a Java Development Kit (JDK) and Maven.
Applications and tools built on top of OpenDaylight using its REST APIs should have no special requirements beyond whatever is needed to run the application or tool and make the REST calls.
In some places, OpenDaylight makes use of the Xtend language. While Maven will download the appropriate tools to build this, additional plugins may be required for IDE support.
The projects with additional requirements for execution typically have similar or more extensive additional requirements for development. See the project-specific release notes for details.
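As a quick sanity check of a development machine (assuming the JDK and Maven are already on your PATH), verify the versions in use:
java -version
mvn -version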
Other than as noted in project-specific release notes, we know of the following limitations:
All OpenDaylight Security Advisories can be found on the Security Advisories wiki page. Of particular note to OpenDaylight Beryllium users are:
There are known and documented mitigations described on the Security Advisory page linked above. Because of the efficacy of the mitigations, we do not intend to release another version of Beryllium to address them. Instead, we encourage all of those who are using Beryllium to carefully understand the mitigations in the context of their deployments.
For the release notes of individual projects, please see the following pages on the OpenDaylight Wiki.
TBD: add Boron release notes
The following projects participated in Boron, but intentionally do not have release notes.
This page details changes and bug fixes between the Beryllium Stability Release 2 (Beryllium-SR2) and the Beryllium Stability Release 3 (Beryllium-SR3) of OpenDaylight.
The following projects had no noteworthy changes in the Beryllium-SR3 Release:
This page details changes and bug fixes between the Beryllium Stability Release 3 (Beryllium-SR3) and the Beryllium Stability Release 4 (Beryllium-SR4) of OpenDaylight.
The following projects had no noteworthy changes in the Beryllium-SR4 Release:
This document describes how to install the artifacts needed to use Centinel functionality in OpenDaylight by enabling the default Centinel feature. Centinel is a distributed, reliable framework for the collection, aggregation, and analysis of streaming data, added in the OpenDaylight Beryllium release.
The Centinel project aims at providing a distributed, reliable framework for efficiently collecting, aggregating and sinking streaming data across Persistence DB and stream analyzers (e.g., Graylog, Elasticsearch, Spark, Hive). This framework enables SDN applications/services to receive events from multiple streaming sources (e.g., Syslog, Thrift, Avro, AMQP, Log4j, HTTP/REST).
In Beryllium, we develop a “Log Service” and a plug-in for a log analyzer (e.g., Graylog). The Log Service processes real-time events coming from the log analyzer. Additionally, we provide a stream collector (Flume- and Sqoop-based) that collects logs from OpenDaylight and sinks them to the persistence service (integrated with TSDR). Centinel also includes a RESTCONF interface to inject events to northbound applications for real-time analytics/network configuration. Further, a Centinel user interface (web interface) will be available to operators to enable rules/alerts/dashboards, etc.
Centinel has some additional prerequisites, which are met by integrating a Graylog server, Apache Drill, Apache Flume, and HBase, as described below.
Install MongoDB
Import the MongoDB public GPG key into apt:
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv 7F0CEB10
Create the MongoDB source list:
echo 'deb http://downloads-distro.mongodb.org/repo/debian-sysvinit dist 10gen' | sudo tee /etc/apt/sources.list.d/mongodb.list
Update your apt package database:
sudo apt-get update
Install the latest stable version of MongoDB with this command:
sudo apt-get install mongodb-org
Install Elasticsearch
Graylog2 v0.20.2 requires Elasticsearch v.0.90.10. Download and install it with these commands:
cd ~; wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-0.90.10.deb
sudo dpkg -i elasticsearch-0.90.10.deb
We need to change the Elasticsearch cluster.name setting. Open the Elasticsearch configuration file:
sudo vi /etc/elasticsearch/elasticsearch.yml
Find the section that specifies cluster.name. Uncomment it, and replace the default value with graylog2:
cluster.name: graylog2
Find the line that specifies network.bind_host and uncomment it so it looks like this:
network.bind_host: localhost
Also add the following line to disable dynamic scripts:
script.disable_dynamic: true
Save and quit. Next, restart Elasticsearch to put our changes into effect:
sudo service elasticsearch restart
After a few seconds, run the following to test that Elasticsearch is running properly:
curl -XGET 'http://localhost:9200/_cluster/health?pretty=true'
Install Graylog2 server
Download the Graylog2 archive to /opt with this command:
cd /opt; sudo wget https://github.com/Graylog2/graylog2-server/releases/download/0.20.2/graylog2-server-0.20.2.tgz
Then extract the archive:
sudo tar xvf graylog2-server-0.20.2.tgz
Let’s create a symbolic link to the newly created directory, to simplify the directory name:
sudo ln -s graylog2-server-0.20.2 graylog2-server
Copy the example configuration file to the proper location, in /etc:
sudo cp /opt/graylog2-server/graylog2.conf.example /etc/graylog2.conf
Install pwgen, which we will use to generate password secret keys:
sudo apt-get install pwgen
Now we must configure the admin password and secret key. The password secret key is configured in graylog2.conf by the password_secret parameter. Generate a random key and insert it into the Graylog2 configuration with the following two commands:
SECRET=$(pwgen -s 96 1)
sudo -E sed -i -e 's/password_secret =.*/password_secret = '$SECRET'/' /etc/graylog2.conf
Next, generate a SHA-256 hash of your desired admin password (replace password below with the password of your choice) and insert it as root_password_sha2:
PASSWORD=$(echo -n password | shasum -a 256 | awk '{print $1}')
sudo -E sed -i -e 's/root_password_sha2 =.*/root_password_sha2 = '$PASSWORD'/' /etc/graylog2.conf
Open the Graylog2 configuration to make a few changes: (sudo vi /etc/graylog2.conf):
rest_transport_uri = http://127.0.0.1:12900/
elasticsearch_shards = 1
Now let’s install the Graylog2 init script. Copy graylog2ctl to /etc/init.d:
sudo cp /opt/graylog2-server/bin/graylog2ctl /etc/init.d/graylog2
Update the startup script to put the Graylog2 logs in /var/log and to look for the Graylog2 server JAR file in /opt/graylog2-server by running the two following sed commands:
sudo sed -i -e 's/GRAYLOG2_SERVER_JAR=\${GRAYLOG2_SERVER_JAR:=graylog2-server.jar}/GRAYLOG2_SERVER_JAR=\${GRAYLOG2_SERVER_JAR:=\/opt\/graylog2-server\/graylog2-server.jar}/' /etc/init.d/graylog2
sudo sed -i -e 's/LOG_FILE=\${LOG_FILE:=log\/graylog2-server.log}/LOG_FILE=\${LOG_FILE:=\/var\/log\/graylog2-server.log}/' /etc/init.d/graylog2
Install the startup script:
sudo update-rc.d graylog2 defaults
Start the Graylog2 server with the service command:
sudo service graylog2 start
Download hbase-0.98.15-hadoop2.tar.gz
Unzip the tar file using the command below:
tar -xvf hbase-0.98.15-hadoop2.tar.gz
Create a directory for HBase using the command below:
sudo mkdir /usr/lib/hbase
Move hbase-0.98.15-hadoop2 into /usr/lib/hbase using the command below:
mv hbase-0.98.15-hadoop2 /usr/lib/hbase/hbase-0.98.15-hadoop2
Configuring HBase with java
Open your hbase/conf/hbase-env.sh and set the path to the java installed in your system:
export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_25
Set the HBASE_HOME path in the .bashrc file.
Open the .bashrc file using this command:
gedit ~/.bashrc
In the .bashrc file, append the following two statements:
export HBASE_HOME=/usr/lib/hbase/hbase-0.98.15-hadoop2
export PATH=$PATH:$HBASE_HOME/bin
To start HBase, issue the following commands:
HBASE_PATH$ bin/start-hbase.sh
HBASE_PATH$ bin/hbase shell
Create the centinel table in HBase with stream, alert, dashboard, and stringdata as column families using the command below:
create 'centinel','stream','alert','dashboard','stringdata'
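To confirm the table was created, you can check it from the same HBase shell (a minimal verification; describe also shows the column families):
list 'centinel'
describe 'centinel'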
To stop HBase, issue the following command:
HBASE_PATH$ bin/stop-hbase.sh
Download apache-flume-1.6.0.tar.gz
Copy the downloaded file to the directory where you want to install Flume.
Extract the contents of the apache-flume-1.6.0.tar.gz file using the command below. Use sudo if necessary:
tar -xvzf apache-flume-1.6.0.tar.gz
Starting Flume
Navigate to the Flume installation directory.
Issue the following command to start the flume-ng agent:
./flume-ng agent --conf conf --conf-file multiplecolumn.conf --name a1 -Dflume.root.logger=INFO,console
Download apache-drill-1.1.0.tar.gz
Copy the downloaded file to the directory where you want to install Drill.
Extract the contents of the apache-drill-1.1.0.tar.gz file using the command below:
tar -xvzf apache-drill-1.1.0.tar.gz
Starting Drill:
Navigate to the Drill installation directory.
Issue the following command to launch Drill in embedded mode:
bin/drill-embedded
Access the Apache Drill UI at http://localhost:8047/
Go to the “Storage” tab and enable the “HBase” storage plugin.
Use the following command to clone the Centinel git repository:
git clone https://git.opendaylight.org/gerrit/p/centinel
Navigate to the installation directory and build the code with Maven by running the command below:
mvn clean install
After building the Maven project, a jar file named centinel-SplittingSerializer-0.0.1-SNAPSHOT.jar will be created in centinel/plugins/centinel-SplittingSerializer/target inside the workspace directory. Copy this jar file to apache-flume-1.6.0-bin/lib inside the Flume directory and rename it to centinel-SplittingSerializer.jar (as mentioned in the Flume configuration file).
After a successful build, copy the jar files at the locations below to /opt/graylog/plugin in the Graylog server (VM):
centinel/plugins/centinel-alertcallback/target/centinel-alertcallback-0.1.0-SNAPSHOT.jar
centinel/plugins/centinel-output/target/centinel-output-0.1.0-SNAPSHOT.jar
Restart the server after adding the plugin, using the command below:
sudo graylog-ctl restart graylog-server
Make changes to the following file:
/etc/rsyslog.conf
Uncomment $InputTCPServerRun 1514
Add the following lines:
module(load="imfile" PollingInterval="10") #needs to be done just once
input(type="imfile"
File="<karaf.log>" #location of log file
StateFile="statefile1"
Tag="tag1")
*.* @@127.0.0.1:1514 # @@used for TCP
Use the following format and comment the previous one:
$ActionFileDefaultTemplate RSYSLOG_SyslogProtocol23Format
Use the command below to send Centinel logs to a port:
tail -f <location of log file>/karaf.log|logger
Restart the rsyslog service after making the above changes to the configuration file:
sudo service rsyslog restart
Finally, from the Karaf console install the Centinel feature with this command:
feature:install odl-centinel-all
If the feature install was successful, you should be able to see the following Centinel commands added:
centinel:list
centinel:purgeAll
Check the ../data/log/karaf.log file for any exceptions related to the Centinel features.
Because Beryllium is the first release to support Centinel functionality, only a fresh installation is possible.
To uninstall the Centinel functionality, you need to do the following from Karaf console:
feature:uninstall odl-centinel-all
It’s recommended to restart the Karaf container after uninstalling the Centinel functionality.
You’ll need to install the following packages and their dependencies:
Packages are available for Red Hat Enterprise Linux 7 and Ubuntu 14.04 LTS. Some of the examples below are specific to RHEL7 but you can run the equivalent commands for upstart instead of systemd.
Note that many of these steps may be performed automatically if you’re deploying this along with a larger orchestration system.
You’ll need to set up your VM host uplink interface. You should ensure that the MTU of the underlying network is sufficient to handle tunneled traffic. We will use an example of setting up eth0 as your uplink interface with a vlan of 4093 used for the networking control infrastructure and tunnel data plane.
We just need to set the MTU and disable IPv4 and IPv6 autoconfiguration. The MTU needs to be large enough to allow both the VXLAN header and VLAN tags to pass through without fragmenting for best performance. We’ll use 1600 bytes which should be sufficient assuming you are using a default 1500 byte MTU on your virtual machine traffic. If you already have any NetworkManager connections configured for your uplink interface find the connection name and proceed to the next step. Otherwise, create a connection with (be sure to update the variable UPLINK_IFACE as needed):
UPLINK_IFACE=eth0
nmcli c add type ethernet ifname $UPLINK_IFACE
Now, configure your interface as follows:
CONNECTION_NAME="ethernet-$UPLINK_IFACE"
nmcli connection mod "$CONNECTION_NAME" connection.autoconnect yes \
ipv4.method link-local \
ipv6.method ignore \
802-3-ethernet.mtu 9000 \
ipv4.routes '224.0.0.0/4 0.0.0.0 2000'
Then bring up the interface with:
nmcli connection up "$CONNECTION_NAME"
Next, create the infrastructure interface using the infrastructure VLAN (4093 by default). We’ll need to create a VLAN subinterface of your uplink interface, then configure DHCP on that interface. Run the following commands, and be sure to replace the variable values if needed. If you’re not using NIC teaming, replace the variable value team0 below with your uplink interface name:
UPLINK_IFACE=team0
INFRA_VLAN=4093
nmcli connection add type vlan ifname $UPLINK_IFACE.$INFRA_VLAN dev $UPLINK_IFACE id $INFRA_VLAN
nmcli connection mod vlan-$UPLINK_IFACE.$INFRA_VLAN \
ethernet.mtu 1600 ipv4.routes '224.0.0.0/4 0.0.0.0 1000'
sed "s/CLIENT_ID/01:$(ip link show $UPLINK_IFACE | awk '/ether/ {print $2}')/" \
> /etc/dhcp/dhclient-$UPLINK_IFACE.$INFRA_VLAN.conf <<EOF
send dhcp-client-identifier CLIENT_ID;
request subnet-mask, domain-name, domain-name-servers, host-name;
EOF
Now bring up the new interface with:
nmcli connection up vlan-$UPLINK_IFACE.$INFRA_VLAN
If you were successful, you should be able to see an IP address when you run:
ip addr show dev $UPLINK_IFACE.$INFRA_VLAN
We’ll need to configure an OVS bridge which will handle the traffic for any virtual machines or containers that are hosted on the VM host. First, enable the openvswitch service and start it:
# systemctl enable openvswitch
ln -s '/usr/lib/systemd/system/openvswitch.service' '/etc/systemd/system/multi-user.target.wants/openvswitch.service'
# systemctl start openvswitch
# systemctl status openvswitch
openvswitch.service - Open vSwitch
Loaded: loaded (/usr/lib/systemd/system/openvswitch.service; enabled)
Active: active (exited) since Fri 2014-12-12 17:20:13 PST; 3s ago
Process: 3053 ExecStart=/bin/true (code=exited, status=0/SUCCESS)
Main PID: 3053 (code=exited, status=0/SUCCESS)
Dec 12 17:20:13 ovs-server.cisco.com systemd[1]: Started Open vSwitch.
Next, we can create an OVS bridge (you may wish to use a different bridge name):
# ovs-vsctl add-br br0
# ovs-vsctl show
34aa83d7-b918-4e49-bcec-1b521acd1962
Bridge "br0"
Port "br0"
Interface "br0"
type: internal
ovs_version: "2.3.90"
Next, we configure a tunnel interface on our new bridge as follows:
# ovs-vsctl add-port br0 br0_vxlan0 -- \
set Interface br0_vxlan0 type=vxlan \
options:remote_ip=flow options:key=flow options:dst_port=8472
# ovs-vsctl show
34aa83d7-b918-4e49-bcec-1b521acd1962
Bridge "br0"
Port "br0_vxlan0"
Interface "br0_vxlan0"
type: vxlan
options: {dst_port="8472", key=flow, remote_ip=flow}
Port "br0"
Interface "br0"
type: internal
ovs_version: "2.3.90"
Open vSwitch is now configured and ready.
Before enabling the agent, we’ll need to edit its configuration file, which is located at “/etc/opflex-agent-ovs/opflex-agent-ovs.conf”.
First, we’ll configure the OpFlex protocol parameters. If you’re using an ACI fabric, you’ll need the OpFlex domain from the ACI configuration, which is the name of the VMM domain you mapped to the interface for this hypervisor. Set the “domain” field to this value. Next, set the “name” field to a hostname or other unique identifier for the VM host. Finally, set the “peers” list to contain the fixed static anycast peer address of 10.0.0.30 and port 8009. Here is an example of a completed section (the [CHANGE ME] values mark the areas you’ll need to modify):
"opflex": {
// The globally unique policy domain for this agent.
"domain": "[CHANGE ME]",
// The unique name in the policy domain for this agent.
"name": "[CHANGE ME]",
// a list of peers to connect to, by hostname and port. One
// peer, or an anycast pseudo-peer, is sufficient to bootstrap
// the connection without needing an exhaustive list of all
// peers.
"peers": [
{"hostname": "10.0.0.30", "port": 8009}
],
"ssl": {
// SSL mode. Possible values:
// disabled: communicate without encryption
// encrypted: encrypt but do not verify peers
// secure: encrypt and verify peer certificates
"mode": "encrypted",
// The path to a directory containing trusted certificate
// authority public certificates, or a file containing a
// specific CA certificate.
"ca-store": "/etc/ssl/certs/"
}
},
Next, configure the appropriate policy renderer for the ACI fabric. You’ll want to use a stitched-mode renderer. You’ll need to configure the bridge name and the uplink interface name. The remote anycast IP address will need to be obtained from the ACI configuration console, but unless the configuration is unusual, it will be 10.0.0.32:
// Renderers enforce policy obtained via OpFlex.
"renderers": {
// Stitched-mode renderer for interoperating with a
// hardware fabric such as ACI
"stitched-mode": {
"ovs-bridge-name": "br0",
// Set encapsulation type. Must set either vxlan or vlan.
"encap": {
// Encapsulate traffic with VXLAN.
"vxlan" : {
// The name of the tunnel interface in OVS
"encap-iface": "br0_vxlan0",
// The name of the interface whose IP should be used
// as the source IP in encapsulated traffic.
"uplink-iface": "eth0.4093",
// The vlan tag, if any, used on the uplink interface.
// Set to zero or omit if the uplink is untagged.
"uplink-vlan": 4093,
// The IP address used for the destination IP in
// the encapsulated traffic. This should be an
// anycast IP address understood by the upstream
// stitched-mode fabric.
"remote-ip": "10.0.0.32"
}
},
// Configure forwarding policy
"forwarding": {
// Configure the virtual distributed router
"virtual-router": {
// Enable virtual distributed router. Set to true
// to enable or false to disable. Default true.
"enabled": true,
// Override MAC address for virtual router.
// Default is "00:22:bd:f8:19:ff"
"mac": "00:22:bd:f8:19:ff",
// Configure IPv6-related settings for the virtual
// router
"ipv6" : {
// Send router advertisement messages in
// response to router solicitation requests as
// well as unsolicited advertisements.
"router-advertisement": true
}
},
// Configure virtual distributed DHCP server
"virtual-dhcp": {
// Enable virtual distributed DHCP server. Set to
// true to enable or false to disable. Default
// true.
"enabled": true,
// Override MAC address for virtual dhcp server.
// Default is "00:22:bd:f8:19:ff"
"mac": "00:22:bd:f8:19:ff"
}
},
// Location to store cached IDs for managing flow state
"flowid-cache-dir": "DEFAULT_FLOWID_CACHE_DIR"
}
}
Finally, enable the agent service:
# systemctl enable agent-ovs
ln -s '/usr/lib/systemd/system/agent-ovs.service' '/etc/systemd/system/multi-user.target.wants/agent-ovs.service'
# systemctl start agent-ovs
# systemctl status agent-ovs
agent-ovs.service - Opflex OVS Agent
Loaded: loaded (/usr/lib/systemd/system/agent-ovs.service; enabled)
Active: active (running) since Mon 2014-12-15 10:03:42 PST; 5min ago
Main PID: 6062 (agent_ovs)
CGroup: /system.slice/agent-ovs.service
└─6062 /usr/bin/agent_ovs
The agent is now running and ready to enforce policy. You can add endpoints to the local VM hosts using the OpFlex Group-based policy plugin from OpenStack, or manually.
This guide is geared towards installing OpenDaylight to use the OVSDB project to provide Neutron support for OpenStack.
Open vSwitch (OVS) is generally accepted as the unofficial standard for virtual switching in open, hypervisor-based solutions. For information on OVS, see Open vSwitch.
With OpenStack within the SDN context, controllers and applications interact using two channels: OpenFlow and OVSDB. OpenFlow addresses the forwarding-side of the OVS functionality. OVSDB, on the other hand, addresses the management-plane. A simple and concise overview of Open Virtual Switch Database (OVSDB) is available at: http://networkstatic.net/getting-started-ovsdb/
Follow the instructions in Installing OpenDaylight.
Note
By default, ODL OVSDB L3 forwarding is disabled. Enable the functionality by setting ovsdb.l3.fwd.enabled to yes in etc/custom.properties:
vi etc/custom.properties
ovsdb.l3.fwd.enabled=yes
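After editing the file, a quick check from the distribution directory confirms the setting is in place:
grep ovsdb.l3.fwd.enabled etc/custom.properties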
Install the required features with the following command:
feature:install odl-ovsdb-openstack
Note that fully installing the feature requires installing other dependent features, which happens automatically and may take 30–60 seconds. The Karaf prompt will return before the feature is fully installed.
To verify that the installation was successful, use the following commands in Karaf and verify that the required features have been installed:
opendaylight-user@root>feature:list -i | grep ovsdb
odl-ovsdb-openstack | 1.2.1-Beryllium | x | ovsdb-1.2.1-Beryllium | OpenDaylight :: OVSDB :: OpenStack Network Virtual
odl-ovsdb-library | 1.2.1-Beryllium | x | odl-ovsdb-library-1.2.1-Beryllium | OpenDaylight :: library
odl-ovsdb-southbound-api | 1.2.1-Beryllium | x | odl-ovsdb-southbound-1.2.1-Beryllium | OpenDaylight :: southbound :: api
odl-ovsdb-southbound-impl | 1.2.1-Beryllium | x | odl-ovsdb-southbound-1.2.1-Beryllium | OpenDaylight :: southbound :: impl
odl-ovsdb-southbound-impl-rest | 1.2.1-Beryllium | x | odl-ovsdb-southbound-1.2.1-Beryllium | OpenDaylight :: southbound :: impl :: REST
odl-ovsdb-southbound-impl-ui | 1.2.1-Beryllium | x | odl-ovsdb-southbound-1.2.1-Beryllium | OpenDaylight :: southbound :: impl :: UI
opendaylight-user@root>feature:list -i | grep neutron
odl-neutron-service | 0.6.0-Beryllium | x | odl-neutron-0.6.0-Beryllium | OpenDaylight :: Neutron :: API
odl-neutron-northbound-api | 0.6.0-Beryllium | x | odl-neutron-0.6.0-Beryllium | OpenDaylight :: Neutron :: Northbound
odl-neutron-spi | 0.6.0-Beryllium | x | odl-neutron-0.6.0-Beryllium | OpenDaylight :: Neutron :: API
odl-neutron-transcriber | 0.6.0-Beryllium | x | odl-neutron-0.6.0-Beryllium | OpenDaylight :: Neutron :: Implementation
opendaylight-user@root>feature:list -i | grep openflowplugin
odl-openflowplugin-southbound | 0.2.0-Beryllium | x | openflowplugin-0.2.0-Beryllium | OpenDaylight :: Openflow Plugin :: SouthBound
odl-openflowplugin-nsf-services | 0.2.0-Beryllium | x | openflowplugin-0.2.0-Beryllium | OpenDaylight :: OpenflowPlugin :: NSF :: Services
odl-openflowplugin-nsf-model | 0.2.0-Beryllium | x | openflowplugin-0.2.0-Beryllium | OpenDaylight :: OpenflowPlugin :: NSF :: Model
odl-openflowplugin-nxm-extensions | 0.2.0-Beryllium | x | openflowplugin-extension-0.2.0-Beryllium | OpenDaylight :: Openflow Plugin :: Nicira Extension
Use the following command in Karaf to view the logs and verify that there are no error logs relating to odl-ovsdb-openstack:
log:display
Look for the following log message, which indicates that the odl-ovsdb-openstack feature has been fully installed:
Successfully pushed configuration snapshot netvirt-providers-impl-default-config.xml(odl-ovsdb-openstack,odl-ovsdb-openstack)
Reference the following link to the OVSDB NetVirt project wiki. The link has very helpful information for understanding the OVSDB Network Virtualization project:
Uninstall the odl-ovsdb-openstack feature by using the following command:
feature:uninstall odl-ovsdb-openstack
Then shut down OpenDaylight with the following command:
system:shutdown
Use the following command to clean and reset the working state before starting OpenDaylight again:
rm -rf data/* journal/* snapshots/*
This document describes how to install the artifacts needed to use Time Series Data Repository (TSDR) functionality in the ODL controller by enabling either an HSQLDB, HBase, or Cassandra data store.
The Time Series Data Repository (TSDR) project in OpenDaylight (ODL) creates a framework for collecting, storing, querying, and maintaining time series data in the OpenDaylight SDN controller. Please refer to the User Guide for the detailed description of the functionality of the project and how to use the corresponding features provided in TSDR.
The software requirements for TSDR HBase Data Store are as follows:
No additional software is required for the HSQLDB Data Stores.
Once the OpenDaylight distribution is up, install the HSQLDB data store from the Karaf console using the following command:
feature:install odl-tsdr-hsqldb-all
This will install the HSQLDB-related dependency features (and can take some time) as well as the OpenFlow statistics collector before returning control to the console.
Installing TSDR HBase Data Store contains two steps:
In Beryllium, we only support a single-node HBase server running on the same machine as OpenDaylight. Therefore, follow these steps to download and install the HBase server onto the same machine where OpenDaylight is running:
Create a folder in the Linux operating system for the HBase server. For example, create an hbase directory under /usr/lib:
mkdir /usr/lib/hbase
Unzip the downloaded HBase server tar file.
Run the following command to unzip the installation package into /usr/lib/hbase:
tar xvf <hbase-installer-name> -C /usr/lib/hbase
Make the proper changes in hbase-site.xml.
Under <hbase-install-directory>/conf/, there is an hbase-site.xml file. Although it is not recommended, an experienced HBase user can modify the data directory where the HBase server stores its data.
Modify the value of the property named “hbase.rootdir” in the file to reflect the desired directory for storing HBase data.
The following is an example of the file:
<configuration>
<property>
<name>hbase.rootdir</name>
<value>file:///usr/lib/hbase/data</value>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/usr/lib/hbase/zookeeper</value>
</property>
</configuration>
Start the HBase server:
cd <hbase-installation-directory>
./start-hbase.sh
Start the HBase shell:
cd <hbase-installation-directory>
./hbase shell
Start the Karaf console.
Install the HBase data store feature from the Karaf console:
feature:install odl-tsdr-hbase
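As an optional sanity check (illustrative only; jps is part of the JDK and the HBase master runs as an HMaster process), you can confirm that the HBase server is running:
jps | grep -i hmaster
From the Karaf console, feature:list -i | grep tsdr-hbase should then show the feature as installed.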
Installing the TSDR Cassandra Data Store involves two steps:
In Beryllium, only a single-node Cassandra instance running on the same machine as OpenDaylight is supported. Therefore, follow these steps to download and install the Cassandra server on the machine where OpenDaylight is running:
Install Cassandra (latest stable version) by downloading the tarball and extracting it into a cassandra/ directory on the test machine:
mkdir cassandra
wget http://www.eu.apache.org/dist/cassandra/2.1.10/apache-cassandra-2.1.10-bin.tar.gz[2.1.10 is current stable version, it can vary]
mv apache-cassandra-2.1.10-bin.tar.gz cassandra/
cd cassandra
tar -xvzf apache-cassandra-2.1.10-bin.tar.gz
Start Cassandra from the cassandra directory by running:
./apache-cassandra-2.1.10/bin/cassandra
Start the Cassandra shell by running:
./apache-cassandra-2.1.10/bin/cqlsh
Start Karaf according to the instructions above.
Install Cassandra data store feature from Karaf console:
feature:install odl-tsdr-cassandra
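Before or after installing the feature, you may want to confirm that the Cassandra node is actually up. One illustrative way, assuming the version and directory layout used above, is nodetool:
./apache-cassandra-2.1.10/bin/nodetool status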
After the TSDR data store is installed, whether it is the HBase, Cassandra, or HSQLDB data store, you can verify the installation with the following steps.
Verify that the following two tsdr commands are available from the Karaf console:
tsdr:list
tsdr:purgeAll
Verify that OpenFlow statistics data can be received successfully:
Run "feature:install odl-tsdr-openflow-statistics-collector" from the Karaf console.
Run mininet to connect to the ODL controller. For example, use the following command to start a three-node topology:
mn --topo single,3 --controller 'remote,ip=172.17.252.210,port=6653' --switch ovsk,protocols=OpenFlow13
From the Karaf console, you should be able to retrieve the OpenFlow statistics data:
tsdr:list FLOWSTATS
Check the ../data/log/karaf.log for any exceptions related to TSDR features.
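For example, one simple way to scan the log for TSDR-related exceptions from the shell (adjust the relative path to wherever your distribution lives) is:
grep -i exception ../data/log/karaf.log | grep -i tsdr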
The feature installation automatically configures the datasource by installing a file named org.ops4j.datasource-metric.cfg in <install folder>/etc. This file contains the default location, <install folder>/tsdr, where the HSQLDB datastore files are stored. If you want to change the default location of the datastore files, update the last portion of the url property in org.ops4j.datasource-metric.cfg and then restart the Karaf container.
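As an illustration only (the generated file's exact contents may differ between releases), you could inspect and edit the datasource configuration from the installation folder:
cat etc/org.ops4j.datasource-metric.cfg
vi etc/org.ops4j.datasource-metric.cfg   # change only the trailing path of the url property
Restart the Karaf container afterwards for the change to take effect.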
Please refer to HBase Data Store User Guide.
There is no post configuration for TSDR Cassandra data store.
The HBase data store was supported in the previous release as well as in this release. However, data store upgrade is not supported for HBase. You need to reinstall TSDR and begin collecting data in the TSDR HBase data store after the installation.
HSQLDB and Cassandra are new data stores introduced in this release. Therefore, upgrading from previous release does not apply in these two data store scenarios.
To uninstall the TSDR functionality with the default store, run the following from the Karaf console:
feature:uninstall odl-tsdr-hsqldb-all
feature:uninstall odl-tsdr-core
feature:uninstall odl-tsdr-hsqldb
feature:uninstall odl-tsdr-openflow-statistics-collector
It is recommended to restart the Karaf container after the uninstallation of the TSDR functionality with the default store.
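One possible restart sequence, reusing the commands shown earlier in this guide, is:
system:shutdown       # from the Karaf console
./bin/karaf           # then start Karaf again from the installation folder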
To uninstall the TSDR functionality with the HBase data store:
Uninstall the HBase data store related features from the Karaf console:
feature:uninstall odl-tsdr-hbase
feature:uninstall odl-tsdr-core
Stop the HBase server:
cd <hbase-installation-directory>
./stop-hbase.sh
Remove the directory that contains the HBase server installation:
rm -r <hbase-installation-directory>
It is recommended to restart the Karaf container after the uninstallation of the TSDR data store.
To uninstall the TSDR functionality with the Cassandra store:
Uninstall the Cassandra data store related features from the Karaf console:
feature:uninstall odl-tsdr-cassandra
feature:uninstall odl-tsdr-core
Stop the Cassandra database:
ps auwx | grep cassandra
sudo kill <pid>
Remove the Cassandra installation files:
rm -r <cassandra-installation-directory>
It is recommended to restart the Karaf container after uninstallation of the TSDR data store.
OpenDaylight Virtual Tenant Network (VTN) is an application that provides multi-tenant virtual network on an SDN controller.
Conventionally, large investments in network systems and high operating expenses are required because the network is configured as a silo for each department and system. Various network appliances must be installed for each tenant, and those appliances cannot be shared with others. Designing, implementing, and operating the entire complex network is heavy work.
What makes VTN unique is its logical abstraction plane, which enables the complete separation of the logical plane from the physical plane. Users can design and deploy any desired network without knowing the physical network topology or bandwidth restrictions.
VTN allows users to define the network with the look and feel of a conventional L2/L3 network. Once the network is designed on VTN, it is automatically mapped onto the underlying physical network and configured on the individual switches using the SDN control protocol. Defining the logical plane makes it possible not only to hide the complexity of the underlying network but also to manage network resources better, reducing the reconfiguration time of network services and minimizing network configuration errors. VTN provides an API for creating a common virtual network irrespective of the physical network.
It is implemented as two major components:
An OpenDaylight plugin that interacts with other modules to implement the components of the VTN model. It also provides a REST interface to configure VTN components in OpenDaylight. VTN Manager is implemented as a plugin to OpenDaylight and provides a REST interface to create, update, and delete VTN components. A user command in VTN Coordinator is translated into a REST API call to VTN Manager by the OpenDaylight driver component. In addition to the above role, it also provides an implementation of the OpenStack L2 Network Functions API.
The VTN Coordinator is an external application that provides a REST interface for a user to use OpenDaylight VTN virtualization. It interacts with the VTN Manager plugin to implement the user configuration and is capable of orchestrating multiple OpenDaylight instances, realizing VTN provisioning across them. In the OpenDaylight architecture, VTN Coordinator is part of the network application, orchestration, and services layer. VTN Coordinator uses the REST interface exposed by VTN Manager to realize the virtual network using OpenDaylight. It uses OpenDaylight REST APIs to construct the virtual network in OpenDaylight instances, provides REST APIs for northbound VTN applications, and supports virtual networks spanning multiple OpenDaylight instances by coordinating across them.
Follow the instructions in Installing OpenDaylight.
Arrange a physical/virtual server with any one of the supported 64-bit OS environments.
Install these packages:
yum install perl-Digest-SHA uuid libxslt libcurl unixODBC json-c bzip2
rpm -ivh http://yum.postgresql.org/9.3/redhat/rhel-6-x86_64/pgdg-redhat93-9.3-1.noarch.rpm
yum install postgresql93-libs postgresql93 postgresql93-server postgresql93-contrib postgresql93-odbc
Install Feature:
feature:install odl-vtn-manager-neutron odl-vtn-manager-rest
Note
The above command installs all VTN Manager features. You can also install only the REST or Neutron feature.
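For example, to install only the REST interface (feature name taken from the command above):
feature:install odl-vtn-manager-rest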
Change to the externalapps directory under the top-level Beryllium distribution directory:
cd distribution-karaf-0.4.0-Beryllium/externalapps
Run the command below to extract VTN Coordinator from the tar.bz2 file in the externalapps directory:
tar -C / -jxvf distribution.vtn-coordinator-6.2.0-Beryllium-bin.tar.bz2
This installs VTN Coordinator in the /usr/local/vtn directory. The name of the tar.bz2 file varies depending on the version; use the file name that is present in your directory.
Configuring database for VTN Coordinator:
/usr/local/vtn/sbin/db_setup
To start the Coordinator:
/usr/local/vtn/bin/vtn_start
Using VTN REST API:
Get the version of the VTN REST API using the command below, and make sure the setup is working:
curl --user admin:adminpass -H 'content-type: application/json' -X GET http://<VTN_COORDINATOR_IP_ADDRESS>:8083/vtn-webapi/api_version.json
The response should look like the following, although the version might differ:
{"api_version":{"version":"V1.2"}}
At the Karaf prompt, type the command below to ensure that the VTN packages are installed:
feature:list | grep vtn
Run any VTN Manager REST API:
curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X GET http://localhost:8181/restconf/operational/vtn:vtns
ps -ef | grep unc lists all the VTN apps.
Run any REST API for the VTN Coordinator version.
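For example, the api_version call shown earlier can be reused; replace <VTN_COORDINATOR_IP_ADDRESS> with the address of your Coordinator:
curl --user admin:adminpass -H 'content-type: application/json' -X GET http://<VTN_COORDINATOR_IP_ADDRESS>:8083/vtn-webapi/api_version.json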
This section introduces you to the OpenDaylight User Experience (DLUX) application.
DLUX provides a number of different Karaf features, which you can enable and disable separately. In Boron they are:
To log in to DLUX, after installing the application:
Note
OpenDaylight’s default credentials are admin for both the username and password.
After you log in to DLUX, if you have enabled only the odl-dlux-core feature, only the topology application is available in the left pane.
Note
To make sure the topology displays all the details, enable the odl-l2switch-switch feature in Karaf.
DLUX has other applications, such as Nodes and Yang UI, that do not appear until you enable their features, odl-dlux-node and odl-dlux-yangui respectively, in the Karaf distribution.
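For example, both features can be enabled from the Karaf console in a single command:
feature:install odl-dlux-node odl-dlux-yangui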
Note
If you install your own application in DLUX, it also appears in the left-hand navigation after a browser page refresh.
The Nodes module on the left pane enables you to view the network statistics and port information for the switches in the network.
To use the Nodes module:
The Topology tab displays a graphical representation of the network topology that has been created.
Note
DLUX does not allow for editing or adding topology information. The topology is generated and edited in other modules, e.g., the OpenFlow plugin. OpenDaylight stores this information in the MD-SAL datastore where DLUX can read and display it.
To view network topology:
The Yang UI module enables you to interact with the YANG-based MD-SAL datastore. For more information about YANG and how it interacts with the MD-SAL datastore, see the Controller and YANG Tools section of the OpenDaylight Developer Guide.
To use Yang UI:
Select Yang UI on the left pane. The right pane is divided in two parts.
The top part displays a tree of APIs, subAPIs, and buttons to call possible functions (GET, POST, PUT, and DELETE).
Note
Not every subAPI can call every function. For example, subAPIs in the operational store have GET functionality only.
Inputs can be populated from OpenDaylight when existing data is displayed, or they can be filled in by the user on the page and sent to OpenDaylight.
The buttons under the API tree vary depending on the subAPI specification. Common buttons are:
You must specify the xpath for all of these operations. The path is displayed in the same row before the buttons and may include text inputs for specific path element identifiers.
The bottom part of the right pane displays inputs according to the chosen subAPI.
Lists are handled as a special case. For example, a device can store multiple flows. In this case “flow” is the name of the list and every list element is identified by a unique key value. Elements of a list can, in turn, contain other lists.
In Yang UI, each list element is rendered with the name of the list it belongs to, its key, its value, and a button for removing it from the list.
After filling in the relevant inputs, click the Show Preview button under the API tree to display the request that will be sent to OpenDaylight. A pane with the text of the request is displayed on the right side once some input has been filled in.
To display topology:
Lists in Yang UI are displayed as trees. To expand or collapse a list, click the arrow before the name of the list. To configure list elements in Yang UI:
To add a new list element with empty inputs, use the plus icon-button + that is provided after the list name.
To remove several list elements, use the X button that is provided after every list element.
In the YANG-based data store all elements of a list must have a unique key. If you try to assign two or more elements the same key, a warning icon ! is displayed near their name buttons.
When the list contains at least one list element, after the + icon, there are buttons to select each individual list element. You can choose one of them by clicking on it. In addition, to the right of the list name, there is a button which will display a vertically scrollable pane with all the list elements.
Clustering is a mechanism that enables multiple processes and programs to work together as one entity. For example, when you search for something on google.com, it may seem like your search request is processed by only one web server. In reality, your search request is processed by many web servers connected in a cluster. Similarly, you can have multiple instances of OpenDaylight working together as one entity.
Advantages of clustering are:
The following sections describe how to set up clustering on both individual and multiple OpenDaylight instances.
The following sections describe how to set up multiple node clusters in OpenDaylight.
To implement clustering, the deployment considerations are as follows:
To set up a cluster with multiple nodes, we recommend that you use a minimum of three machines. You can set up a cluster with just two nodes. However, if one of the two nodes fails, the cluster will not be operational.
Note
This is because clustering in OpenDaylight requires a majority of the nodes to be up and one node cannot be a majority of two nodes.
Every device that belongs to a cluster needs to have an identifier. OpenDaylight uses the node’s role for this purpose. After you define the first node’s role as member-1 in the akka.conf file, OpenDaylight uses member-1 to identify that node.
Data shards are used to contain all or a certain segment of an OpenDaylight instance’s MD-SAL datastore. For example, one shard can contain all the inventory data while another shard contains all of the topology data.
If you do not specify a module in the modules.conf file and do not specify a shard in module-shards.conf, then (by default) all the data is placed in the default shard (which must also be defined in the module-shards.conf file).
Each shard has replicas configured. You can specify the details of where the replicas reside in the module-shards.conf file.
If you have a three node cluster and would like to be able to tolerate any single node crashing, a replica of every defined data shard must be running on all three cluster nodes.
Note
This is because OpenDaylight’s clustering implementation requires a majority of the defined shard replicas to be running in order to function. If you define data shard replicas on two of the cluster nodes and one of those nodes goes down, the corresponding data shards will not function.
If you have a three node cluster and have defined replicas for a data shard on each of those nodes, that shard will still function even if only two of the cluster nodes are running. Note that if one of those remaining two nodes goes down, the shard will not be operational.
It is recommended that you have multiple seed nodes configured. After a cluster member is started, it sends a message to all of its seed nodes. The cluster member then sends a join command to the first seed node that responds. If none of its seed nodes reply, the cluster member repeats this process until it successfully establishes a connection or it is shut down.
After a node becomes unreachable, it remains down for a configurable period of time (10 seconds, by default). Once a node goes down, you need to restart it so that it can rejoin the cluster. Once a restarted node joins a cluster, it will synchronize with the lead node automatically.
OpenDaylight includes some scripts to help with the clustering configuration.
Note
Scripts are stored in the OpenDaylight distribution/bin folder, and maintained in the distribution project repository in the folder distribution-karaf/src/main/assembly/bin/.
This script is used to configure the cluster parameters (e.g. akka.conf, module-shards.conf) on a member of the controller cluster. The user should restart the node to apply the changes.
Note
The script can be used at any time, even before the controller is started for the first time.
Usage:
bin/configure_cluster.sh <index> <seed_nodes_list>
The IP address at the provided index should belong to the member executing the script. When running this script on multiple seed nodes, keep the seed_node_list the same, and vary the index from 1 through N.
Optionally, shards can be configured in a more granular way by modifying the file “custom_shard_configs.txt” in the same folder as this tool. Please see that file for more details.
Example:
bin/configure_cluster.sh 2 192.168.0.1 192.168.0.2 192.168.0.3
The above command configures member 2 (IP address 192.168.0.2) of a cluster made up of 192.168.0.1, 192.168.0.2, and 192.168.0.3.
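For example, to configure the full three-node cluster from the command above, run the same script on each machine, varying only the index (the IP addresses are the sample values used above):
bin/configure_cluster.sh 1 192.168.0.1 192.168.0.2 192.168.0.3   # on 192.168.0.1
bin/configure_cluster.sh 2 192.168.0.1 192.168.0.2 192.168.0.3   # on 192.168.0.2
bin/configure_cluster.sh 3 192.168.0.1 192.168.0.2 192.168.0.3   # on 192.168.0.3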
To run OpenDaylight in a three node cluster, perform the following:
First, determine the three machines that will make up the cluster. After that, do the following on each machine:
Copy the OpenDaylight distribution zip file to the machine.
Unzip the distribution.
Open the following .conf files:
In each configuration file, make the following changes:
Find every instance of the following lines and replace 127.0.0.1 with the hostname or IP address of the machine on which this file resides and OpenDaylight will run:
netty.tcp {
hostname = "127.0.0.1"
Note
The value you need to specify will be different for each node in the cluster.
Find the following lines and replace 127.0.0.1 with the hostname or IP address of any of the machines that will be part of the cluster:
cluster {
seed-nodes = ["akka.tcp://opendaylight-cluster-data@127.0.0.1:2550",
<url-to-cluster-member-2>,
<url-to-cluster-member-3>]
Find the following section and specify the role for each member node. Here we assign the first node with the member-1 role, the second node with the member-2 role, and the third node with the member-3 role:
roles = [
"member-1"
]
Note
This step should use a different role on each node.
Open the configuration/initial/module-shards.conf file and update the replicas so that each shard is replicated to all three nodes:
replicas = [
"member-1",
"member-2",
"member-3"
]
For reference, see the sample configuration files below.
Move into the <karaf-distribution-directory>/bin directory.
Run the following command:
JAVA_MAX_MEM=4G JAVA_MAX_PERM_MEM=512m ./karaf
Enable clustering by running the following command at the Karaf command line:
feature:install odl-mdsal-clustering
OpenDaylight should now be running in a three node cluster. You can use any of the three member nodes to access the data residing in the datastore.
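As a simple smoke test (this assumes you have also installed the odl-restconf feature; <member-ip> is whichever member you want to query), you can ask any member’s RESTCONF endpoint for the list of available modules:
curl -u admin:admin http://<member-ip>:8181/restconf/modules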
Sample akka.conf
file:
odl-cluster-data {
bounded-mailbox {
mailbox-type = "org.opendaylight.controller.cluster.common.actor.MeteredBoundedMailbox"
mailbox-capacity = 1000
mailbox-push-timeout-time = 100ms
}
metric-capture-enabled = true
akka {
loglevel = "DEBUG"
loggers = ["akka.event.slf4j.Slf4jLogger"]
actor {
provider = "akka.cluster.ClusterActorRefProvider"
serializers {
java = "akka.serialization.JavaSerializer"
proto = "akka.remote.serialization.ProtobufSerializer"
}
serialization-bindings {
"com.google.protobuf.Message" = proto
}
}
remote {
log-remote-lifecycle-events = off
netty.tcp {
hostname = "10.194.189.96"
port = 2550
maximum-frame-size = 419430400
send-buffer-size = 52428800
receive-buffer-size = 52428800
}
}
cluster {
seed-nodes = ["akka.tcp://opendaylight-cluster-data@10.194.189.96:2550",
"akka.tcp://opendaylight-cluster-data@10.194.189.98:2550",
"akka.tcp://opendaylight-cluster-data@10.194.189.101:2550"]
auto-down-unreachable-after = 10s
roles = [
"member-1"
]
}
}
}
odl-cluster-rpc {
bounded-mailbox {
mailbox-type = "org.opendaylight.controller.cluster.common.actor.MeteredBoundedMailbox"
mailbox-capacity = 1000
mailbox-push-timeout-time = 100ms
}
metric-capture-enabled = true
akka {
loglevel = "INFO"
loggers = ["akka.event.slf4j.Slf4jLogger"]
actor {
provider = "akka.cluster.ClusterActorRefProvider"
}
remote {
log-remote-lifecycle-events = off
netty.tcp {
hostname = "10.194.189.96"
port = 2551
}
}
cluster {
seed-nodes = ["akka.tcp://opendaylight-cluster-rpc@10.194.189.96:2551"]
auto-down-unreachable-after = 10s
}
}
}
Sample module-shards.conf
file:
module-shards = [
{
name = "default"
shards = [
{
name="default"
replicas = [
"member-1",
"member-2",
"member-3"
]
}
]
},
{
name = "topology"
shards = [
{
name="topology"
replicas = [
"member-1",
"member-2",
"member-3"
]
}
]
},
{
name = "inventory"
shards = [
{
name="inventory"
replicas = [
"member-1",
"member-2",
"member-3"
]
}
]
},
{
name = "toaster"
shards = [
{
name="toaster"
replicas = [
"member-1",
"member-2",
"member-3"
]
}
]
}
]
This script is used to enable or disable the config datastore persistence. The default state is enabled but there are cases where persistence may not be required or even desired. The user should restart the node to apply the changes.
Note
The script can be used at any time, even before the controller is started for the first time.
Usage:
bin/set_persistence.sh <on/off>
Example:
bin/set_persistence.sh off
The above command will disable the config datastore persistence.
XSQL is an XML-based query language that describes simple stored procedures which parse XML data, query or update database tables, and compose XML output. XSQL allows you to query tree models like a sequential database. For example, you could run a query that lists all of the ports configured on a particular module and their attributes.
The following sections cover the XSQL installation process, supported XSQL commands, and the way to structure queries.
To run commands from the XSQL console, you must first install XSQL on your system:
Navigate to the directory in which you unzipped OpenDaylight
Start Karaf:
./bin/karaf
Install XSQL:
feature:install odl-mdsal-xsql
To enter a command in the XSQL console, structure the command as follows:
odl:xsql <XSQL command>
The following table describes the commands supported in this OpenDaylight release.
Supported XSQL Console Commands
Command | Description |
---|---|
r | Repeats the last command you executed. |
list vtables | Lists the schema node containers that are currently installed. Whenever an OpenDaylight module is installed, its YANG model is placed in the schema context. At that point, the XSQL receives a notification, confirms that the module’s YANG model resides in the schema context and then maps the model to XSQL by setting up the necessary vtables and vfields. This command is useful when you need to determine vtable information for a query. |
list vfields <vtable name> | Lists the vfields present in a specific vtable. This command is useful when you need to determine vfields information for a query. |
jdbc <ip address> | When the ODL server is behind a firewall, and the JDBC client cannot connect to the JDBC server, run this command to start the client as a server and establish a connection. |
exit | Closes the console. |
tocsv | Enables or disables the forwarding of query output as a .csv file. |
filename <filename> | Specifies the .csv file to which the query data is exported. If you do not specify a value for this option when the tocsv option is enabled, the filename for the query data file is generated automatically. |
You can run a query to extract information that meets the criteria you specify using the information provided by the list vtables and list vfields <vtable name> commands. Any query you run should be structured as follows:
select <vfields you want to search for, separated by a comma and a space> from <vtables you want to search in, separated by a comma and a space> where <criteria> <criteria operator>;
For example, if you want to search the nodes/node ID field in the nodes/node-connector table and find every instance of the Hardware-Address object that contains BA in its text string, enter the following query:
select nodes/node.ID from nodes/node-connector where Hardware-Address like '%BA%';
The following criteria operators are supported:
Supported XSQL Query Criteria Operators
Criteria Operators | Description |
---|---|
= | Lists results that equal the value you specify. |
!= | Lists results that do not equal the value you specify. |
like | Lists results that contain the substring you specify. For example, if you specify like %BC%, every string that contains that particular substring is displayed. |
< | Lists results that are less than the value you specify. |
> | Lists results that are more than the value you specify. |
and | Lists results that match both values you specify. |
or | Lists results that match either of the two values you specify. |
>= | Lists results that are more than or equal to the value you specify. |
<= | Lists results that are less than or equal to the value you specify. |
is null | Lists results for which no value is assigned. |
not null | Lists results for which any value is assigned. |
skip | Use this operator to list matching results from a child node, even if its parent node does not meet the specified criteria. See the following example for more information. |
If you are looking at the following structure and want to determine all of the ports that belong to a YY type module:
If you specify Module.Type=’YY’ in your query criteria, the ports associated with module 1.1 will not be returned since its parent module is type XX. Instead, enter Module.Type=’YY’ or skip Module!=’YY’. This tells XSQL to disregard any parent module data that does not meet the type YY criteria and collect results for any matching child modules. In this example, you are instructing the query to skip module 1 and collect the relevant data from module 1.1.
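Putting that criteria into the general query structure shown earlier (the vfield and vtable placeholders are left generic because they depend on your model), the query would take a form like the following:
select <vfields> from <vtables> where Module.Type='YY' or skip Module!='YY';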
This feature allows NETCONF/RESTCONF users to determine the version of OpenDaylight they are communicating with.
Follow these steps to install the version feature:
Navigate to the directory in which you installed OpenDaylight
Start Karaf:
./bin/karaf
Install Version feature:
feature:install odl-distribution-version
Note
For RESTCONF access, it is recommended to install odl-restconf and odl-netconf-connector-ssh.
Example of RESTCONF request using curl from bash:
$ curl -u 'admin:admin' localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/controller-config/yang-ext:mount/config:modules/module/odl-distribution-version:odl-version/odl-distribution-version
Example response (formatted):
{
"module": [
{
"type": "odl-distribution-version:odl-version",
"name": "odl-distribution-version",
"odl-distribution-version:version": "0.5.0-SNAPSHOT"
}
]
}
This document discusses the various security issues that might affect OpenDaylight. The document also lists specific recommendations to mitigate security risks.
This document also contains information about the corrective steps you can take if you discover a security issue with OpenDaylight, and if necessary, contact the Security Response Team, which is tasked with identifying and resolving security threats.
There are many different kinds of security vulnerabilities that could affect an OpenDaylight deployment, but this guide focuses on those where (a) the servers, virtual machines or other devices running OpenDaylight have been properly physically (or virtually in the case of VMs) secured against untrusted individuals and (b) individuals who have access, either via remote logins or physically, will not attempt to attack or subvert the deployment intentionally or otherwise.
While those attack vectors are real, they are out of the scope of this document.
What remains in scope is attacks launched from a server, virtual machine, or device other than the one running OpenDaylight where the attack does not have valid credentials to access the OpenDaylight deployment.
The rest of this document gives specific recommendations for deploying OpenDaylight in a secure manner, but first we highlight some high-level security advantages of OpenDaylight.
Separating the control and management planes from the data plane (both logically and, in many cases, physically) allows possible security threats to be forced into a smaller attack surface.
Having centralized information and network control gives network administrators more visibility and control over the entire network, enabling them to make better decisions faster. At the same time, centralization of network control can be an advantage only if access to that control is secure.
Note
While both previous advantages improve security, they also make an OpenDaylight deployment an attractive target for attack, making an understanding of these security considerations even more important.
The ability to more rapidly evolve southbound protocols and how they are used provides more and faster mechanisms to enact appropriate security mitigations and remediations.
OpenDaylight is built from OSGi bundles and the Karaf Java container. Both Karaf and OSGi provide some level of isolation with explicit code boundaries, package imports, package exports, and other security-related features.
OpenDaylight has a history of rapidly addressing known vulnerabilities and a well-defined process for reporting and dealing with them.
We recommend that you follow the deployment guidelines in setting up OpenDaylight to minimize security threats.
The default credentials should be changed before deploying OpenDaylight.
OpenDaylight should be deployed in a private network that cannot be accessed from the internet.
Separate the data network (that connects devices using the network) from the management network (that connects the network devices to OpenDaylight).
Note
Deploying OpenDaylight on a separate, private management network does not eliminate threats, but only mitigates them. By construction, some messages must flow from the data network to the management network, e.g., OpenFlow packet_in messages, and these create an attack surface even if it is a small one.
Implement an authentication policy for devices that connect to both the data and management network. These are the devices which bridge, likely untrusted, traffic from the data network to the management network.
OSGi is a Java-specific framework that improves the way that Java classes interact within a single JVM. It provides an enhanced version of the java.lang.SecurityManager (ConditionalPermissionAdmin) in terms of security.
Java provides a security framework that allows a security policy to grant permissions, such as reading a file or opening a network connection, to specific code. The code may be classes from a JAR file loaded from a specific URL, or a class signed by a specific key. OSGi builds on the standard Java security model to add the following features:
For more information, refer to http://www.osgi.org/Main/HomePage.
Apache Karaf is an OSGi-based runtime platform which provides a lightweight container for OpenDaylight and applications. Apache Karaf uses either the Apache Felix or Eclipse Equinox OSGi framework, and provides additional features on top of the framework.
Apache Karaf provides a security framework based on Java Authentication and Authorization Service (JAAS) in compliance with OSGi recommendations, while providing RBAC (Role-Based Access Control) mechanism for the console and Java Management Extensions (JMX).
The Apache Karaf security framework is used internally to control the access to the following components:
The remote management capabilities are present in Apache Karaf by default; however, they can be disabled through various configuration changes. These configuration options may be applied to the OpenDaylight Karaf distribution.
Note
Refer to the following list of publications for more information on implementing security for the Karaf container.
You can lock down your deployment post installation. Set karaf.shutdown.port=-1 in etc/custom.properties or etc/config.properties to disable the remote shutdown port.
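For example, one way to apply this from the installation directory (illustrative; you can equally edit the file by hand) is:
echo "karaf.shutdown.port=-1" >> etc/custom.properties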
Many individual southbound plugins provide mechanisms to secure their communication with network devices. For example, the OpenFlow plugin supports TLS connections with bi-directional authentication and the NETCONF plugin supports connecting over SSH. Meanwhile, the Unified Secure Channel plugin provides a way to form secure, remote connections for supported devices.
When deploying OpenDaylight, you should carefully investigate the secure mechanisms to connect to devices using the relevant plugins.
AAA stands for Authentication, Authorization, and Accounting. All three can help improve the security posture of an OpenDaylight deployment. In this release, only authentication is fully supported, while authorization is an experimental feature and accounting remains a work in progress.
The vast majority of OpenDaylight’s northbound APIs (and all RESTCONF APIs) are protected by AAA by default when the odl-restconf feature is installed. Cases where APIs are not protected by AAA are noted in the per-project release notes.
By default, OpenDaylight has only one user account with the username and password admin. This should be changed before deploying OpenDaylight.
While OpenDaylight clustering provides many benefits including high availability, scale-out performance, and data durability, it also opens a new attack surface in the form of the messages exchanged between the various instances of OpenDaylight in the cluster. In the current OpenDaylight release, these messages are neither encrypted nor authenticated meaning that anyone with access to the management network where OpenDaylight exchanges these clustering messages can forge and/or read the messages. This means that if clustering is enabled, it is even more important that the management network be kept secure from any untrusted entities.