https://git.opendaylight.org/gerrit/#/q/topic:coe_integration
This spec proposes how to integrate the COE and Netvirt projects to enable networking (L2 and L3 support) for containers.
The COE (Container Orchestration Engine) project aims at developing a framework for integrating a container orchestration engine (such as Kubernetes) with OpenDaylight. Netvirt will serve as the backend for COE, as Netvirt provides constructs generic enough to work for VMs as well as containers.
The Netvirt project currently does not have a driver that works with a Kubernetes baremetal cluster. The COE project aims at enabling this, and will require a plugin in the Netvirt project to convert events from Kubernetes into the required Netvirt constructs.
Kubernetes imposes some fundamental requirements on the networking implementation:
The Kubernetes model is for each pod to have an IP in a flat shared namespace that allows full communication with physical computers and containers across the network.
The high-level view of the end-to-end solution is given in the picture below:
A new module called coe will be added to Netvirt, which will serve as the watcher for all container-orchestration-related events. This module will be responsible for converting the COE-related constructs to Netvirt constructs.
COE will use the DHCP dynamic allocation feature in Netvirt, which has some missing parts for the integration to work. The DHCP module's bind-service logic so far works only for Neutron ports; it has to be enhanced to work for Kubernetes pods as well.
The ARP responder logic of Netvirt works only for Neutron ports; it needs enhancements to work with Kubernetes ports, so that ARP responses can be sent from OVS directly, without forwarding the requests to ODL.
InterfaceManager currently treats only Nova port patterns (tap/vhu) and tunnel port patterns as unique. For any other port names, the datapath-node-id will be prefixed to the port name. The CNI plugin creates unique ports that start with the "veth" prefix, and this prefix needs to be added to the set of unique port patterns in Genius.
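The key-derivation rule described above can be sketched as follows. This is an illustrative Python sketch, not the actual Genius InterfaceManager code (which is Java); the function name and key format are assumptions.

```python
# Hypothetical sketch of the unique-port-pattern rule: ports matching a known
# unique prefix keep their name; all others get the datapath-node-id prefixed.
UNIQUE_PORT_PREFIXES = ("tap", "vhu", "tun", "veth")  # "veth" added for CNI-created ports

def interface_key(dpn_id, port_name):
    """Return the identifier used for a port on a datapath node."""
    if port_name.startswith(UNIQUE_PORT_PREFIXES):
        return port_name                 # globally unique, use as-is
    return f"{dpn_id}:{port_name}"       # disambiguate with the node id
```

With "veth" in the prefix set, a CNI-created port such as `veth1234` is treated as unique across the cluster instead of being scoped to one node.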
This will enable the default Kubernetes behavior of allowing all traffic, from all sources inside or outside the cluster, to all pods within the cluster. This use case does not add multi-tenancy support.
A network isolation policy will impose limitations on connectivity from an optional set of traffic sources to an optional set of destination TCP/UDP ports. Regardless of network policy, pods should be accessible from the host on which they are running, to allow local health checks. This use case does not address multi-tenancy.
More enhanced use cases, adding extra functionality, can be added in the future.
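The isolation semantics above can be sketched as a small decision function. This is an illustrative Python sketch of the intended behavior, not the Netvirt implementation; the policy shape and function name are assumptions.

```python
# Sketch of network-isolation semantics: traffic is allowed only from the
# policy's source set to its destination-port set. A value of None means
# "any". The hosting node can always reach its local pods (health checks).
def is_allowed(policy, src, dst_port, src_is_host=False):
    """policy: {'sources': set of allowed source ids or None,
                'ports':   set of allowed TCP/UDP ports or None}"""
    if src_is_host:
        return True  # host access is permitted regardless of policy
    src_ok = policy["sources"] is None or src in policy["sources"]
    port_ok = policy["ports"] is None or dst_port in policy["ports"]
    return src_ok and port_ok
```

A policy with both fields set to `None` reproduces the default allow-all behavior of the previous use case.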
In order to support Kubernetes networking via Netvirt, we should define how the COE model maps onto the Netvirt model.
COE entity | Netvirt entity | Notes |
---|---|---|
node + namespace | elan-instance | Whenever the first pod under a namespace in a node is created, an elan-instance has to be created. |
namespace | vpn-instance | Whenever the first pod under a namespace is created, a vpn-instance has to be created. |
pod | elan-interface | For each pod created, an elan-interface has to be created, based on its node and namespace. |
pod | vpn-interface | For each pod created, a vpn-interface has to be created, based on its namespace. |
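The mapping table above can be summarized as a pod-creation handler. This is an illustrative Python sketch under assumed names; the real coe module translates these events into MD-SAL datastore writes rather than in-memory sets.

```python
# Sketch of the COE-to-Netvirt mapping: the first pod for a (node, namespace)
# pair triggers an elan-instance; the first pod in a namespace triggers a
# vpn-instance; every pod gets an elan-interface and a vpn-interface.
created = {"elan-instances": set(), "vpn-instances": set(),
           "elan-interfaces": [], "vpn-interfaces": []}

def on_pod_created(node, namespace, pod_uid):
    elan = f"{node}:{namespace}"                 # node + namespace -> elan-instance
    created["elan-instances"].add(elan)          # set: created only once
    created["vpn-instances"].add(namespace)      # namespace -> vpn-instance
    created["elan-interfaces"].append((pod_uid, elan))
    created["vpn-interfaces"].append((pod_uid, namespace))
```

For example, two pods in the same namespace on the same node share one elan-instance and one vpn-instance but get separate interfaces.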
No pipeline changes will be introduced as part of this feature.
The feature should operate in ODL Clustered environment reliably.
Not covered by this Design Document.
Oxygen
An alternative for container networking is to use kuryr-kubernetes, which works with ODL as the backend. However, it will not work in an environment where OpenStack is not present. There are scenarios where baremetal Kubernetes clusters have to work without OpenStack, and that is where this feature comes into the picture.
URL: restconf/config/dhcp_allocation_pool:dhcp_allocation_pool/
Sample JSON data
{
"dhcp_allocation_pool:network": [
{
"dhcp_allocation_pool:allocation-pool": [
{
"dhcp_allocation_pool:subnet": "192.168.10.0/24",
"dhcp_allocation_pool:allocate-to": "192.168.10.50",
"dhcp_allocation_pool:gateway": "192.168.10.2",
"dhcp_allocation_pool:allocate-from": "192.168.10.3"
}
],
"dhcp_allocation_pool:network-id": "pod-namespace"
}
]
}
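A pool like the sample above is only usable if the allocate-from/allocate-to range lies inside the subnet and excludes the gateway address. The following Python helper is an illustrative sanity check (not part of Netvirt; the function name is an assumption).

```python
import ipaddress

# Illustrative validator for a dhcp_allocation_pool entry: the allocation
# range must fall inside the subnet, be ordered, and not contain the gateway.
def validate_pool(subnet, allocate_from, allocate_to, gateway):
    net = ipaddress.ip_network(subnet)
    lo = ipaddress.ip_address(allocate_from)
    hi = ipaddress.ip_address(allocate_to)
    gw = ipaddress.ip_address(gateway)
    return lo in net and hi in net and lo <= hi and not (lo <= gw <= hi)
```

The sample payload passes this check: the range 192.168.10.3-192.168.10.50 sits inside 192.168.10.0/24 and excludes the gateway 192.168.10.2.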
URL: restconf/config/pod:coe
Sample JSON data
{
"pod:pods": [
{
"pod:version": "Some version",
"pod:uid": "AC092D9B-E9Eb-BAE2-eEd8-74Aca2B7Fa9C",
"pod:interface": [
{
"pod:uid": "7bA91A3A-f17E-2eBB-eDec-3BBBEa27DCa7",
"pod:ip-address": "0.147.0.7",
"pod:network-id": "fBAD80df-B0B4-0580-8D14-11FcaCED2ac6",
"pod:network-type": "FLAT",
"pod:segmentation-id": "0"
}
]
}
]
}
URL: restconf/config/service:service-information
Sample JSON data
{
"service:service-information": {
"service:services": [
{
"service:uid": "EeafFFB7-D9Fc-aAeD-FBc9-8Af8BFaacDD9",
"service:cluster-ip-address": "5.21.5.0",
"service:endpoints": [
"AFbcF0EB-Fc3f-acea-A438-5CFDfCEfbcb0"
]
}
]
}
}
Frederick Kautz <fkautz@redhat.com>
Mohamed El-serngawy <m.elserngawy@gmail.com>
This will require changes to User Guide and Developer Guide.