Honeycomb plugin development tutorial
Honeycomb plugin development guide for the 16.09 Honeycomb release.
Plugin overview
Honeycomb provides a framework for plugins to participate in data handling. The plugins use the YANG modeling language to describe:
- the data they can handle (Create, Read, Update, Delete operations)
- the notifications they emit
Note: RPCs are not supported in this release.
A plugin usually consists of:
- YANG models - These models contain the data and notification definitions implemented by the plugin. ODL's Yangtools project is used to generate Java APIs from those models (called Binding Aware APIs in ODL), which are later used in the translation code.
- Set of readers - Readers provide operational/state data from the plugin or its underlying layer. Operational/state data is the current state of the plugin or its underlying layer. Readers return this operational data by, for example, reading from the underlying layer and transforming it into YANG-modeled data.
- Set of writers - Writers handle configuration data for the plugin or its underlying layer. Configuration data is the intent sent to Honeycomb that should be passed to plugins or their underlying layers. Writers handle this configuration data by transforming YANG-modeled data into, for example, underlying layer calls.
- Set of initializers - Initializers are invoked right after Honeycomb starts. The goal here is to read the current operational/state data of the plugin or its underlying layer and transform that operational data into configuration data. This enables reconciliation in cases when Honeycomb loses its persisted data, or is started fresh while the underlying layer already contains configuration that is manifested as operational/state data.
- Plugin configuration - Usually configuration in JSON format plus its Java equivalent.
- Set of notification producers - If there are any notifications, the producers transform the data into YANG-modeled notifications and emit them.
- Module - A small class instantiating and exposing the plugin's components.
What's good to add:
- Unit tests
- Documentation
- Sample REST or NETCONF requests
Prerequisites
Make sure to check the Honeycomb/Setting_Up_Your_Dev_Environment page to properly set up the environment.
Developing generic plugins
Since Honeycomb is a generic agent, any plugin (translation code) can be injected into the framework, creating a custom agent that provides RESTCONF/NETCONF northbound interfaces out of the box.
Developing plugin code
Honeycomb provides a maven archetype to generate a plugin skeleton. To use that archetype, run maven:
mvn -X archetype:generate -DarchetypeGroupId=io.fd.honeycomb.tools -DarchetypeArtifactId=honeycomb-plugin-archetype -DarchetypeVersion=1.16.9
Fill in the parameters e.g.
groupId: io.fd.honeycomb.tutorial
artifactId: sample-plugin
version: 1.16.9
package: io.fd.honeycomb.tutorial
And the following structure should be created:
sample-plugin/
├── pom.xml
├── sample-plugin-api
│   ├── pom.xml
│   └── src
│       └── main
│           ├── java
│           └── yang
│               └── sample-plugin.yang
└── sample-plugin-impl
    ├── pom.xml
    ├── Readme.adoc
    └── src
        ├── main
        │   └── java
        │       └── io
        │           └── fd
        │               └── honeycomb
        │                   └── tutorial
        │                       ├── CrudService.java
        │                       ├── ElementCrudService.java
        │                       ├── init
        │                       │   └── ConfigDataInitializer.java
        │                       ├── ModuleConfiguration.java
        │                       ├── Module.java
        │                       ├── read
        │                       │   ├── ElementStateCustomizer.java
        │                       │   └── ModuleStateReaderFactory.java
        │                       └── write
        │                           ├── ElementCustomizer.java
        │                           └── ModuleWriterFactory.java
        └── test
            └── java
There are 2 modules:
- sample-plugin-api - Contains YANG models and generates Java APIs from the models.
- sample-plugin-impl - Contains: Readers, Writers, Initializers, Notification producers (not yet), Configuration and Wiring.
There are plenty of comments within the code, so it is advised to import the code into an IDE and take a look around.
The archetype generates a plugin that is fully working right from the start: it contains all the necessary components, works on a sample YANG model and provides some sample values.
Building the code
To build the code, just execute maven:
mvn clean install
And that's it. This is a working Honeycomb plugin.
Adding notifications
No notification producer is generated by the archetype, but it is pretty straightforward to add one.
First, the notification has to be defined in YANG (sample-plugin-api/src/main/yang/sample-plugin.yang) e.g.
notification sample-notification {
    leaf content {
        type string;
    }
}
Now rebuild the plugin to generate new APIs for our notification.
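After the rebuild, the generated binding API can be used to construct the notification with a builder. A minimal sketch follows (the same builder pattern appears in the producer code below; the generated package name depends on the YANG module's namespace and revision, and the wrapper class here is just for illustration):

// Sketch only: uses the classes generated from sample-plugin.yang for the notification above
import org.opendaylight.yang.gen.v1.urn.opendaylight.params.xml.ns.yang.sample.plugin.rev160918.SampleNotification;
import org.opendaylight.yang.gen.v1.urn.opendaylight.params.xml.ns.yang.sample.plugin.rev160918.SampleNotificationBuilder;

public class NotificationExample {

    public static SampleNotification buildSample() {
        // Each YANG leaf gets a typed setter on the generated builder
        return new SampleNotificationBuilder()
                .setContent("Hello world")
                .build();
    }
}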
The next part is implementing the notification producer. The first thing to do is to add a dependency on notification-api, since it's not included by default. Update sample-plugin-impl's pom file with:
<dependency>
  <groupId>io.fd.honeycomb</groupId>
  <artifactId>notification-api</artifactId>
  <version>${honeycomb.infra.version}</version>
</dependency>
Now, the producer code can be added:
package io.fd.honeycomb.tutorial.notif;

import io.fd.honeycomb.notification.ManagedNotificationProducer;
import io.fd.honeycomb.notification.NotificationCollector;
import java.util.Collection;
import java.util.Collections;
import javax.annotation.Nonnull;
import org.opendaylight.yang.gen.v1.urn.opendaylight.params.xml.ns.yang.sample.plugin.rev160918.SampleNotification;
import org.opendaylight.yang.gen.v1.urn.opendaylight.params.xml.ns.yang.sample.plugin.rev160918.SampleNotificationBuilder;
import org.opendaylight.yangtools.yang.binding.Notification;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/**
 * Notification producer for sample plugin
 */
public class SampleNotificationProducer implements ManagedNotificationProducer {

    private static final Logger LOG = LoggerFactory.getLogger(SampleNotificationProducer.class);

    private Thread thread;

    @Override
    public void start(@Nonnull final NotificationCollector collector) {
        LOG.info("Starting notification stream for interfaces");

        // Simulating notification producer
        thread = new Thread(() -> {
            while (true) {
                if (Thread.currentThread().isInterrupted()) {
                    return;
                }

                try {
                    Thread.sleep(2000);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    break;
                }

                final SampleNotification notification = new SampleNotificationBuilder()
                        .setContent("Hello world " + System.currentTimeMillis())
                        .build();
                LOG.info("Emitting notification: {}", notification);
                collector.onNotification(notification);
            }
        }, "NotificationProducer");
        thread.setDaemon(true);
        thread.start();
    }

    @Override
    public void stop() {
        if (thread != null) {
            thread.interrupt();
        }
    }

    @Nonnull
    @Override
    public Collection<Class<? extends Notification>> getNotificationTypes() {
        // Producing only this single type of notification
        return Collections.singleton(SampleNotification.class);
    }

    @Override
    public void close() throws Exception {
        stop();
    }
}
This is placed in sample-plugin/sample-plugin-impl/src/main/java/io/fd/honeycomb/tutorial/notif/SampleNotificationProducer.java.
Note: This is a sample producer that creates a thread to periodically emit a sample notification.
Now it needs to be exposed from the plugin. The configure method in the Module class needs to be updated with:
Multibinder.newSetBinder(binder(), ManagedNotificationProducer.class).addBinding().to(SampleNotificationProducer.class);
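For orientation, here is a sketch of what the Module might look like after the change. The exact bindings generated by the archetype may differ slightly, and the complete, final Module for the VPP version of this plugin is shown later in this tutorial:

// Sketch only: archetype-generated Module with the notification producer binding added
package io.fd.honeycomb.tutorial;

import com.google.inject.AbstractModule;
import com.google.inject.multibindings.Multibinder;
import io.fd.honeycomb.data.init.DataTreeInitializer;
import io.fd.honeycomb.notification.ManagedNotificationProducer;
import io.fd.honeycomb.translate.read.ReaderFactory;
import io.fd.honeycomb.translate.write.WriterFactory;
import io.fd.honeycomb.tutorial.init.ConfigDataInitializer;
import io.fd.honeycomb.tutorial.notif.SampleNotificationProducer;
import io.fd.honeycomb.tutorial.read.ModuleStateReaderFactory;
import io.fd.honeycomb.tutorial.write.ModuleWriterFactory;
import net.jmob.guice.conf.core.ConfigurationModule;

public final class Module extends AbstractModule {

    @Override
    protected void configure() {
        // bindings generated by the archetype
        install(ConfigurationModule.create());
        requestInjection(ModuleConfiguration.class);
        Multibinder.newSetBinder(binder(), ReaderFactory.class).addBinding().to(ModuleStateReaderFactory.class);
        Multibinder.newSetBinder(binder(), WriterFactory.class).addBinding().to(ModuleWriterFactory.class);
        Multibinder.newSetBinder(binder(), DataTreeInitializer.class).addBinding().to(ConfigDataInitializer.class);

        // newly added binding exposing the notification producer to Honeycomb's notification infrastructure
        Multibinder.newSetBinder(binder(), ManagedNotificationProducer.class).addBinding().to(SampleNotificationProducer.class);
    }
}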
The plugin needs to be rebuilt, but that's it for notification producers.
Creating custom distribution
The plugin is now ready to have a Honeycomb distribution built for it. This section provides information on how to create a custom Honeycomb distribution.
A new maven module needs to be created. So, in the sample-plugin folder:
mkdir sample-distribution
cd sample-distribution
mkdir -p src/main/java/io/fd/honeycomb/tutorial
Then create the pom:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

  <parent>
    <groupId>io.fd.honeycomb.common</groupId>
    <artifactId>minimal-distribution-parent</artifactId>
    <version>1.16.9</version>
  </parent>

  <modelVersion>4.0.0</modelVersion>
  <groupId>io.fd.honeycomb.tutorial</groupId>
  <artifactId>sample-distribution</artifactId>
  <version>1.16.9</version>

  <properties>
    <exec.parameters>-Xms128m -Xmx128m</exec.parameters>
    <main.class>io.fd.honeycomb.tutorial.Main</main.class>
    <interfaces.mapping.version>1.16.9</interfaces.mapping.version>
    <honeycomb.min.distro.version>1.16.9</honeycomb.min.distro.version>
  </properties>

  <build>
    <plugins>
      <plugin>
        <artifactId>maven-compiler-plugin</artifactId>
      </plugin>
      <plugin>
        <groupId>org.codehaus.gmaven</groupId>
        <artifactId>groovy-maven-plugin</artifactId>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-jar-plugin</artifactId>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-dependency-plugin</artifactId>
      </plugin>
      <plugin>
        <artifactId>maven-assembly-plugin</artifactId>
      </plugin>
    </plugins>
  </build>

  <dependencies>
    <!-- Dependency on sample plugin -->
    <dependency>
      <groupId>io.fd.honeycomb.tutorial</groupId>
      <artifactId>sample-plugin-impl</artifactId>
      <version>${interfaces.mapping.version}</version>
    </dependency>
    <!-- Dependency on distribution base -->
    <dependency>
      <groupId>io.fd.honeycomb</groupId>
      <artifactId>minimal-distribution</artifactId>
      <version>${honeycomb.min.distro.version}</version>
    </dependency>
  </dependencies>
</project>
Now, a Main class has to be added in the folder src/main/java/io/fd/honeycomb/tutorial:
package io.fd.honeycomb.tutorial;

import com.google.common.collect.Lists;
import com.google.inject.Module;
import java.util.List;

public class Main {

    public static void main(String[] args) {
        final List<Module> sampleModules = Lists.newArrayList(io.fd.honeycomb.infra.distro.Main.BASE_MODULES);

        sampleModules.add(new io.fd.honeycomb.tutorial.Module());

        io.fd.honeycomb.infra.distro.Main.init(sampleModules);
    }
}
The last thing to do is to update sample-plugin/pom.xml with:
<module>sample-distribution</module>
Another rebuild and the distribution should be created in sample-distribution/target.
Adding existing plugins to the mix
In the previous section, a custom Honeycomb distribution was created. This section shows how to add existing plugins to the new distribution.
So in order to add another existing sample (sample interface plugin from Honeycomb) into the distribution, update the sample-plugin/sample-distribution/pom.xml with:
<dependency>
  <groupId>io.fd.honeycomb.samples.interfaces</groupId>
  <artifactId>interfaces-mapping</artifactId>
  <version>${interfaces.mapping.version}</version>
</dependency>
Now, in Main, add this line:
sampleModules.add(new SampleInterfaceModule());
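For clarity, a sketch of the Main class after this change follows. The import for SampleInterfaceModule depends on the package used inside the interfaces-mapping artifact, so treat that import as an assumption:

package io.fd.honeycomb.tutorial;

import com.google.common.collect.Lists;
import com.google.inject.Module;
import java.util.List;
// Assumed location of the sample interface plugin's Guice module; adjust to the actual package
import io.fd.honeycomb.samples.interfaces.mapping.SampleInterfaceModule;

public class Main {

    public static void main(String[] args) {
        final List<Module> sampleModules = Lists.newArrayList(io.fd.honeycomb.infra.distro.Main.BASE_MODULES);

        // Plugins bundled into this custom distribution
        sampleModules.add(new io.fd.honeycomb.tutorial.Module());
        sampleModules.add(new SampleInterfaceModule());

        io.fd.honeycomb.infra.distro.Main.init(sampleModules);
    }
}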
That's it, just rebuild.
Verifying distribution
The distribution with this sample plugin and sample interface plugin is now available and can be tested.
The distribution can now be found in sample-plugin/sample-distribution/target as:
- zip archive
- tar.gz archive
- folder
The distribution can be started by:
sudo ./sample-distribution/target/sample-distribution-1.16.9-hc/sample-distribution-1.16.9/honeycomb
Note: the honeycomb-start script is the background alternative.
Honeycomb will display the following message in the log:
2016-09-02 13:20:30.424 CEST [main] INFO io.fd.honeycomb.infra.distro.Main - Honeycomb started successfully!
and that means Honeycomb was started successfully.
Testing over RESTCONF
Reading sample-plugin operational data:
curl -u admin:admin http://localhost:8181/restconf/operational/sample-plugin:sample-plugin-state
Writing sample-plugin operational data:
Not possible; per the YANG spec, operational data is read-only.
Writing sample-plugin config data:
curl -H 'Content-Type: application/json' -H 'Accept: application/json' -u admin:admin -X PUT -d '{"sample-plugin":{"element":[{"id":10,"description":"This is a example of loaded data"}]}}' http://localhost:8181/restconf/config/sample-plugin:sample-plugin
Reading sample-plugin config data:
curl -u admin:admin http://localhost:8181/restconf/config/sample-plugin:sample-plugin
Testing over NETCONF
A NETCONF testing guide, including notifications, can be found in Honeycomb/Running_Honeycomb.
Full working example
Full working example on github: https://github.com/marosmars/honeycomb-samples
Developing plugins for VPP
Honeycomb's primary use case is to provide an agent for VPP. This section provides a tutorial on how to develop a Honeycomb plugin that translates YANG-modeled data into VPP binary API invocations.
Analyzing VPP's API
For this tutorial, VPP's VXLAN management API is used. Honeycomb already contains VXLAN management translation code inside the V3PO plugin; this will be a simplified version.
Looking at VPP's API definition file, these are the calls related to VXLAN:
- vxlan_add_del_tunnel - Creates and deletes a VXLAN tunnel (update is not supported)
- vxlan_tunnel_dump - Reads all VXLAN tunnels
These are the shared-memory, binary APIs of VPP that would be difficult to use from Java directly. However, VPP contains a jvpp component that is completely generated from VPP's API definition file and allows Java applications to manage VPP in plain Java, using JNI in the background. Honeycomb provides a component wrapping jvpp that can be included in a distribution (the VppCommonModule used later in this tutorial).
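To give a feel for what calling VPP through jvpp looks like from Java, here is a minimal sketch based on the dump call used by the read customizer later in this tutorial. The FutureJVppCore instance is normally provided by Honeycomb's infrastructure, and the wrapper class and method name here are illustration only:

// Sketch only: mirrors the vxlan_tunnel_dump call used in the read customizer below
import java.util.concurrent.CompletableFuture;
import org.openvpp.jvpp.core.dto.VxlanTunnelDetailsReplyDump;
import org.openvpp.jvpp.core.dto.VxlanTunnelDump;
import org.openvpp.jvpp.core.future.FutureJVppCore;

public class JvppExample {

    public static CompletableFuture<VxlanTunnelDetailsReplyDump> dumpAllVxlanTunnels(final FutureJVppCore jvppCore) {
        // Request DTOs are generated from VPP's API definition file
        final VxlanTunnelDump request = new VxlanTunnelDump();
        // 0 means: dump all interfaces
        request.swIfIndex = 0;
        // jvpp exposes VPP's binary API as asynchronous Java calls
        return jvppCore.vxlanTunnelDump(request).toCompletableFuture();
    }
}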
Updating sample-plugin to manage VPP
This tutorial starts where the previous one left off and continues to modify the sample plugin so that it can manage VPP's VXLAN tunnels.
Updating YANG models
YANG models need to reflect the intent of managing VXLAN tunnels in VPP. As mentioned before, VPP exposes 2 calls to manage VXLAN tunnels. Each vxlan tunnel has a set of attributes, but for simplicity, only 2 of them will be exposed in YANG: source IP address and destination IP address. The rest of the attributes will be set to default values in the code.
So let's update the sample-plugin-params grouping to:
grouping sample-plugin-params {
    container vxlans {
        list vxlan-tunnel {

            key id;

            leaf id {
                type string;
            }

            leaf src {
                type inet:ip-address;
            }

            leaf dst {
                type inet:ip-address;
            }
        }
    }
}
Since the ietf-inet-types YANG model is used for the ip-address type, it needs to be imported (after the prefix statement):
import ietf-inet-types {
    prefix "inet";
}
Note: The reason this works is that some general YANG models such as ietf-inet-types are added to the *-api module in its pom.xml.
Now rebuild the *-api module.
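After the rebuild, Yangtools generates Java bindings for the new nodes (VxlanTunnel, VxlanTunnelBuilder and VxlanTunnelKey are used by the customizers below). A minimal sketch of what using those bindings looks like; the generated package names and the IpAddressBuilder helper from ODL's ietf-inet-types bindings are assumptions here:

// Sketch only: package names depend on the YANG modules' namespaces and revisions
import org.opendaylight.yang.gen.v1.urn.ietf.params.xml.ns.yang.ietf.inet.types.rev130715.IpAddressBuilder;
import org.opendaylight.yang.gen.v1.urn.opendaylight.params.xml.ns.yang.sample.plugin.rev160918.sample.plugin.params.vxlans.VxlanTunnel;
import org.opendaylight.yang.gen.v1.urn.opendaylight.params.xml.ns.yang.sample.plugin.rev160918.sample.plugin.params.vxlans.VxlanTunnelBuilder;

public class BindingExample {

    public static VxlanTunnel buildTunnel() {
        // Each YANG leaf becomes a typed setter; the list key is derived from the "id" leaf
        return new VxlanTunnelBuilder()
                .setId("vxlan-test-tunnel")
                .setSrc(IpAddressBuilder.getDefaultInstance("10.0.0.1"))
                .setDst(IpAddressBuilder.getDefaultInstance("10.0.0.2"))
                .build();
    }
}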
JVpp dependency
Another important thing the plugin needs is a dependency on VPP's JVpp (Java APIs). To add it, just update the *-impl pom.xml with:
<!-- VPP's core Java APIs -->
<dependency>
  <groupId>io.fd.vpp</groupId>
  <artifactId>jvpp-core</artifactId>
  <version>16.09</version>
</dependency>
Also add the vpp-translate-utils dependency so that writing translation code is easier:
<dependency>
  <groupId>io.fd.honeycomb.vpp</groupId>
  <artifactId>vpp-translate-utils</artifactId>
  <version>1.16.9</version>
</dependency>
Do not rebuild yet, since the APIs for this plugin have changed and the compilation would fail. But make sure to update the project if using an IDE to pick up the Jvpp dependency.
Updating the customizers
First of all, remove the CrudService interface and the ElementCrudService class. They will not be needed now.
Changes to ElementStateCustomizer
Rename it to VxlanReadCustomizer. Update the code to:
package io.fd.honeycomb.tutorial.read;

import com.google.common.base.Preconditions;
import io.fd.honeycomb.translate.read.ReadContext;
import io.fd.honeycomb.translate.read.ReadFailedException;
import io.fd.honeycomb.translate.spi.read.ListReaderCustomizer;
import io.fd.honeycomb.translate.v3po.util.NamingContext;
import io.fd.honeycomb.translate.v3po.util.TranslateUtils;
import java.util.Collections;
import java.util.List;
import java.util.stream.Collectors;
import javax.annotation.Nonnull;
import org.opendaylight.yang.gen.v1.urn.opendaylight.params.xml.ns.yang.sample.plugin.rev160918.sample.plugin.params.VxlansBuilder;
import org.opendaylight.yang.gen.v1.urn.opendaylight.params.xml.ns.yang.sample.plugin.rev160918.sample.plugin.params.vxlans.VxlanTunnel;
import org.opendaylight.yang.gen.v1.urn.opendaylight.params.xml.ns.yang.sample.plugin.rev160918.sample.plugin.params.vxlans.VxlanTunnelBuilder;
import org.opendaylight.yang.gen.v1.urn.opendaylight.params.xml.ns.yang.sample.plugin.rev160918.sample.plugin.params.vxlans.VxlanTunnelKey;
import org.opendaylight.yangtools.concepts.Builder;
import org.opendaylight.yangtools.yang.binding.DataObject;
import org.opendaylight.yangtools.yang.binding.InstanceIdentifier;
import org.openvpp.jvpp.VppBaseCallException;
import org.openvpp.jvpp.core.dto.VxlanTunnelDetails;
import org.openvpp.jvpp.core.dto.VxlanTunnelDetailsReplyDump;
import org.openvpp.jvpp.core.dto.VxlanTunnelDump;
import org.openvpp.jvpp.core.future.FutureJVppCore;

/**
 * Reader for {@link VxlanTunnel} list node from our YANG model.
 */
public final class VxlanReadCustomizer implements
        ListReaderCustomizer<VxlanTunnel, VxlanTunnelKey, VxlanTunnelBuilder> {

    // JVpp core. This is the Java API for VPP's core API.
    private final FutureJVppCore jVppCore;
    // Naming context for interfaces
    // Honeycomb provides a "context" storage for plugins. This storage is used for storing metadata required during
    // data translation (just like in this plugin). An example of such metadata would be interface identifier. In Honeycomb
    // we use string names for interfaces, however VPP uses only indices (that are created automatically).
    // This means that translation layer has to store the mapping between HC interface name <-> VPP' interface index.
    // And since vxlan tunnel is a type of interface in VPP, the same applies here
    //
    // Honeycomb provides a couple utilities on top of context storage such as NamingContext. It is just a map
    // backed by context storage that makes the lookup and storing easier.
    private final NamingContext vxlanNamingContext;

    public VxlanReadCustomizer(final FutureJVppCore jVppCore, final NamingContext vxlanNamingContext) {
        this.jVppCore = jVppCore;
        this.vxlanNamingContext = vxlanNamingContext;
    }

    /**
     * Provide a list of IDs for all VXLANs in VPP
     */
    @Nonnull
    @Override
    public List<VxlanTunnelKey> getAllIds(@Nonnull final InstanceIdentifier<VxlanTunnel> id,
                                          @Nonnull final ReadContext context) throws ReadFailedException {
        // Create Dump request
        final VxlanTunnelDump vxlanTunnelDump = new VxlanTunnelDump();
        // Set Dump request attributes
        // Set interface index to 0, so all interfaces are dumped and we can get the list of all IDs
        vxlanTunnelDump.swIfIndex = 0;

        final VxlanTunnelDetailsReplyDump reply;
        try {
            reply = TranslateUtils.getReplyForRead(jVppCore.vxlanTunnelDump(vxlanTunnelDump).toCompletableFuture(), id);
        } catch (VppBaseCallException e) {
            throw new ReadFailedException(id, e);
        }

        // Check for empty response (no vxlan tunnels to read)
        if (reply == null || reply.vxlanTunnelDetails == null) {
            return Collections.emptyList();
        }

        return reply.vxlanTunnelDetails.stream()
                // Need a name of an interface here. Use context to look it up from index
                // In case the naming context does not contain such mapping, it creates an artificial one
                .map(a -> new VxlanTunnelKey(vxlanNamingContext.getName(a.swIfIndex, context.getMappingContext())))
                .collect(Collectors.toList());
    }

    @Override
    public void merge(@Nonnull final Builder<? extends DataObject> builder, @Nonnull final List<VxlanTunnel> readData) {
        // Just set the readValue into parent builder
        // The cast has to be performed here
        ((VxlansBuilder) builder).setVxlanTunnel(readData);
    }

    @Nonnull
    @Override
    public VxlanTunnelBuilder getBuilder(@Nonnull final InstanceIdentifier<VxlanTunnel> id) {
        // Setting key from id is not necessary, builder will take care of that
        return new VxlanTunnelBuilder();
    }

    /**
     * Read all the attributes of a single VXLAN tunnel
     */
    @Override
    public void readCurrentAttributes(@Nonnull final InstanceIdentifier<VxlanTunnel> id,
                                      @Nonnull final VxlanTunnelBuilder builder,
                                      @Nonnull final ReadContext ctx) throws ReadFailedException {
        // The ID received here contains the name of a particular interface that should be read
        // It was either requested directly by HC users or is one of the IDs from getAllIds that could have been invoked
        // just before this method invocation

        // Create Dump request
        final VxlanTunnelDump vxlanTunnelDump = new VxlanTunnelDump();
        // Set Dump request attributes
        // Set the vxlan index from naming context
        // Naming context must contain the mapping because:
        // 1. The vxlan tunnel was created in VPP using HC + this plugin meaning we stored the mapping in write customizer
        // 2. The vxlan tunnel was already present in VPP, but HC reconciliation mechanism took care of that
        //    (as long as proper Initializer is provided by this plugin)
        final String vxlanName = id.firstKeyOf(VxlanTunnel.class).getId();
        vxlanTunnelDump.swIfIndex = vxlanNamingContext.getIndex(vxlanName, ctx.getMappingContext());

        final VxlanTunnelDetailsReplyDump reply;
        try {
            reply = TranslateUtils.getReplyForRead(jVppCore.vxlanTunnelDump(vxlanTunnelDump).toCompletableFuture(), id);
        } catch (VppBaseCallException e) {
            throw new ReadFailedException(id, e);
        }

        Preconditions.checkState(reply != null && reply.vxlanTunnelDetails != null);
        final VxlanTunnelDetails singleVxlanDetail = reply.vxlanTunnelDetails.stream().findFirst().get();

        // Now translate all attributes into provided builder
        final Boolean isIpv6 = TranslateUtils.byteToBoolean(singleVxlanDetail.isIpv6);
        builder.setSrc(TranslateUtils.arrayToIpAddress(isIpv6, singleVxlanDetail.srcAddress));
        builder.setDst(TranslateUtils.arrayToIpAddress(isIpv6, singleVxlanDetail.dstAddress));
        // There are additional attributes of a vxlan tunnel that wont be used here
    }
}
The ReaderFactory also needs to be updated:
package io.fd.honeycomb.tutorial.read;

import com.google.inject.Inject;
import io.fd.honeycomb.translate.impl.read.GenericListReader;
import io.fd.honeycomb.translate.read.ReaderFactory;
import io.fd.honeycomb.translate.read.registry.ModifiableReaderRegistryBuilder;
import io.fd.honeycomb.translate.v3po.util.NamingContext;
import javax.annotation.Nonnull;
import org.opendaylight.yang.gen.v1.urn.opendaylight.params.xml.ns.yang.sample.plugin.rev160918.SamplePluginState;
import org.opendaylight.yang.gen.v1.urn.opendaylight.params.xml.ns.yang.sample.plugin.rev160918.SamplePluginStateBuilder;
import org.opendaylight.yang.gen.v1.urn.opendaylight.params.xml.ns.yang.sample.plugin.rev160918.sample.plugin.params.Vxlans;
import org.opendaylight.yang.gen.v1.urn.opendaylight.params.xml.ns.yang.sample.plugin.rev160918.sample.plugin.params.VxlansBuilder;
import org.opendaylight.yang.gen.v1.urn.opendaylight.params.xml.ns.yang.sample.plugin.rev160918.sample.plugin.params.vxlans.VxlanTunnel;
import org.opendaylight.yangtools.yang.binding.InstanceIdentifier;
import org.openvpp.jvpp.core.future.FutureJVppCore;

/**
 * Factory producing readers for sample-plugin plugin's data.
 */
public final class ModuleStateReaderFactory implements ReaderFactory {

    public static final InstanceIdentifier<SamplePluginState> ROOT_STATE_CONTAINER_ID =
            InstanceIdentifier.create(SamplePluginState.class);

    /**
     * Injected vxlan naming context shared with writer, provided by this plugin
     */
    @Inject
    private NamingContext vxlanNamingContext;
    /**
     * Injected jvpp core APIs, provided by Honeycomb's infrastructure
     */
    @Inject
    private FutureJVppCore jvppCore;

    @Override
    public void init(@Nonnull final ModifiableReaderRegistryBuilder registry) {
        // register reader that only delegate read's to its children
        registry.addStructuralReader(ROOT_STATE_CONTAINER_ID, SamplePluginStateBuilder.class);
        // register reader that only delegate read's to its children
        registry.addStructuralReader(ROOT_STATE_CONTAINER_ID.child(Vxlans.class), VxlansBuilder.class);

        // just adds reader to the structure
        // use addAfter/addBefore if you want to add specific order to readers on the same level of tree
        // use subtreeAdd if you want to handle multiple nodes in single customizer/subtreeAddAfter/subtreeAddBefore if you also want to add order
        // be aware that instance identifier passes to subtreeAdd/subtreeAddAfter/subtreeAddBefore should define subtree,
        // therefore it should be relative from handled node down - InstanceIdentifier.create(HandledNode), not parent.child(HandledNode.class)
        registry.add(new GenericListReader<>(
                // What part of subtree this reader handles is identified by an InstanceIdentifier
                ROOT_STATE_CONTAINER_ID.child(Vxlans.class).child(VxlanTunnel.class),
                // Customizer (the actual translation code to do the heavy lifting)
                new VxlanReadCustomizer(jvppCore, vxlanNamingContext)));
    }
}
Changes to ElementCustomizer
Rename to VxlanWriteCustomizer. Update the code to:
package io.fd.honeycomb.tutorial.write;

import io.fd.honeycomb.translate.spi.write.ListWriterCustomizer;
import io.fd.honeycomb.translate.v3po.util.NamingContext;
import io.fd.honeycomb.translate.v3po.util.TranslateUtils;
import io.fd.honeycomb.translate.write.WriteContext;
import io.fd.honeycomb.translate.write.WriteFailedException;
import javax.annotation.Nonnull;
import org.opendaylight.yang.gen.v1.urn.opendaylight.params.xml.ns.yang.sample.plugin.rev160918.sample.plugin.params.vxlans.VxlanTunnel;
import org.opendaylight.yang.gen.v1.urn.opendaylight.params.xml.ns.yang.sample.plugin.rev160918.sample.plugin.params.vxlans.VxlanTunnelKey;
import org.opendaylight.yangtools.yang.binding.InstanceIdentifier;
import org.openvpp.jvpp.VppBaseCallException;
import org.openvpp.jvpp.core.dto.VxlanAddDelTunnel;
import org.openvpp.jvpp.core.dto.VxlanAddDelTunnelReply;
import org.openvpp.jvpp.core.future.FutureJVppCore;

/**
 * Writer for {@link VxlanTunnel} list node from our YANG model.
 */
public final class VxlanWriteCustomizer implements ListWriterCustomizer<VxlanTunnel, VxlanTunnelKey> {

    /**
     * JVpp APIs
     */
    private final FutureJVppCore jvppCore;
    /**
     * Shared vxlan tunnel naming context
     */
    private final NamingContext vxlanTunnelNamingContext;

    public VxlanWriteCustomizer(final FutureJVppCore jvppCore, final NamingContext vxlanTunnelNamingContext) {
        this.jvppCore = jvppCore;
        this.vxlanTunnelNamingContext = vxlanTunnelNamingContext;
    }

    @Override
    public void writeCurrentAttributes(@Nonnull final InstanceIdentifier<VxlanTunnel> id,
                                       @Nonnull final VxlanTunnel dataAfter,
                                       @Nonnull final WriteContext writeContext) throws WriteFailedException {
        // Create and set vxlan tunnel add request
        final VxlanAddDelTunnel vxlanAddDelTunnel = new VxlanAddDelTunnel();
        // 1 for add, 0 for delete
        vxlanAddDelTunnel.isAdd = 1;

        // dataAfter is the new vxlanTunnel configuration
        final boolean isIpv6 = dataAfter.getSrc().getIpv6Address() != null;
        vxlanAddDelTunnel.isIpv6 = TranslateUtils.booleanToByte(isIpv6);
        vxlanAddDelTunnel.srcAddress = TranslateUtils.ipAddressToArray(isIpv6, dataAfter.getSrc());
        vxlanAddDelTunnel.dstAddress = TranslateUtils.ipAddressToArray(isIpv6, dataAfter.getDst());
        // There are other input parameters that are not exposed by our YANG model, default values will be used

        try {
            final VxlanAddDelTunnelReply replyForWrite = TranslateUtils
                    .getReplyForWrite(jvppCore.vxlanAddDelTunnel(vxlanAddDelTunnel).toCompletableFuture(), id);

            // VPP returns the index of new vxlan tunnel
            final int newVxlanTunnelIndex = replyForWrite.swIfIndex;
            // It's important to store it in context so that reader knows to which name a vxlan tunnel is mapped
            vxlanTunnelNamingContext.addName(newVxlanTunnelIndex, dataAfter.getId(), writeContext.getMappingContext());
        } catch (VppBaseCallException e) {
            throw new WriteFailedException.CreateFailedException(id, dataAfter, e);
        }
    }

    @Override
    public void updateCurrentAttributes(@Nonnull final InstanceIdentifier<VxlanTunnel> id,
                                        @Nonnull final VxlanTunnel dataBefore,
                                        @Nonnull final VxlanTunnel dataAfter,
                                        @Nonnull final WriteContext writeContext) throws WriteFailedException {
        // Not supported at VPP API level, throw exception
        throw new WriteFailedException.UpdateFailedException(id, dataBefore, dataAfter,
                new UnsupportedOperationException("Vxlan tunnel update is not supported by VPP"));
    }

    @Override
    public void deleteCurrentAttributes(@Nonnull final InstanceIdentifier<VxlanTunnel> id,
                                        @Nonnull final VxlanTunnel dataBefore,
                                        @Nonnull final WriteContext writeContext) throws WriteFailedException {
        // Create and set vxlan tunnel add request
        final VxlanAddDelTunnel vxlanAddDelTunnel = new VxlanAddDelTunnel();
        // 1 for add, 0 for delete
        vxlanAddDelTunnel.isAdd = 0;

        // Vxlan tunnel is identified by its attributes when deleting, not index, so set all attributes
        // dataBefore is the vxlan tunnel that's being deleted
        final boolean isIpv6 = dataBefore.getSrc().getIpv6Address() != null;
        vxlanAddDelTunnel.isIpv6 = TranslateUtils.booleanToByte(isIpv6);
        vxlanAddDelTunnel.srcAddress = TranslateUtils.ipAddressToArray(isIpv6, dataBefore.getSrc());
        vxlanAddDelTunnel.dstAddress = TranslateUtils.ipAddressToArray(isIpv6, dataBefore.getDst());
        // There are other input parameters that are not exposed by our YANG model, default values will be used

        try {
            final VxlanAddDelTunnelReply replyForWrite = TranslateUtils
                    .getReplyForWrite(jvppCore.vxlanAddDelTunnel(vxlanAddDelTunnel).toCompletableFuture(), id);

            // It's important to remove the mapping from context
            vxlanTunnelNamingContext.removeName(dataBefore.getId(), writeContext.getMappingContext());
        } catch (VppBaseCallException e) {
            throw new WriteFailedException.DeleteFailedException(id, e);
        }
    }
}
The WriterFactory also needs to be updated:
package io.fd.honeycomb.tutorial.write;

import com.google.inject.Inject;
import io.fd.honeycomb.translate.impl.write.GenericWriter;
import io.fd.honeycomb.translate.v3po.util.NamingContext;
import io.fd.honeycomb.translate.write.WriterFactory;
import io.fd.honeycomb.translate.write.registry.ModifiableWriterRegistryBuilder;
import javax.annotation.Nonnull;
import org.opendaylight.yang.gen.v1.urn.opendaylight.params.xml.ns.yang.sample.plugin.rev160918.SamplePlugin;
import org.opendaylight.yang.gen.v1.urn.opendaylight.params.xml.ns.yang.sample.plugin.rev160918.sample.plugin.params.Vxlans;
import org.opendaylight.yang.gen.v1.urn.opendaylight.params.xml.ns.yang.sample.plugin.rev160918.sample.plugin.params.vxlans.VxlanTunnel;
import org.opendaylight.yangtools.yang.binding.InstanceIdentifier;
import org.openvpp.jvpp.core.future.FutureJVppCore;

/**
 * Factory producing writers for sample-plugin plugin's data.
 */
public final class ModuleWriterFactory implements WriterFactory {

    private static final InstanceIdentifier<SamplePlugin> ROOT_CONTAINER_ID = InstanceIdentifier.create(SamplePlugin.class);

    /**
     * Injected vxlan naming context shared with writer, provided by this plugin
     */
    @Inject
    private NamingContext vxlanNamingContext;
    /**
     * Injected jvpp core APIs, provided by Honeycomb's infrastructure
     */
    @Inject
    private FutureJVppCore jvppCore;

    @Override
    public void init(@Nonnull final ModifiableWriterRegistryBuilder registry) {
        // Unlike ReaderFactory, there's no need to add structural writers, just the writers that actually do something

        // register writer for vxlan tunnel
        registry.add(new GenericWriter<>(
                // What part of subtree this writer handles is identified by an InstanceIdentifier
                ROOT_CONTAINER_ID.child(Vxlans.class).child(VxlanTunnel.class),
                // Customizer (the actual translation code to do the heavy lifting)
                new VxlanWriteCustomizer(jvppCore, vxlanNamingContext)));
    }
}
Changes to Module
The module needs to be updated to:
- Include new instance of naming context
- Remove crud service
and the code needs to look like:
package io.fd.honeycomb.tutorial;

import com.google.inject.AbstractModule;
import com.google.inject.multibindings.Multibinder;
import io.fd.honeycomb.data.init.DataTreeInitializer;
import io.fd.honeycomb.translate.read.ReaderFactory;
import io.fd.honeycomb.translate.v3po.util.NamingContext;
import io.fd.honeycomb.translate.write.WriterFactory;
import io.fd.honeycomb.tutorial.init.ConfigDataInitializer;
import io.fd.honeycomb.tutorial.read.ModuleStateReaderFactory;
import io.fd.honeycomb.tutorial.write.ModuleWriterFactory;
import net.jmob.guice.conf.core.ConfigurationModule;

/**
 * Module class instantiating sample-plugin plugin components.
 */
public final class Module extends AbstractModule {

    @Override
    protected void configure() {
        // requests injection of properties
        install(ConfigurationModule.create());
        requestInjection(ModuleConfiguration.class);

        // bind naming context instance for reader and writer factories
        // the first parameter is artificial name prefix in cases a name needs to be reconstructed for a vxlan tunnel
        // that is present in VPP but not in Honeycomb (could be extracted into configuration)
        // the second parameter is just the naming context ID (could be extracted into configuration)
        binder().bind(NamingContext.class).toInstance(new NamingContext("vxlan-tunnel", "vxlan-tunnel-context"));

        // creates reader factory binding
        // can hold multiple binding for separate yang modules
        final Multibinder<ReaderFactory> readerFactoryBinder = Multibinder.newSetBinder(binder(), ReaderFactory.class);
        readerFactoryBinder.addBinding().to(ModuleStateReaderFactory.class);

        // create writer factory binding
        // can hold multiple binding for separate yang modules
        final Multibinder<WriterFactory> writerFactoryBinder = Multibinder.newSetBinder(binder(), WriterFactory.class);
        writerFactoryBinder.addBinding().to(ModuleWriterFactory.class);

        // create initializer binding
        // can hold multiple binding for separate yang modules
        final Multibinder<DataTreeInitializer> initializerBinder = Multibinder.newSetBinder(binder(), DataTreeInitializer.class);
        initializerBinder.addBinding().to(ConfigDataInitializer.class);

        // Disable notification producer for now
        // Multibinder.newSetBinder(binder(), ManagedNotificationProducer.class).addBinding()
        //     .to(SampleNotificationProducer.class);
    }
}
Now it's time to rebuild the plugin using mvn clean install to make the jars available for integrating them with the vpp-integration distribution in the next sections.
Integrating with vpp-integration distribution
The vxlan tunnel management plugin can now be integrated with any Honeycomb distribution. Honeycomb provides a vpp-integration distribution, where all VPP-related plugins integrate to create a distribution with all available VPP-related features.
This distribution comes with the Honeycomb infrastructure plus common components for VPP Honeycomb plugins (e.g. the Java APIs for VPP).
In order to add this new plugin into vpp-integration:
- clone honeycomb codebase (since that's the home of vpp-integration distribution)
- add a dependency for this sample plugin in vpp-integration distribution (honeycomb/vpp-integration/minimal-distribution/pom.xml):
<dependency>
  <groupId>io.fd.honeycomb.tutorial</groupId>
  <artifactId>sample-plugin-impl</artifactId>
  <version>1.16.9</version>
</dependency>
- modify the Main of the vpp-integration distribution to include sample-plugin (honeycomb/vpp-integration/minimal-distribution/src/main/java/io/fd/honeycomb/vpp/integration/distro/Main.java):
package io.fd.honeycomb.vpp.integration.distro;

import com.google.common.collect.Lists;
import com.google.inject.Module;
import io.fd.honeycomb.vpp.distro.VppCommonModule;
import java.util.List;

public class Main {

    public static void main(String[] args) {
        final List<Module> sampleModules = Lists.newArrayList(io.fd.honeycomb.infra.distro.Main.BASE_MODULES);

        // All the plugins should be listed here
        sampleModules.add(new VppCommonModule());
        // Comment out V3po and Lisp module for the time being, since V3po and sample-plugin are in conflict over vxlan tunnel management
        // a plugin implementing VPP's API that's not yet covered by V3po or LISP plugin would not have to do this
        // sampleModules.add(new V3poModule());
        // sampleModules.add(new LispModule());
        sampleModules.add(new io.fd.honeycomb.tutorial.Module());

        io.fd.honeycomb.infra.distro.Main.init(sampleModules);
    }
}
Now just rebuild the honeycomb project.
Verifying distribution
At this point, the vpp-integration distribution with sample-plugin can now be started. But first, make sure that a compatible version of VPP is installed and running. Next, start honeycomb with:
sudo vpp-integration/minimal-distribution/target/vpp-integration-distribution-1.16.9-hc/vpp-integration-distribution-1.16.9/honeycomb
Testing over RESTCONF
Reading vxlans operational data (should return empty vxlans container at first):
curl -u admin:admin http://localhost:8181/restconf/operational/sample-plugin:sample-plugin-state
Adding a vxlan tunnel:
curl -H 'Content-Type: application/json' -H 'Accept: application/json' -u admin:admin -X PUT -d '{"vxlans":{"vxlan-tunnel": [{"id":"vxlan-test-tunnel", "src":"10.0.0.1", "dst":"10.0.0.2"}]}}' http://localhost:8181/restconf/config/sample-plugin:sample-plugin/vxlans
Reading vxlans config data (data that we posted to Honeycomb):
curl -u admin:admin http://localhost:8181/restconf/config/sample-plugin:sample-plugin
Reading vxlans operational data (data coming from VPP being transformed by ReaderCustomizer on the fly):
curl -u admin:admin http://localhost:8181/restconf/operational/sample-plugin:sample-plugin-state
Verifying vxlan tunnel existence in VPP:
telnet 0 5002
show interface
should show:
              Name               Idx       State          Counter          Count
local0                            0        down
vxlan_tunnel0                     1         up
Deleting a vxlan tunnel:
curl -u admin:admin -X DELETE http://localhost:8181/restconf/config/sample-plugin:sample-plugin/vxlans/vxlan-tunnel/vxlan-test-tunnel
Disclaimer: The vxlan tunnel will be removed from Honeycomb and the delete command will be executed on VPP, but VPP will just disable that interface and keep it as a placeholder for the next vxlan tunnel (that's VPP's behavior, so a vxlan tunnel can't really be deleted). That's why you would still see the tunnel in VPP's CLI after the delete.
Testing over NETCONF
A NETCONF testing guide, including notifications, can be found in Honeycomb/Running_Honeycomb.
Note: NETCONF and RESTCONF are equivalent interfaces to Honeycomb, capable of providing the same APIs. The only difference is notifications: only NETCONF is capable of emitting them.
Full working sample
Full working sample on github: https://github.com/marosmars/honeycomb-samples/tree/vpp-plugin
Just a note on what further work for this plugin might contain:
- unit tests
- POSTMAN REST collection with sample requests
- logging