Honeycomb/Plan

From fd.io
Revision as of 08:55, 4 April 2016

UNDER CONSTRUCTION

Honeycomb plan #1

Story: Going formal

Various tasks required for the Honeycomb project to officially join fd.io.

  1. Move the vbd sub-project out into a dedicated ODL project

Story: DataTree

Refactor/Redesign Data-store utilization in Honeycomb.

Using a global data-store (the current design) is very restrictive and does not allow for features such as commit refusal, change-processing ordering, or additional validation. A dedicated data tree needs to be used internally in order to introduce better control over data processing in the Honeycomb agent.

Tasks

  1. Analyze DataTree APIs and design
  2. Document and put on wiki
  3. Implement custom DataBroker on top of a DataTree
    1. Provide APIs for translation layer
  4. Add a dedicated mount-point for the new pipeline while still keeping former implementation in place
    1. Wrap HC DataTree in a mountpoint
    2. Configure a dedicated NETCONF northbound just for HC mountpoint
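The commit-refusal and validation capabilities that motivate a dedicated data tree can be sketched as follows. This is a minimal illustration in plain Java under assumed names (`HoneycombDataTreeSketch`, `CommitRefusedException` are hypothetical, not ODL or Honeycomb APIs); it only shows why a private tree, unlike the global data-store, can validate and refuse a proposed change before it becomes visible.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Predicate;

// Minimal sketch of a dedicated data tree with commit-time validation.
// All names here are hypothetical, not ODL APIs.
public class HoneycombDataTreeSketch {

    /** Thrown when a registered validator rejects a proposed modification. */
    public static class CommitRefusedException extends Exception {
        public CommitRefusedException(String reason) { super(reason); }
    }

    private final Map<String, String> data = new HashMap<>();
    private Predicate<Map<String, String>> validator = snapshot -> true;

    /** Register a validator that is consulted before every commit. */
    public void setValidator(Predicate<Map<String, String>> validator) {
        this.validator = validator;
    }

    /** Build a candidate state, validate it, then commit atomically or refuse. */
    public void commit(Map<String, String> modification) throws CommitRefusedException {
        Map<String, String> candidate = new HashMap<>(data);
        candidate.putAll(modification);
        if (!validator.test(candidate)) {
            throw new CommitRefusedException("validation failed, commit refused");
        }
        data.clear();
        data.putAll(candidate);
    }

    public String read(String path) { return data.get(path); }
}
```

A refused commit leaves the tree untouched, which is exactly the behavior the global data-store design cannot offer.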


Story: Translation layer

Refactor/Redesign Honeycomb Translation layer (YANG <-> VPP API) and introduce a framework.

The translation layer is very monolithic, hard to extend, and buggy. Before any new VPP functionality is added, refactoring and redesign need to take place to allow for “easy to develop and deploy” extensions.

Tasks

  1. Design translation layer (extensible, easy to use, Binding-aware, supporting CRUD with R separated from the rest of the CRUD ops, etc.)
  2. Document and put on wiki
  3. Implement R from CRUD
    1. Introduce Reader APIs
    2. Implement Readers in a composite, extensible manner
      1. Provide SPIs to customize read behavior
    3. Migrate existing reading code from Honeycomb under new translation layer
  4. Implement CUD from CRUD
    1. Introduce Writer APIs
    2. Implement Writers in a composite, extensible manner
      1. Provide SPIs to customize write behavior
    3. Migrate existing writing code from Honeycomb under new translation layer
  5. Integrate with DataTree story
  6. Remove former pipeline and mapping code from Honeycomb and keep only new pipeline (DataTree and Translation layer stories)
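The “composite, extensible” reader idea from the tasks above can be sketched like this. The `Reader` and `CompositeReader` types are illustrative stand-ins, not Honeycomb APIs: each small reader translates one subtree, and the composite delegates to whichever readers have been registered, which is what makes the layer easy to extend.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch of "R from CRUD": small readers composed into one composite
// reader. All interfaces here are hypothetical, not Honeycomb APIs.
public class CompositeReaderSketch {

    /** SPI that every translation-layer reader implements for one subtree. */
    public interface Reader {
        String id();    // subtree this reader covers, e.g. "interfaces"
        Object read();  // translate device state into config data
    }

    /** Composite reader that delegates to registered child readers. */
    public static class CompositeReader {
        private final List<Reader> children = new ArrayList<>();

        public void register(Reader reader) { children.add(reader); }

        /** Read all subtrees; each registered child contributes its own piece. */
        public Map<String, Object> readAll() {
            Map<String, Object> result = new LinkedHashMap<>();
            for (Reader r : children) {
                result.put(r.id(), r.read());
            }
            return result;
        }
    }
}
```

Writers would follow the same composite pattern for the CUD operations.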


Story: vpp-japi refactoring

The generated vpp-japi (part of the VPP project) is asynchronous today and works, but it has some drawbacks:

  • Not truly asynchronous from the Java perspective: Java has to perform active wait loops (for all generated functions), which adds significant overhead because the JNI boundary is crossed many times
  • Requires hand-crafted implementations for dump calls

The japi should be:

  • Truly asynchronous with callbacks into Java
  • Lightweight (no caching in the C code; generate all methods except connect, disconnect, ping, etc.)

Tasks

  1. Design new version of japi with above requirements
  2. Document and put on wiki
  3. Implement a POC to verify and measure base performance
  4. Develop the POC into a full japi v2 (sub-tasks are subject to change)
    1. Update the code generation to generate async vpp api (generate interface also for easier integration)
    2. Update the build of VPP to produce and deploy vpp-japi v2 jar
  5. Migrate Honeycomb to use vpp-japi v2
  6. Deprecate vpp-japi
  7. Remove vpp-japi
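The intended shift from active-wait loops to callbacks can be sketched as below. The names (`AsyncJapiSketch`, `ReplyCallback`, `sendRequest`) are hypothetical and only illustrate the call shape a japi v2 might expose: the native layer invokes a Java callback once per reply instead of Java polling across the JNI boundary, and the callback can be adapted to a `CompletableFuture` for convenient consumption.

```java
import java.util.concurrent.CompletableFuture;

// Sketch of "truly asynchronous with callbacks into Java".
// All names are hypothetical, not the actual vpp-japi API.
public class AsyncJapiSketch {

    /** Callback the (hypothetical) native layer would invoke on reply. */
    public interface ReplyCallback {
        void onReply(int context, String reply);
    }

    // Simulated native send: the reply arrives asynchronously via the
    // callback, crossing the "JNI boundary" exactly once per reply.
    static void sendRequest(int context, String request, ReplyCallback callback) {
        CompletableFuture.runAsync(() -> callback.onReply(context, "reply-to-" + request));
    }

    /** Adapter turning the callback style into a CompletableFuture. */
    public static CompletableFuture<String> send(String request) {
        CompletableFuture<String> future = new CompletableFuture<>();
        sendRequest(42, request, (ctx, reply) -> future.complete(reply));
        return future;
    }
}
```

With this shape there is no per-call wait loop in Java, which is the overhead the drawbacks list points at.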

New features

Story: Orchestration agent

Story: VNF

Story: vSwitch vRouter

Story: Minimal distro

Today, Honeycomb includes many ODL features it does not need, e.g. the clustered data store.

The distribution needs to be shrunk, either by minimizing the ODL features in Karaf or by removing Karaf from the agent entirely, using lightweight or static wiring and configuration.

Tasks

  1. Introduce new wiring based on a simple DI framework
    1. Analyze and pick suitable DI framework
    2. Add new wiring into existing honeycomb components
  2. Implement new startup mechanism (maybe just a simple Main)
  3. Provide minimal distribution including new wiring and startup mechanism
  4. Remove Karaf-related and ODL-related components (distribution, wiring, etc.), keeping only the minimal distribution
  5. Document and put on wiki
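The “maybe just a simple Main” startup idea can be sketched as plain constructor injection wired by hand, one possible form of the lightweight or static wiring mentioned above. The class and component names here are hypothetical, not actual Honeycomb modules.

```java
// Sketch of a Karaf-free startup: static wiring in a simple Main
// replaces Karaf feature resolution. Names are hypothetical.
public class MinimalDistroSketch {

    /** A minimal data-layer dependency. */
    public interface DataLayer { String name(); }

    /** A northbound component depending on the data layer. */
    public static class NetconfNorthbound {
        private final DataLayer dataLayer;
        public NetconfNorthbound(DataLayer dataLayer) { this.dataLayer = dataLayer; }
        public String status() { return "netconf up, data layer: " + dataLayer.name(); }
    }

    /** The "simple Main" wiring: dependencies constructed and passed by hand. */
    public static NetconfNorthbound start() {
        DataLayer dataLayer = () -> "in-memory data tree";
        return new NetconfNorthbound(dataLayer);
    }

    public static void main(String[] args) {
        System.out.println(start().status());
    }
}
```

A DI framework picked in task 1 would replace the hand wiring in `start()` while keeping the same constructor-injection shape.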


Misc

Various tasks

Tasks

  1. Put this plan into JIRA
  2. Present new Honeycomb pipeline in TWS meeting. Blocked by: DataTree story, Translation layer story
  3. Provide tutorial/samples on: How-to-add-new-features-to-HC. Blocked by: DataTree story, Translation layer story
  4. Use Java 8 in Honeycomb sources
  5. Clean up the Maven structure in Honeycomb
    1. Enable proper checkstyle checks + license checks
    2. Enable jacoco coverage reports and display them in Jenkins, at least
    3. Remove unnecessary empty parent poms
  6. Enable Sonar analysis