Honeycomb/Plan
Revision as of 08:55, 4 April 2016
UNDER CONSTRUCTION
Honeycomb plan #1
Story: Going formal
Various tasks required by honeycomb project officially joining fd.io
- Move the vbd sub-project into a dedicated ODL project
Story: DataTree
Refactor/redesign data-store usage in Honeycomb.
Using a global data store (the current design) is very restrictive and does not allow for features such as commit refusal, ordered change processing, or additional validation. A dedicated DataTree needs to be used internally to give the Honeycomb agent better control over data processing.
Tasks
- Analyze DataTree APIs and design
- Document and put on wiki
- Implement custom DataBroker on top of a DataTree
- Provide APIs for translation layer
- Add a dedicated mount-point for the new pipeline while still keeping former implementation in place
- Wrap HC DataTree in a mountpoint
- Configure a dedicated NETCONF northbound just for HC mountpoint
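The idea of a custom DataBroker on top of a dedicated DataTree can be illustrated with the following sketch. All names here are hypothetical and simplified; the real implementation would build on the ODL yangtools DataTree APIs, but the core point is the same: a commit is applied to a candidate copy first and can be refused before it touches the tree.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Predicate;

// Hypothetical sketch (not the actual ODL API): a broker over a dedicated
// data tree that validates every commit and can refuse it, which the
// current global-data-store design cannot do.
public class ValidatingBrokerSketch {

    private final Map<String, Object> dataTree = new HashMap<>();
    private final Predicate<Map<String, Object>> validator;

    public ValidatingBrokerSketch(Predicate<Map<String, Object>> validator) {
        this.validator = validator;
    }

    /** Apply changes only if the validator accepts the resulting tree. */
    public boolean commit(Map<String, Object> changes) {
        Map<String, Object> candidate = new HashMap<>(dataTree);
        candidate.putAll(changes);
        if (!validator.test(candidate)) {
            return false; // commit refused, data tree left untouched
        }
        dataTree.clear();
        dataTree.putAll(candidate);
        return true;
    }

    public Object get(String key) {
        return dataTree.get(key);
    }
}
```

A plugged-in validator is also the natural hook for the additional validation and change-ordering mentioned above.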
Story: Translation layer
Refactor/redesign the Honeycomb translation layer (YANG <-> VPP API) and introduce a framework.
The translation layer is monolithic, hard to extend, and buggy. Before any new VPP functionality is added, it needs to be refactored and redesigned to allow for easy-to-develop, easy-to-deploy extensions.
Tasks
- Design the translation layer (extensible, easy to use, Binding-aware, supporting CRUD, with R separated from the rest of the CRUD operations, etc.)
- Document and put on wiki
- Implement R from CRUD
- Introduce Reader APIs
- Implement Readers in a composite, extensible manner
- Provide SPIs to customize read behavior
- Migrate existing reading code from Honeycomb under new translation layer
- Implement CUD from CRUD
- Introduce Writer APIs
- Implement Writers in a composite, extensible manner
- Provide SPIs to customize write behavior
- Migrate existing writing code from Honeycomb under new translation layer
- Integrate with DataTree story
- Remove former pipeline and mapping code from Honeycomb and keep only new pipeline (DataTree and Translation layer stories)
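The "composite, extensible" reader structure from the tasks above can be sketched as follows. The interface names and dispatch-by-path-prefix scheme are illustrative assumptions, not Honeycomb's actual SPI; the point is that each plugin contributes a Reader for one YANG subtree and the composite dispatches reads to it, keeping the R side of CRUD separate and the core free of per-feature code.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of a composite, extensible reader registry for the
// translation layer (names are illustrative, not the real Honeycomb SPI).
public class CompositeReaderSketch {

    /** SPI a plugin implements to translate one YANG subtree from VPP state. */
    public interface Reader {
        String handledPath();     // YANG path prefix this reader covers
        Object read(String path); // translated data for that path
    }

    private final List<Reader> readers = new ArrayList<>();

    /** Plugins register readers at startup, keeping the core extensible. */
    public void register(Reader reader) {
        readers.add(reader);
    }

    /** Dispatch a read to the first reader covering the requested path. */
    public Object read(String path) {
        for (Reader reader : readers) {
            if (path.startsWith(reader.handledPath())) {
                return reader.read(path);
            }
        }
        return null; // no reader registered for this subtree
    }
}
```

The writer side would mirror this structure with a Writer SPI handling the C, U, and D operations for its subtree.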
Story: vpp-japi refactoring
The generated vpp-japi (part of the VPP project) is asynchronous today and works, but it has some drawbacks:
- It is not truly asynchronous from the Java perspective: Java must run active wait loops for all generated functions, which incurs significant overhead because the JNI boundary is crossed many times
- It requires hand-crafted implementations for dump calls
The japi should instead be:
- Truly asynchronous, with callbacks into Java
- Lightweight (no caching in the C code; all methods generated except connect, disconnect, ping, etc.)
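A truly asynchronous binding along these lines could look like the following sketch. All names are hypothetical (the real generated japi v2 API may differ): the C side would invoke a Java callback via JNI once per reply, completing a pending future, so Java never polls across the JNI boundary.

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of a callback-based (truly asynchronous) japi v2.
// Instead of Java actively polling over JNI for replies, the C side calls
// back into Java once per reply, completing the matching future.
public class AsyncVppApiSketch {

    private final Map<Integer, CompletableFuture<byte[]>> pending = new ConcurrentHashMap<>();
    private int nextContext = 0;

    /** Send a request; returns a future completed from the JNI callback. */
    public synchronized CompletableFuture<byte[]> send(byte[] request) {
        int context = ++nextContext;
        CompletableFuture<byte[]> future = new CompletableFuture<>();
        pending.put(context, future);
        // In the real binding this would cross JNI exactly once per request,
        // e.g. nativeSend(context, request) (hypothetical native method).
        return future;
    }

    /** Invoked from native code once per reply; no active wait loop. */
    public void onReply(int context, byte[] reply) {
        CompletableFuture<byte[]> future = pending.remove(context);
        if (future != null) {
            future.complete(reply);
        }
    }
}
```

The context-to-future map is what removes the per-call wait loops: each request crosses JNI once on send and once on reply, regardless of how long the reply takes.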
Tasks
- Design new version of japi with above requirements
- Document and put on wiki
- Implement a POC to verify and measure base performance
- Develop the POC into a full japi v2 (sub-tasks are subject to change)
- Update the code generation to generate async vpp api (generate interface also for easier integration)
- Update the build of VPP to produce and deploy vpp-japi v2 jar
- Migrate Honeycomb to use vpp-japi v2
- Deprecate vpp-japi
- Remove vpp-japi
New features
Story: Orchestration agent
Story: VNF
Story: vSwitch vRouter
Story: Minimal distro
Today, Honeycomb includes many ODL features it does not need, e.g. the clustered datastore.
The distribution needs to be shrunk, either by minimizing the ODL features in Karaf or by removing Karaf from the agent entirely, using lightweight or static wiring and configuration.
Tasks
- Introduce new wiring based on a simple DI framework
- Analyze and pick suitable DI framework
- Add new wiring into existing honeycomb components
- Implement new startup mechanism (maybe just a simple Main)
- Provide minimal distribution including new wiring and startup mechanism
- Remove Karaf-related and ODL-related code (distribution, wiring, etc.), keeping only the minimal distribution
- Document and put on wiki
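The "maybe just a simple Main" startup mechanism with static wiring can be sketched like this. The component names are invented for illustration; the point is that explicit constructor wiring in a plain main() replaces Karaf's dynamic wiring, and a small DI framework could later take over the hand wiring without changing the components.

```java
// Hypothetical sketch of lightweight static wiring replacing Karaf/ODL
// dynamic wiring: a plain main() constructs and connects the components
// directly (component names are illustrative, not real Honeycomb classes).
public class HoneycombMainSketch {

    public interface Northbound { String serve(String request); }
    public interface Translator { String translate(String request); }

    static class SimpleTranslator implements Translator {
        public String translate(String request) {
            return "translated:" + request;
        }
    }

    static class NetconfNorthbound implements Northbound {
        private final Translator translator;
        NetconfNorthbound(Translator translator) { this.translator = translator; }
        public String serve(String request) {
            return translator.translate(request);
        }
    }

    /** Static wiring: explicit construction order, no container needed. */
    public static Northbound wire() {
        return new NetconfNorthbound(new SimpleTranslator());
    }

    public static void main(String[] args) {
        System.out.println(wire().serve("get-config"));
    }
}
```

Because each component only depends on interfaces passed into its constructor, swapping the hand wiring for a DI framework (one of the tasks above) would be a localized change to wire().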
Misc
Various tasks
Tasks
- Put this plan into JIRA
- Present the new Honeycomb pipeline at a TWS meeting. Blocked by: DataTree story, Translation layer story
- Provide tutorial/samples on: How-to-add-new-features-to-HC. Blocked by: DataTree story, Translation layer story
- Use Java 8 in Honeycomb sources
- Cleanup maven structure in Honeycomb
- Enable proper checkstyle checks + license checks
- Enable JaCoCo coverage reports and display them at least in Jenkins
- Remove unnecessary empty parent poms
- Sonar