VPP Security Groups
Introduction
Features are tracked as they are developed in the JIRA ticket VPP-427.
In-progress development is done on GitHub: ACL branch
The first version of the plugin was committed via change 3423.
Initial performance tests
The first version of the ACL plugin was explicitly focused on getting things "correct" as the first priority, even at some expense of getting them "fast". But understanding the performance is important, so we did limited performance testing. The testing was done using MoonGen, running on the same host as VPP. VPP was built and started via the "make release-build; make release-plugins; make release-run" process, and configured via VAT.
The performance was measured by sending 200000 unidirectional UDP streams from MoonGen at line rate (10Gbps, i.e. 14.88 Mpps of 64-byte packets) and observing the amount of traffic received on the other side.
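For illustration, the stream pattern can be sketched as follows. This is a purely illustrative Scapy sketch of the flow definition, not the MoonGen setup actually used (Scapy cannot generate anywhere near 14.88 Mpps); the MAC/IP/port values mirror the packet trace shown further below.

#!/usr/bin/env python3
# Illustrative only: define 200000 distinct unidirectional UDP streams.
from scapy.all import Ether, IP, UDP

def stream(i):
    # 65536 source addresses x 4 source ports > 200000 distinct flows
    src = "10.0.%d.%d" % ((i >> 8) & 0xff, i & 0xff)
    sport = 1024 + (i >> 16)
    return (Ether(src="01:02:03:04:05:06", dst="07:08:09:0a:0b:0c") /
            IP(src=src, dst="10.1.0.10") / UDP(sport=sport, dport=319))

# Generator: materializing all 200000 packets at once would be slow.
streams = (stream(i) for i in range(200000))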
First, we test the baseline configuration, which simply bridges between the two interfaces:
sw_interface_set_flags TenGigabitEthernet81/0/0 admin-up link-up
sw_interface_set_flags TenGigabitEthernet81/0/1 admin-up link-up
bridge_domain_add_del bd_id 42 flood 1 uu-flood 1 forward 1 learn 1 arp-term 0
sw_interface_set_l2_bridge TenGigabitEthernet81/0/0 bd_id 42
sw_interface_set_l2_bridge TenGigabitEthernet81/0/1 bd_id 42
This configuration exhibited 9.52 Mpps performance.
Then we added a trivial case of a one-line "permit" ACL which is checked on input and output of the packet path, using the following VAT commands:
acl_add_replace permit
acl_interface_add_del sw_if_index 1 add input acl 0
acl_interface_add_del sw_if_index 1 add output acl 0
acl_interface_add_del sw_if_index 2 add input acl 0
acl_interface_add_del sw_if_index 2 add output acl 0
This configuration causes two very simple ACL checks - one on the input of the packet path and one on the output; see the trace below:
00:02:12:167665: dpdk-input
  TenGigabitEthernet81/0/0 rx queue 0
  buffer 0x5358: current data 0, length 60, free-list 0, totlen-nifb 0, trace 0x0
  PKT MBUF: port 0, nb_segs 1, pkt_len 60
    buf_len 2176, data_len 60, ol_flags 0x180, data_off 128, phys_addr 0x6ee4d640
    packet_type 0x0
  Packet Offload Flags
  IP4: 01:02:03:04:05:06 -> 07:08:09:0a:0b:0c
  UDP: 10.0.80.47 -> 10.1.0.10
    tos 0x00, ttl 64, length 46, checksum 0x1686
    fragment id 0x0000
  UDP: 1234 -> 319
    length 26, checksum 0x956f
00:02:12:167682: ethernet-input
  IP4: 01:02:03:04:05:06 -> 07:08:09:0a:0b:0c
00:02:12:167691: l2-input
  l2-input: sw_if_index 1 dst 07:08:09:0a:0b:0c src 01:02:03:04:05:06
00:02:12:167693: l2-input-classify
  l2-classify: sw_if_index 1, table 0, offset 0, next 9
00:02:12:167700: acl-plugin-in
  ACL_IN: sw_if_index 1, next index 10, match: inacl 0 rule 0 trace_bits 00000000
00:02:12:167709: l2-learn
  l2-learn: sw_if_index 1 dst 07:08:09:0a:0b:0c src 01:02:03:04:05:06 bd_index 1
00:02:12:167712: l2-flood
  l2-flood: sw_if_index 1 dst 07:08:09:0a:0b:0c src 01:02:03:04:05:06 bd_index 1
00:02:12:167714: l2-output
  l2-output: sw_if_index 2 dst 07:08:09:0a:0b:0c src 01:02:03:04:05:06
00:02:12:167716: l2-output-classify
  l2-classify: sw_if_index 2, table 6, offset 0, next 5
00:02:12:167723: acl-plugin-out
  ACL_OUT: sw_if_index 2, next index 4, match: outacl 0 rule 0 trace_bits 00000000
00:02:12:167734: TenGigabitEthernet81/0/1-output
  TenGigabitEthernet81/0/1
  IP4: 01:02:03:04:05:06 -> 07:08:09:0a:0b:0c
  UDP: 10.0.80.47 -> 10.1.0.10
    tos 0x00, ttl 64, length 46, checksum 0x1686
    fragment id 0x0000
  UDP: 1234 -> 319
    length 26, checksum 0x956f
00:02:12:167736: TenGigabitEthernet81/0/1-tx
  TenGigabitEthernet81/0/1 tx queue 0
  buffer 0x5358: current data 0, length 60, free-list 0, totlen-nifb 0, trace 0x0
  IP4: 01:02:03:04:05:06 -> 07:08:09:0a:0b:0c
  UDP: 10.0.80.47 -> 10.1.0.10
    tos 0x00, ttl 64, length 46, checksum 0x1686
    fragment id 0x0000
  UDP: 1234 -> 319
    length 26, checksum 0x956f
This configuration exhibited the performance of 4.62 Mpps.
To test the impact of the linear match, we add lines to the ACL one by one:
acl_add_replace 0 permit src 1.1.1.1/32,permit
performance: 4.50Mpps
acl_add_replace 0 permit src 1.1.1.1/32,src 1.1.1.2/32,permit
performance: 4.34 Mpps
acl_add_replace 0 permit src 1.1.1.1/32,src 1.1.1.2/32, src 1.1.1.3/32,permit
performance: 4.19 Mpps
acl_add_replace 0 permit src 1.1.1.1/32,src 1.1.1.2/32, src 1.1.1.3/32, src 1.1.1.4/32, permit
performance: 4.04 Mpps
acl_add_replace 0 permit src 1.1.1.1/32,src 1.1.1.2/32, src 1.1.1.3/32, src 1.1.1.4/32, src 1.1.1.5/32, permit
performance: 3.90 Mpps
acl_add_replace 0 permit src 1.1.1.1/32,src 1.1.1.2/32, src 1.1.1.3/32, src 1.1.1.4/32, src 1.1.1.5/32, src 1.1.1.6/32, permit
performance: 3.77 Mpps
acl_add_replace 0 permit src 1.1.1.1/32,src 1.1.1.2/32, src 1.1.1.3/32, src 1.1.1.4/32, src 1.1.1.5/32, src 1.1.1.6/32, src 1.1.1.7/32, permit
performance: 3.64 Mpps
acl_add_replace 0 permit src 1.1.1.1/32,src 1.1.1.2/32, src 1.1.1.3/32, src 1.1.1.4/32, src 1.1.1.5/32, src 1.1.1.6/32, src 1.1.1.7/32, src 1.1.1.8/32, permit
performance: 3.55 Mpps
acl_add_replace 0 permit src 1.1.1.1/32,src 1.1.1.2/32, src 1.1.1.3/32, src 1.1.1.4/32, src 1.1.1.5/32, src 1.1.1.6/32, src 1.1.1.7/32, src 1.1.1.8/32, src 1.1.1.9/32, permit
performance: 3.23 Mpps
acl_add_replace 0 permit src 1.1.1.1/32,src 1.1.1.2/32, src 1.1.1.3/32, src 1.1.1.4/32, src 1.1.1.5/32, src 1.1.1.6/32, src 1.1.1.7/32, src 1.1.1.8/32, src 1.1.1.9/32, src 1.1.1.10/32, permit
performance: 3.33 Mpps
acl_add_replace 0 permit src 1.1.1.1/32,src 1.1.1.2/32, src 1.1.1.3/32, src 1.1.1.4/32, src 1.1.1.5/32, src 1.1.1.6/32, src 1.1.1.7/32, src 1.1.1.8/32, src 1.1.1.9/32, src 1.1.1.10/32, src 1.1.1.11/32, permit
performance: 3.24 Mpps
acl_add_replace 0 permit src 1.1.1.1/32,src 1.1.1.2/32, src 1.1.1.3/32, src 1.1.1.4/32, src 1.1.1.5/32, src 1.1.1.6/32, src 1.1.1.7/32, src 1.1.1.8/32, src 1.1.1.9/32, src 1.1.1.10/32, src 1.1.1.11/32, src 1.1.1.12/32, permit
performance: 3.14 Mpps
acl_add_replace 0 permit src 1.1.1.1/32,src 1.1.1.2/32, src 1.1.1.3/32, src 1.1.1.4/32, src 1.1.1.5/32, src 1.1.1.6/32, src 1.1.1.7/32, src 1.1.1.8/32, src 1.1.1.9/32, src 1.1.1.10/32, src 1.1.1.11/32, src 1.1.1.12/32, src 1.1.1.13/32, permit
performance: 3.06 Mpps
acl_add_replace 0 permit src 1.1.1.1/32,src 1.1.1.2/32, src 1.1.1.3/32, src 1.1.1.4/32, src 1.1.1.5/32, src 1.1.1.6/32, src 1.1.1.7/32, src 1.1.1.8/32, src 1.1.1.9/32, src 1.1.1.10/32, src 1.1.1.11/32, src 1.1.1.12/32, src 1.1.1.13/32, src 1.1.1.14/32, permit
performance: 2.85 Mpps
We can see that the performance impact is not strictly linear, but it is close to it.
Changing the last match to "permit and reflect" to exercise the stateful processing path:
acl_add_replace 0 permit src 1.1.1.1/32,src 1.1.1.2/32, src 1.1.1.3/32, src 1.1.1.4/32, src 1.1.1.5/32, src 1.1.1.6/32, src 1.1.1.7/32, src 1.1.1.8/32, src 1.1.1.9/32, src 1.1.1.10/32, src 1.1.1.11/32, src 1.1.1.12/32, src 1.1.1.13/32, src 1.1.1.14/32, permit+reflect
performance: 3.68 Mpps
We can see that the performance of the stateful path is lower than that of the stateless path with small ACLs. This is most probably due to the current organization of the session tracker nodes, which share some code between the nodes to ease maintenance.
Removing all ACLs:
acl_interface_add_del sw_if_index 1 del input acl 0
acl_interface_add_del sw_if_index 1 del output acl 0
acl_interface_add_del sw_if_index 2 del input acl 0
acl_interface_add_del sw_if_index 2 del output acl 0
We get to 9.47 Mpps.
Let's test multiple ACL matching:
acl_add_replace src 1.1.1.1/32
acl_add_replace src 1.1.1.1/32
acl_add_replace src 1.1.1.1/32
acl_add_replace src 1.1.1.1/32
acl_add_replace src 1.1.1.1/32
First, the same condition as before, with just one ACL check:
acl_interface_set_acl_list sw_if_index 1 input 0 output 0
acl_interface_set_acl_list sw_if_index 2 input 0 output 0
performance: 4.67 Mpps
Two ACLs:
acl_interface_set_acl_list sw_if_index 1 input 1 0 output 1 0
acl_interface_set_acl_list sw_if_index 2 input 1 0 output 1 0
performance: 4.16 Mpps
Three ACLs:
acl_interface_set_acl_list sw_if_index 1 input 2 1 0 output 2 1 0
acl_interface_set_acl_list sw_if_index 2 input 2 1 0 output 2 1 0
performance: 3.73 Mpps
Four ACLs:
acl_interface_set_acl_list sw_if_index 1 input 3 2 1 0 output 3 2 1 0
acl_interface_set_acl_list sw_if_index 2 input 3 2 1 0 output 3 2 1 0
performance: 3.38 Mpps
Five ACLs:
acl_interface_set_acl_list sw_if_index 1 input 4 3 2 1 0 output 4 3 2 1 0
acl_interface_set_acl_list sw_if_index 2 input 4 3 2 1 0 output 4 3 2 1 0
Performance: 3.10 Mpps
Six ACLs:
acl_interface_set_acl_list sw_if_index 1 input 5 4 3 2 1 0 output 5 4 3 2 1 0
acl_interface_set_acl_list sw_if_index 2 input 5 4 3 2 1 0 output 5 4 3 2 1 0
Performance: 2.85 Mpps
Converting the last "permit" to stateful:
acl_add_replace 0 permit+reflect
Performance: 3.67 Mpps
These two performance axes can combine: if ACL 5 in the above test were long and did not match, the performance would be worse than with that ACL alone, and worse than with 6 trivial ACLs.
Removing all ACLs:
acl_interface_set_acl_list sw_if_index 1
acl_interface_set_acl_list sw_if_index 2
The performance is 9.45 Mpps.
MACIP ACLs
macip_acl_add permit
macip_acl_interface_add_del sw_if_index 1 add acl 0
Performance: 7.30 Mpps
macip_acl_add permit ip 128.1.0.0/7, permit ip 10.0.0.0/8
macip_acl_interface_add_del sw_if_index 1 add acl 1
Performance: 7.31 Mpps (the match hits in the first classify table)
macip_acl_add permit ip 128.1.0.0/9, permit ip 10.0.0.0/8
macip_acl_interface_add_del sw_if_index 1 add acl 1
Performance: 7.31 Mpps (the match hits in the first classify table)
(more testing with MACIP ACLs should be done)
Requirements
- Support classifiers/filters on any interface type (bridged / routed)
- Filter on IP-addresses with address mask or prefix length (IPv4 and IPv6)
- Filter on source and destination TCP/UDP port ranges
- Filter on source and destination L2 MAC addresses
- Support IPv6 with extension headers present
- Support fragmented packets and unknown transport layer headers
- Combinations of the above filters (e.g. MAC + IP)
- Filters on ingress and egress interfaces
- Stateful firewall. No application layer filtering.
Work list
Task                                     | Owner  | Priority | Status | Description
-----------------------------------------|--------|----------|--------|------------
API definition                           | Ole    | 0        | Done   | VPP-513
Connection tracker                       | Andrew | 0        | Done   | VPP-514
Stateful ACLs                            |        | 0        |        | VPP-515
ACL policy matching node (MVP)           | Andrew | 0        | Done   | input, output
Direct classifier policy matching        | -      |          |        |
Control Plane test code (new framework)  | Pavel  | 0        | WIP    |
Data Plane tests (performance + scale)   |        | 0        |        |
1. Python tests/examples -> Ole + Pavel
2a. IPv4 matching in the ACL plugin -> Andrew - done.
2b. Make it "deny by default" -> Andrew - done.
2c. Port range support -> Andrew - done.
2d. ICMP type/code matching -> Andrew - done.
3. Performance testing -> Andrew - done.
--- MVP ---
4a. Plumbing for stateful sessions from the ACL plugin (to be able to specify "match and track", i.e. "permit and create the forward/return session") -> Andrew - done.
4b. Stateful session tracking - timeouts -> Andrew - done.
4c. Stateful session tracking - lightweight TCP state -> Andrew - done.
5. MACIP (L2) rules -> Andrew - done.
6. Code cleanups -> Andrew
PHASE 2:
A. ACL/session support for L3 (routed) mode - (big)!
B. Can we implement the ACL match purely in terms of classifier tables? How expensive/(in)efficient would that be?
C. Extension header handling during the slow path lookup - easy in the ACL plugin.
D. Classifier match for sessions with extension headers - currently no extension headers are supported.
API
[https://gerrit.fd.io/r/gitweb?p=vpp.git;a=blob;f=plugins/acl-plugin/acl/acl.api;h=58a5a17180e087efec829621368b2111e406a22e;hb=refs/heads/stable/1701 API file as implemented in 17.01]
MACIP (formerly "L2") API
MACIP (renamed to avoid confusion) is an ingress-only ACL type which permits traffic based on a combination of MAC and IP address matches.
The purpose of this mechanism is to prevent spoofing.
The API as implemented supports MAC address masks and IP prefixes; however, be aware that the current implementation uses chained classifier tables, so each distinct mask/prefix-length combination requires an extra table and hence has a performance impact.
These filters are evaluated per packet, so performance needs consideration.
For best performance, use the exact match MAC mask (ff:ff:ff:ff:ff:ff) and the maximum prefix length (/32 for IPv4 and /128 for IPv6).
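As a sketch, adding an exact-match MACIP rule over the Python API might look like this. The rule field names follow the macip_acl_rule type in the acl.api file linked above; the connection boilerplate and papi call style are schematic assumptions, not the verbatim test code.

#!/usr/bin/env python3
# Sketch: exact-match MACIP rule (full MAC mask + /32 prefix), the
# best-performing case since it needs only a single classify table.
import socket
from vpp_papi import VPP

vpp = VPP()
vpp.connect("macip-example")

rule = {
    'is_permit': 1,
    'is_ipv6': 0,
    'src_mac': b'\x01\x02\x03\x04\x05\x06',       # exact source MAC
    'src_mac_mask': b'\xff\xff\xff\xff\xff\xff',  # full mask = exact match
    'src_ip_addr': socket.inet_pton(socket.AF_INET, '10.0.80.47').ljust(16, b'\x00'),
    'src_ip_prefix_len': 32,                      # exact IPv4 address
}
reply = vpp.api.macip_acl_add(count=1, r=[rule])
# Apply on ingress (MACIP ACLs are ingress-only).
vpp.api.macip_acl_interface_add_del(sw_if_index=1, is_add=1,
                                    acl_index=reply.acl_index)
vpp.disconnect()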
Design and prototyping
The ACL matching is implemented in this phase as a simple array search, under the assumption that, since the rules are per-port, the rule list will be small.
The redirection of the traffic to the node performing the ACL match is done by installing an empty L2 classifier table whose "miss-next" index diverts the traffic to that node.
The ACL match node can also redirect the traffic to the stateful-session setup node (by having "permit" = 2 in the ACE), which will create the session on that interface.
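A minimal Python sketch of this first-match behaviour (illustrative only, not the actual data-plane code; matches() and create_session() are hypothetical stand-ins for the plugin internals):

# Sketch of the linear first-match ACL semantics described above.
DENY, PERMIT, PERMIT_REFLECT = 0, 1, 2   # values of "permit" in an ACE

def acl_action(rules, pkt):
    """Walk the rule array in order; the first matching ACE decides."""
    for ace in rules:
        if matches(ace, pkt):                # compare the 5-tuple to the ACE
            if ace.permit == PERMIT_REFLECT:
                create_session(pkt)          # divert to the session-setup node
                return PERMIT
            return ace.permit
    return DENY                              # deny by default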
...TBD: more details...
CLI
All operations on ACLs must be done via the API; the plugins do not add any CLI at this point.
Examples
YANG model
Open Issues
Closed Issues
- Security Group use case specific API: done in a plugin. Two plugins, to be exact - one for ACL matching, one for session tracking.
Existing functionality
The existing functionality provides classifier-based matching (https://wiki.fd.io/view/VPP/Introduction_To_N-tuple_Classifiers).
As the above document explains, the classifier is a series of chained tables, with each table having a specific mask; this mask is the same for all entries within a table.
This has been tested to work in the L2 bridged case (test case: http://stdio.be/vpp/t/aytest-bridge-tap-py.txt).
Consider an example policy:
nova secgroup-create test-secgroup test
nova secgroup-add-rule test-secgroup icmp -1 -1 0.0.0.0/0
nova secgroup-add-rule test-secgroup tcp 22 22 0.0.0.0/0
So, assuming we match starting at offset 0 (the beginning of the packet), the mask for the first rule (ICMP) will look like this:
000000000000 000000000000 0000 00 00 0000 0000 0000 00 FF 0000 00000000 00000000 00 00 0000 0000
eth dst      eth src      et  ihl t  len  id   fo  ttl pr cs   ip4src   ip4dst   t  c  cs   id
+---------- L2 -----------+------------------- L3 IPv4 -------------------------+---- L4 ICMP ---+
For TCP matching on destination port 22, it will look as follows:
000000000000 000000000000 0000 00 00 0000 0000 0000 00 FF 0000 00000000 00000000 0000 FFFF 00000000 00000000 0000 0000 0000 0000
eth dst      eth src      et  ihl t  len  id   fo  ttl pr cs   ip4src   ip4dst   sp   dp   seq      ack      fl   win  cs   urg
+---------- L2 -----------+------------------- L3 IPv4 -------------------------+----------------- L4 TCP --------------------+
(One needs to round the number of bytes up to the nearest 16-byte boundary.)
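The offsets and the rounding can be computed mechanically. A small Python sketch for the IPv4 TCP case above (assuming untagged Ethernet and an IPv4 header without options):

# Sketch: build the "IPv4 TCP dst port 22" classifier mask, padded to a
# whole number of 16-byte vectors as the classifier requires.
L2_LEN = 14                        # untagged Ethernet header
PROTO_OFF = L2_LEN + 9             # IPv4 protocol byte -> offset 23
DPORT_OFF = L2_LEN + 20 + 2        # TCP dst port (no IP options) -> offset 36

mask = bytearray(DPORT_OFF + 2)
mask[PROTO_OFF] = 0xff                       # match the IP protocol
mask[DPORT_OFF:DPORT_OFF + 2] = b'\xff\xff'  # match the TCP dst port

n_vectors = (len(mask) + 15) // 16           # 38 bytes -> 3 vectors
mask = bytes(mask).ljust(n_vectors * 16, b'\x00')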
For IPv6, assuming no extension headers, it will look similar, with the L3 header being the IPv6 one:
000000000000 000000000000 0000 0 00 00000 0000 FF 00 00000000000000000000000000000000 00000000000000000000000000000000 00 00 0000 0000
eth dst      eth src      et   v TC fll   len  nh hl ipv6 src                         ipv6 dst                         t  c  cs   id
+---------- L2 -----------+--------------------------- L3 IPv6 -----------------------------------------------------+---- L4 ICMP ---+
For TCP matching on destination port 22, it will look as follows:
000000000000 000000000000 0000 0 00 00000 0000 FF 00 00000000000000000000000000000000 00000000000000000000000000000000 0000 FFFF 00000000 00000000 0000 0000 0000 0000
eth dst      eth src      et   v TC fll   len  nh hl ipv6 src                         ipv6 dst                         sp   dp   seq      ack      fl   win  cs   urg
+---------- L2 -----------+--------------------------- L3 IPv6 -----------------------------------------------------+----------------- L4 TCP --------------------+
Then, using these masks, one would create 4 tables via the API call:
classify_add_del_table(is_add=1, skip_n_vectors=0, mask=<MMMM>, match_n_vectors=<NNNN>, nbuckets=32, memory_size=20000, next_table_index=-1, miss_next_index=-1)
Let's call these tables "IPv4PROTO", "IPv4PROTO_TCPDPORT", "IPv6PROTO", "IPv6PROTO_TCPDPORT".
One would set the "IPv4PROTO" table as the "next_table_index" of "IPv4PROTO_TCPDPORT", and "IPv6PROTO" as the "next_table_index" of "IPv6PROTO_TCPDPORT".
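Schematically, in Python (using the classify_add_del_table parameters quoted above; vpp is a connected API handle and the masks are assumed to be pre-built and padded as in the earlier sketch; the "miss" tables are created first so the port tables can reference them):

# Sketch: create the four tables and chain them via next_table_index.
def add_table(vpp, mask, next_table_index=-1):
    # mask is already padded to a whole number of 16-byte vectors
    reply = vpp.api.classify_add_del_table(
        is_add=1, skip_n_vectors=0, mask=mask,
        match_n_vectors=len(mask) // 16, nbuckets=32, memory_size=20000,
        next_table_index=next_table_index, miss_next_index=-1)
    return reply.new_table_index

ip4proto = add_table(vpp, ip4_proto_mask)      # "IPv4PROTO"
ip4dport = add_table(vpp, ip4_tcp_dport_mask,  # "IPv4PROTO_TCPDPORT"
                     next_table_index=ip4proto)
ip6proto = add_table(vpp, ip6_proto_mask)      # "IPv6PROTO"
ip6dport = add_table(vpp, ip6_tcp_dport_mask,  # "IPv6PROTO_TCPDPORT"
                     next_table_index=ip6proto)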
Then one needs to populate the tables with the correct matches for "ICMP" and "TCP dst port 22". That can be done using the API call:
classify_add_del_session(is_add=1, table_index=<XXXX>, match=<bytes-to-match>, hit_next_index=-1)
The bytes "XXXX" above would be the match of one or several vectors, corresponding to the packet contents with the desired value.
WARNING: if the "skip" is nonzero in the table configuration, the match is still the entire bitstring, without skipping any leading bytes !!!
Then one would apply the IPv4PROTO_TCPDPORT and IPv6PROTO_TCPDPORT tables as L2 output classify tables.
The CLI for that is "set interface l2 output classify intfc <name> ip[46]-table <tableid>".
The API for this is
classify_set_interface_l2_tables(sw_if_index=<INTFC>, ip4_table_index=<IPv4PROTO_TCPDPORT>, ip6_table_index=<IPv6PROTO_TCPDPORT>, other_table_index=-1, is_input=0)
This creates a unidirectional policy. If the policy in the other direction is "permit all", this is sufficient; if not, mirror table entries will need to be created using the same logic.
The full script showing this process in detail using the Python API is at http://stdio.be/vpp/t/classifier_script_simple_policy.txt
The Java API is located in $ROOT/vpp-api/java.