Hello Patrick,

Did you have time to check the unit test design?
Do you think it can be used for short functional DTS tests?

Regards,
Gregory
<div id="appendonsend"></div>
<div style="font-family: Aptos, Aptos_EmbeddedFont, Aptos_MSFontService, Calibri, Helvetica, sans-serif; font-size: 11pt; color: rgb(0, 0, 0);">
<br>
</div>
<hr style="display: inline-block; width: 98%;">
<div dir="ltr" id="divRplyFwdMsg"><span style="font-family: Calibri, sans-serif; font-size: 11pt; color: rgb(0, 0, 0);"><b>From:</b> Gregory Etelson<br>
<b>Sent:</b> Wednesday, January 31, 2024 09:43<br>
<b>To:</b> Patrick Robb <probb@iol.unh.edu><br>
<b>Cc:</b> Gregory Etelson <getelson@nvidia.com>; Jeremy Spewock <jspewock@iol.unh.edu>; NBU-Contact-Thomas Monjalon (EXTERNAL) <thomas@monjalon.net>; Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>; Juraj Linkeš <juraj.linkes@pantheon.tech>; Paul Szczepanek
<Paul.Szczepanek@arm.com>; Yoan Picchi <yoan.picchi@foss.arm.com>; Luca Vizzarro <Luca.Vizzarro@arm.com>; ci@dpdk.org <ci@dpdk.org>; dev@dpdk.org <dev@dpdk.org>; nd <nd@arm.com>; Maayan Kashani <mkashani@nvidia.com>; Asaf Penso <asafp@nvidia.com><br>
<b>Subject:</b> Re: DTS testpmd and SCAPY integration</span>
<div> </div>
</div>
Hello Patrick,
> Thank you for sharing Gregory. I did not get an opportunity to look through the code today, but I did run
> through the presentation. A few points I noted:
> 1. The presentation shows an example testpmd testcase for creating a flow rule, and then shows a
> validation step in which standard out is compared against the expected string ("flow rule x created") and
> we can conclude whether we are able to create flow rules. Are you also sending packets according to the
> flow rules and validating that what is sent/received corresponds to the expected behavior of the flow
> rules? When I look at the old DTS framework, and an example flow rules testsuite
> (https://doc.dpdk.org/dts/test_plans/rte_flow_test_plan.html) which we want feature parity with, I think
> that validation for this testing framework needs to primarily rely on comparing packets sent and packets
> received.

The unit test infrastructure validates flow rule creation and
the result produced by that flow.
The flow result is triggered by a packet.
However, flow result validation cannot always be done by inspecting a packet.
The unit test implements two flow validation methods.

The first validation method checks the testpmd output triggered by a test packet.

Example: use the MODIFY_FIELD action to copy the packet VLAN ID to a flow TAG item.
The flow TAG is an internal flow resource; it must be validated inside the DPDK application.

The test creates 2 flow rules:

Rule 1: use MODIFY_FIELD to copy the packet VLAN ID to the flow TAG item:
pattern eth / vlan / end \
actions modify_field op set dst_type tag ... src_type vlan_id ... / end

Rule 2: validate the TAG item:
pattern tag data is 0x31 ... / end actions mark id 0xaaa / rss / end

The test sends a packet with VLAN ID 0x31: / Dot1Q(vlan=0x31) /
The test matches the testpmd output triggered by the packet against
`FDIR matched ID=0xaaa`.
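
A minimal Scapy-side sketch of this first method could look like the code below.
It is for illustration only: the interface name string and the helper that
receives the captured testpmd console output are placeholders, not part of the
unit test infrastructure itself.

# Hypothetical sketch of validation method 1: trigger the flow with a VLAN
# packet and check the captured testpmd console output for the expected string.
from scapy.all import Ether, Dot1Q, IP, UDP, sendp

def send_trigger_packet(iface: str = 'pf1') -> None:
    # Packet with VLAN ID 0x31, matching Rule 1's `eth / vlan` pattern.
    pkt = (Ether(src='11:22:33:44:55:66', dst='aa:bb:cc:dd:ee:aa') /
           Dot1Q(vlan=0x31) /
           IP(src='1.1.1.1', dst='2.2.2.2') /
           UDP(sport=1234, dport=5678) / 'test')
    sendp(pkt, iface=iface)

def validate_testpmd_output(output: str) -> bool:
    # Rule 2 marks matching packets with id 0xaaa; testpmd reports it as an FDIR ID.
    return 'FDIR matched ID=0xaaa' in output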

The second validation method tests a packet after it has been processed by a flow.

The unit test operates in a static environment. It does not compare
source and target packets; the test "knows" the valid target packet configuration.

Example: push a VLAN header into a packet.

There is a single flow rule in that example:
pattern eth / end \
actions of_push_vlan ethertype 0x8100 / \
of_set_vlan_vid vlan_vid 3103 ... / port_id id 1 / end

There are 2 SCAPY processes in that test: `tg` runs on the peer host and
sends a source packet. `vm` runs on the same host as testpmd and validates
the incoming packet.

Phase 0 prepares the test packet on the `tg` and starts an AsyncSniffer on the `vm`.
Phase 1 sends the packet.
Phase 2 validates the packet.
The test can repeat phases 1 and 2.

phase0:
  vm: |
    sniff = AsyncSniffer(iface=pf1vf0, filter='udp and src port 1234')

  tg: |
    udp_packet = Ether(src='11:22:33:44:55:66',
                       dst='aa:bb:cc:dd:ee:aa') / \
                 IP(src='1.1.1.1', dst='2.2.2.2') / \
                 UDP(sport=1234, dport=5678) / Raw('== TEST ==')

phase1: &phase1
  vm: sniff.start()
  tg: sendp(udp_packet, iface=pf1)

phase2: &phase2
  vm: |
    cap = sniff.stop()
    if len(cap[UDP]) > 0: cap[UDP][0][Ether].command()
  result:
    vm: vlan=3103
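
For readability, the same three phases can also be sketched as plain Scapy code,
collapsed onto a single host. This is only an illustration of the flow: in the
real test `tg` and `vm` are separate processes on separate hosts, the quoted
interface names stand in for the variables used above, and the Dot1Q check is
one assumed way to read the result (vlan=3103).

# Hypothetical single-host sketch of validation method 2 (push VLAN).
from scapy.all import AsyncSniffer, Ether, Dot1Q, IP, UDP, Raw, sendp

# phase 0: prepare the source packet (on `tg`) and the sniffer (on `vm`)
udp_packet = (Ether(src='11:22:33:44:55:66', dst='aa:bb:cc:dd:ee:aa') /
              IP(src='1.1.1.1', dst='2.2.2.2') /
              UDP(sport=1234, dport=5678) / Raw('== TEST =='))
sniff = AsyncSniffer(iface='pf1vf0', filter='udp and src port 1234')

# phase 1: start sniffing and send the packet
sniff.start()
sendp(udp_packet, iface='pf1')

# phase 2: stop sniffing and validate that the flow pushed VLAN ID 3103
cap = sniff.stop()
assert len(cap) > 0, 'no packet captured'
assert Dot1Q in cap[0] and cap[0][Dot1Q].vlan == 3103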

> In any case, there may be some testsuites which can be written which are small enough in scope
> that validating by standard out in this way may be appropriate. I'm not sure but we should keep our
> options open.
>
> 2. If the implementation overhead is not too significant for the configuration step in the DTS execution a
> "--fast" option like you use may be a good improvement for the framework. In your mind, is the main
> benefit A. reduced execution time, B. reduced user setup time (don't have to write full config file) or C.
> Something else?

A user must always provide the test configuration.
However, a host can already have a prepared setup before the test execution.
In that case the user can skip the host setup phase and reduce execution time.
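
A minimal sketch of such a skip, assuming a hypothetical command-line entry point
(this is not how the tool is actually implemented; the flag name and helper
functions are placeholders):

# Hypothetical sketch: gate the host setup phase behind a --fast flag so a
# pre-configured host goes straight to test execution.
import argparse

def setup_host() -> None:
    """Placeholder for the host setup phase (hugepages, driver binding, ...)."""
    print('setting up host')

def run_tests() -> None:
    """Placeholder for the actual test execution."""
    print('running tests')

def main() -> None:
    parser = argparse.ArgumentParser()
    parser.add_argument('--fast', action='store_true',
                        help='host is already prepared; skip the setup phase')
    args = parser.parse_args()
    if not args.fast:
        setup_host()
    run_tests()

if __name__ == '__main__':
    main()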

>
> Thanks for making this available to us so we can use it as a reference in making DTS better. :)
>