[dpdk-dev] [PATCH] doc: simplify L3fwd user guide examples

Pablo de Lara pablo.de.lara.guarch at intel.com
Mon Dec 19 17:34:12 CET 2016


The L3 Forwarding sample app user guides have some inconsistencies
between the example command line and the configuration table.
They also showed an overly complicated configuration, using two
different NUMA nodes for the two ports, which would likely cause a
performance drop due to the use of a cross-socket channel.

This patch simplifies the configuration in these examples by using
a single NUMA node and a single queue per port.
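
For reference, one way to check that a port and the lcores polling it
share a NUMA node (an illustrative sketch only; the PCI address below
is a placeholder):

    # NUMA node of the NIC behind the port (placeholder PCI address)
    cat /sys/bus/pci/devices/0000:03:00.0/numa_node
    # NUMA node to CPU mapping, to pick lcores on the same socket
    lscpu | grep "NUMA node"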

Signed-off-by: Pablo de Lara <pablo.de.lara.guarch at intel.com>
---
 doc/guides/sample_app_ug/l3_forward.rst            | 28 +++++--------
 .../sample_app_ug/l3_forward_access_ctrl.rst       | 47 ++++++++--------------
 doc/guides/sample_app_ug/l3_forward_virtual.rst    | 23 ++++-------
 3 files changed, 34 insertions(+), 64 deletions(-)

diff --git a/doc/guides/sample_app_ug/l3_forward.rst b/doc/guides/sample_app_ug/l3_forward.rst
index ab916b9..6a6b8fb 100644
--- a/doc/guides/sample_app_ug/l3_forward.rst
+++ b/doc/guides/sample_app_ug/l3_forward.rst
@@ -129,43 +129,33 @@ Where,
 
 * ``--parse-ptype:`` Optional, set to use software to analyze packet type. Without this option, hardware will check the packet type.
 
-For example, consider a dual processor socket platform where cores 0-7 and 16-23 appear on socket 0, while cores 8-15 and 24-31 appear on socket 1.
-Let's say that the programmer wants to use memory from both NUMA nodes, the platform has only two ports, one connected to each NUMA node,
-and the programmer wants to use two cores from each processor socket to do the packet processing.
+For example, consider a dual processor socket platform with 8 physical cores per socket,
+where cores 0-7 and 16-23 appear on socket 0, while cores 8-15 and 24-31 appear on socket 1.
 
-To enable L3 forwarding between two ports, using two cores, cores 1 and 2, from each processor,
-while also taking advantage of local memory access by optimizing around NUMA, the programmer must enable two queues from each port,
-pin to the appropriate cores and allocate memory from the appropriate NUMA node. This is achieved using the following command:
+To enable L3 forwarding between two ports, assuming that both ports are in the same socket, using two cores, cores 1 and 2
+(which are also in the same socket), use the following command:
 
 .. code-block:: console
 
-    ./build/l3fwd -c 606 -n 4 -- -p 0x3 --config="(0,0,1),(0,1,2),(1,0,9),(1,1,10)"
+    ./build/l3fwd -l 1,2 -n 4 -- -p 0x3 --config="(0,0,1),(1,0,2)"
 
 In this command:
 
-*   The -c option enables cores 0, 1, 2, 3
+*   The -l option enables cores 1 and 2
 
 *   The -p option enables ports 0 and 1
 
-*   The --config option enables two queues on each port and maps each (port,queue) pair to a specific core.
-    Logic to enable multiple RX queues using RSS and to allocate memory from the correct NUMA nodes
-    is included in the application and is done transparently.
+*   The --config option enables one queue on each port and maps each (port,queue) pair to a specific core.
     The following table shows the mapping in this example:
 
 +----------+-----------+-----------+-------------------------------------+
 | **Port** | **Queue** | **lcore** | **Description**                     |
 |          |           |           |                                     |
 +----------+-----------+-----------+-------------------------------------+
-| 0        | 0         | 0         | Map queue 0 from port 0 to lcore 0. |
+| 0        | 0         | 1         | Map queue 0 from port 0 to lcore 1. |
 |          |           |           |                                     |
 +----------+-----------+-----------+-------------------------------------+
-| 0        | 1         | 2         | Map queue 1 from port 0 to lcore 2. |
-|          |           |           |                                     |
-+----------+-----------+-----------+-------------------------------------+
-| 1        | 0         | 1         | Map queue 0 from port 1 to lcore 1. |
-|          |           |           |                                     |
-+----------+-----------+-----------+-------------------------------------+
-| 1        | 1         | 3         | Map queue 1 from port 1 to lcore 3. |
+| 1        | 0         | 2         | Map queue 0 from port 1 to lcore 2. |
 |          |           |           |                                     |
 +----------+-----------+-----------+-------------------------------------+
 
diff --git a/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst b/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst
index 4049e01..3574a25 100644
--- a/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst
+++ b/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst
@@ -306,48 +306,35 @@ where,
 
 *   --no-numa: optional, disables numa awareness
 
-As an example, consider a dual processor socket platform where cores 0, 2, 4, 6, 8 and 10 appear on socket 0,
-while cores 1, 3, 5, 7, 9 and 11 appear on socket 1.
-Let's say that the user wants to use memory from both NUMA nodes,
-the platform has only two ports and the user wants to use two cores from each processor socket to do the packet processing.
+For example, consider a dual processor socket platform with 8 physical cores per socket,
+where cores 0-7 and 16-23 appear on socket 0, while cores 8-15 and 24-31 appear on socket 1.
 
-To enable L3 forwarding between two ports, using two cores from each processor,
-while also taking advantage of local memory access by optimizing around NUMA,
-the user must enable two queues from each port,
-pin to the appropriate cores and allocate memory from the appropriate NUMA node.
-This is achieved using the following command:
+To enable L3 forwarding between two ports, assuming that both ports are in the same socket, using two cores, cores 1 and 2
+(which are also in the same socket), use the following command:
 
 ..  code-block:: console
 
-    ./build/l3fwd-acl -c f -n 4 -- -p 0x3 --config="(0,0,0),(0,1,2),(1,0,1),(1,1,3)" --rule_ipv4="./rule_ipv4.db" -- rule_ipv6="./rule_ipv6.db" --scalar
+    ./build/l3fwd-acl -l 1,2 -n 4 -- -p 0x3 --config="(0,0,1),(1,0,2)" --rule_ipv4="./rule_ipv4.db" --rule_ipv6="./rule_ipv6.db" --scalar
 
 In this command:
 
-*   The -c option enables cores 0, 1, 2, 3
+*   The -l option enables cores 1 and 2
 
 *   The -p option enables ports 0 and 1
 
-*   The --config option enables two queues on each port and maps each (port,queue) pair to a specific core.
-    Logic to enable multiple RX queues using RSS and to allocate memory from the correct NUMA nodes is included in the application
-    and is done transparently.
+*   The --config option enables one queue on each port and maps each (port,queue) pair to a specific core.
     The following table shows the mapping in this example:
 
-    +----------+------------+-----------+------------------------------------------------+
-    | **Port** | **Queue**  | **lcore** |            **Description**                     |
-    |          |            |           |                                                |
-    +==========+============+===========+================================================+
-    | 0        | 0          | 0         | Map queue 0 from port 0 to lcore 0.            |
-    |          |            |           |                                                |
-    +----------+------------+-----------+------------------------------------------------+
-    | 0        | 1          | 2         | Map queue 1 from port 0 to lcore 2.            |
-    |          |            |           |                                                |
-    +----------+------------+-----------+------------------------------------------------+
-    | 1        | 0          | 1         | Map queue 0 from port 1 to lcore 1.            |
-    |          |            |           |                                                |
-    +----------+------------+-----------+------------------------------------------------+
-    | 1        | 1          | 3         | Map queue 1 from port 1 to lcore 3.            |
-    |          |            |           |                                                |
-    +----------+------------+-----------+------------------------------------------------+
+    +----------+------------+-----------+-------------------------------------+
+    | **Port** | **Queue**  | **lcore** |            **Description**          |
+    |          |            |           |                                     |
+    +==========+============+===========+=====================================+
+    | 0        | 0          | 1         | Map queue 0 from port 0 to lcore 1. |
+    |          |            |           |                                     |
+    +----------+------------+-----------+-------------------------------------+
+    | 1        | 0          | 2         | Map queue 0 from port 1 to lcore 2. |
+    |          |            |           |                                     |
+    +----------+------------+-----------+-------------------------------------+
 
 *   The --rule_ipv4 option specifies the reading of IPv4 rules sets from the ./ rule_ipv4.db file.
 
diff --git a/doc/guides/sample_app_ug/l3_forward_virtual.rst b/doc/guides/sample_app_ug/l3_forward_virtual.rst
index fa04722..5f9d894 100644
--- a/doc/guides/sample_app_ug/l3_forward_virtual.rst
+++ b/doc/guides/sample_app_ug/l3_forward_virtual.rst
@@ -110,40 +110,33 @@ where,
 
 *   --no-numa: optional, disables numa awareness
 
-For example, consider a dual processor socket platform where cores 0,2,4,6, 8, and 10 appear on socket 0,
-while cores 1,3,5,7,9, and 11 appear on socket 1.
-Let's say that the programmer wants to use memory from both NUMA nodes,
-the platform has only two ports and the programmer wants to use one core from each processor socket to do the packet processing
-since only one Rx/Tx queue pair can be used in virtualization mode.
+For example, consider a dual processor socket platform with 8 physical cores per socket,
+where cores 0-7 and 16-23 appear on socket 0, while cores 8-15 and 24-31 appear on socket 1.
 
-To enable L3 forwarding between two ports, using one core from each processor,
-while also taking advantage of local memory accesses by optimizing around NUMA,
-the programmer can pin to the appropriate cores and allocate memory from the appropriate NUMA node.
-This is achieved using the following command:
+To enable L3 forwarding between two ports, assuming that both ports are in the same socket, using two cores, cores 1 and 2
+(which are also in the same socket), use the following command:
 
 .. code-block:: console
 
-   ./build/l3fwd-vf -c 0x03 -n 3 -- -p 0x3 --config="(0,0,0),(1,0,1)"
+   ./build/l3fwd-vf -l 1,2 -n 4 -- -p 0x3 --config="(0,0,1),(1,0,2)"
 
 In this command:
 
-*   The -c option enables cores 0 and 1
+*   The -l option enables cores 1 and 2
 
 *   The -p option enables ports 0 and 1
 
 *   The --config option enables one queue on each port and maps each (port,queue) pair to a specific core.
-    Logic to enable multiple RX queues using RSS and to allocate memory from the correct NUMA nodes
-    is included in the application and is done transparently.
     The following table shows the mapping in this example:
 
     +----------+-----------+-----------+------------------------------------+
     | **Port** | **Queue** | **lcore** | **Description**                    |
     |          |           |           |                                    |
     +==========+===========+===========+====================================+
-    | 0        | 0         | 0         | Map queue 0 from port 0 to lcore 0 |
+    | 0        | 0         | 1         | Map queue 0 from port 0 to lcore 1 |
     |          |           |           |                                    |
     +----------+-----------+-----------+------------------------------------+
-    | 1        | 1         | 1         | Map queue 0 from port 1 to lcore 1 |
+    | 1        | 0         | 2         | Map queue 0 from port 1 to lcore 2 |
     |          |           |           |                                    |
     +----------+-----------+-----------+------------------------------------+
 
-- 
2.7.4


