[dpdk-dev] [PATCH v3 10/10] doc: add application usage guide for l2fwd-event

pbhagavatula at marvell.com
Thu Sep 19 12:13:46 CEST 2019


From: Sunil Kumar Kori <skori at marvell.com>

Add documentation for l2fwd-event example.
Update MAINTAINERS file claiming responsibility of l2fwd-event.

Signed-off-by: Sunil Kumar Kori <skori at marvell.com>
---
 MAINTAINERS                                   |   5 +
 doc/guides/sample_app_ug/index.rst            |   1 +
 doc/guides/sample_app_ug/intro.rst            |   5 +
 .../l2_forward_event_real_virtual.rst         | 799 ++++++++++++++++++
 4 files changed, 810 insertions(+)
 create mode 100644 doc/guides/sample_app_ug/l2_forward_event_real_virtual.rst

diff --git a/MAINTAINERS b/MAINTAINERS
index b3d9aaddd..d8e1fa84d 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1458,6 +1458,11 @@ M: Tomasz Kantecki <tomasz.kantecki at intel.com>
 F: doc/guides/sample_app_ug/l2_forward_cat.rst
 F: examples/l2fwd-cat/
 
+M: Sunil Kumar Kori <skori at marvell.com>
+M: Pavan Nikhilesh <pbhagavatula at marvell.com>
+F: examples/l2fwd-event/
+F: doc/guides/sample_app_ug/l2_forward_event_real_virtual.rst
+
 F: examples/l3fwd/
 F: doc/guides/sample_app_ug/l3_forward.rst
 
diff --git a/doc/guides/sample_app_ug/index.rst b/doc/guides/sample_app_ug/index.rst
index f23f8f59e..83a4f8d5c 100644
--- a/doc/guides/sample_app_ug/index.rst
+++ b/doc/guides/sample_app_ug/index.rst
@@ -26,6 +26,7 @@ Sample Applications User Guides
     l2_forward_crypto
     l2_forward_job_stats
     l2_forward_real_virtual
+    l2_forward_event_real_virtual
     l2_forward_cat
     l3_forward
     l3_forward_power_man
diff --git a/doc/guides/sample_app_ug/intro.rst b/doc/guides/sample_app_ug/intro.rst
index 90704194a..b33904ed1 100644
--- a/doc/guides/sample_app_ug/intro.rst
+++ b/doc/guides/sample_app_ug/intro.rst
@@ -87,6 +87,11 @@ examples are highlighted below.
   forwarding, or ``l2fwd`` application does forwarding based on Ethernet MAC
   addresses like a simple switch.
 
+* :doc:`Network Layer 2 forwarding<l2_forward_event_real_virtual>`: The Network Layer 2
+  forwarding, or ``l2fwd-event`` application does forwarding based on Ethernet MAC
+  addresses like a simple switch. It demonstrates usage of the poll and event mode
+  Rx/Tx mechanisms.
+
 * :doc:`Network Layer 3 forwarding<l3_forward>`: The Network Layer3
   forwarding, or ``l3fwd`` application does forwarding based on Internet
   Protocol, IPv4 or IPv6 like a simple router.
diff --git a/doc/guides/sample_app_ug/l2_forward_event_real_virtual.rst b/doc/guides/sample_app_ug/l2_forward_event_real_virtual.rst
new file mode 100644
index 000000000..7cea8efaf
--- /dev/null
+++ b/doc/guides/sample_app_ug/l2_forward_event_real_virtual.rst
@@ -0,0 +1,799 @@
+..  SPDX-License-Identifier: BSD-3-Clause
+    Copyright(c) 2010-2014 Intel Corporation.
+
+.. _l2_fwd_event_app_real_and_virtual:
+
+L2 Forwarding Eventdev Sample Application (in Real and Virtualized Environments)
+================================================================================
+
+The L2 Forwarding eventdev sample application is a simple example of packet
+processing using the Data Plane Development Kit (DPDK) to demonstrate the usage
+of poll and event mode packet I/O mechanisms, and it also takes advantage of
+Single Root I/O Virtualization (SR-IOV) features in a virtualized environment.
+
+Overview
+--------
+
+The L2 Forwarding eventdev sample application, which can operate in real and
+virtualized environments, performs L2 forwarding for each packet that is
+received on an RX_PORT. The destination port is the adjacent port from the
+enabled portmask, that is, if the first four ports are enabled (portmask=0x0f),
+ports 1 and 2 forward into each other, and ports 3 and 4 forward into each
+other. Also, if MAC addresses updating is enabled, the MAC addresses are
+affected as follows:
+
+*   The source MAC address is replaced by the TX_PORT MAC address
+
+*   The destination MAC address is replaced by 02:00:00:00:00:TX_PORT_ID
+
+The application receives packets from RX_PORT using one of the following methods:
+
+*   Poll mode
+
+*   Eventdev mode (default)
+
+This application can be used to benchmark performance using a traffic-generator,
+as shown in the :numref:`figure_l2_fwd_benchmark_setup`, or in a virtualized
+environment as shown in :numref:`figure_l2_fwd_virtenv_benchmark_setup`.
+
+.. _figure_l2_fwd_benchmark_setup:
+
+.. figure:: img/l2_fwd_benchmark_setup.*
+
+   Performance Benchmark Setup (Basic Environment)
+
+.. _figure_l2_fwd_virtenv_benchmark_setup:
+
+.. figure:: img/l2_fwd_virtenv_benchmark_setup.*
+
+   Performance Benchmark Setup (Virtualized Environment)
+
+This application may be used for basic VM to VM communication as shown
+in :numref:`figure_l2_fwd_vm2vm`, when MAC addresses updating is disabled.
+
+.. _figure_l2_fwd_vm2vm:
+
+.. figure:: img/l2_fwd_vm2vm.*
+
+   Virtual Machine to Virtual Machine communication.
+
+.. _l2_fwd_event_vf_setup:
+
+Virtual Function Setup Instructions
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The application can use the virtual functions available in the system and
+therefore can be used in a virtual machine without passing the whole Network
+Device through to the guest in a virtualized scenario. The virtual functions
+can be enabled on the host machine or the hypervisor with the respective
+physical function driver.
+
+For example, on a Linux* host machine, it is possible to enable a virtual
+function using the following command:
+
+.. code-block:: console
+
+    modprobe ixgbe max_vfs=2,2
+
+This command enables two Virtual Functions on each Physical Function of the
+NIC, with two physical ports in the PCI configuration space.
+
+It is important to note that the enabled Virtual Functions 0 and 2 would belong
+to Physical Function 0 and Virtual Functions 1 and 3 would belong to Physical
+Function 1, in this case enabling a total of four Virtual Functions.
+
+Compiling the Application
+-------------------------
+
+To compile the sample application see :doc:`compiling`.
+
+The application is located in the ``l2fwd-event`` sub-directory.
+
+Running the Application
+-----------------------
+
+The application requires a number of command line options:
+
+.. code-block:: console
+
+    ./build/l2fwd-event [EAL options] -- -p PORTMASK [-q NQ] --[no-]mac-updating --mode=MODE --eventq-sync=SYNC_MODE
+
+where,
+
+*   p PORTMASK: A hexadecimal bitmask of the ports to configure
+
+*   q NQ: A number of queues (=ports) per lcore (default is 1)
+
+*   --[no-]mac-updating: Enable or disable MAC addresses updating (enabled by default).
+
+*   --mode=MODE: Packet transfer mode for I/O, poll or eventdev. Eventdev by default.
+
+*   --eventq-sync=SYNC_MODE: Event queue synchronization method, Ordered or Atomic. Atomic by default.
+
+Sample commands to run the application in different modes are given below.
+
+To run the application in poll mode on a Linux environment with 4 lcores, 16 ports,
+8 RX queues per lcore and MAC address updating enabled, issue the command:
+
+.. code-block:: console
+
+    ./build/l2fwd-event -l 0-3 -n 4 -- -q 8 -p ffff --mode=poll
+
+To run the application in eventdev mode on a Linux environment with 4 lcores,
+16 ports, synchronization method ordered and MAC address updating enabled, issue the command:
+
+.. code-block:: console
+
+    ./build/l2fwd-event -l 0-3 -n 4 -- -p ffff --eventq-sync=ordered
+
+or
+
+.. code-block:: console
+
+    ./build/l2fwd-event -l 0-3 -n 4 -- -q 8 -p ffff --mode=eventdev --eventq-sync=ordered
+
+Refer to the *DPDK Getting Started Guide* for general information on running
+applications and the Environment Abstraction Layer (EAL) options.
+
+When run with the S/W scheduler, the application uses the following DPDK services:
+
+*   Software scheduler
+*   Rx adapter service function
+*   Tx adapter service function
+
+The application needs service cores to run the above mentioned services. Service
+cores must be provided as EAL parameters along with ``--vdev=event_sw0`` to
+enable the S/W scheduler. A sample command is:
+
+.. code-block:: console
+
+    ./build/l2fwd-event -l 0-7 -s 0-3 -n 4 --vdev event_sw0 -- -q 8 -p ffff --mode=eventdev --eventq-sync=ordered
+
+Explanation
+-----------
+
+The following sections provide some explanation of the code.
+
+.. _l2_fwd_event_app_cmd_arguments:
+
+Command Line Arguments
+~~~~~~~~~~~~~~~~~~~~~~
+
+The L2 Forwarding eventdev sample application takes specific parameters,
+in addition to Environment Abstraction Layer (EAL) arguments.
+The preferred way to parse parameters is to use the getopt() function,
+since it is part of a well-defined and portable library.
+
+The parsing of arguments is done in the **l2fwd_parse_args()** function for non
+eventdev parameters and in **parse_eventdev_args()** for eventdev parameters.
+The method of argument parsing is not described here. Refer to the
+*glibc getopt(3)* man page for details.
+
+EAL arguments are parsed first, then application-specific arguments.
+This is done at the beginning of the main() function and eventdev parameters
+are parsed in the eventdev_resource_setup() function during eventdev setup:
+
+.. code-block:: c
+
+    /* init EAL */
+
+    ret = rte_eal_init(argc, argv);
+    if (ret < 0)
+        rte_exit(EXIT_FAILURE, "Invalid EAL arguments\n");
+
+    argc -= ret;
+    argv += ret;
+
+    /* parse application arguments (after the EAL ones) */
+
+    ret = l2fwd_parse_args(argc, argv);
+    if (ret < 0)
+        rte_exit(EXIT_FAILURE, "Invalid L2FWD arguments\n");
+    .
+    .
+    .
+
+    /* Parse eventdev command line options */
+    ret = parse_eventdev_args(argc, argv);
+    if (ret < 0)
+        return ret;
+
+.. _l2_fwd_event_app_mbuf_init:
+
+Mbuf Pool Initialization
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+Once the arguments are parsed, the mbuf pool is created.
+The mbuf pool contains a set of mbuf objects that will be used by the driver
+and the application to store network packet data:
+
+.. code-block:: c
+
+    /* create the mbuf pool */
+
+    l2fwd_pktmbuf_pool = rte_pktmbuf_pool_create("mbuf_pool", NB_MBUF,
+                                                 MEMPOOL_CACHE_SIZE, 0,
+                                                 RTE_MBUF_DEFAULT_BUF_SIZE,
+                                                 rte_socket_id());
+    if (l2fwd_pktmbuf_pool == NULL)
+        rte_panic("Cannot init mbuf pool\n");
+
+The rte_mempool is a generic structure used to handle pools of objects.
+In this case, it is necessary to create a pool that will be used by the driver.
+The number of allocated pkt mbufs is NB_MBUF, with a data room size of
+RTE_MBUF_DEFAULT_BUF_SIZE each.
+A per-lcore cache of MEMPOOL_CACHE_SIZE mbufs is kept.
+The memory is allocated in NUMA socket 0,
+but it is possible to extend this code to allocate one mbuf pool per socket.
+
+The rte_pktmbuf_pool_create() function uses the default mbuf pool and mbuf
+initializers, respectively rte_pktmbuf_pool_init() and rte_pktmbuf_init().
+An advanced application may want to use the mempool API to create the
+mbuf pool with more control.
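+
+As a minimal illustration, assuming the same pool parameters, an equivalent pool
+could be created with ``rte_pktmbuf_pool_create_by_ops()``, which additionally
+lets the application select the mempool ops; the ``"ring_mp_mc"`` ops name below
+is only an illustrative choice and not what the application does:
+
+.. code-block:: c
+
+    /* Illustrative alternative, not taken from the application code. */
+    l2fwd_pktmbuf_pool = rte_pktmbuf_pool_create_by_ops("mbuf_pool", NB_MBUF,
+                                                        MEMPOOL_CACHE_SIZE, 0,
+                                                        RTE_MBUF_DEFAULT_BUF_SIZE,
+                                                        rte_socket_id(),
+                                                        "ring_mp_mc");
+    if (l2fwd_pktmbuf_pool == NULL)
+        rte_panic("Cannot init mbuf pool\n");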
+
+.. _l2_fwd_event_app_dvr_init:
+
+Driver Initialization
+~~~~~~~~~~~~~~~~~~~~~
+
+The main part of the code in the main() function relates to the initialization
+of the driver. To fully understand this code, it is recommended to study the
+chapters related to the Poll Mode Driver and Event Mode Driver in the
+*DPDK Programmer's Guide* and the *DPDK API Reference*.
+
+.. code-block:: c
+
+    if (rte_pci_probe() < 0)
+        rte_exit(EXIT_FAILURE, "Cannot probe PCI\n");
+
+    /* reset l2fwd_dst_ports */
+
+    for (portid = 0; portid < RTE_MAX_ETHPORTS; portid++)
+        l2fwd_dst_ports[portid] = 0;
+
+    last_port = 0;
+
+    /*
+     * Each logical core is assigned a dedicated TX queue on each port.
+     */
+
+    RTE_ETH_FOREACH_DEV(portid) {
+        /* skip ports that are not enabled */
+
+        if ((l2fwd_enabled_port_mask & (1 << portid)) == 0)
+           continue;
+
+        if (nb_ports_in_mask % 2) {
+            l2fwd_dst_ports[portid] = last_port;
+            l2fwd_dst_ports[last_port] = portid;
+        }
+        else
+           last_port = portid;
+
+        nb_ports_in_mask++;
+
+        rte_eth_dev_info_get((uint8_t) portid, &dev_info);
+    }
+
+Observe that:
+
+*   rte_igb_pmd_init_all() simultaneously registers the driver as a PCI driver
+    and as an Ethernet Poll Mode Driver.
+
+*   rte_pci_probe() parses the devices on the PCI bus and initializes recognized
+    devices.
+
+The next step is to configure the RX and TX queues. For each port, there is only
+one RX queue (only one lcore is able to poll a given port). The number of TX
+queues depends on the number of available lcores. The rte_eth_dev_configure()
+function is used to configure the number of queues for a port:
+
+.. code-block:: c
+
+    ret = rte_eth_dev_configure((uint8_t)portid, 1, 1, &port_conf);
+    if (ret < 0)
+        rte_exit(EXIT_FAILURE, "Cannot configure device: "
+            "err=%d, port=%u\n",
+            ret, portid);
+
+.. _l2_fwd_event_app_rx_init:
+
+RX Queue Initialization
+~~~~~~~~~~~~~~~~~~~~~~~
+
+The application uses one lcore to poll one or several ports, depending on the -q
+option, which specifies the number of queues per lcore.
+
+For example, if the user specifies -q 4, the application is able to poll four
+ports with one lcore. If there are 16 ports on the target (and if the portmask
+argument is -p ffff ), the application will need four lcores to poll all the
+ports.
+
+.. code-block:: c
+
+    ret = rte_eth_rx_queue_setup((uint8_t) portid, 0, nb_rxd, SOCKET0,
+                                 &rx_conf, l2fwd_pktmbuf_pool);
+    if (ret < 0)
+
+        rte_exit(EXIT_FAILURE, "rte_eth_rx_queue_setup: "
+            "err=%d, port=%u\n",
+            ret, portid);
+
+The list of queues that must be polled for a given lcore is stored in a private
+structure called struct lcore_queue_conf.
+
+.. code-block:: c
+
+    struct lcore_queue_conf {
+        unsigned n_rx_port;
+        unsigned rx_port_list[MAX_RX_QUEUE_PER_LCORE];
+        struct mbuf_table tx_mbufs[L2FWD_MAX_PORTS];
+    } rte_cache_aligned;
+
+    struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
+
+The values n_rx_port and rx_port_list[] are used in the main packet processing
+loop (see :ref:`l2_fwd_event_app_rx_tx_packets`).
+
+.. _l2_fwd_event_app_tx_init:
+
+TX Queue Initialization
+~~~~~~~~~~~~~~~~~~~~~~~
+
+Each lcore should be able to transmit on any port. For every port, a single TX
+queue is initialized.
+
+.. code-block:: c
+
+    /* init one TX queue on each port */
+
+    fflush(stdout);
+
+    ret = rte_eth_tx_queue_setup((uint8_t) portid, 0, nb_txd,
+                                 rte_eth_dev_socket_id(portid), &tx_conf);
+    if (ret < 0)
+        rte_exit(EXIT_FAILURE, "rte_eth_tx_queue_setup:err=%d, port=%u\n",
+                 ret, (unsigned) portid);
+
+The global configuration for TX queues is stored in a static structure:
+
+.. code-block:: c
+
+    static const struct rte_eth_txconf tx_conf = {
+        .tx_thresh = {
+            .pthresh = TX_PTHRESH,
+            .hthresh = TX_HTHRESH,
+            .wthresh = TX_WTHRESH,
+        },
+        .tx_free_thresh = RTE_TEST_TX_DESC_DEFAULT + 1, /* disable feature */
+    };
+
+To configure eventdev support, the application sets up the following components:
+
+*   Event dev
+*   Event queue
+*   Event Port
+*   Rx/Tx adapters
+*   Ethernet ports
+
+.. _l2_fwd_event_app_event_dev_init:
+
+Event dev Initialization
+~~~~~~~~~~~~~~~~~~~~~~~~
+The application can use either a H/W or S/W based event device scheduler
+implementation and supports a single event device instance. The event device is
+configured as per the below configuration:
+
+.. code-block:: c
+
+   struct rte_event_dev_config event_d_conf = {
+        .nb_event_queues = ethdev_count, /* Dedicated to each Ethernet port */
+        .nb_event_ports = num_workers, /* Dedicated to each lcore */
+        .nb_events_limit  = 4096,
+        .nb_event_queue_flows = 1024,
+        .nb_event_port_dequeue_depth = 128,
+        .nb_event_port_enqueue_depth = 128
+   };
+
+   ret = rte_event_dev_configure(event_d_id, &event_d_conf);
+   if (ret < 0)
+        rte_exit(EXIT_FAILURE, "Error in configuring event device");
+
+In case of the S/W scheduler, the application runs the eventdev scheduler service
+on a service core. The application retrieves the service id and then starts the
+service on a given lcore:
+
+.. code-block:: c
+
+        /* Start event device service */
+        ret = rte_event_dev_service_id_get(eventdev_rsrc.event_d_id,
+                                           &service_id);
+        if (ret != -ESRCH && ret != 0)
+                rte_exit(EXIT_FAILURE, "Error in starting eventdev");
+
+        rte_service_runstate_set(service_id, 1);
+        rte_service_set_runstate_mapped_check(service_id, 0);
+        eventdev_rsrc.service_id = service_id;
+
+        /* Start eventdev scheduler service */
+        rte_service_map_lcore_set(eventdev_rsrc.service_id, lcore_id[0], 1);
+        rte_service_lcore_start(lcore_id[0]);
+
+.. _l2_fwd_app_event_queue_init:
+
+Event queue Initialization
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+Each Ethernet device is assigned a dedicated event queue which will be linked
+to all available event ports i.e. each lcore can dequeue packets from any of the
+Ethernet ports.
+
+.. code-block:: c
+
+   struct rte_event_queue_conf event_q_conf = {
+        .nb_atomic_flows = 1024,
+        .nb_atomic_order_sequences = 1024,
+        .event_queue_cfg = 0,
+        .schedule_type = RTE_SCHED_TYPE_ATOMIC,
+        .priority = RTE_EVENT_DEV_PRIORITY_HIGHEST
+   };
+
+   /* User requested sync mode */
+   event_q_conf.schedule_type = eventq_sync_mode;
+   for (event_q_id = 0; event_q_id < ethdev_count; event_q_id++) {
+        ret = rte_event_queue_setup(event_d_id, event_q_id,
+                                            &event_q_conf);
+        if (ret < 0) {
+              rte_exit(EXIT_FAILURE,
+                       "Error in configuring event queue");
+        }
+   }
+
+In case of the S/W scheduler, an extra event queue is created, which is used by
+the Tx adapter service function for the enqueue operation.
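+
+A minimal sketch of this extra queue setup, assuming the queue is configured as a
+single-link queue dedicated to the Tx adapter service port (an illustration, not
+the exact application code):
+
+.. code-block:: c
+
+   /* Illustrative sketch: one additional event queue, dequeued only by
+    * the Tx adapter service port, hence configured as single link.
+    */
+   struct rte_event_queue_conf tx_q_conf = {
+        .priority = RTE_EVENT_DEV_PRIORITY_HIGHEST,
+        .event_queue_cfg = RTE_EVENT_QUEUE_CFG_SINGLE_LINK,
+   };
+
+   ret = rte_event_queue_setup(event_d_id, ethdev_count, &tx_q_conf);
+   if (ret < 0)
+        rte_exit(EXIT_FAILURE, "Error in configuring Tx adapter event queue");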
+
+.. _l2_fwd_app_event_port_init:
+
+Event port Initialization
+~~~~~~~~~~~~~~~~~~~~~~~~~
+Each worker thread is assigned a dedicated event port for enq/deq operations
+to/from an event device. All event ports are linked with all available event
+queues.
+
+.. code-block:: c
+
+   struct rte_event_port_conf event_p_conf = {
+        .dequeue_depth = 32,
+        .enqueue_depth = 32,
+        .new_event_threshold = 4096
+   };
+
+   for (event_p_id = 0; event_p_id < num_workers; event_p_id++) {
+        ret = rte_event_port_setup(event_d_id, event_p_id,
+                                   &event_p_conf);
+        if (ret < 0) {
+              rte_exit(EXIT_FAILURE,
+                       "Error in configuring event port %d\n",
+                       event_p_id);
+        }
+
+        ret = rte_event_port_link(event_d_id, event_p_id, NULL,
+                                  NULL, 0);
+        if (ret < 0) {
+              rte_exit(EXIT_FAILURE, "Error in linking event port %d "
+                       "to event queue", event_p_id);
+        }
+   }
+
+In case of the S/W scheduler, an extra event port is created by the DPDK library,
+which is retrieved by the application and used by the Tx adapter service.
+
+.. code-block:: c
+
+        ret = rte_event_eth_tx_adapter_event_port_get(tx_adptr_id, &tx_port_id);
+        if (ret)
+                rte_exit(EXIT_FAILURE,
+                         "Failed to get Tx adapter port id: %d\n", ret);
+
+        ret = rte_event_port_link(event_d_id, tx_port_id,
+                                  &eventdev_rsrc.evq.event_q_id[
+                                        eventdev_rsrc.evq.nb_queues - 1],
+                                  NULL, 1);
+        if (ret != 1)
+                rte_exit(EXIT_FAILURE,
+                         "Unable to link Tx adapter port to Tx queue:err = %d",
+                         ret);
+
+.. _l2_fwd_event_app_adapter_init:
+
+Rx/Tx adapter Initialization
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+For the H/W scheduler, each Ethernet port is assigned a dedicated Rx/Tx adapter.
+Each Ethernet port's Rx queues are connected to its respective event queue at
+priority 0 via the Rx adapter configuration and the Ethernet port's Tx queues
+are connected via the Tx adapter.
+
+.. code-block:: c
+
+        struct rte_event_port_conf event_p_conf = {
+                .dequeue_depth = 32,
+                .enqueue_depth = 32,
+                .new_event_threshold = 4096
+        };
+
+        for (i = 0; i < ethdev_count; i++) {
+                ret = rte_event_eth_rx_adapter_create(i, event_d_id,
+                                                      &event_p_conf);
+                if (ret)
+                        rte_exit(EXIT_FAILURE,
+                                 "failed to create rx adapter[%d]", i);
+
+                /* Configure user requested sync mode */
+                eth_q_conf.ev.queue_id = eventdev_rsrc.evq.event_q_id[i];
+                eth_q_conf.ev.sched_type = eventq_sync_mode;
+                ret = rte_event_eth_rx_adapter_queue_add(i, i, -1, &eth_q_conf);
+                if (ret)
+                        rte_exit(EXIT_FAILURE,
+                                 "Failed to add queues to Rx adapter");
+
+                ret = rte_event_eth_rx_adapter_start(i);
+                if (ret)
+                        rte_exit(EXIT_FAILURE,
+                                 "Rx adapter[%d] start failed", i);
+
+                eventdev_rsrc.rx_adptr.rx_adptr[i] = i;
+        }
+
+        for (i = 0; i < ethdev_count; i++) {
+                ret = rte_event_eth_tx_adapter_create(i, event_d_id,
+                                                      &event_p_conf);
+                if (ret)
+                        rte_exit(EXIT_FAILURE,
+                                 "failed to create tx adapter[%d]", i);
+
+                ret = rte_event_eth_tx_adapter_queue_add(i, i, -1);
+                if (ret)
+                        rte_exit(EXIT_FAILURE,
+                                 "failed to add queues to Tx adapter");
+
+                ret = rte_event_eth_tx_adapter_start(i);
+                if (ret)
+                        rte_exit(EXIT_FAILURE,
+                                 "Tx adapter[%d] start failed", i);
+
+                eventdev_rsrc.tx_adptr.tx_adptr[i] = i;
+        }
+
+For the S/W scheduler, instead of dedicated adapters, common Rx/Tx adapters are
+configured and shared among all the Ethernet ports. The DPDK library also needs
+service cores to run the internal services for the Rx/Tx adapters. The
+application gets the service ids for the Rx/Tx adapters and, after successful
+setup, runs the services on dedicated service cores.
+
+.. code-block:: c
+
+        /* retrieving service Id for Rx adapter */
+        ret = rte_event_eth_rx_adapter_service_id_get(rx_adptr_id, &service_id);
+        if (ret != -ESRCH && ret != 0) {
+                rte_exit(EXIT_FAILURE,
+                        "Error getting the service ID for rx adptr\n");
+        }
+
+        rte_service_runstate_set(service_id, 1);
+        rte_service_set_runstate_mapped_check(service_id, 0);
+        eventdev_rsrc.rx_adptr.service_id = service_id;
+
+        /* Start eventdev Rx adapter service */
+        rte_service_map_lcore_set(eventdev_rsrc.rx_adptr.service_id,
+                                  lcore_id[1], 1);
+        rte_service_lcore_start(lcore_id[1]);
+
+        /* retrieving service Id for Tx adapter */
+        ret = rte_event_eth_tx_adapter_service_id_get(tx_adptr_id, &service_id);
+        if (ret != -ESRCH && ret != 0)
+                rte_exit(EXIT_FAILURE, "Failed to get Tx adapter service ID");
+
+        rte_service_runstate_set(service_id, 1);
+        rte_service_set_runstate_mapped_check(service_id, 0);
+        eventdev_rsrc.tx_adptr.service_id = service_id;
+
+        /* Start eventdev Tx adapter service */
+        rte_service_map_lcore_set(eventdev_rsrc.tx_adptr.service_id,
+                                  lcore_id[2], 1);
+        rte_service_lcore_start(lcore_id[2]);
+
+.. _l2_fwd_event_app_rx_tx_packets:
+
+Receive, Process and Transmit Packets
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+In the **l2fwd_main_loop()** function, the main task is to read ingress packets from
+the RX queues. This is done using the following code:
+
+.. code-block:: c
+
+    /*
+     * Read packet from RX queues
+     */
+
+    for (i = 0; i < qconf->n_rx_port; i++) {
+        portid = qconf->rx_port_list[i];
+        nb_rx = rte_eth_rx_burst((uint8_t) portid, 0,  pkts_burst,
+                                 MAX_PKT_BURST);
+
+        for (j = 0; j < nb_rx; j++) {
+            m = pkts_burst[j];
+            rte_prefetch0(rte_pktmbuf_mtod(m, void *));
+            l2fwd_simple_forward(m, portid);
+        }
+    }
+
+Packets are read in a burst of size MAX_PKT_BURST. The rte_eth_rx_burst()
+function writes the mbuf pointers in a local table and returns the number of
+available mbufs in the table.
+
+Then, each mbuf in the table is processed by the l2fwd_simple_forward()
+function. The processing is very simple: derive the TX port from the RX port,
+then replace the source and destination MAC addresses if MAC addresses updating
+is enabled.
+
+.. note::
+
+    In the following code, one line for getting the output port requires some
+    explanation.
+
+During the initialization process, a static array of destination ports
+(l2fwd_dst_ports[]) is filled such that for each source port, a destination port
+is assigned that is either the next or previous enabled port from the portmask.
+If the number of ports in the portmask is odd, packets from the last port are
+forwarded to the first port, i.e. if portmask=0x07, forwarding takes place as
+p0--->p1, p1--->p2, p2--->p0.
+
+Also, to optimize the enqueue operation, l2fwd_simple_forward() buffers incoming
+mbufs up to MAX_PKT_BURST. Once the limit is reached, all buffered packets are
+transmitted to the destination ports.
+
+.. code-block:: c
+
+   static void
+   l2fwd_simple_forward(struct rte_mbuf *m, uint32_t portid)
+   {
+       uint32_t dst_port;
+       int32_t sent;
+       struct rte_eth_dev_tx_buffer *buffer;
+
+       dst_port = l2fwd_dst_ports[portid];
+
+       if (mac_updating)
+           l2fwd_mac_updating(m, dst_port);
+
+       buffer = tx_buffer[dst_port];
+       sent = rte_eth_tx_buffer(dst_port, 0, buffer, m);
+       if (sent)
+           port_statistics[dst_port].tx += sent;
+   }
+
+For this test application, the processing is exactly the same for all packets
+arriving on the same RX port. Therefore, it would have been possible to call
+the rte_eth_tx_buffer() function directly from the main loop to send all the
+received packets on the same TX port, using the burst-oriented send function,
+which is more efficient.
+
+However, in real-life applications (such as, L3 routing),
+packet N is not necessarily forwarded on the same port as packet N-1.
+The application is implemented to illustrate that, so the same approach can be
+reused in a more complex application.
+
+To ensure that no packets remain in the tables, each lcore drains its TX queues
+in its main loop. This technique introduces some latency when there are not many
+packets to send, however it improves performance:
+
+.. code-block:: c
+
+        cur_tsc = rte_rdtsc();
+
+        /*
+        * TX burst queue drain
+        */
+        diff_tsc = cur_tsc - prev_tsc;
+        if (unlikely(diff_tsc > drain_tsc)) {
+                for (i = 0; i < qconf->n_rx_port; i++) {
+                        portid = l2fwd_dst_ports[qconf->rx_port_list[i]];
+                        buffer = tx_buffer[portid];
+                        sent = rte_eth_tx_buffer_flush(portid, 0,
+                                                       buffer);
+                        if (sent)
+                                port_statistics[portid].tx += sent;
+                }
+
+                /* if timer is enabled */
+                if (timer_period > 0) {
+                        /* advance the timer */
+                        timer_tsc += diff_tsc;
+
+                        /* if timer has reached its timeout */
+                        if (unlikely(timer_tsc >= timer_period)) {
+                                /* do this only on master core */
+                                if (lcore_id == rte_get_master_lcore()) {
+                                        print_stats();
+                                        /* reset the timer */
+                                        timer_tsc = 0;
+                                }
+                        }
+                }
+
+                prev_tsc = cur_tsc;
+        }
+
+In the **l2fwd_main_loop_eventdev()** function, the main task is to read ingress
+packets from the event ports. This is done using the following code:
+
+.. code-block:: c
+
+        /* Read packet from eventdev */
+        nb_rx = rte_event_dequeue_burst(event_d_id, event_p_id,
+                                        events, deq_len, 0);
+        if (nb_rx == 0) {
+                rte_pause();
+                continue;
+        }
+
+        for (i = 0; i < nb_rx; i++) {
+                mbuf[i] = events[i].mbuf;
+                rte_prefetch0(rte_pktmbuf_mtod(mbuf[i], void *));
+        }
+
+
+Before reading packets, deq_len is fetched to ensure that the dequeue burst size
+does not exceed the length allowed by the eventdev.
+The rte_event_dequeue_burst() function writes the mbuf pointers in a local table
+and returns the number of available mbufs in the table.
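+
+A minimal sketch of fetching this limit, assuming it is read from the event
+port's dequeue depth attribute (the application may obtain it differently):
+
+.. code-block:: c
+
+        /* Illustrative sketch: query the port's allowed dequeue depth. */
+        uint32_t deq_len = 1;
+
+        if (rte_event_port_attr_get(event_d_id, event_p_id,
+                                    RTE_EVENT_PORT_ATTR_DEQ_DEPTH, &deq_len))
+                deq_len = 1; /* fall back to one event per dequeue */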
+
+Then, each mbuf in the table is processed by the l2fwd_eventdev_forward()
+function. The processing is very simple: derive the TX port from the RX port,
+then replace the source and destination MAC addresses if MAC addresses updating
+is enabled.
+
+.. note::
+
+    In the following code, one line for getting the output port requires some
+    explanation.
+
+During the initialization process, a static array of destination ports
+(l2fwd_dst_ports[]) is filled such that for each source port, a destination port
+is assigned that is either the next or previous enabled port from the portmask.
+If the number of ports in the portmask is odd, packets from the last port are
+forwarded to the first port, i.e. if portmask=0x07, forwarding takes place as
+p0--->p1, p1--->p2, p2--->p0.
+
+l2fwd_eventdev_forward() does not store incoming mbufs. Packets are forwarded to
+the destination ports via the Tx adapter or the generic eventdev enqueue API,
+depending on whether the H/W or S/W scheduler is used.
+
+.. code-block:: c
+
+        static inline void
+        l2fwd_eventdev_forward(struct rte_mbuf *m[], uint32_t portid,
+                               uint16_t nb_rx, uint16_t event_p_id)
+        {
+                uint32_t dst_port, i;
+
+                dst_port = l2fwd_dst_ports[portid];
+
+                for (i = 0; i < nb_rx; i++) {
+                        if (mac_updating)
+                                l2fwd_mac_updating(m[i], dst_port);
+
+                        m[i]->port = dst_port;
+                }
+
+                if (timer_period > 0) {
+                        rte_spinlock_lock(&port_stats_lock);
+                        port_statistics[dst_port].tx += nb_rx;
+                        rte_spinlock_unlock(&port_stats_lock);
+                }
+                /* Registered callback is invoked for Tx */
+                eventdev_rsrc.send_burst_eventdev(m, nb_rx, event_p_id);
+        }
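+
+The registered callback differs between the H/W and S/W scheduler setups. As an
+illustration only, a sketch of what the generic eventdev enqueue path could look
+like is given below; ``event_d_id`` and ``tx_event_q_id`` are assumed stand-ins
+for the application's event device id and the extra Tx event queue, and the
+function name is hypothetical:
+
+.. code-block:: c
+
+        /* Illustrative sketch: wrap each mbuf in an event and enqueue it
+         * back to the event device, from where the Tx adapter service
+         * dequeues and transmits it.
+         */
+        static void
+        send_burst_eventdev_generic(struct rte_mbuf *m[], uint16_t nb_rx,
+                                    uint16_t event_p_id)
+        {
+                struct rte_event events[MAX_PKT_BURST];
+                uint16_t enq, i;
+
+                for (i = 0; i < nb_rx; i++) {
+                        events[i].queue_id = tx_event_q_id;
+                        events[i].op = RTE_EVENT_OP_FORWARD;
+                        events[i].sched_type = RTE_SCHED_TYPE_ATOMIC;
+                        events[i].event_type = RTE_EVENT_TYPE_CPU;
+                        events[i].mbuf = m[i];
+                }
+
+                enq = rte_event_enqueue_burst(event_d_id, event_p_id,
+                                              events, nb_rx);
+                while (enq < nb_rx)
+                        enq += rte_event_enqueue_burst(event_d_id, event_p_id,
+                                                       events + enq,
+                                                       nb_rx - enq);
+        }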
-- 
2.17.1


