[PATCH v5 36/54] doc: correct grammar in mldev library guide
Stephen Hemminger
stephen at networkplumber.org
Sun Jan 18 20:10:39 CET 2026
Correct various grammar and style issues in the ML device library
documentation:
- fix subject-verb agreement for "API which supports"
- use compound word "Workflow" instead of "Work flow"
- fix parallel construction for model load and start
- use plural "feature sets"
- rewrite grammatically broken sentence about rte_ml_dev_info_get
- add missing article before "number of queue pairs"
- use consistent terminology "operations" not "packets"
- fix malformed sentence about dequeue API format
- add missing word "with" in quantize section
Signed-off-by: Stephen Hemminger <stephen at networkplumber.org>
---
doc/guides/prog_guide/mldev.rst | 20 ++++++++++----------
1 file changed, 10 insertions(+), 10 deletions(-)
diff --git a/doc/guides/prog_guide/mldev.rst b/doc/guides/prog_guide/mldev.rst
index 61661b998b..094a67cbdb 100644
--- a/doc/guides/prog_guide/mldev.rst
+++ b/doc/guides/prog_guide/mldev.rst
@@ -6,7 +6,7 @@ Machine Learning (ML) Device Library
The Machine Learning (ML) Device library provides a Machine Learning device framework for the management and
provisioning of hardware and software ML poll mode drivers,
-defining an API which support a number of ML operations
+defining an API which supports a number of ML operations
including device handling and inference processing.
The ML model creation and training is outside of the scope of this library.
@@ -16,7 +16,7 @@ The ML framework is built on the following model:
.. figure:: img/mldev_flow.*
- Work flow of inference on MLDEV
+ Workflow of inference on MLDEV
ML Device
A hardware or software-based implementation of ML device API
@@ -28,7 +28,7 @@ ML Model
required to make predictions on live data.
Once the model is created and trained outside of the DPDK scope,
the model can be loaded via ``rte_ml_model_load()``
- and then start it using ``rte_ml_model_start()`` API function.
+ and then started using the ``rte_ml_model_start()`` API function.
The ``rte_ml_model_params_update()`` can be used to update the model parameters
such as weights and bias without unloading the model using ``rte_ml_model_unload()``.
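For illustration, the load-then-start sequence described above reduces to a couple of calls. The helper name, the in-memory model blob and the trimmed error handling below are illustrative, not text from the guide:

#include <stdint.h>
#include <stddef.h>
#include <rte_mldev.h>

/* Illustrative helper: load a model blob that is already in memory,
 * then start it so it can accept inference requests. */
static int
model_bring_up(int16_t dev_id, void *blob, size_t blob_size, uint16_t *model_id)
{
        struct rte_ml_model_params params = {
                .addr = blob,
                .size = blob_size,
        };
        int ret;

        ret = rte_ml_model_load(dev_id, &params, model_id);
        if (ret != 0)
                return ret;

        ret = rte_ml_model_start(dev_id, *model_id);
        if (ret != 0)
                rte_ml_model_unload(dev_id, *model_id);

        return ret;
}

Teardown mirrors this: ``rte_ml_model_stop()`` followed by ``rte_ml_model_unload()``.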
@@ -79,9 +79,9 @@ Each device, whether virtual or physical is uniquely designated by two identifie
Device Features and Capabilities
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-ML devices may support different feature set.
-In order to get the supported PMD feature ``rte_ml_dev_info_get()`` API
-which return the info of the device and its supported features.
+ML devices may support different feature sets.
+To get the supported PMD features, use the ``rte_ml_dev_info_get()`` API,
+which returns information about the device and its supported features.
Device Configuration
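For reference, the capability query above is a single call. The sketch assumes the ``driver_name``, ``max_models`` and ``max_queue_pairs`` fields of ``struct rte_ml_dev_info``; check ``rte_mldev.h`` of the target release for the exact layout:

#include <stdio.h>
#include <stdint.h>
#include <rte_mldev.h>

/* Print the driver name and a couple of limits reported by the device;
 * an application would use these to size its configuration. */
static void
dump_dev_caps(int16_t dev_id)
{
        struct rte_ml_dev_info info;

        if (rte_ml_dev_info_get(dev_id, &info) != 0)
                return;

        printf("dev %d (%s): max_models %u, max_queue_pairs %u\n",
               dev_id, info.driver_name, info.max_models, info.max_queue_pairs);
}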
@@ -106,7 +106,7 @@ maximum size of model and so on.
Configuration of Queue Pairs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Each ML device can be configured with number of queue pairs.
+Each ML device can be configured with a number of queue pairs.
Each queue pair is configured using ``rte_ml_dev_queue_pair_setup()``
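A compressed example of the configure-then-setup flow referenced here could look as follows; the structure fields are paraphrased from the guide and the descriptor-ring size is picked arbitrarily, so treat ``rte_mldev.h`` as the authoritative reference:

#include <stdint.h>
#include <rte_mldev.h>

/* Configure the device for a single model and a single queue pair,
 * then set up queue pair 0 on the given NUMA socket. */
static int
setup_one_qp(int16_t dev_id, int socket_id)
{
        struct rte_ml_dev_config conf = {
                .socket_id = socket_id,
                .nb_models = 1,
                .nb_queue_pairs = 1,
        };
        struct rte_ml_dev_qp_conf qp_conf = {
                .nb_desc = 128,   /* arbitrary ring size for the sketch */
        };
        int ret;

        ret = rte_ml_dev_configure(dev_id, &conf);
        if (ret != 0)
                return ret;

        return rte_ml_dev_queue_pair_setup(dev_id, 0, &qp_conf, socket_id);
}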
@@ -162,9 +162,9 @@ to specify the device queue pair to schedule the processing on.
The ``nb_ops`` parameter is the number of operations to process
which are supplied in the ``ops`` array of ``rte_ml_op`` structures.
The enqueue function returns the number of operations it enqueued for processing,
-a return value equal to ``nb_ops`` means that all packets have been enqueued.
+and a return value equal to ``nb_ops`` means that all operations have been enqueued.
-The dequeue API uses the same format as the enqueue API of processed
+The dequeue API uses the same format as the enqueue API,
but the ``nb_ops`` and ``ops`` parameters are now used to specify
the max processed operations the user wishes to retrieve
and the location in which to store them.
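The enqueue/dequeue contract described above amounts to a retry loop like the one below; op allocation, input/output buffer setup and per-op completion checks are elided:

#include <stdint.h>
#include <rte_mldev.h>

/* Push nb_ops prepared inference ops through queue pair qp_id and
 * poll until all of them come back; real code would also check
 * each op's completion status after dequeue. */
static void
run_burst(int16_t dev_id, uint16_t qp_id, struct rte_ml_op **ops, uint16_t nb_ops)
{
        uint16_t enq = 0, deq = 0;

        while (enq < nb_ops)
                enq += rte_ml_enqueue_burst(dev_id, qp_id,
                                            &ops[enq], nb_ops - enq);

        while (deq < nb_ops)
                deq += rte_ml_dequeue_burst(dev_id, qp_id,
                                            &ops[deq], nb_ops - deq);
}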
@@ -193,7 +193,7 @@ from a higher precision type to a lower precision type and vice-versa.
ML library provides the functions ``rte_ml_io_quantize()`` and ``rte_ml_io_dequantize()``
to enable data type conversions.
User needs to provide the address of the quantized and dequantized data buffers
-to the functions, along the number of the batches in the buffers.
+to the functions, along with the number of batches in the buffers.
For quantization, the dequantized data is assumed to be
of the type ``dtype`` provided by the ``rte_ml_model_info::input``
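The quantize/dequantize step reads roughly like the sketch below. The argument list only mirrors the wording above (buffer addresses plus a batch count); the prototype of these two functions has changed across DPDK releases (newer ones take segmented buffers), so ``rte_mldev.h`` of the target version is the reference:

#include <stdint.h>
#include <rte_mldev.h>

/* Convert one batch of user-format (dequantized) input into the model's
 * native (quantized) format before enqueue, and convert the quantized
 * output back after dequeue.  Argument order follows the guide's wording
 * and may not match every release. */
static int
convert_io(int16_t dev_id, uint16_t model_id,
           void *dq_in, void *q_in, void *q_out, void *dq_out)
{
        int ret;

        ret = rte_ml_io_quantize(dev_id, model_id, 1, dq_in, q_in);
        if (ret != 0)
                return ret;

        /* ... inference runs with q_in as input and q_out as output ... */

        return rte_ml_io_dequantize(dev_id, model_id, 1, q_out, dq_out);
}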
--
2.51.0