[dpdk-dev] [PATCH 12/14] examples/ipsec-secgw: add driver outbound worker

Ananyev, Konstantin konstantin.ananyev at intel.com
Tue Jan 7 15:30:56 CET 2020


> > > > > This patch adds the driver outbound worker thread for ipsec-secgw.
> > > > > In this mode the security session is a fixed one and sa update is
> > > > > not done.
> > > > >
> > > > > Signed-off-by: Ankur Dwivedi <adwivedi at marvell.com>
> > > > > Signed-off-by: Anoob Joseph <anoobj at marvell.com>
> > > > > Signed-off-by: Lukasz Bartosik <lbartosik at marvell.com>
> > > > > ---
> > > > >  examples/ipsec-secgw/ipsec-secgw.c  | 12 +++++
> > > > >  examples/ipsec-secgw/ipsec.c        |  9 ++++
> > > > >  examples/ipsec-secgw/ipsec_worker.c | 90
> > > > > ++++++++++++++++++++++++++++++++++++-
> > > > >  3 files changed, 110 insertions(+), 1 deletion(-)
> > > > >
> > > > > diff --git a/examples/ipsec-secgw/ipsec-secgw.c
> > > > > b/examples/ipsec-secgw/ipsec-secgw.c
> > > > > index 2e7d4d8..76719f2 100644
> > > > > --- a/examples/ipsec-secgw/ipsec-secgw.c
> > > > > +++ b/examples/ipsec-secgw/ipsec-secgw.c
> > > > > @@ -2011,6 +2011,18 @@ cryptodevs_init(void)
> > > > >  			i++;
> > > > >  		}
> > > > >
> > > > > +		/*
> > > > > +		 * Set the queue pair to at least the number of ethernet
> > > > > +		 * devices for inline outbound.
> > > > > +		 */
> > > > > +		qp = RTE_MAX(rte_eth_dev_count_avail(), qp);
> > > >
> > > >
> > > > Not sure, what for?
> > > > Why can't we process packets from several eth devs on the same
> > > > crypto-dev queue?
> > >
> > > [Anoob] This is because of a limitation in our hardware. In our
> > > hardware, it's the crypto queue pair which would be submitting to the
> > > ethernet queue for Tx. But in DPDK spec, the security processing is
> > > done by the ethernet PMD Tx routine alone. We manage to do this by sharing
> > > the crypto queue internally. The crypto queues initialized during
> > > crypto_configure() get mapped to various ethernet ports. Because of this, we
> > > need to have at least as many crypto queues as the number of eth ports.
> >
> > Ok, but that breaks current behavior.
> > Right now in poll-mode it is possible to map traffic from N eth-devs to M
> > crypto-devs (N >= M, by using M lcores).
> > Would prefer to keep this functionality in place.
> 
> [Anoob] Understood. I don't think that functionality is broken. If the number of qps available is lower than the number of eth devs,
> then only the ones available would be enabled. Inline protocol session creation for the other eth devs would fail for us.
> 
> Currently, the app assumes that for one core, it needs only one qp (and for M cores, M qps). Is there any harm in enabling all the qps
> available? If such a change can be done, that would also work for us.

Hmm, I suppose it could cause some problems in corner cases:
if we have a crypto-dev with a really big number of max_queues,
it might require a lot of extra memory for cryptodev_configure/queue_pair_setup.
Probably the easiest way to deal with it:
- add a req_queue_num parameter to cryptodevs_init(),
  and then do: qp = RTE_MIN(max_nb_qps, RTE_MAX(req_queue_num, qp));
- for poll mode we'll call cryptodevs_init(0); for your case it could be
  cryptodevs_init(rte_eth_dev_count_avail()).

Would it work for your case?

> >
> > >
> > > The above change is required because here we limit the number of
> > > crypto qps based on the number of cores etc. So when tried on a single core,
> > > the qps get limited to 1, which causes session_create() to fail for all ports
> > > other than the first one.
> > >
> > > >
> > > > > +
> > > > > +		/*
> > > > > +		 * The requested number of queues should never exceed
> > > > > +		 * the max available
> > > > > +		 */
> > > > > +		qp = RTE_MIN(qp, max_nb_qps);
> > > > > +
> > > > >  		if (qp == 0)
> > > > >  			continue;
> > > > >
> > > > > diff --git a/examples/ipsec-secgw/ipsec.c
> > > > > b/examples/ipsec-secgw/ipsec.c index e529f68..9ff8a63 100644
> > > > > --- a/examples/ipsec-secgw/ipsec.c
> > > > > +++ b/examples/ipsec-secgw/ipsec.c
> > > > > @@ -141,6 +141,10 @@ create_lookaside_session(struct ipsec_ctx
> > > > *ipsec_ctx, struct ipsec_sa *sa,
> > > > >  	return 0;
> > > > >  }
> > > > >
> > > > > +uint16_t sa_no;
> > > > > +#define MAX_FIXED_SESSIONS	10
> > > > > +struct rte_security_session
> > > > > +*sec_session_fixed[MAX_FIXED_SESSIONS];
> > > > > +
> > > > >  int
> > > > >  create_inline_session(struct socket_ctx *skt_ctx, struct ipsec_sa *sa,
> > > > >  		struct rte_ipsec_session *ips)
> > > > > @@ -401,6 +405,11 @@ create_inline_session(struct socket_ctx
> > > > > *skt_ctx, struct ipsec_sa *sa,
> > > > >
> > > > >  		ips->security.ol_flags = sec_cap->ol_flags;
> > > > >  		ips->security.ctx = sec_ctx;
> > > > > +		if (sa_no < MAX_FIXED_SESSIONS) {
> > > > > +			sec_session_fixed[sa_no] =
> > > > > +				ipsec_get_primary_session(sa)-
> > > > >security.ses;
> > > > > +			sa_no++;
> > > > > +		}
> > > > >  	}
> > > >
> > > > Totally lost what is the purpose of these changes...
> > > > Why are the first 10 inline-proto sessions special and saved inside a
> > > > global array (sec_session_fixed)?
> > > > Why is this array later referenced by eth port_id in ipsec_worker.c?
> > > > What would happen if the number of inline-proto sessions is less than
> > > > the number of eth ports?
> > >
> > > [Anoob] This is required for the outbound driver mode. The 'driver
> > > mode' is more like 'single_sa' mode of the existing application. The
> > > idea is to skip all the lookups etc done in the s/w and perform ipsec
> > > processing fully in h/w. In outbound, following is roughly what we
> > > should do for driver mode,
> > >
> > > pkt = rx_burst();
> > >
> > > /* set_pkt_metadata() */
> > > pkt->udata64 = session;
> > >
> > > tx_burst(pkt);
> > >
> > > The session is created on eth ports. And so, if we have a single SA,
> > > then the entire traffic will have to be forwarded on the same port. The above
> > > change is to make sure we could send traffic on all ports.
> > >
> > > Currently we just use the first 10 SAs and save them in the array. So
> > > the user has to set the conf properly and make sure the SAs are
> > > distributed accordingly. Will update this to save the first parsed outbound SA
> > > for a port in the array. That way the size of the array will be RTE_MAX_ETHPORTS.
> >
> > Ok, then if it is for a specific case (event-mode + single-sa mode), then in
> > create_inline_session() we probably shouldn't do it always, but only when this
> > mode is selected.
> 
> [Anoob] Will make that change.
> 
> > Also, wouldn't it be better to reuse the current single-sa cmd-line option and logic?
> > I.e. when event-mode and single-sa are selected, go through all eth-devs and for
> > each do create_inline_session() with the sa that corresponds to single_sa_idx?
> > Then, I think create_inline_session() can be kept intact.
> 
> [Anoob] No disagreement. The current single_sa mode uses a single SA universally, while the driver mode intends to use a single SA per port.
> Technically, just a single SA (universally) will result in the eth port being the bottleneck. So I can fix the single sa handling and we can
> use the single_sa option in event mode as you have described.
> 
> >
> > >
> > > Is the above approach fine?
> > >
> > > >
> > > > >  set_cdev_id:
> > > > > diff --git a/examples/ipsec-secgw/ipsec_worker.c
> > > > > b/examples/ipsec-secgw/ipsec_worker.c
> > > > > index 2af9475..e202277 100644
> > > > > --- a/examples/ipsec-secgw/ipsec_worker.c
> > > > > +++ b/examples/ipsec-secgw/ipsec_worker.c
> > > > > @@ -263,7 +263,7 @@ process_ipsec_ev_inbound(struct ipsec_ctx
> > > > > *ctx,
> > > > struct route_table *rt,
> > > > >   */
> > > > >
> > > > >  /* Workers registered */
> > > > > -#define IPSEC_EVENTMODE_WORKERS		2
> > > > > +#define IPSEC_EVENTMODE_WORKERS		3
> > > > >
> > > > >  /*
> > > > >   * Event mode worker
> > > > > @@ -423,6 +423,84 @@
> > > > ipsec_wrkr_non_burst_int_port_app_mode_inb(struct eh_event_link_info
> > > > *links,
> > > > >  	return;
> > > > >  }
> > > > >
> > > > > +/*
> > > > > + * Event mode worker
> > > > > + * Operating parameters : non-burst - Tx internal port - driver
> > > > > +mode
> > > > > +- outbound  */ extern struct rte_security_session
> > > > > +*sec_session_fixed[]; static void
> > > > > +ipsec_wrkr_non_burst_int_port_drvr_mode_outb(struct
> > > > eh_event_link_info *links,
> > > > > +		uint8_t nb_links)
> > > > > +{
> > > > > +	unsigned int nb_rx = 0;
> > > > > +	struct rte_mbuf *pkt;
> > > > > +	unsigned int port_id;
> > > > > +	struct rte_event ev;
> > > > > +	uint32_t lcore_id;
> > > > > +
> > > > > +	/* Check if we have links registered for this lcore */
> > > > > +	if (nb_links == 0) {
> > > > > +		/* No links registered - exit */
> > > > > +		goto exit;
> > > > > +	}
> > > > > +
> > > > > +	/* Get core ID */
> > > > > +	lcore_id = rte_lcore_id();
> > > > > +
> > > > > +	RTE_LOG(INFO, IPSEC,
> > > > > +		"Launching event mode worker (non-burst - Tx internal port -
> > > > "
> > > > > +		"driver mode - outbound) on lcore %d\n", lcore_id);
> > > > > +
> > > > > +	/* We have valid links */
> > > > > +
> > > > > +	/* Check if it's single link */
> > > > > +	if (nb_links != 1) {
> > > > > +		RTE_LOG(INFO, IPSEC,
> > > > > +			"Multiple links not supported. Using first link\n");
> > > > > +	}
> > > > > +
> > > > > +	RTE_LOG(INFO, IPSEC, " -- lcoreid=%u event_port_id=%u\n",
> > > > lcore_id,
> > > > > +			links[0].event_port_id);
> > > > > +	while (!force_quit) {
> > > > > +		/* Read packet from event queues */
> > > > > +		nb_rx = rte_event_dequeue_burst(links[0].eventdev_id,
> > > > > +				links[0].event_port_id,
> > > > > +				&ev,	/* events */
> > > > > +				1,	/* nb_events */
> > > > > +				0	/* timeout_ticks */);
> > > > > +
> > > > > +		if (nb_rx == 0)
> > > > > +			continue;
> > > > > +
> > > > > +		port_id = ev.queue_id;
> > > > > +		pkt = ev.mbuf;
> > > > > +
> > > > > +		rte_prefetch0(rte_pktmbuf_mtod(pkt, void *));
> > > > > +
> > > > > +		/* Process packet */
> > > > > +		ipsec_event_pre_forward(pkt, port_id);
> > > > > +
> > > > > +		pkt->udata64 = (uint64_t) sec_session_fixed[port_id];
> > > > > +
> > > > > +		/* Mark the packet for Tx security offload */
> > > > > +		pkt->ol_flags |= PKT_TX_SEC_OFFLOAD;
> > > > > +
> > > > > +		/*
> > > > > +		 * Since tx internal port is available, events can be
> > > > > +		 * directly enqueued to the adapter and it would be
> > > > > +		 * internally submitted to the eth device.
> > > > > +		 */
> > > > > +		rte_event_eth_tx_adapter_enqueue(links[0].eventdev_id,
> > > > > +				links[0].event_port_id,
> > > > > +				&ev,	/* events */
> > > > > +				1,	/* nb_events */
> > > > > +				0	/* flags */);
> > > > > +	}
> > > > > +
> > > > > +exit:
> > > > > +	return;
> > > > > +}
> > > > > +
> > > > >  static uint8_t
> > > > >  ipsec_eventmode_populate_wrkr_params(struct
> > > > eh_app_worker_params
> > > > > *wrkrs)  { @@ -449,6 +527,16 @@
> > > > > ipsec_eventmode_populate_wrkr_params(struct eh_app_worker_params
> > > > *wrkrs)
> > > > >  	wrkr->cap.ipsec_dir = EH_IPSEC_DIR_TYPE_INBOUND;
> > > > >  	wrkr->worker_thread =
> > > > ipsec_wrkr_non_burst_int_port_app_mode_inb;
> > > > >
> > > > > +	wrkr++;
> > > > > +	nb_wrkr_param++;
> > > > > +
> > > > > +	/* Non-burst - Tx internal port - driver mode - outbound */
> > > > > +	wrkr->cap.burst = EH_RX_TYPE_NON_BURST;
> > > > > +	wrkr->cap.tx_internal_port = EH_TX_TYPE_INTERNAL_PORT;
> > > > > +	wrkr->cap.ipsec_mode = EH_IPSEC_MODE_TYPE_DRIVER;
> > > > > +	wrkr->cap.ipsec_dir = EH_IPSEC_DIR_TYPE_OUTBOUND;
> > > > > +	wrkr->worker_thread =
> > > > ipsec_wrkr_non_burst_int_port_drvr_mode_outb;
> > > > > +
> > > > >  	nb_wrkr_param++;
> > > > >  	return nb_wrkr_param;
> > > > >  }
> > > > > --
> > > > > 2.7.4


