[dpdk-dev] Load-balancing position field in DPDK load_balancer sample app vs. Hash table
konstantin.ananyev at intel.com
Fri Nov 14 15:44:20 CET 2014
> -----Original Message-----
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Kamraan Nasim
> Sent: Thursday, November 13, 2014 6:30 PM
> To: dev at dpdk.org
> Cc: Yuanzhang Hu
> Subject: [dpdk-dev] Load-balancing position field in DPDK load_balancer sample app vs. Hash table
> So i've borrowed some code from the DPDK Load balancer sample application,
> specifically the load balancing position(byte 29th) to determine which
> worker lcore to forward the packet to.
> The idea is that flow affinity should be maintained and all packets from
> the same flow would have the same checksum/5-tuple value
> worker_id = packet[load_balancing_field] % n_workers
> Question is that how reliable is this load balancing position? I am tempted
> to use Hash tables but I think this position based mechanism may be faster.
> How have people's experience with this been in general?
If you have a NIC that is capable of doing HW hash computation,
then you can do your load balancing based on that value.
For example, ixgbe/igb/i40e NICs can calculate an RSS hash value based on different combinations of
dst/src IPs and dst/src ports.
This value is stored inside the mbuf for each RX packet by the PMD RX function.
Then you can do:
worker_id = mbuf->hash.rss % n_workers;
That should provide better balancing than using just a one-byte value,
and it should also be a bit faster, since in that case your balancer code doesn't need to touch the packet's data.