[dpdk-dev] [PATCH v2 04/20] net/ice: support getting device information

Lu, Wenzhuo wenzhuo.lu at intel.com
Thu Dec 6 06:28:50 CET 2018


Hi Vipin,


> -----Original Message-----
> From: Varghese, Vipin
> Sent: Tuesday, December 4, 2018 1:00 PM
> To: Lu, Wenzhuo <wenzhuo.lu at intel.com>; dev at dpdk.org
> Cc: Lu, Wenzhuo <wenzhuo.lu at intel.com>; Yang, Qiming
> <qiming.yang at intel.com>; Li, Xiaoyun <xiaoyun.li at intel.com>; Wu, Jingjing
> <jingjing.wu at intel.com>
> Subject: RE: [dpdk-dev] [PATCH v2 04/20] net/ice: support getting device
> information
> 
> snipped
> > +static void
> > +ice_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
> > +{
> > +	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
> > +	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> > +	struct ice_vsi *vsi = pf->main_vsi;
> > +	struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
> > +
> > +	dev_info->min_rx_bufsize = ICE_BUF_SIZE_MIN;
> > +	dev_info->max_rx_pktlen = ICE_FRAME_SIZE_MAX;
> > +	dev_info->max_rx_queues = vsi->nb_qps;
> > +	dev_info->max_tx_queues = vsi->nb_qps;
> > +	dev_info->max_mac_addrs = vsi->max_macaddrs;
> > +	dev_info->max_vfs = pci_dev->max_vfs;
> > +
> > +	dev_info->rx_offload_capa =
> > +		DEV_RX_OFFLOAD_VLAN_STRIP |
> > +		DEV_RX_OFFLOAD_IPV4_CKSUM |
> > +		DEV_RX_OFFLOAD_UDP_CKSUM |
> > +		DEV_RX_OFFLOAD_TCP_CKSUM |
> > +		DEV_RX_OFFLOAD_QINQ_STRIP |
> > +		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
> > +		DEV_RX_OFFLOAD_VLAN_EXTEND |
> > +		DEV_RX_OFFLOAD_JUMBO_FRAME;
> > +	dev_info->tx_offload_capa =
> > +		DEV_TX_OFFLOAD_VLAN_INSERT |
> > +		DEV_TX_OFFLOAD_QINQ_INSERT |
> > +		DEV_TX_OFFLOAD_IPV4_CKSUM |
> > +		DEV_TX_OFFLOAD_UDP_CKSUM |
> > +		DEV_TX_OFFLOAD_TCP_CKSUM |
> > +		DEV_TX_OFFLOAD_SCTP_CKSUM |
> > +		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
> > +		DEV_TX_OFFLOAD_TCP_TSO;
> > +	dev_info->rx_queue_offload_capa = 0;
> > +	dev_info->tx_queue_offload_capa = 0;
> 
> Does this mean per-queue offload capability is not supported? If yes, can
> you mention this in the release notes under 'support or limitation'?
No, per-queue offloads are not supported. We have a document, ice.ini, that lists all the supported features; everything not listed there is not supported.
BTW, I don't think something that is not supported counts as a limitation.
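To illustrate the distinction, here is a rough sketch (the helper is mine, just
for illustration, not part of the patch) of how an application can tell a
port-level offload from a per-queue one. With rx_queue_offload_capa = 0, every
supported Rx offload on ice is port-level only:

#include <rte_ethdev.h>

/* Return 1 if the offload can be toggled per queue, 0 if it is
 * port-level only, or -1 if it is not supported at all. */
static int
rx_offload_is_per_queue(uint16_t port_id, uint64_t offload)
{
	struct rte_eth_dev_info dev_info;

	rte_eth_dev_info_get(port_id, &dev_info);
	if (!(dev_info.rx_offload_capa & offload))
		return -1;
	/* The per-queue capa is always a subset of the port capa. */
	return !!(dev_info.rx_queue_offload_capa & offload);
}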

> 
> > +
> > +	dev_info->reta_size = hw->func_caps.common_cap.rss_table_size;
> > +	dev_info->hash_key_size = (VSIQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t);
> > +	dev_info->flow_type_rss_offloads = ICE_RSS_OFFLOAD_ALL;
> > +
> > +	dev_info->default_rxconf = (struct rte_eth_rxconf) {
> > +		.rx_thresh = {
> > +			.pthresh = ICE_DEFAULT_RX_PTHRESH,
> > +			.hthresh = ICE_DEFAULT_RX_HTHRESH,
> > +			.wthresh = ICE_DEFAULT_RX_WTHRESH,
> > +		},
> > +		.rx_free_thresh = ICE_DEFAULT_RX_FREE_THRESH,
> > +		.rx_drop_en = 0,
> > +		.offloads = 0,
> Are the drop function and rx_conf.offloads supported? If yes, and the device
> is not yet configured, shouldn't all the offloads be set?
These are the defaults. Whether or not a feature is supported, leaving it unset here only means it is not enabled by default.
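For example (a minimal sketch; port_id and mb_pool are assumed to exist, and
error checking is omitted), an application starts from these defaults and
enables what it needs at the port level:

	struct rte_eth_dev_info dev_info;
	struct rte_eth_conf port_conf = { 0 };

	rte_eth_dev_info_get(port_id, &dev_info);
	/* Enable a supported port-level Rx offload on top of the defaults. */
	if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_IPV4_CKSUM)
		port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_IPV4_CKSUM;
	rte_eth_dev_configure(port_id, 1, 1, &port_conf);
	/* Set up the queue with the driver's recommended defaults. */
	rte_eth_rx_queue_setup(port_id, 0, 1024,
			       rte_eth_dev_socket_id(port_id),
			       &dev_info.default_rxconf, mb_pool);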

> 
> > +	};
> > +
> > +	dev_info->default_txconf = (struct rte_eth_txconf) {
> > +		.tx_thresh = {
> > +			.pthresh = ICE_DEFAULT_TX_PTHRESH,
> > +			.hthresh = ICE_DEFAULT_TX_HTHRESH,
> > +			.wthresh = ICE_DEFAULT_TX_WTHRESH,
> > +		},
> > +		.tx_free_thresh = ICE_DEFAULT_TX_FREE_THRESH,
> > +		.tx_rs_thresh = ICE_DEFAULT_TX_RSBIT_THRESH,
> > +		.offloads = 0,
> 
> If the device is not configured, shouldn't all the offloads be set to true?
This is an info_get function; I don't understand why we are talking about configuration here.

> 
> Snipped
> 
> > +	switch (hw->port_info->phy.link_info.link_speed) {
> 
> If the device switch is not configured (default values from the NVM), should
> we highlight that the switch can support speeds of 10, 100, 1000 and so on?
No, this is the capability read from the HW.

> 
> > +	case ICE_AQ_LINK_SPEED_10MB:
> > +		dev_info->speed_capa = ETH_LINK_SPEED_10M;
> > +		break;
> > +	case ICE_AQ_LINK_SPEED_100MB:
> > +		dev_info->speed_capa = ETH_LINK_SPEED_100M;
> > +		break;
> > +	case ICE_AQ_LINK_SPEED_1000MB:
> > +		dev_info->speed_capa = ETH_LINK_SPEED_1G;
> > +		break;
> > +	case ICE_AQ_LINK_SPEED_2500MB:
> > +		dev_info->speed_capa = ETH_LINK_SPEED_2_5G;
> > +		break;
> > +	case ICE_AQ_LINK_SPEED_5GB:
> > +		dev_info->speed_capa = ETH_LINK_SPEED_5G;
> > +		break;
> > +	case ICE_AQ_LINK_SPEED_10GB:
> > +		dev_info->speed_capa = ETH_LINK_SPEED_10G;
> > +		break;
> > +	case ICE_AQ_LINK_SPEED_20GB:
> > +		dev_info->speed_capa = ETH_LINK_SPEED_20G;
> > +		break;
> > +	case ICE_AQ_LINK_SPEED_25GB:
> > +		dev_info->speed_capa = ETH_LINK_SPEED_25G;
> > +		break;
> > +	case ICE_AQ_LINK_SPEED_40GB:
> > +		dev_info->speed_capa = ETH_LINK_SPEED_40G;
> > +		break;
> > +	case ICE_AQ_LINK_SPEED_UNKNOWN:
> > +	default:
> > +		PMD_DRV_LOG(ERR, "Unknown link speed");
> > +		dev_info->speed_capa = ETH_LINK_SPEED_AUTONEG;
> > +		break;
> > +	}
> 
> If the speeds are not reported as stated above, can you please add this to
> the release notes and documentation?
These are all the cases we can get from the HW.
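Since speed_capa is a bitmask of ETH_LINK_SPEED_* flags, an application can
test the bits directly. A small sketch (the function name is just for
illustration):

#include <stdio.h>
#include <rte_ethdev.h>

static void
print_speed_capa(uint16_t port_id)
{
	struct rte_eth_dev_info dev_info;

	rte_eth_dev_info_get(port_id, &dev_info);
	if (dev_info.speed_capa & ETH_LINK_SPEED_25G)
		printf("port %u: 25G capable\n", port_id);
	else if (dev_info.speed_capa == ETH_LINK_SPEED_AUTONEG)
		printf("port %u: speed unknown, autoneg reported\n", port_id);
}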

> 
> > +
> > +	dev_info->nb_rx_queues = dev->data->nb_rx_queues;
> > +	dev_info->nb_tx_queues = dev->data->nb_tx_queues;
> > +
> > +	dev_info->default_rxportconf.burst_size = 32;
> > +	dev_info->default_txportconf.burst_size = 32;
> > +	dev_info->default_rxportconf.nb_queues = 1;
> > +	dev_info->default_txportconf.nb_queues = 1;
> > +	dev_info->default_rxportconf.ring_size = 1024;
> > +	dev_info->default_txportconf.ring_size = 1024;
> 
> Can we use a macro (in a previous patch there was MAX_BURST_SIZE)?
Good suggestion. Will update it in the new version.
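For reference, something like the following (the macro names here are only
suggestions, not the final ones):

#define ICE_DEFAULT_BURST_SIZE	32
#define ICE_DEFAULT_RING_SIZE	1024

	dev_info->default_rxportconf.burst_size = ICE_DEFAULT_BURST_SIZE;
	dev_info->default_txportconf.burst_size = ICE_DEFAULT_BURST_SIZE;
	dev_info->default_rxportconf.nb_queues = 1;
	dev_info->default_txportconf.nb_queues = 1;
	dev_info->default_rxportconf.ring_size = ICE_DEFAULT_RING_SIZE;
	dev_info->default_txportconf.ring_size = ICE_DEFAULT_RING_SIZE;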

> 
> > +}
> > --
> > 1.9.3


