[dpdk-dev] [PATCH 10/10] cxgbe: add flow director support and update documentation

Rahul Lakkireddy rahul.lakkireddy at chelsio.com
Wed Feb 3 09:32:31 CET 2016


Add flow director support for setting/deleting LE-TCAM (maskfull)
and HASH (maskless) filters.

Wait and poll the firmware event queue for replies about the filter
status.  Also, the firmware event queue doesn't have any free lists,
so there is no need to refill them.

Provide stats showing the number of remaining free entries, and the
number of successfully and unsuccessfully added/deleted filters.

Add documentation explaining the usage of CXGBE Flow Director support.

Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy at chelsio.com>
Signed-off-by: Kumar Sanghvi <kumaras at chelsio.com>
---
 doc/guides/nics/cxgbe.rst            | 166 ++++++++
 doc/guides/rel_notes/release_2_3.rst |   7 +
 drivers/net/cxgbe/Makefile           |   1 +
 drivers/net/cxgbe/base/adapter.h     |   2 +
 drivers/net/cxgbe/cxgbe.h            |   3 +
 drivers/net/cxgbe/cxgbe_ethdev.c     |  18 +
 drivers/net/cxgbe/cxgbe_fdir.c       | 715 +++++++++++++++++++++++++++++++++++
 drivers/net/cxgbe/cxgbe_fdir.h       | 108 ++++++
 drivers/net/cxgbe/cxgbe_main.c       |  40 ++
 drivers/net/cxgbe/sge.c              |   3 +-
 10 files changed, 1062 insertions(+), 1 deletion(-)
 create mode 100644 drivers/net/cxgbe/cxgbe_fdir.c
 create mode 100644 drivers/net/cxgbe/cxgbe_fdir.h

diff --git a/doc/guides/nics/cxgbe.rst b/doc/guides/nics/cxgbe.rst
index d718f19..d2a0d74 100644
--- a/doc/guides/nics/cxgbe.rst
+++ b/doc/guides/nics/cxgbe.rst
@@ -51,6 +51,7 @@ CXGBE PMD has support for:
 - All multicast mode
 - Port hardware statistics
 - Jumbo frames
+- Packet classification and filtering
 
 Limitations
 -----------
@@ -187,6 +188,13 @@ Unified Wire package for Linux operating system are as follows:
 
       cxgbtool p1p1 loadcfg <path_to_uwire>/src/network/firmware/t5-config.txt
 
+   .. note::
+
+      To enable HASH filters, a special firmware configuration file is needed.
+      The file is located under the following directory:
+
+      <path_to_uwire>/src/network/firmware/hash_filter_config/t5-config.txt
+
 #. Use cxgbtool to load the firmware image onto the card:
 
    .. code-block:: console
@@ -541,6 +549,66 @@ devices managed by librte_pmd_cxgbe in FreeBSD operating system.
    Flow control pause TX/RX is disabled by default and can be enabled via
    testpmd. Refer section :ref:`flow-control` for more details.
 
+
+.. _filtering:
+
+Packet Classification and Filtering
+-----------------------------------
+
+Chelsio T5 NICs support packet classification and filtering in hardware.
+This feature can be used in the ingress path to:
+
+- Steer ingress packets that meet ACL (Access Control List) accept criteria
+  to a particular receive queue.
+
+- Switch (proxy) ingress packets that meet ACL accept criteria to an output
+  port, with optional header rewrite.
+
+- Drop ingress packets that fail ACL accept criteria.
+
+There are two types of filters that can be set, namely LE-TCAM (Maskfull)
+filters and HASH (Maskless) filters.  LE-TCAM filters allow masks to be
+specified in the accept criteria, so that a single rule can match a range
+of values; whereas, HASH filters ignore masks and hence enforce stricter
+accept criteria.
+
+By default, only LE-TCAM filter rules can be created.  Creating HASH filters
+requires a special firmware configuration file.  Instructions on how to
+manually flash the firmware configuration file are given in section
+:ref:`linux-installation`.
+
+The fields that can be specified for the accept criteria are based on the
+filter selection combination set in the firmware configuration
+(t5-config.txt) file flashed in section :ref:`linux-installation`.
+
+By default, the selection combination automatically includes source/
+destination IPv4/IPv6 addresses and source/destination layer 4 port
+numbers.  In addition to the above, more combinations can be added by
+modifying the t5-config.txt firmware configuration file.
+
+For example, consider the following combination that has been set in
+t5-config.txt:
+
+.. code-block:: console
+
+   filterMode = ethertype, protocol, tos, vlan, port
+   filterMask = ethertype, protocol, tos, vlan, port
+
+In the above example, in addition to source/destination IPv4/IPv6
+addresses and layer 4 source/destination port numbers, a packet can also
+be matched against the ethertype field in the Ethernet header, the IP
+protocol and TOS fields in the IP header, the inner VLAN tag, and the
+physical ingress port number.
+
+Up to 496 LE-TCAM filter rules and ~0.5 million HASH filter rules can be
+For more information, please visit `Chelsio Communications Official Website
+<http://www.chelsio.com>`_.
+
+To test packet classification and filtering on a Chelsio NIC, an
+example app is provided in **examples/test-cxgbe-filters/** directory.
+Please see :doc:`Test CXGBE Filters Application Guide
+</sample_app_ug/test_cxgbe_filters>` to compile and run the example app.
+
 Sample Application Notes
 ------------------------
 
@@ -587,3 +655,101 @@ to configure the mtu of all the ports with a single command.
 
      testpmd> port stop all
      testpmd> port config all max-pkt-len 9000
+
+Add/Delete Filters
+~~~~~~~~~~~~~~~~~~
+
+To test packet classification and filtering on a Chelsio NIC, an
+example app is provided in **examples/test-cxgbe-filters/** directory.
+Please see :doc:`Test CXGBE Filters Application Guide
+</sample_app_ug/test_cxgbe_filters>` to compile and run the app.
+The examples below have to be run on the **test_cxgbe_filters** app.
+
+The command line to add/delete filters is given below.  Note that the
+command is too long to fit on one line and hence is shown wrapped
+at "\\" for display purposes.  At the actual prompt, these commands
+should be entered on a single line without the "\\".
+
+  .. code-block:: console
+
+     cxgbe> filter (port_id) (add|del) (ipv4|ipv6) \
+            mode (maskfull|maskless) (no-prio|prio) \
+            ingress-port (iport) (iport_mask) \
+            ether (ether_type) (ether_type_mask) \
+            vlan (inner_vlan) (inner_vlan_mask) \
+            (outer_vlan) (outer_vlan_mask) \
+            ip (tos) (tos_mask) (proto) (proto_mask) \
+            (src_ip_address) (src_ip_mask) \
+            (dst_ip_address) (dst_ip_mask) \
+            (src_port) (src_port_mask) (dst_port) (dst_port_mask) \
+            (drop|fwd|switch) queue (queue_id) \
+            (port-none|port-redirect) (egress_port) \
+            (ether-none|mac-rewrite|mac-swap) (src_mac) (dst_mac) \
+            (vlan-none|vlan-rewrite|vlan-delete) (new_vlan) \
+            (nat-none|nat-rewrite) (nat_src_ip) (nat_dst_ip) \
+            (nat_src_port) (nat_dst_port) \
+            fd_id (fd_id_value)
+
+LE-TCAM (Maskfull) Filters
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+- To drop all traffic destined to the 102.1.2.0/24 network, add a maskfull
+  filter as follows:
+
+  .. code-block:: console
+
+     cxgbe> filter 0 add ipv4 mode maskfull \
+            no-prio ingress-port 0 0 ether 0 0 vlan 0 0 0 0 \
+            ip 0 0 0 0 0.0.0.0 0.0.0.0 102.1.2.0 255.255.255.0 0 0 0 0 \
+            drop queue 0 port-none 0 \
+            ether-none 00:00:00:00:00:00 00:00:00:00:00:00 \
+            vlan-none 0 nat-none 0.0.0.0 0.0.0.0 0 0 \
+            fd_id 0
+
+- To switch all traffic destined to the 102.1.2.0/24 network out via port
+  1, with the source and destination MAC addresses rewritten, add a
+  maskfull filter as follows:
+
+  .. code-block:: console
+
+     cxgbe> filter 0 add ipv4 mode maskfull \
+            no-prio ingress-port 0 0 ether 0 0 vlan 0 0 0 0 \
+            ip 0 0 0 0 0.0.0.0 0.0.0.0 102.1.2.0 255.255.255.0 0 0 0 0 \
+            switch queue 0 port-redirect 1 \
+            mac-rewrite 00:07:43:04:96:48 00:07:43:12:D4:88 \
+            vlan-none 0 nat-none 0.0.0.0 0.0.0.0 0 0 \
+            fd_id 0
+
+HASH (Maskless) Filters
+^^^^^^^^^^^^^^^^^^^^^^^
+
+Maskless filters require a special firmware configuration file. Please see
+section :ref:`filtering` for more information.
+
+- To steer all traffic from 102.1.2.1 with source port 12000, destined to
+  102.1.2.2 with destination port 12865, to port 1's rx queue, add a
+  maskless filter as follows:
+
+  .. code-block:: console
+
+     cxgbe> filter 1 add ipv4 mode maskless \
+            no-prio ingress-port 0 0 ether 0 0 vlan 0 0 0 0 \
+            ip 0 0 0 0 102.1.2.1 0.0.0.0 102.1.2.2 0.0.0.0 12000 0 12865 0 \
+            fwd queue 0 port-none 0 \
+            ether-none 00:00:00:00:00:00 00:00:00:00:00:00 \
+            vlan-none 0 nat-none 0.0.0.0 0.0.0.0 0 0 \
+            fd_id 0
+
+- To swap the source and destination MAC addresses of all traffic from
+  102.1.2.1 with source port 12000, destined to 102.1.2.2 with destination
+  port 12865, add a maskless filter as follows:
+
+  .. code-block:: console
+
+     cxgbe> filter 0 add ipv4 mode maskless \
+            no-prio ingress-port 0 0 ether 0 0 vlan 0 0 0 0 \
+            ip 0 0 0 0 102.1.2.1 0.0.0.0 102.1.2.2 0.0.0.0 12000 0 12865 0 \
+            switch queue 0 port-redirect 1 \
+            mac-swap 00:00:00:00:00:00 00:00:00:00:00:00 \
+            vlan-none 0 nat-none 0.0.0.0 0.0.0.0 0 0 \
+            fd_id 0
diff --git a/doc/guides/rel_notes/release_2_3.rst b/doc/guides/rel_notes/release_2_3.rst
index 19ce954..2953f52 100644
--- a/doc/guides/rel_notes/release_2_3.rst
+++ b/doc/guides/rel_notes/release_2_3.rst
@@ -4,6 +4,13 @@ DPDK Release 2.3
 New Features
 ------------
 
+* **Added flow director support for Chelsio CXGBE driver.**
+
+  * Added flow director support to enable Chelsio T5 NIC hardware filtering
+    features.
+  * Added an example app under ``examples/test-cxgbe-filters`` directory
+    to test Chelsio T5 NIC hardware filtering features.
+
 
 Resolved Issues
 ---------------
diff --git a/drivers/net/cxgbe/Makefile b/drivers/net/cxgbe/Makefile
index 3201aff..0d52cb1 100644
--- a/drivers/net/cxgbe/Makefile
+++ b/drivers/net/cxgbe/Makefile
@@ -82,6 +82,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_CXGBE_PMD) += clip_tbl.c
 SRCS-$(CONFIG_RTE_LIBRTE_CXGBE_PMD) += l2t.c
 SRCS-$(CONFIG_RTE_LIBRTE_CXGBE_PMD) += smt.c
 SRCS-$(CONFIG_RTE_LIBRTE_CXGBE_PMD) += cxgbe_filter.c
+SRCS-$(CONFIG_RTE_LIBRTE_CXGBE_PMD) += cxgbe_fdir.c
 
 # this lib depends upon:
 DEPDIRS-$(CONFIG_RTE_LIBRTE_CXGBE_PMD) += lib/librte_eal lib/librte_ether
diff --git a/drivers/net/cxgbe/base/adapter.h b/drivers/net/cxgbe/base/adapter.h
index a64571d..a03048d 100644
--- a/drivers/net/cxgbe/base/adapter.h
+++ b/drivers/net/cxgbe/base/adapter.h
@@ -342,6 +342,8 @@ struct adapter {
 	struct l2t_data *l2t;     /* Layer 2 table */
 	struct smt_data *smt;     /* Source MAC table */
 	struct tid_info tids;     /* Info used to access TID related tables */
+
+	struct cxgbe_fdir_map *fdir;  /* Flow Director */
 };
 
 #define CXGBE_PCI_REG(reg) (*((volatile uint32_t *)(reg)))
diff --git a/drivers/net/cxgbe/cxgbe.h b/drivers/net/cxgbe/cxgbe.h
index 9ca4388..f984103 100644
--- a/drivers/net/cxgbe/cxgbe.h
+++ b/drivers/net/cxgbe/cxgbe.h
@@ -36,6 +36,7 @@
 
 #include "common.h"
 #include "t4_regs.h"
+#include "cxgbe_fdir.h"
 
 #define CXGBE_MIN_RING_DESC_SIZE      128  /* Min TX/RX descriptor ring size */
 #define CXGBE_MAX_RING_DESC_SIZE      4096 /* Max TX/RX descriptor ring size */
@@ -52,6 +53,8 @@ int cxgbe_down(struct port_info *pi);
 void cxgbe_close(struct adapter *adapter);
 void cxgbe_stats_get(struct port_info *pi, struct port_stats *stats);
 void cxgbe_stats_reset(struct port_info *pi);
+int cxgbe_poll_for_completion(struct sge_rspq *q, unsigned int us,
+			      unsigned int cnt, struct t4_completion *c);
 int link_start(struct port_info *pi);
 void init_rspq(struct adapter *adap, struct sge_rspq *q, unsigned int us,
 	       unsigned int cnt, unsigned int size, unsigned int iqe_size);
diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c
index 2701bb6..7016026 100644
--- a/drivers/net/cxgbe/cxgbe_ethdev.c
+++ b/drivers/net/cxgbe/cxgbe_ethdev.c
@@ -770,6 +770,23 @@ static int cxgbe_flow_ctrl_set(struct rte_eth_dev *eth_dev,
 			     &pi->link_cfg);
 }
 
+static int cxgbe_dev_filter_ctrl(struct rte_eth_dev *dev,
+				 enum rte_filter_type filter_type,
+				 enum rte_filter_op filter_op, void *arg)
+{
+	int ret;
+
+	switch (filter_type) {
+	case RTE_ETH_FILTER_FDIR:
+		ret = cxgbe_fdir_ctrl_func(dev, filter_op, arg);
+		break;
+	default:
+		ret = -ENOTSUP;
+		break;
+	}
+	return ret;
+}
+
 static struct eth_dev_ops cxgbe_eth_dev_ops = {
 	.dev_start		= cxgbe_dev_start,
 	.dev_stop		= cxgbe_dev_stop,
@@ -794,6 +811,7 @@ static struct eth_dev_ops cxgbe_eth_dev_ops = {
 	.stats_reset		= cxgbe_dev_stats_reset,
 	.flow_ctrl_get		= cxgbe_flow_ctrl_get,
 	.flow_ctrl_set		= cxgbe_flow_ctrl_set,
+	.filter_ctrl            = cxgbe_dev_filter_ctrl,
 };
 
 /*
diff --git a/drivers/net/cxgbe/cxgbe_fdir.c b/drivers/net/cxgbe/cxgbe_fdir.c
new file mode 100644
index 0000000..1c15e34
--- /dev/null
+++ b/drivers/net/cxgbe/cxgbe_fdir.c
@@ -0,0 +1,715 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015-2016 Chelsio Communications.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Chelsio Communications nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+
+#include "cxgbe.h"
+#include "common.h"
+#include "cxgbe_filter.h"
+#include "smt.h"
+#include "clip_tbl.h"
+#include "cxgbe_fdir.h"
+
+/**
+ * Check if the specified entry is set or not
+ */
+static bool is_fdir_map_set(struct cxgbe_fdir_map *map,
+			    unsigned int cap,
+			    unsigned int idx)
+{
+	bool result = FALSE;
+
+	t4_os_lock(&map->lock);
+	if (!cap) {
+		if (rte_bitmap_get(map->mfull_bmap, idx))
+			result = TRUE;
+	} else {
+		if (rte_bitmap_get(map->mless_bmap, idx))
+			result = TRUE;
+	}
+	t4_os_unlock(&map->lock);
+
+	return result;
+}
+
+/**
+ * Set/Clear bitmap entry
+ */
+static int cxgbe_fdir_add_del_map_entry(struct cxgbe_fdir_map *map,
+					unsigned int cap, unsigned int idx,
+					bool del)
+{
+	struct cxgbe_fdir_map_entry *e;
+
+	if (cap && !map->maskless_size)
+		return 0;
+
+	if (cap && idx >= map->maskless_size)
+		return -ERANGE;
+
+	if (!cap && idx >= map->maskfull_size)
+		return -ERANGE;
+
+	t4_os_lock(&map->lock);
+	if (!cap) {
+		e = &map->mfull_entry[idx];
+		/*
+		 * IPv6 maskfull filters occupy 4 slots and IPv4 maskfull
+		 * filters take up one slot. Set the map accordingly.
+		 */
+		if (e->fs.type == FILTER_TYPE_IPV4) {
+			if (del)
+				rte_bitmap_clear(map->mfull_bmap, idx);
+			else
+				rte_bitmap_set(map->mfull_bmap, idx);
+		} else {
+			if (del) {
+				rte_bitmap_clear(map->mfull_bmap, idx);
+				rte_bitmap_clear(map->mfull_bmap, idx + 1);
+				rte_bitmap_clear(map->mfull_bmap, idx + 2);
+				rte_bitmap_clear(map->mfull_bmap, idx + 3);
+			} else {
+				rte_bitmap_set(map->mfull_bmap, idx);
+				rte_bitmap_set(map->mfull_bmap, idx + 1);
+				rte_bitmap_set(map->mfull_bmap, idx + 2);
+				rte_bitmap_set(map->mfull_bmap, idx + 3);
+			}
+		}
+	} else {
+		e = &map->mless_entry[idx];
+		/*
+		 * Maskless filters take up only one slot for both
+		 * IPv4 and IPv6
+		 */
+		if (del)
+			rte_bitmap_clear(map->mless_bmap, idx);
+		else
+			rte_bitmap_set(map->mless_bmap, idx);
+	}
+	t4_os_unlock(&map->lock);
+
+	return 0;
+}
+
+/**
+ * Fill up default masks
+ */
+static void fill_ch_spec_def_mask(struct ch_filter_specification *fs)
+{
+	unsigned int i;
+	unsigned int lip = 0, lip_mask = 0;
+	unsigned int fip = 0, fip_mask = 0;
+	unsigned int cap = fs->cap;
+
+	if (fs->val.iport && (!fs->mask.iport || cap))
+		fs->mask.iport |= ~0;
+	if (fs->val.ethtype && (!fs->mask.ethtype || cap))
+		fs->mask.ethtype |= ~0;
+	if (fs->val.ivlan && (!fs->mask.ivlan || cap))
+		fs->mask.ivlan |= ~0;
+	if (fs->val.ovlan && (!fs->mask.ovlan || cap))
+		fs->mask.ovlan |= ~0;
+	if (fs->val.tos && (!fs->mask.tos || cap))
+		fs->mask.tos |= ~0;
+	if (fs->val.proto && (!fs->mask.proto || cap))
+		fs->mask.proto |= ~0;
+
+	for (i = 0; i < ARRAY_SIZE(fs->val.lip); i++) {
+		lip |= fs->val.lip[i];
+		lip_mask |= fs->mask.lip[i];
+		fip |= fs->val.fip[i];
+		fip_mask |= fs->mask.fip[i];
+	}
+
+	if (lip && (!lip_mask || cap))
+		memset(fs->mask.lip, ~0, sizeof(fs->mask.lip));
+
+	if (fip && (!fip_mask || cap))
+		memset(fs->mask.fip, ~0, sizeof(fs->mask.fip));
+
+	if (fs->val.lport && (!fs->mask.lport || cap))
+		fs->mask.lport = ~0;
+	if (fs->val.fport && (!fs->mask.fport || cap))
+		fs->mask.fport = ~0;
+}
+
+/**
+ * Translate match fields to Chelsio Filter Specification
+ */
+static int fill_ch_spec_match(const struct rte_eth_fdir_filter *fdir_filter,
+			      struct ch_filter_specification *fs)
+{
+	struct cxgbe_fdir_input_admin admin;
+	struct cxgbe_fdir_input_flow val;
+	struct cxgbe_fdir_input_flow mask;
+	const uint8_t *raw_pkt_admin, *raw_pkt_match, *raw_pkt_mask;
+
+	raw_pkt_admin = &fdir_filter->input.flow.raw_pkt_flow[0];
+	raw_pkt_match = raw_pkt_admin + sizeof(admin);
+	raw_pkt_mask = &fdir_filter->input.flow_mask.raw_pkt_flow[0];
+
+	/* Match arguments without masks */
+	rte_memcpy(&admin, raw_pkt_admin, sizeof(admin));
+	fs->prio = admin.prio ? 1 : 0;
+	fs->type = admin.type ? 1 : 0;
+	fs->cap  = admin.cap ? 1 : 0;
+
+	/* Match arguments with masks */
+	rte_memcpy(&val, raw_pkt_match, sizeof(val));
+	fs->val.ethtype = be16_to_cpu(val.ethtype);
+	if (val.iport > 7)
+		return -ERANGE;
+	fs->val.iport   = val.iport;
+
+	fs->val.ivlan   = be16_to_cpu(val.ivlan);
+	fs->val.ovlan   = be16_to_cpu(val.ovlan);
+
+	fs->val.proto   = val.proto;
+	fs->val.tos     = val.tos;
+	rte_memcpy(&fs->val.lip[0], &val.lip[0], sizeof(val.lip));
+	rte_memcpy(&fs->val.fip[0], &val.fip[0], sizeof(val.fip));
+
+	fs->val.lport   = be16_to_cpu(val.lport);
+	fs->val.fport   = be16_to_cpu(val.fport);
+
+	/* Masks for matched arguments */
+	rte_memcpy(&mask, raw_pkt_mask, sizeof(mask));
+	fs->mask.ethtype = be16_to_cpu(mask.ethtype);
+	if (mask.iport > 7)
+		return -ERANGE;
+	fs->mask.iport   = mask.iport;
+
+	fs->mask.ivlan   = be16_to_cpu(mask.ivlan);
+	fs->mask.ovlan   = be16_to_cpu(mask.ovlan);
+
+	fs->mask.proto   = mask.proto;
+	fs->mask.tos     = mask.tos;
+	rte_memcpy(&fs->mask.lip[0], &mask.lip[0], sizeof(mask.lip));
+	rte_memcpy(&fs->mask.fip[0], &mask.fip[0], sizeof(mask.fip));
+
+	fs->mask.lport   = be16_to_cpu(mask.lport);
+	fs->mask.fport   = be16_to_cpu(mask.fport);
+
+	/* Fill up matched field masks with defaults if not specified */
+	fill_ch_spec_def_mask(fs);
+
+	if (fs->val.ivlan) {
+		fs->val.ivlan_vld = 1;
+		fs->mask.ivlan_vld = 1;
+	}
+
+	if (fs->val.ovlan) {
+		fs->val.ovlan_vld = 1;
+		fs->mask.ovlan_vld = 1;
+	}
+
+	/* Disable filter hit counting for Maskless filters */
+	if (fs->cap)
+		fs->hitcnts = 0;
+	else
+		fs->hitcnts = 1;
+
+	return 0;
+}
+
+/**
+ * Translate action fields to Chelsio Filter Specification
+ */
+static int fill_ch_spec_action(struct rte_eth_dev *dev,
+			       const struct rte_eth_fdir_filter *fdir_filter,
+			       struct ch_filter_specification *fs)
+{
+	struct port_info *pi = ethdev2pinfo(dev);
+	struct cxgbe_fdir_action action;
+	int err = 0;
+	unsigned int drop_queue = dev->data->dev_conf.fdir_conf.drop_queue;
+	const uint8_t *action_arg;
+
+	if (fdir_filter->action.rx_queue >= MAX_ETH_QSETS) {
+		err = -EINVAL;
+		goto out;
+	}
+
+	/* Action Arguments */
+	switch (fdir_filter->action.behavior) {
+	case RTE_ETH_FDIR_ACCEPT:
+		if (fdir_filter->action.rx_queue < pi->n_rx_qsets) {
+			fs->dirsteer = 1;
+			fs->iq = fdir_filter->action.rx_queue;
+		}
+		fs->action = FILTER_PASS;
+		break;
+	case RTE_ETH_FDIR_REJECT:
+		if (fdir_filter->action.rx_queue == drop_queue) {
+			if (drop_queue < pi->n_rx_qsets) {
+				/* Send to drop queue */
+				fs->dirsteer = 1;
+				fs->iq = drop_queue;
+				fs->action = FILTER_PASS;
+				err = 0;
+				goto out;
+			}
+		}
+		/* Drop in hardware */
+		fs->action = FILTER_DROP;
+		break;
+	case RTE_ETH_FDIR_SWITCH:
+		action_arg = &fdir_filter->action.behavior_arg[0];
+
+		rte_memcpy(&action, action_arg, sizeof(action));
+		if (action.eport > 4) {
+			err = -ERANGE;
+			break;
+		}
+		fs->eport = action.eport;
+
+		fs->newdmac = action.newdmac;
+		fs->newsmac = action.newsmac;
+		fs->swapmac = action.swapmac;
+		rte_memcpy(&fs->dmac[0], &action.dmac[0], ETHER_ADDR_LEN);
+		rte_memcpy(&fs->smac[0], &action.smac[0], ETHER_ADDR_LEN);
+
+		if (action.newvlan > VLAN_REWRITE) {
+			err = -ERANGE;
+			break;
+		}
+		fs->newvlan = action.newvlan;
+		fs->vlan = be16_to_cpu(action.vlan);
+
+		if (action.nat_mode && action.nat_mode != NAT_MODE_ALL) {
+			err = -ENOTSUP;
+			break;
+		}
+		fs->nat_mode = action.nat_mode;
+		rte_memcpy(&fs->nat_lip[0], &action.nat_lip[0],
+			   sizeof(action.nat_lip));
+		rte_memcpy(&fs->nat_fip[0], &action.nat_fip[0],
+			   sizeof(action.nat_fip));
+		fs->nat_lport = be16_to_cpu(action.nat_lport);
+		fs->nat_fport = be16_to_cpu(action.nat_fport);
+
+		fs->action = FILTER_SWITCH;
+		break;
+	default:
+		err = -EINVAL;
+		break;
+	}
+
+out:
+	return err;
+}
+
+/**
+ * cxgbe_add_del_fdir_filter - add or remove a flow director filter.
+ * @dev: pointer to the structure rte_eth_dev
+ * @filter: fdir filter entry
+ * @fd_id: fdir index to insert/delete
+ * @del: 1 - delete, 0 - add
+ */
+static int cxgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
+				     const struct rte_eth_fdir_filter *filter,
+				     unsigned int fd_id, bool del)
+{
+	struct adapter *adapter = ethdev2adap(dev);
+	struct cxgbe_fdir_map *map = adapter->fdir;
+	struct cxgbe_fdir_map_entry *entry;
+	struct ch_filter_specification fs;
+	struct filter_ctx ctx;
+	unsigned int filter_id;
+	bool map_set;
+	int err = 0;
+
+	if (filter->input.flow_type != RTE_ETH_FLOW_RAW_PKT)
+		return -ENOTSUP;
+
+	if (!(adapter->flags & FULL_INIT_DONE))
+		return -EAGAIN;  /* can still change nfilters */
+
+	t4_init_completion(&ctx.completion);
+
+	memset(&fs, 0, sizeof(fs));
+
+	/* Fill in the Match arguments to create the filter */
+	err = fill_ch_spec_match(filter, &fs);
+	if (err) {
+		dev_err(adapter, "FDIR filter invalid match argument\n");
+		goto out;
+	}
+
+	/* Sanity Check Filter ID */
+	if (fs.cap) {
+		if (!map->maskless_size) {
+			dev_err(adapter,
+				"Maskless Filters have been disabled\n");
+			return -ENOTSUP;
+		}
+
+		if (fd_id >= map->maskless_size) {
+			dev_err(adapter,
+				"Maskless Filters fd_id range is 0 to %d\n",
+				map->maskless_size - 1);
+			return -ERANGE;
+		}
+		entry = &map->mless_entry[fd_id];
+	} else {
+		if (!map->maskfull_size) {
+			dev_err(adapter,
+				"Maskfull Filters have been disabled\n");
+			return -ENOTSUP;
+		}
+
+		if (fd_id >= map->maskfull_size) {
+			dev_err(adapter,
+				"Maskfull Filters fd_id range is 0 to %d\n",
+				map->maskfull_size - 1);
+			return -ERANGE;
+		}
+		entry = &map->mfull_entry[fd_id];
+	}
+
+	/*
+	 * We are not bothered about action arguments to delete the filter, but
+	 * we need to know if it is a maskfull or maskless filter that is
+	 * requested to be deleted.
+	 */
+	map_set = is_fdir_map_set(map, fs.cap, fd_id);
+	if (del) {
+		if (!map_set) {
+			dev_err(adapter, "No entry with fd_id %d found\n", fd_id);
+			return -EINVAL;
+		}
+	} else {
+		if (map_set) {
+			dev_err(adapter, "Entry with fd_id %d occupied\n", fd_id);
+			return -EINVAL;
+		}
+	}
+
+	filter_id = fs.cap ? entry->tid : fd_id;
+	if (del) {
+		err = cxgbe_del_filter(dev, filter_id, &fs, &ctx);
+		if (!err) {
+			/* Poll the FW for reply */
+			err = cxgbe_poll_for_completion(&adapter->sge.fw_evtq,
+							CXGBE_FDIR_POLL_US,
+							CXGBE_FDIR_POLL_CNT,
+							&ctx.completion);
+			if (err) {
+				dev_err(adapter,
+					"FDIR filter delete timeout\n");
+				goto out;
+			} else {
+				err = ctx.result; /* Async Completion Done */
+				if (err)
+					goto out;
+
+				cxgbe_fdir_add_del_map_entry(map, fs.cap,
+							     fd_id, TRUE);
+				memset(entry, 0, sizeof(*entry));
+			}
+		} else {
+			dev_err(adapter, "Fail to delete FDIR filter!\n");
+			goto out;
+		}
+		return 0;
+	}
+
+	/* Fill in the Action arguments to create the filter */
+	err = fill_ch_spec_action(dev, filter, &fs);
+	if (err) {
+		dev_err(adapter, "FDIR filter invalid action argument\n");
+		goto out;
+	}
+
+	/* NAT not supported for LE-TCAM */
+	if (!fs.cap && fs.nat_mode) {
+		dev_err(adapter, "Maskfull NAT is not supported\n");
+		return -ENOTSUP;
+	}
+
+	/* Create the filter */
+	err = cxgbe_set_filter(dev, filter_id, &fs, &ctx);
+	if (!err) {
+		/* Poll the FW for reply */
+		err = cxgbe_poll_for_completion(&adapter->sge.fw_evtq,
+						CXGBE_FDIR_POLL_US,
+						CXGBE_FDIR_POLL_CNT,
+						&ctx.completion);
+		if (err) {
+			dev_err(adapter, "FDIR filter add timeout\n");
+			goto out;
+		} else {
+			err = ctx.result; /* Asynchronous Completion Done */
+			if (err)
+				goto out;
+
+			entry->tid = ctx.tid;
+			rte_memcpy(&entry->fs, &fs, sizeof(fs));
+			dev_debug(adapter, "FDIR inserted at tid: %d\n",
+				  ctx.tid);
+			cxgbe_fdir_add_del_map_entry(map, fs.cap, fd_id, FALSE);
+		}
+	} else {
+		dev_err(adapter, "Fail to add FDIR filter!\n");
+		goto out;
+	}
+
+	return 0;
+
+out:
+	return err;
+}
+
+/**
+ * Process the supported filter operations
+ */
+static int cxgbe_fdir_filter_op(struct rte_eth_dev *dev,
+				enum rte_filter_op filter_op,
+				struct rte_eth_fdir_filter *fdir)
+{
+	struct adapter *adap = ethdev2adap(dev);
+	struct cxgbe_fdir_map *map = adap->fdir;
+	struct rte_eth_fdir_stats *stats = &map->stats;
+	unsigned int fd_id = fdir->soft_id;
+	int ret;
+
+	if (fd_id >= (map->maskfull_size + map->maskless_size)) {
+		dev_err(adap, "FD_ID must be < %d\n",
+			map->maskfull_size + map->maskless_size);
+		return -ERANGE;
+	}
+
+	switch (filter_op) {
+	case RTE_ETH_FILTER_ADD:
+		ret = cxgbe_add_del_fdir_filter(dev, fdir, fd_id, FALSE);
+		if (ret) {
+			stats->f_add++;
+		} else {
+			stats->free--;
+			stats->add++;
+		}
+		break;
+	case RTE_ETH_FILTER_DELETE:
+		ret = cxgbe_add_del_fdir_filter(dev, fdir, fd_id, TRUE);
+		if (ret) {
+			stats->f_remove++;
+		} else {
+			stats->free++;
+			stats->remove++;
+		}
+		break;
+	case RTE_ETH_FILTER_UPDATE:
+	case RTE_ETH_FILTER_FLUSH:
+	case RTE_ETH_FILTER_GET:
+	case RTE_ETH_FILTER_SET:
+		ret = -ENOTSUP;
+		break;
+	default:
+		ret = -EINVAL;
+		break;
+	}
+	return ret;
+}
+
+/**
+ * Fill FDIR stats
+ */
+static void cxgbe_fdir_get_stats(struct rte_eth_dev *dev,
+				 struct rte_eth_fdir_stats *fdir_stats)
+{
+	struct adapter *adap = ethdev2adap(dev);
+	struct rte_eth_fdir_stats *stats = &adap->fdir->stats;
+
+	rte_memcpy(fdir_stats, stats, sizeof(*stats));
+}
+
+/**
+ * cxgbe_fdir_ctrl_func - deal with all operations on flow director.
+ * @dev: pointer to the structure rte_eth_dev
+ * @filter_op: operation to be taken
+ * @arg: a pointer to specific structure corresponding to the filter_op
+ */
+int cxgbe_fdir_ctrl_func(struct rte_eth_dev *dev, enum rte_filter_op filter_op,
+			 void *arg)
+{
+	struct adapter *adapter = ethdev2adap(dev);
+	int ret = 0;
+
+	if (!adapter->fdir)
+		return -ENOTSUP;
+
+	if (filter_op == RTE_ETH_FILTER_NOP)
+		return 0;
+
+	if (!arg && filter_op != RTE_ETH_FILTER_FLUSH)
+		return -EINVAL;
+
+	switch (filter_op) {
+	case RTE_ETH_FILTER_ADD:
+	case RTE_ETH_FILTER_DELETE:
+	case RTE_ETH_FILTER_UPDATE:
+	case RTE_ETH_FILTER_FLUSH:
+	case RTE_ETH_FILTER_GET:
+	case RTE_ETH_FILTER_SET:
+		ret = cxgbe_fdir_filter_op(dev, filter_op,
+					   (struct rte_eth_fdir_filter *)arg);
+		break;
+	case RTE_ETH_FILTER_INFO:
+		ret = -ENOTSUP;
+		break;
+	case RTE_ETH_FILTER_STATS:
+		cxgbe_fdir_get_stats(dev, (struct rte_eth_fdir_stats *)arg);
+		break;
+	default:
+		ret = -EINVAL;
+		break;
+	}
+	return ret;
+}
+
+/**
+ * Initialize flow director
+ */
+struct cxgbe_fdir_map *cxgbe_init_fdir(struct adapter *adap)
+{
+	struct tid_info *t = &adap->tids;
+	struct cxgbe_fdir_map *map;
+	unsigned int mfull_size, mless_size;
+	unsigned int mfull_bmap_size, mless_bmap_size;
+
+	if (!t->tid_tab)
+		return NULL;
+
+	mfull_size = t->nftids;
+	if (is_hashfilter(adap))
+		mless_size = t->ntids - t->hash_base;
+	else
+		mless_size = 0;
+	mfull_bmap_size = rte_bitmap_get_memory_footprint(mfull_size);
+	mless_bmap_size = rte_bitmap_get_memory_footprint(mless_size);
+
+	if ((mfull_size + mless_size) < 1)
+		return NULL;
+
+	map = t4_os_alloc(sizeof(*map));
+	if (!map)
+		return NULL;
+
+	map->maskfull_size = mfull_size;
+	map->maskless_size = mless_size;
+
+	/* Allocate Maskfull Entries */
+	map->mfull_bmap_array = t4_os_alloc(mfull_bmap_size);
+	if (!map->mfull_bmap_array)
+		goto free_map;
+	map->mfull_bmap = rte_bitmap_init(mfull_size, map->mfull_bmap_array,
+					  mfull_bmap_size);
+	if (!map->mfull_bmap)
+		goto free_mfull;
+
+	map->mfull_entry = t4_os_alloc(mfull_size *
+				       sizeof(struct cxgbe_fdir_map_entry));
+	if (!map->mfull_entry)
+		goto free_mfull;
+
+	/* Allocate Maskless Entries */
+	if (mless_size) {
+		map->mless_bmap_array = t4_os_alloc(mless_bmap_size);
+		if (!map->mless_bmap_array)
+			goto free_mfull;
+		map->mless_bmap = rte_bitmap_init(mless_size,
+						  map->mless_bmap_array,
+						  mless_bmap_size);
+		if (!map->mless_bmap)
+			goto free_mless;
+
+		map->mless_entry = t4_os_alloc(mless_size *
+					sizeof(struct cxgbe_fdir_map_entry));
+		if (!map->mless_entry)
+			goto free_mless;
+	}
+
+	t4_os_lock_init(&map->lock);
+
+	map->stats.free = mfull_size + mless_size;
+	return map;
+
+free_mless:
+	if (map->mless_bmap)
+		rte_bitmap_free(map->mless_bmap);
+
+	if (map->mless_bmap_array)
+		t4_os_free(map->mless_bmap_array);
+
+free_mfull:
+	if (map->mfull_entry)
+		t4_os_free(map->mfull_entry);
+
+	if (map->mfull_bmap)
+		rte_bitmap_free(map->mfull_bmap);
+
+	if (map->mfull_bmap_array)
+		t4_os_free(map->mfull_bmap_array);
+
+free_map:
+	t4_os_free(map);
+	return NULL;
+}
+
+/**
+ * Cleanup flow director
+ */
+void cxgbe_cleanup_fdir(struct adapter *adap)
+{
+	struct cxgbe_fdir_map *map = adap->fdir;
+
+	if (map) {
+		if (map->mfull_bmap) {
+			rte_bitmap_free(map->mfull_bmap);
+			t4_os_free(map->mfull_bmap_array);
+		}
+
+		if (map->mless_bmap) {
+			rte_bitmap_free(map->mless_bmap);
+			t4_os_free(map->mless_bmap_array);
+		}
+
+		if (map->mfull_entry)
+			t4_os_free(map->mfull_entry);
+		if (map->mless_entry)
+			t4_os_free(map->mless_entry);
+
+		t4_os_free(map);
+	}
+}
diff --git a/drivers/net/cxgbe/cxgbe_fdir.h b/drivers/net/cxgbe/cxgbe_fdir.h
new file mode 100644
index 0000000..d4ae183
--- /dev/null
+++ b/drivers/net/cxgbe/cxgbe_fdir.h
@@ -0,0 +1,108 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015-2016 Chelsio Communications.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Chelsio Communications nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _CXGBE_FDIR_H_
+#define _CXGBE_FDIR_H_
+
+#define CXGBE_FDIR_POLL_US  10
+#define CXGBE_FDIR_POLL_CNT 10
+
+/* RTE_ETH_FLOW_RAW_PKT representation. */
+struct cxgbe_fdir_input_admin {
+	uint8_t prio;
+	uint8_t type;
+	uint8_t cap;
+};
+
+struct cxgbe_fdir_input_flow {
+	uint16_t ethtype;
+	uint8_t iport;
+	uint8_t proto;
+	uint8_t tos;
+	uint16_t ivlan;
+	uint16_t ovlan;
+
+	uint8_t lip[16];
+	uint8_t fip[16];
+	uint16_t lport;
+	uint16_t fport;
+};
+
+struct cxgbe_fdir_action {
+	uint8_t eport;
+	uint8_t newdmac;
+	uint8_t newsmac;
+	uint8_t swapmac;
+	uint8_t newvlan;
+	uint8_t nat_mode;
+	uint8_t dmac[ETHER_ADDR_LEN];
+	uint8_t smac[ETHER_ADDR_LEN];
+	uint16_t vlan;
+
+	uint8_t nat_lip[16];
+	uint8_t nat_fip[16];
+	uint16_t nat_lport;
+	uint16_t nat_fport;
+};
+
+/* The cxgbe_fdir_map is a mapping between the DPDK filter stack and the
+ * CXGBE filtering support. Its main purpose is to translate filter
+ * information between the two.
+ */
+struct cxgbe_fdir_map_entry {
+	u32 tid;                           /* Filter index */
+	struct ch_filter_specification fs; /* Filter specification */
+};
+
+struct cxgbe_fdir_map {
+	/* DPDK related info */
+	struct rte_eth_fdir_stats stats;
+
+	/* CXGBE related info */
+	unsigned int maskfull_size;    /* Size of Maskfull region */
+	unsigned int maskless_size;    /* Size of Maskless region */
+	rte_spinlock_t lock;           /* Lock to access an entry */
+
+	uint8_t *mfull_bmap_array;     /* Bitmap array for maskfull entries */
+	struct rte_bitmap *mfull_bmap; /* Bitmap for maskfull entries */
+	uint8_t *mless_bmap_array;     /* Bitmap array for maskless entries */
+	struct rte_bitmap *mless_bmap; /* Bitmap for maskless entries */
+	struct cxgbe_fdir_map_entry *mfull_entry; /* Maskfull fdir entries */
+	struct cxgbe_fdir_map_entry *mless_entry; /* Maskless fdir entries */
+};
+
+struct cxgbe_fdir_map *cxgbe_init_fdir(struct adapter *adap);
+void cxgbe_cleanup_fdir(struct adapter *adap);
+int cxgbe_fdir_ctrl_func(struct rte_eth_dev *dev,
+			 enum rte_filter_op filter_op, void *arg);
+#endif /* _CXGBE_FDIR_H_ */
diff --git a/drivers/net/cxgbe/cxgbe_main.c b/drivers/net/cxgbe/cxgbe_main.c
index 1f79ba3..f8168c6 100644
--- a/drivers/net/cxgbe/cxgbe_main.c
+++ b/drivers/net/cxgbe/cxgbe_main.c
@@ -148,6 +148,38 @@ out:
 }
 
 /**
+ * cxgbe_poll_for_completion: Poll rxq for completion
+ * @q: rxq to poll
+ * @us: microseconds to delay
+ * @cnt: number of times to poll
+ * @c: completion to check for 'done' status
+ *
+ * Polls the rxq for replies until the completion is done or the poll
+ * count expires.
+ */
+int cxgbe_poll_for_completion(struct sge_rspq *q, unsigned int us,
+			      unsigned int cnt, struct t4_completion *c)
+{
+	unsigned int i;
+	unsigned int work_done, budget = 4;
+
+	if (!c)
+		return -EINVAL;
+
+	for (i = 0; i < cnt; i++) {
+		cxgbe_poll(q, NULL, budget, &work_done);
+		t4_os_lock(&c->lock);
+		if (c->done) {
+			t4_os_unlock(&c->lock);
+			return 0;
+		}
+		t4_os_unlock(&c->lock);
+		udelay(us);
+	}
+	return -ETIMEDOUT;
+}
+
+/**
  * Setup sge control queues to pass control information.
  */
 int setup_sge_ctrl_txq(struct adapter *adapter)
@@ -1355,6 +1387,7 @@ void cxgbe_close(struct adapter *adapter)
 
 	if (adapter->flags & FULL_INIT_DONE) {
 		cxgbe_clear_all_filters(adapter);
+		cxgbe_cleanup_fdir(adapter);
 		tid_free(&adapter->tids);
 		t4_cleanup_clip_tbl(adapter);
 		t4_cleanup_l2t(adapter);
@@ -1542,6 +1575,13 @@ allocate_mac:
 			 "Maskless filter support disabled. Continuing\n");
 	}
 
+	adapter->fdir = cxgbe_init_fdir(adapter);
+	if (!adapter->fdir) {
+		/* Disable Flow Director */
 		dev_warn(adapter, "could not allocate fdir entries. "
 			 "Flow director support disabled. Continuing\n");
+	}
+
 	err = init_rss(adapter);
 	if (err)
 		goto out_free;
diff --git a/drivers/net/cxgbe/sge.c b/drivers/net/cxgbe/sge.c
index bd4b381..83e833c 100644
--- a/drivers/net/cxgbe/sge.c
+++ b/drivers/net/cxgbe/sge.c
@@ -1664,7 +1664,8 @@ static int process_responses(struct sge_rspq *q, int budget,
 			unsigned int params;
 			u32 val;
 
-			if (fl_cap(&rxq->fl) - rxq->fl.avail >= 64)
+			if (q->offset >= 0 &&
+			    fl_cap(&rxq->fl) - rxq->fl.avail >= 64)
 				__refill_fl(q->adapter, &rxq->fl);
 			params = V_QINTR_TIMER_IDX(X_TIMERREG_UPDATE_CIDX);
 			q->next_intr_params = params;
-- 
2.5.3
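For reference, the completion-polling loop added in cxgbe_main.c can be sketched in isolation as below. This is a simplified, hypothetical model (the names `completion`, `process_queue`, and `poll_for_completion` are illustrative, not driver API): process the response queue once per iteration, check a "done" flag that the firmware event handler would set, and return -ETIMEDOUT if the retry budget runs out. The driver version additionally takes a spinlock around the flag check and delays with udelay() between passes.

```c
#include <errno.h>
#include <stdbool.h>

struct completion {
	bool done;	/* set by the event-queue handler when the reply arrives */
};

/* Stand-in for one cxgbe_poll() pass over the response queue: here the
 * "reply" simply arrives after reply_at passes. */
static void process_queue(struct completion *c, int reply_at, int *pass)
{
	if (++(*pass) >= reply_at)
		c->done = true;
}

/* Poll up to cnt times for the completion to be marked done. */
static int poll_for_completion(struct completion *c, int reply_at, int cnt)
{
	int pass = 0;
	int i;

	for (i = 0; i < cnt; i++) {
		process_queue(c, reply_at, &pass);
		if (c->done)
			return 0;	/* firmware replied in time */
		/* the driver would udelay(us) here before the next pass */
	}
	return -ETIMEDOUT;	/* budget expired with no reply */
}
```

The same shape appears in cxgbe_poll_for_completion(): a bounded loop, a locked flag check, and a fixed inter-poll delay, so a missing firmware reply degrades into a clean -ETIMEDOUT instead of a hang.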


