[dpdk-dev] [PATCH v3 3/5] bus/vdev: bus scan by multi-process channel

Burakov, Anatoly anatoly.burakov at intel.com
Fri Apr 20 17:39:59 CEST 2018


On 20-Apr-18 4:32 PM, Tan, Jianfeng wrote:
> 
> 
> On 4/20/2018 11:19 PM, Burakov, Anatoly wrote:
>> On 20-Apr-18 3:28 PM, Tan, Jianfeng wrote:
>>>
>>>
>>> On 4/20/2018 4:41 PM, Burakov, Anatoly wrote:
>>>> On 19-Apr-18 5:50 PM, Jianfeng Tan wrote:
>>>>> To scan the vdevs in the primary, we send a request to the primary
>>>>> process to obtain the names of its vdevs.
>>>>>
>>>>> Only the name is shared by the primary. In probe(), the device
>>>>> driver is expected to locate (or request more of) the detailed
>>>>> information from the primary.
>>>>>
>>>>> Signed-off-by: Jianfeng Tan <jianfeng.tan at intel.com>
>>>>> Reviewed-by: Qi Zhang <qi.z.zhang at intel.com>
>>>>> ---
>>>>
>>>> <...>
>>>>
>>>>> +static int
>>>>> +vdev_action(const struct rte_mp_msg *mp_msg, const void *peer)
>>>>> +{
>>>>> +    struct rte_vdev_device *dev;
>>>>> +    struct rte_mp_msg mp_resp;
>>>>> +    struct vdev_param *ou = (struct vdev_param *)&mp_resp.param;
>>>>> +    const struct vdev_param *in = (const struct vdev_param *)mp_msg->param;
>>>>> +    const char *devname;
>>>>> +    int num;
>>>>> +
>>>>> +    strcpy(mp_resp.name, "vdev");
>>>>> +    mp_resp.len_param = sizeof(*ou);
>>>>> +    mp_resp.num_fds = 0;
>>>>> +
>>>>> +    switch (in->type) {
>>>>> +    case VDEV_SCAN_REQ:
>>>>> +        ou->type = VDEV_SCAN_ONE;
>>>>> +        ou->num = 1;
>>>>> +        num = 0;
>>>>> +
>>>>> +        rte_spinlock_lock(&vdev_device_list_lock);
>>>>> +        TAILQ_FOREACH(dev, &vdev_device_list, next) {
>>>>> +            devname = rte_vdev_device_name(dev);
>>>>> +            if (strlen(devname) == 0) {
>>>>> +                VDEV_LOG(INFO, "vdev with no name is not sent");
>>>>> +                continue;
>>>>> +            }
>>>>> +            VDEV_LOG(INFO, "send vdev, %s", devname);
>>>>> +            strncpy(ou->name, devname, RTE_DEV_NAME_MAX_LEN);
>>>>
>>>> Probably better use strlcpy as it always null-terminates.
>>>
>>> Yep.
>>>
>>>>
>>>>> +            if (rte_mp_sendmsg(&mp_resp) < 0)
>>>>> +                VDEV_LOG(ERR, "send vdev, %s, failed, %s",
>>>>> +                     devname, strerror(rte_errno));
>>>>> +            num++;
>>>>
>>>> Some comments on what is going on here (why are we sending messages 
>>>> in response? why multiple? who will receive these messages?) would 
>>>> be nice.
>>>
>>> Yep, will explain that below.
>>>
>>>> I have a sneaking suspicion that you could've packed the response 
>>>> into one single message, but i'm not completely sure what is going 
>>>> on here, so maybe what you have here makes sense...
>>>
>>> What's happening here is that:
>>>
>>> a. The secondary process sends a sync request to ask for the vdevs
>>> in the primary.
>>> b. The primary process receives the request and sends the vdevs one
>>> by one.
>>> c. The primary process then sends back a reply, which indicates how
>>> many vdevs were sent.
>>>
>>> The reason we don't pack all vdevs into the reply message is that the
>>> message payload is limited to RTE_MP_MAX_PARAM_LEN (256 bytes), so it
>>> is possible that not all vdevs fit into a single reply message.
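
For readers following along, here is roughly what the secondary-side
counterpart of the flow above looks like. This is only a sketch pieced
together from the snippets quoted in this thread: struct vdev_param,
VDEV_SCAN_REQ and VDEV_SCAN_ONE come from the patch, while the separate
handler name, the insert_vdev() helper and the reply handling are
assumptions and may differ from the actual code.

#include <stdlib.h>
#include <time.h>

#include <rte_common.h>
#include <rte_eal.h>        /* rte_mp_* IPC API */
#include <rte_errno.h>
#include <rte_string_fns.h> /* strlcpy */

/* Handler registered under the same "vdev" action name in the secondary;
 * every VDEV_SCAN_ONE message sent by the primary carries one device name.
 */
static int
vdev_action_secondary(const struct rte_mp_msg *mp_msg,
        const void *peer __rte_unused)
{
    const struct vdev_param *in =
        (const struct vdev_param *)mp_msg->param;

    if (in->type == VDEV_SCAN_ONE) {
        VDEV_LOG(INFO, "receive vdev, %s", in->name);
        /* insert_vdev(): assumed helper that allocates the device
         * and appends it to vdev_device_list. */
        if (insert_vdev(in->name, NULL, NULL) < 0)
            VDEV_LOG(ERR, "failed to add vdev, %s", in->name);
    }
    return 0;
}

/* Called from vdev_scan() when rte_eal_process_type() is
 * RTE_PROC_SECONDARY: ask the primary to stream its vdev names. */
static int
vdev_scan_from_primary(void)
{
    struct rte_mp_msg mp_req;
    struct rte_mp_reply mp_reply;
    struct timespec ts = {.tv_sec = 5, .tv_nsec = 0};
    struct vdev_param *req = (struct vdev_param *)mp_req.param;
    const struct vdev_param *resp;

    if (rte_mp_action_register("vdev", vdev_action_secondary) < 0 &&
            rte_errno != EEXIST)
        return -1;

    strlcpy(mp_req.name, "vdev", sizeof(mp_req.name));
    mp_req.len_param = sizeof(*req);
    mp_req.num_fds = 0;
    req->type = VDEV_SCAN_REQ;

    /* Blocks until the primary's final reply arrives (or the timeout
     * expires); the per-device VDEV_SCAN_ONE messages are delivered to
     * the handler above in the meantime. */
    if (rte_mp_request_sync(&mp_req, &mp_reply, &ts) != 0)
        return -1;

    if (mp_reply.nb_received == 1) {
        resp = (const struct vdev_param *)mp_reply.msgs[0].param;
        VDEV_LOG(INFO, "primary reported %d vdevs", resp->num);
    }
    free(mp_reply.msgs);    /* reply array is owned by the caller */
    return 0;
}
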
>>>
>>
>> OK. How does secondary know which vdevs are new and which aren't?
> 
> This auto-discovery is designed for secondary process startup, so the 
> secondary can learn which vdevs are in use in the primary; they are all 
> new to the secondary process. For vdevs added at runtime in the primary, 
> we are going to rely on the hotplug framework to notify the secondary 
> processes.
> 
>> Does it even matter how many vdevs the primary has sent? Correct me if 
>> I'm wrong, but it seems that you're only using the sync request as a 
>> kind of synchronization mechanism, and are not actually expecting any 
>> useful data in the reply. Which is OK, but in that case just don't 
>> bother sending any data in the reply in the first place :)
> 
> I would like to keep this information, so that the secondary process can 
> tell how many vdevs come from the primary process (the secondary process 
> can certainly iterate the vdev list to find out, but that is not as 
> straightforward).
> 

OK, no strong objections here :)
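
As a small aside on keeping that count: it enables a cheap sanity check on
the secondary side, along these lines (again just a sketch; vdev_recv_count
is an assumed counter incremented by the VDEV_SCAN_ONE handler, not
something taken from the patch):

    /* After rte_mp_request_sync() returns, compare the primary's reported
     * total against the number of VDEV_SCAN_ONE messages actually handled. */
    if (resp->num != vdev_recv_count)
        VDEV_LOG(ERR, "expected %d vdevs from primary, received %d",
             resp->num, vdev_recv_count);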

-- 
Thanks,
Anatoly

