[PATCH net-next] ibmveth: Use WARN_ON with error handling rather than BUG_ON

Dave Marquardt davemarq at linux.ibm.com
Mon Apr 14 23:17:49 AEST 2025


Michal Swiatkowski <michal.swiatkowski at linux.intel.com> writes:

> On Thu, Apr 10, 2025 at 01:39:18PM -0500, Dave Marquardt wrote:
>> - Replaced BUG_ON calls with WARN_ON calls with error handling,
>>   with calls to a new ibmveth_reset routine, which resets the device.
>> - Added KUnit tests for ibmveth_remove_buffer_from_pool and
>>   ibmveth_rxq_get_buffer under new IBMVETH_KUNIT_TEST config option.
>> - Removed unneeded forward declaration of ibmveth_rxq_harvest_buffer.
>
> It would be great if you could split this patch into 3 patches according
> to your description.

Thanks. I went back and forth on the right approach here, and I appreciate
the guidance.

>>  static struct kobj_type ktype_veth_pool;
>> @@ -231,7 +230,10 @@ static void ibmveth_replenish_buffer_pool(struct ibmveth_adapter *adapter,
>>  		index = pool->free_map[free_index];
>>  		skb = NULL;
>>  
>> -		BUG_ON(index == IBM_VETH_INVALID_MAP);
>> +		if (WARN_ON(index == IBM_VETH_INVALID_MAP)) {
>> +			(void)schedule_work(&adapter->work);
>
> What is the purpose of void casting here (and in other places in this
> patch)?

The cast indicates that I'm deliberately ignoring the bool returned by
schedule_work(). Since this seemed odd to you, I take it the convention
in Linux code is not to do that.
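
For what it's worth, the plainer form would look something like this
(just a sketch against the same hunk, not necessarily the final code):

		if (WARN_ON(index == IBM_VETH_INVALID_MAP)) {
			/* drop the (void): let schedule_work()'s return be ignored */
			schedule_work(&adapter->work);
			goto failure2;
		}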

>> +			goto failure2;
>
> Maybe increment_buffer_failure, or something that tells what happens
> after the goto.

Okay, I can change that.
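
Roughly like this, using your suggested name as an example (a sketch
only; the exact label name may change in the next version):

		if (WARN_ON(index == IBM_VETH_INVALID_MAP)) {
			schedule_work(&adapter->work);
			/* jump past the skb-freeing failure path; only bump the counter */
			goto increment_buffer_failure;
		}
	...
increment_buffer_failure:
	adapter->replenish_add_buff_failure++;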

>> +		}
>>  
>>  		/* are we allocating a new buffer or recycling an old one */
>>  		if (pool->skbuff[index])
>> @@ -300,6 +302,7 @@ static void ibmveth_replenish_buffer_pool(struct ibmveth_adapter *adapter,
>>  		                 DMA_FROM_DEVICE);
>>  	dev_kfree_skb_any(pool->skbuff[index]);
>>  	pool->skbuff[index] = NULL;
>> +failure2:
>>  	adapter->replenish_add_buff_failure++;
>>  
>>  	mb();
>> @@ -370,20 +373,36 @@ static void ibmveth_free_buffer_pool(struct ibmveth_adapter *adapter,
>>  	}
>>  }
>>  
>
> [...]

Thanks for your review!

-Dave