[PATCH net-next RESEND v4] ibmvnic: Increase max subcrq indirect entries with fallback
Paolo Abeni
pabeni at redhat.com
Tue Aug 26 17:44:43 AEST 2025
On 8/21/25 3:02 PM, Mingming Cao wrote:
> POWER8 supports a maximum of 16 subcrq indirect descriptor entries per
> H_SEND_SUB_CRQ_INDIRECT call, while POWER9 and newer hypervisors
> support up to 128 entries. Increasing the maximum number of indirect
> descriptor entries improves batching efficiency and reduces
> hcall overhead, which enhances throughput under large workloads on POWER9+.
>
> Currently, the ibmvnic driver always uses a fixed maximum of 16
> indirect descriptor entries. send_subcrq_indirect() treats all
> hypervisor errors the same:
> - Clean up and drop the entire batch of descriptors.
> - Return an error to the caller.
> - Rely on TCP/IP retransmissions to recover.
> - If the hypervisor returns H_PARAMETER (e.g., because 128
> entries are not supported on POWER8), the driver continues
> to drop batches, resulting in unnecessary packet loss.
>
> In this patch:
> Raise the default maximum indirect entries to 128 to improve ibmvnic
> batching on modern platforms, while gracefully falling back to
> 16 entries on POWER8 systems.
>
> Since there is no VIO interface to query the hypervisor's supported
> limit, the driver handles send_subcrq_indirect() H_PARAMETER errors:
> - On the first H_PARAMETER failure, log the failure context.
> - Reduce max_indirect_entries to 16 and allow that single batch to drop.
> - Subsequent calls automatically use the correct lower limit,
> avoiding repeated drops.
>
> The goal is to optimize performance on modern systems while falling
> back gracefully for older POWER8 hypervisors.
>
> Performance tests show a 40% improvement with MTU 1500 under large workloads.
>
> Signed-off-by: Mingming Cao <mmc at linux.ibm.com>
> Reviewed-by: Brian King <bjking1 at linux.ibm.com>
> Reviewed-by: Haren Myneni <haren at linux.ibm.com>
> Reviewed-by: Simon Horman <horms at kernel.org>
> ---
> Changes since v3:
> Link to v3: https://www.spinics.net/lists/netdev/msg1112828.html
For future memory: please use lore links instead.
Thanks,
Paolo