[PATCH] update irq affinity mask when migrating irqs

Joel Schopp jschopp at austin.ibm.com
Wed Mar 9 04:20:33 EST 2005


Comments below.

> 
> 
> Signed-off-by: Nathan Lynch <ntl at pobox.com>
> 
>  xics.c |   11 ++---------
>  1 files changed, 2 insertions(+), 9 deletions(-)
> 
> Index: linux-2.6.11-bk2/arch/ppc64/kernel/xics.c
> ===================================================================
> --- linux-2.6.11-bk2.orig/arch/ppc64/kernel/xics.c	2005-03-02 07:38:10.000000000 +0000
> +++ linux-2.6.11-bk2/arch/ppc64/kernel/xics.c	2005-03-07 03:52:08.000000000 +0000
> @@ -704,15 +704,8 @@ void xics_migrate_irqs_away(void)
>  		       virq, cpu);
>  
>  		/* Reset affinity to all cpus */
> -		xics_status[0] = default_distrib_server;
> -
> -		status = rtas_call(ibm_set_xive, 3, 1, NULL, irq,
> -				xics_status[0], xics_status[1]);
> -		if (status)
> -			printk(KERN_ERR "migrate_irqs_away: irq=%d "
> -					"ibm,set-xive returns %d\n",
> -					virq, status);
> -
> +		desc->handler->set_affinity(virq, CPU_MASK_ALL);

The downside of calling this is that it lengthens the path and causes
ibm_get_xive to be called again.  Usually a slight slowdown is a fine
tradeoff for more readable code, but in this case I would have left it
as it was: with all the other cpus stopped, it is best to be as fast as
possible.  Maybe this is still fast enough, but you'd have to test under
heavy load on a variety of systems to be sure.

> +		irq_affinity[virq] = CPU_MASK_ALL;

This was a good catch; the cached irq_affinity mask was not being
updated before, leaving it out of sync with what the hardware was told.

>  unlock:
>  		spin_unlock_irqrestore(&desc->lock, flags);
>  	}
> _______________________________________________
> Linuxppc64-dev mailing list
> Linuxppc64-dev at ozlabs.org
> https://ozlabs.org/cgi-bin/mailman/listinfo/linuxppc64-dev
> 