[Lguest] [PATCH RFC/RFB] x86_64, i386: interrupt dispatch changes

Cyrill Gorcunov gorcunov at gmail.com
Wed Nov 5 03:47:17 EST 2008


[Alexander van Heukelum - Tue, Nov 04, 2008 at 05:23:09PM +0100]
...
| 
| I did some timings using the little program below (32-bit only), doing
| 1024 times the same sequence. TEST1 is just pushing a constant onto
| the stack; TEST2 is pushing the cs register; TEST3 is the sequence
| from the patch to extract the vector number from the cs register.
| 
| Opteron    (cycles): 1024 / 1157 / 3527
| Xeon E5345 (cycles): 1092 / 1085 / 6622
| Athlon XP  (cycles): 1028 / 1166 / 5192

Xeon is definitely out of luck :-)
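
For reference, a minimal 32-bit sketch of that kind of test (a
reconstruction, not Alexander's actual program; TEST3 is left out since
it depends on the exact sequence from the patch, and rdtsc is not
serialized here, so treat the numbers as rough) could look like this:

/* build with: gcc -m32 -O2 pushtest.c */
#include <stdio.h>
#include <stdint.h>

static inline uint64_t rdtsc(void)
{
    uint32_t lo, hi;
    asm volatile("rdtsc" : "=a" (lo), "=d" (hi));
    return ((uint64_t)hi << 32) | lo;
}

int main(void)
{
    uint64_t t0, t1;

    /* TEST1: 1024 pushes of an immediate, then drop them again */
    t0 = rdtsc();
    asm volatile(".rept 1024\n\t"
                 "push $0x7f\n\t"
                 ".endr\n\t"
                 "addl $4096, %%esp"
                 ::: "memory", "cc");
    t1 = rdtsc();
    printf("push imm: %llu cycles\n", (unsigned long long)(t1 - t0));

    /* TEST2: 1024 pushes of %cs, then drop them again */
    t0 = rdtsc();
    asm volatile(".rept 1024\n\t"
                 "push %%cs\n\t"
                 ".endr\n\t"
                 "addl $4096, %%esp"
                 ::: "memory", "cc");
    t1 = rdtsc();
    printf("push %%cs: %llu cycles\n", (unsigned long long)(t1 - t0));

    return 0;
}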

| 
| I'd say that the cost of the push %cs itself is negligible.
| 
| > ( another advantage is that the 6 bytes GDT descriptor is more 
| >   compressed and hence uses up less L1/L2 cache footprint than the 
| >   larger (~7 byte) trampolines we have at the moment. )
| 
| A GDT descriptor has to be read and processed anyhow... It might
| just not be in cache. But at least it is aligned. The trampolines
| are 7 bytes (irq#<128) or 10 bytes (irq#>127) on i386 and x86_64.
| And one is data, and the other is code, which might also cause
| different behaviour. It's just a bit too complicated to decide by
| just reasoning about it ;).
| 
| > plus it's possible to observe the typical cost of irqs from user-space 
| > as well: run a task on a single CPU and save away all the RDTSC deltas 
| > that are larger than ~10 cycles - these will be the IRQ entry costs. 
| > Print out these deltas after 60 seconds of runtime (or something like 
| > that), and look at the histogram.
| 
| I'll see if I can do that. Maybe in a few days...
| 
| Thanks,
|     Alexander
| 
| > 	Ingo
...
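
By the way, the 7 vs 10 bytes for the trampolines quoted above fall
straight out of the push-immediate encodings: a near jump with a 32-bit
displacement is 5 bytes either way, and the push is 2 or 5 bytes
depending on whether the operand fits a sign-extended 8-bit immediate.
Roughly (illustrative operands only, with common_interrupt standing in
for the real common entry point; whatever exact immediate the real
stubs push, they hit the same imm8/imm32 boundary):

    /* vector fits in a sign-extended 8-bit immediate */
    push $0x7f              /* 6a 7f             2 bytes */
    jmp  common_interrupt   /* e9 xx xx xx xx    5 bytes -> 7 total */

    /* vector needs the 32-bit immediate form */
    push $0x80              /* 68 80 00 00 00    5 bytes */
    jmp  common_interrupt   /* e9 xx xx xx xx    5 bytes -> 10 total */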
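
And a rough user-space sketch of the rdtsc-delta measurement Ingo
suggests above: pin to one CPU, record every gap between consecutive
rdtsc reads that is clearly above the normal back-to-back rdtsc cost,
and dump the deltas after ~60 seconds (THRESHOLD is a placeholder and
needs tuning per box):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdint.h>
#include <time.h>

static inline uint64_t rdtsc(void)
{
    uint32_t lo, hi;
    asm volatile("rdtsc" : "=a" (lo), "=d" (hi));
    return ((uint64_t)hi << 32) | lo;
}

#define THRESHOLD 200   /* cycles; tune to just above the back-to-back rdtsc cost */
#define MAX_HITS  (1 << 20)
#define RUNTIME   60    /* seconds */

static uint64_t hits[MAX_HITS];

int main(void)
{
    unsigned long n = 0, i;
    uint64_t prev, now;
    time_t end = time(NULL) + RUNTIME;
    cpu_set_t set;

    /* stay on CPU 0 so we see the interrupts delivered there */
    CPU_ZERO(&set);
    CPU_SET(0, &set);
    sched_setaffinity(0, sizeof(set), &set);

    do {
        /* re-read prev so the time() check in the loop condition
           does not show up as a fake "interrupt" on the next pass */
        prev = rdtsc();
        for (i = 0; i < (1 << 22) && n < MAX_HITS; i++) {
            now = rdtsc();
            if (now - prev > THRESHOLD)
                hits[n++] = now - prev;
            prev = now;
        }
    } while (time(NULL) < end && n < MAX_HITS);

    /* dump the raw deltas; the histogram can be built offline */
    for (i = 0; i < n; i++)
        printf("%llu\n", (unsigned long long)hits[i]);

    return 0;
}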

		- Cyrill -


