Brandeburg, Jesse writes:
> As the part owner of the e100 driver, this issue has been nagging at me
> for a while. NAPI seems to be able to swamp a system with interrupts
> and context switches. In this case the system does not respond to
> having zero cpu cycles available by polling more, as I figured it would.
> This was mostly based on observation, but I included some quick data
> gathering for this mail.
Well, to sort out some things: NAPI addresses only the balance between
irqs and softirqs, nothing else. For the balance between userland and
softirqs we pray that the softirqs behave well; in some cases they are
deferred to ksoftirqd to balance softirq vs. user time.
> With the current e100 driver in NAPI mode the adapter can generate
> upwards of 36000 interrupts a second from only receiving 77000
> packets/second (pktgen traffic)
You have a very fast CPU and a slow device. The interrupt ISR handles
about two packets per interrupt, and the same holds for the NAPI and
non-NAPI paths. Not bad, really. You probably had interrupt delay
(mitigation) with non-NAPI, so the interrupt rate was a bit lower;
that is OK with NAPI too, if you like it.
> Is there any way we can configure how soon or often to be polled
> (called)? For 100Mb speeds at worst we can get about one 64 byte packet
> every 9us (if I did my math right) and since the processors don't take
> that long to schedule NAPI, process a packet and handle an interrupt, we
> just overload the system with interrupts going into and out of napi
> mode. In this case I only have one adapter getting scheduled.
Interrupt delay is the most straightforward option... You can also play
with a driver timer: balance the timer period against the RX ring size
so the ring does not overflow, and register for poll when the timer
function has received packets. You would see no interrupts at all...
It could be fun, but it feels like sub-optimizing. No, I think at these
traffic levels interrupts are our friends.