> In my analysis I noted that the "tx timeout" problems under moderate
> network loads were _mostly_ because the tx thread was being starved.
> (I was blasting a lot of 64 byte packets at the tulip and eepro and
> trying to see where they start dying).
> One of the main reasons was that the rx thread interrupt was consistently
> pre-empting the tx.
> Does the total parallelization you are proposing provide any protection
> against such issues? The suggested locks do fix this problem.
I believe that this tx starvation is due to the decision to schedule the
tx in the device ISR, for BH handling, rather than to actually dequeue
and send packets within the Tx ISR. I can see why the bh scheduling is
done this way, though.
I like the loop-until-max_interrupt_work-exceeded architecture. It's
_very_ efficient compared with interrupt-per-packet, and it kicks in
when the system is under stress. But it's not being leveraged for the
Tx path.
The 3c59x driver's ISR does this:

    while (stuff_to_do && (count++ < max_interrupt_work))
        if (the device has room for a tx packet)
            netif_wake_queue(dev);
It appears to me that netif_wake_queue can be called multiple times
within this loop, at considerable expense, when the system is under Rx
load. Wouldn't it be better to have a local flag in the ISR which
prevents the repeated wakeups?
    bool done_wake = false;

    while (count++ < max_interrupt_work)
        if (!done_wake && the device has room for a tx packet)
            netif_wake_queue(dev);
            done_wake = true;