> Unfortunately netfilter hash is a bad example for this, because its
> DoS handling requirements (LRU etc.) are more complex than what most other
> linux hash tables need and I am not sure if it would make sense to
> put it into generic code.

There is actually one issue with the current netfilter hash code:
the code intentionally does not zero out the next pointer when
a conntrack is removed from the hashes; only new, never-yet-hashed
conntracks have a next field of 0, and the confirm logic relies
on that. This could easily be changed to use an appropriate flag
bit in the conntrack instead.
As a consequence, the singly linked list I'm prototyping must be a
ring list, with the hash bucket pointer within the list - the same
scheme as with the doubly linked list. It's oopsing on me as I type :)
A non-ring implementation would be smaller, so I think we really want
that flag bit for the confirmations. Rusty?

All other cases could be handled by a general hash implementation
with a per-list-entry, user-supplied comparison callback and a
per-table hash function.

I'm sure that any real DoS handling will work by varying the constants
used in the hash function. That's the lesson of the recent "abcd"
hash DoS discussion.

The thing that worries me, even with the current setup, is the idea
of a general boot-time sizing of all such generic hash tables. The
tables are hard to resize once loaded, so the sizes must fit what's
needed in the real world, and that's over a _mix_ of various tables
that all play together under this or that workload. Maybe runtime
rehashing is the way to go here, to make this fully adaptive.