Timothy Shimmin <tes@xxxxxxx> writes:
> I'm still wondering if likely() and unlikely() should ever be used or not?
It's more than just branch predictors. unlikely() also moves the unlikely
code out of line and keeps it out of the icache (the obvious
drawback is that it makes the asm code much harder to read
during debugging, though -- which is why it used to be turned off).
Then CPUs have two types of branch predictors: dynamic ones with
history and static ones. Dynamic branch predictors tend to work only
when the code has been executed several times recently and is still
cached in their history buffers.
Now the nature of the kernel is that it is a library serving much more
code running in user space. This means user space often clears out the
history buffers and caches, so kernel code frequently has to deal with
running cache cold and branch-predictor cold.
Then there are the static branch predictors in the CPU, and unlikely()
actually rearranges code so that they predict correctly.
Personally I would say the cache effects (moving code out of line)
are more important than the branch prediction, because cache misses
are more costly than branch mispredictions.
That all said, it might make sense in some really performance-critical
code, especially if it's in a loop and gcc's static branch predictors
guess wrong (gcc has a large range of builtin heuristics that assume
e.g. (x < 0) or (x == NULL) is unlikely). Most code is probably not
performance critical enough to justify the ugliness of the code
annotations. And again, for many situations the builtin predictors of
gcc (and the CPU) do fine without help anyway.
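As a hypothetical example of the kind of loop where an annotation could
pay off (the function and sentinel value are illustrative, not from the
original mail):

```c
#include <stddef.h>

#define unlikely(x) __builtin_expect(!!(x), 0)

/* Sum a buffer, bailing out on a rare sentinel value.  The sentinel
 * check sits inside the hot loop, so telling gcc it is unlikely keeps
 * the rare bail-out path out of the straight-line code. */
long sum_until_sentinel(const int *buf, size_t n)
{
	long sum = 0;
	size_t i;

	for (i = 0; i < n; i++) {
		if (unlikely(buf[i] == -1))	/* rare case, annotated */
			break;
		sum += buf[i];
	}
	return sum;
}
```

Note that gcc's builtin heuristics would likely already predict a
(x == -1) early-break as unlikely here, which is exactly the argument
above for leaving most such code unannotated.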
Also, if you add them you should at some point run with the unlikely
profiler patch from -mm, just to make sure that your guesses about
which paths are likely are actually correct. Humans are unfortunately
often wrong about such guesses.
Ideally (but that might be asking too much for normal code writing) you
would only add them to code where you have oprofile data showing branch
mispredictions or icache misses.