Dmitry> Basically, HW offloading all kind of is a different
Dmitry> subject. Yes, iSER/RDMA/RNIC will help to avoid bunch of
Dmitry> problems but at the same time will add bunch of new
Dmitry> problems. OOM/deadlock problem we are discussing is a
Dmitry> software, *not* hardware related.
Yes, that's why I said I was hijacking the topic to bring up something
else I was interested in :)
Dmitry> If you have plans to start new project such as SoftRDMA
Dmitry> than yes. lets discuss it since set of problems will be
Dmitry> similar to what we've got with software iSCSI Initiators.
No, I don't have plans for such a project, although I would be
interested in participating in a small way. Unfortunately I'm
involved in too many other things to do much real work.
My main interest comes from the InfiniBand world. Right now we have
the beginnings of good support for IB in drivers/infiniband, but
people are always talking to me about adding support for RDMA/TCP
hardware. I think we should be able to evolve the current InfiniBand
API into a more generic RDMA API, and I would hope that a "SoftRDMA"
project can fit in as just another low-level device driver (sort of
the same way software iSCSI sits under the SCSI stack).
In fact I think SoftRDMA would be very good for this generalization
work, as it would force us to come up with very flexible APIs.
Dmitry> I'm not a believer in any HW state-full protocol
Dmitry> offloading technologies and that was one of my motivations
Dmitry> to initiate Open-iSCSI project to prove that performance
Dmitry> is not an issue anymore. And we succeeded, by showing
Dmitry> comparable to iSCSI HW Initiator's numbers.
Fair enough. I think I agree that HW offload is not really justified
if all you care about is storage, although a cheap iSCSI HBA that
handles all the transport and just lets the host queue IOs seems like
a reasonable thing to put in a server that has work to do beyond
running a storage stack.
It seems that many people are using RDMA hardware (mostly InfiniBand
now; maybe RDMA/TCP will catch on) for other reasons. In
those cases users often want to share the same fabric and NIC for
storage too. But my main interest right now is in getting RDMA
working well on Linux for the users that are already out there -- I
know many IB clusters with hundreds and even thousands of nodes are
being built all the time, so InfiniBand must be solving some real
problems for users.