
Re: Route cache performance

To: Alexey Kuznetsov <kuznet@xxxxxxxxxxxxx>
Subject: Re: Route cache performance
From: Ben Greear <greearb@xxxxxxxxxxxxxxx>
Date: Fri, 16 Sep 2005 12:22:27 -0700
Cc: Robert Olsson <Robert.Olsson@xxxxxxxxxxx>, Simon Kirby <sim@xxxxxxxxxxxxx>, Eric Dumazet <dada1@xxxxxxxxxxxxx>, netdev@xxxxxxxxxxx
In-reply-to: <20050916190404.GA11012@xxxxxxxxxxxxxxx>
Organization: Candela Technologies
References: <17167.29239.469711.847951@xxxxxxxxxxxx> <20050906235700.GA31820@xxxxxxxxxxxxx> <17182.64751.340488.996748@xxxxxxxxxxxx> <20050907162854.GB24735@xxxxxxxxxxxxx> <20050907195911.GA8382@xxxxxxxxxxxxxxx> <20050913221448.GD15704@xxxxxxxxxxxxx> <20050915210432.GD28925@xxxxxxxxxxxxxxx> <17193.59406.200787.819069@xxxxxxxxxxxx> <20050915222102.GA30387@xxxxxxxxxxxxxxx> <17194.47097.607795.141059@xxxxxxxxxxxx> <20050916190404.GA11012@xxxxxxxxxxxxxxx>
Sender: netdev-bounce@xxxxxxxxxxx
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.7.10) Gecko/20050719 Fedora/1.7.10-1.3.1
Alexey Kuznetsov wrote:
> Hello!
>
>> Yes, sounds familiar: XEON with e1000... So why not for 2.6?
>
> Most likely, something is broken in the e1000 driver. Otherwise, no ideas.

Has anyone tried using bridging to compare numbers?  I would assume that
the bridging code has lower overhead than the routing path, so if it's a
route cache problem, the bridged traffic should be significantly higher
than the routed traffic.  If they are both about the same, then either
bridging has lots of overhead too, or the driver (or some other network
sub-system) is the bottleneck.
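
For anyone who wants to try the comparison, here is a minimal sketch of
the bridge setup I have in mind, assuming bridge-utils' brctl is
installed and two spare e1000 ports are available (the eth1/eth2 names
are placeholders for whatever your test ports are called):

    import subprocess

    def sh(cmd):
        # Run one configuration command, raising if it fails.
        subprocess.check_call(cmd, shell=True)

    sh("brctl addbr br0")           # create the bridge device
    sh("brctl addif br0 eth1")      # enslave both test ports
    sh("brctl addif br0 eth2")
    sh("ifconfig eth1 0.0.0.0 up")  # bridge ports need no IP address
    sh("ifconfig eth2 0.0.0.0 up")
    sh("ifconfig br0 up")           # frames now forward via the bridge path

Tearing it down afterwards (ifconfig br0 down; brctl delbr br0) puts the
ports back so the same pair can be re-tested in routed mode.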

For reference, I was able to bridge only about 200kpps (in each
direction, 64-byte packets) on a P-IV 3GHz system with dual Intel e1000
NICs on a PCI-X 64/133 bus....
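
In case anyone wants to reproduce a number like that, one rough way to
read pps off the box is to sample the per-interface packet counters in
/proc/net/dev twice and divide by the interval. A quick Python sketch
(the interface name and the 10-second window are assumptions, not a
description of my actual setup):

    import time

    def packet_counters(ifname):
        # /proc/net/dev columns after "iface:" are the rx stats
        # (bytes, packets, ...) followed by the tx stats.
        with open("/proc/net/dev") as f:
            for line in f:
                if line.strip().startswith(ifname + ":"):
                    fields = line.split(":", 1)[1].split()
                    # field 1 = rx packets, field 9 = tx packets
                    return int(fields[1]), int(fields[9])
        raise ValueError("interface %s not found" % ifname)

    def pps(ifname, interval=10.0):
        # Two samples of the counters, interval seconds apart.
        rx0, tx0 = packet_counters(ifname)
        time.sleep(interval)
        rx1, tx1 = packet_counters(ifname)
        return (rx1 - rx0) / interval, (tx1 - tx0) / interval

    rx_pps, tx_pps = pps("eth1")
    print("eth1: %.0f rx pps / %.0f tx pps" % (rx_pps, tx_pps))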

I would like to hear about any other bridging benchmarks people may
have, especially for bi-directional traffic flows.

Thanks,
Ben

--
Ben Greear <greearb@xxxxxxxxxxxxxxx>
Candela Technologies Inc  http://www.candelatech.com

