Have you ever tried sending one Ethernet packet every 78 microseconds? If not, what would you expect to happen? Actually, I did that kind of experiment (and many others) last year in my graduation thesis “Development of a Scalable and Distributed System for Precise Performance Analysis of Communication Networks”, which is now published. For the thesis I developed a system called the Lightweight Universal Network Analyzer (LUNA), which can generate packets at precise times and record their arrival times, among other things. When I tested it on different hardware, I got some surprising results, as you can see in the figure below.
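To give a rough idea of what such a packet source has to do, here is a minimal sketch in Python. It is not LUNA's actual code; the function name, destination address, and payload size are made up for illustration. It paces UDP packets by busy-waiting on a high-resolution clock, because at 78 µs intervals, ordinary sleep calls are far too coarse:

```python
import socket
import time

def send_paced(dst=("127.0.0.1", 9), count=100, interval_ns=78_000):
    """Hypothetical sketch: send `count` UDP packets to `dst`,
    one every `interval_ns` nanoseconds, and return the send
    timestamps (ns) for later analysis."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    payload = b"\x00" * 64
    send_times = []
    next_deadline = time.perf_counter_ns()
    for _ in range(count):
        # Busy-wait until the deadline; time.sleep() typically has
        # far worse resolution than the 78 us we are aiming for.
        while time.perf_counter_ns() < next_deadline:
            pass
        sock.sendto(payload, dst)
        send_times.append(time.perf_counter_ns())
        next_deadline += interval_ns
    sock.close()
    return send_times
```

Even a loop like this only controls when the packet is handed to the kernel; as the measurements below show, what the NIC then does with it is another story.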
The diagram shows packet inter-arrival times (IAT) on the x-axis. I had configured the packet source to send a packet every 78 microseconds, and the IAT measurement shows at which intervals they actually arrived. The y-axis shows how frequently a certain IAT occurred; note that it has a logarithmic scale. The differently colored curves are from different measurements:
- The measurement for the red curve was done between two hosts equipped with Realtek RTL8111/8168B Gigabit Ethernet controllers,
- the cyan one between two hosts with Intel Gigabit Ethernet controllers (82567LF and 82579LM, to be precise),
- and the dark blue one via the loopback interface on one of the hosts for reference.
The hosts were sufficiently similar in processing power (for details, see chapter 8 of the thesis).
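The curves in the figure boil down to a simple computation over the recorded arrival timestamps: take consecutive differences, then bin them into a histogram. A minimal sketch (again, not LUNA's actual implementation; the 2 µs bin width is an arbitrary choice for illustration):

```python
from collections import Counter

def iat_histogram(arrival_ns, bin_us=2):
    """Sketch: turn a list of arrival timestamps (in nanoseconds)
    into a histogram of inter-arrival times, binned to `bin_us`
    microseconds -- the quantity plotted on the figure's x-axis."""
    iats_us = [(b - a) / 1000 for a, b in zip(arrival_ns, arrival_ns[1:])]
    bins = Counter(round(t / bin_us) * bin_us for t in iats_us)
    return dict(sorted(bins.items()))
```

Plotting the bin counts on a logarithmic y-axis then makes both the main peak and the rare outliers visible at once.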
The loopback measurement looks as expected, with a strong peak at 78 µs IAT and the remaining packets distributed closely around it. In both measurements with real hardware, some packets were transmitted in rapid succession, probably after some of them had been stalled. The really interesting thing, however, is the different behavior at and above the intended IAT. The measurement with Intel hardware produced a peak around 78 µs, although a much wider one than on loopback. With the Realtek cards, almost no packets arrived at the intended interval; instead, there is a very wide peak around approximately 250 µs. All three measurements showed an average IAT of 77 µs, though.
If you now think that the Intel hardware followed the timing pattern created by the software much better, well, it’s not that simple. Yes, the distribution looks more like the one I wanted, but the maximum deviation from the intended inter-arrival time was actually much larger. For the red curve, representing the measurement with Realtek hardware, the rightmost data point in the graph (328 µs IAT) is indeed the maximum deviation. The largest IAT recorded in the Intel measurement, however, was 1922 µs. These outliers are not shown in the figure because otherwise the peaks would be very difficult to distinguish. You can find the detailed numbers in Table 8.8 of the thesis.
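This mismatch between identical averages and very different maxima is easy to reproduce in miniature. A hypothetical example with made-up numbers (these are not the measured values, just two series constructed to have the same mean):

```python
def iat_summary(iats_us):
    """Why an average of ~77 us can hide very different behavior:
    the mean says nothing about the width or the tail of the
    distribution."""
    return {
        "mean": sum(iats_us) / len(iats_us),
        "min": min(iats_us),
        "max": max(iats_us),
    }

steady = [77, 77, 77, 77]        # evenly paced
bursty = [10, 10, 10, 278]       # a burst followed by a long gap
```

Both series average exactly 77 µs, yet their maximum deviations differ enormously, which is why the thesis reports the tail behavior separately from the mean.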
The hardware for this experiment was essentially just what was available at the lab. 😉 Nonetheless, the results show that networking hardware can have an impressive impact on the timing behavior of packet transmissions. I’d really like to see similar studies on other devices! It would also be interesting to find out to what extent the difference is caused by the hardware itself, and what influence the hardware drivers have on the results.