eflatency: Optionally echo the packet in the pong reply and support VLAN tags #238
Conversation
jfeather-amd left a comment
Thanks for the contribution, @osresearch!
This review just covers things I found by inspection, and I plan on doing some testing in the coming days, but overall I quite like these changes! I do have one concern regarding performance, which is perhaps a non-issue; I will confer with other members of the team about it. There are also quite a few style nit-picks; please feel free to disregard these if you want, as I can apply them in the merging process if you would rather focus on the code itself.
This certainly makes it a lot easier to parse what's going on! The previous dance of incrementing vi->i early if we're returning, rather than continuing to process the remaining events, was quite obtuse.
src/tests/ef_vi/eflatency.c
I wonder if, when validating, it's worth checking the whole packet using memcmp() for example, or whether, for more detail about the first octet that differs, a custom loop would suffice.
The first version of the patch (as you noted above) only set the first byte... I can add a loop to check the rest of them.
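The loop discussed here could look something like the following minimal sketch. The helper name `first_mismatch` is hypothetical, not eflatency code; it reports the offset of the first differing octet, which a plain `memcmp()` would not:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical validation helper: compare the received payload against the
 * expected pattern and return the offset of the first differing octet, or
 * -1 if the buffers match.  A plain memcmp() would detect a mismatch, but
 * a manual loop lets us report exactly where the corruption starts. */
static long first_mismatch(const uint8_t* got, const uint8_t* want, size_t len)
{
  for( size_t i = 0; i < len; ++i )
    if( got[i] != want[i] )
      return (long) i;
  return -1;
}
```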
osresearch commented on Aug 8, 2024
Thanks for the feedback on the patch. I'll make the style corrections and push an updated version.
Force-pushed from bf7d196 to 732e00f
jfeather-amd left a comment
Thanks for addressing my comments so quickly! I'm still looking at some tests for this, and have kicked off a test run to go overnight.
Ah, good question! Because handle_rx_ref() calls efct_vi_rxpkt_release(), our app's reference to it has been released. The data may still be valid (e.g., if something else still holds a reference to it), but I don't believe it's safe to use at this point.
Indeed, checking my own knowledge here against the user guide:
Once released, the packet identifier and any pointers to the packet data must be considered invalid.
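The rule quoted from the user guide implies the pattern sketched below: copy anything the application needs out of the packet while the reference is still held, then never touch the pointer after release. The helper name and buffer layout here are hypothetical, not eflatency's actual code:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical sketch of the safe pattern: copy whatever the application
 * needs out of the packet while it still holds a reference, because after
 * efct_vi_rxpkt_release() (called inside handle_rx_ref()) any pointer
 * into the packet data must be considered invalid. */
static void copy_payload_before_release(uint8_t* dst, const uint8_t* pkt,
                                        size_t len)
{
  memcpy(dst, pkt, len);  /* reference still held: pointer is valid */
  /* ...now release the reference; never dereference `pkt` after this. */
}
```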
The memset() and checksum_udp_pkt() calls are outside of the timing loop for the ping process
This looks like it's only half true, although I hadn't noticed the nuance before! We call gettimeofday(&start, NULL); above the loop, and internally (per iteration) call uint64_t start = ci_frc64_get();, so I would expect the "full" measurement to see an increase, but perhaps the per-iteration one won't. I would definitely want to verify this behaviour before accepting this, though, as numbers changing for whatever reason can be quite a nasty surprise to end users!
After you pointed out that the memory operations were outside of the timing loop, I spotted that this bit of code isn't. I would be interested to see if your 150ns time increase changes if this is moved after uint64_t stop = ci_frc64_get();
jfeather-amd commented on Aug 14, 2024
Hi @osresearch, sorry for the delay in getting back to you on this! I just finished looking into performance testing this patch and found that there seems to be a significant enough regression that I am hesitant to merge this in its current state. I would like to think for a while longer about how to progress this PR, as I do think this would be a nice change to have! Some options to consider are:
- Making a similar change to a different ef_vi test app
- Locking this behind a build option
- Duplicating eflatency to have eflatency_memops (for example) which incorporates this change
That said, I haven't thought about it for long enough to decide which of these would be most appropriate.
osresearch commented on Aug 14, 2024
Thanks for doing the performance testing on the patch, @jfeather-amd. Can you describe where the slowdowns seem to be? In the non-VLAN, non-echo, non-validating case (the default), my latency deltas were in the noise on the X2 and X3 cards, so I'm very curious about your methodology so that I can replicate the results in my future testing.
I've re-run tests on the X3 cards with better isolation and with the eflatency task pinned to a single CPU; the results show no change in the min, 50%, 95% and 99% numbers, although there is an unexpected increase in the mean of about 50ns. This is caused by the unconditional memset() and checksum_udp_pkt() on the send side, even though these occur outside of the ci_frc64_get() timing window, which I had assumed meant they would not affect the results. Adding if(cfg_validating)... around the packet rewriting removes this effect.
However, this performance regression appears to be an issue with the way mean is computed -- it is the total time for all packets (delta between the two gettimeofday() calls), not the mean of the measured times (rdtsc ticks). I wonder if the mean should be computed as the average of the actual times instead. It is unexpected to me that the first column of results doesn't match the data used for the other columns. I've submitted #240 to compute the mean from the timings array instead of the wall clock time.
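The alternative being proposed could be sketched as follows (a hypothetical helper, not the code from #240): compute the mean from the per-iteration samples, the same data behind the min/percentile columns, instead of dividing the wall-clock delta by the iteration count, so that work done outside the timed span no longer inflates it.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Mean of the per-iteration timing samples.  Returns 0.0 for an empty
 * array rather than dividing by zero. */
static double mean_of_samples(const uint64_t* samples, size_t n)
{
  uint64_t total = 0;
  for( size_t i = 0; i < n; ++i )
    total += samples[i];
  return n ? (double) total / (double) n : 0.0;
}
```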
ivatet-amd commented on Dec 2, 2024
Hello @osresearch, sorry for not replying sooner. We are busy!
One of the purposes we use eflatency for is performance comparisons between various OpenOnload versions on different hardware configurations. If we change the methodology behind eflatency reports, we won't be able to compare reports from released OpenOnload versions without the change against newer ones with it ("comparing apples and oranges"). It might not be a technical problem, but it is still significant work to update the ecosystem to accommodate this change gracefully without confusing users (e.g. "Why did the performance unexpectedly become better/worse?"), and we are not motivated to do it now.
Would you like to contact our customer support team (and sales team) so we can better understand your motivation and use cases and figure out what we can do about them? You would need to email support-nic@amd.com.
This patch adds the option of having the `pong` node copy the contents of the `ping` message into the reply, which adds a little more realism to the `eflatency` test since it requires the receiver to read the contents of the message, not just receive the notice that a message has arrived.

Additionally, it adds the option of 802.1Q VLAN tagging for `eflatency` tests that traverse switches, making it possible to benchmark those switches as well.

It also cleans up a bit of the logic, removing some magic sizes by using `sizeof()` on the various ethernet headers.
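For context, the 802.1Q option adds a 4-byte tag (TPID 0x8100 plus a 16-bit TCI carrying the priority and VLAN ID) between the source MAC and the EtherType. The struct layouts below are illustrative, not eflatency's actual definitions, and also show how `sizeof()` can replace magic header sizes:

```c
#include <assert.h>
#include <stdint.h>

/* Untagged ethernet header: 14 bytes on the wire. */
struct eth_hdr {
  uint8_t  dst[6];
  uint8_t  src[6];
  uint16_t ethertype;  /* big-endian on the wire */
} __attribute__((packed));

/* 802.1Q-tagged ethernet header: the 4-byte tag sits between the source
 * MAC and the EtherType, for 18 bytes total. */
struct eth_vlan_hdr {
  uint8_t  dst[6];
  uint8_t  src[6];
  uint16_t tpid;       /* 0x8100 for 802.1Q */
  uint16_t tci;        /* PCP (3 bits) | DEI (1 bit) | VID (12 bits) */
  uint16_t ethertype;
} __attribute__((packed));

/* Pack a priority and VLAN id into a host-order TCI value. */
static uint16_t vlan_tci(unsigned prio, unsigned vid)
{
  return (uint16_t) (((prio & 0x7u) << 13) | (vid & 0xfffu));
}
```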