D16626 expanded on these changes and was recently committed. I believe we can close this one out.
Aug 20 2018
Jul 21 2018
Jun 7 2018
This has been tested under moderate load in our lab @ LLNW over a few days with no stability issues. We haven't gotten to more in-depth profiling at higher loads, but I also saw no performance concerns.
Jan 25 2018
If it could be useful to others, might you commit it?
Oct 3 2017
Sep 26 2017
I believe we need to update cddl/lib/libdtrace/tcp.d with this change also:
Jun 21 2017
Assuming this has been used locally with NCQ TRIM for a while or it's been confirmed with imp's suggestion, we should get this in.
Jun 1 2017
May 31 2017
It may be more proper for this to go into tflag, but the downside there is that it then won't apply to existing sessions, and we consume more space (though the cost is low with the current padding). Open to any opinions; locally we just have it hard-coded on in any case.
Feb 14 2017
Feb 2 2017
This will be reworked properly in the future; abandoning for now. Thank you for the follow-up.
Jan 17 2017
Oct 11 2016
Sorry for the delay; this had dropped off my radar until sbruno prodded me about it. Thanks for taking the time to review. Looking over the goto outs in krpc_call() reveals the possible double-frees you mentioned.
Allow krpc_call() to do all freeing in out: to avoid possible double-frees
Aug 4 2016
Jun 1 2016
The 4096-byte physical sector size reported by the hardware is properly being keyed off of (r228846), and we are properly setting a stripesize of 4096 based on it. The quirk is not needed for this drive.
Relevant output from drives:
May 17 2016
We have been running the stable/10 iteration of this patch at Limelight on a number of boxes over the past few weeks. We have had no ill effects, and these machines have not hit the "process stuck in vodead" bug. We don't have a way to reliably reproduce it, and we hit it fairly rarely, so this isn't conclusive. Since this review has been fairly quiet, I did want to give some positive feedback though. Thanks!
May 16 2016
When someone has a moment, could you take another look and see if this is good after the updates?
May 4 2016
I've moved all functionality to after the declaration and updated the comment about the return in tcp_hc_getmtu().
@rwatson yes, I just figure it's a near-free feature should you be using jails to do various testing. It has been tested with both v4 and v6 traffic.
The default needs to maintain the current behavior.
Mar 29 2016
We don't have official word from Samsung, but light research suggests all MZ7 drives are recent SSDs.
Mar 23 2016
Drop extraneous E
Mar 22 2016
Jan 26 2016
All of the current VNET tooling expects compile-time constants; it appears this is what may have been holding back the simple transition of everything in tcp_timer.h based on hz. Talking with @bz, this commit is good to go, and we'll circle back and virtualize this and the other timers with a revised method.
Jan 22 2016
In D5024#106674, @bz wrote:

> These should be per-VNET changeable really; I see no harm at least why they shouldn't be?
Jan 21 2016
Nov 13 2015
Oct 30 2015
In D4039#84487, @hiren wrote:

> Is there a way we can unify *_process_limit and *_processing_limit? I see both being used and that is confusing. I am not sure if it's just the code and if the user is presented with only one, but in any case we should choose one and stick with it. BTW, this can be done as a followup/separate commit.
This change looks okay to me.
Oct 15 2015
It's worth noting that you could set the initcwnd via the net.inet.tcp.slowstart_flightsize sysctl from r50673, 16 years ago, until it was removed in r226447. 9.2 was the first release without it, and we used it heavily at Limelight during its tenure. Its long-lived existence, and its removal not receiving much notice, should at least ease concerns a bit about users blowing things up.
Sep 24 2015
Sep 23 2015
Clean up the now-unused old process_limits; we use the ones in the adapter struct for all limits in the rx/tx task queues.
Aug 31 2015
This tests well and resolves the proposed issue for me on head.
Jun 30 2015
Order SCRIPTS alphabetically.
Jun 10 2015
So far this is looking solid for us, both with defaults and with lowered keepalives on the same traffic patterns that caused the cores prior. Running with net.inet.tcp.per_cpu_timers=1.
Jun 6 2015
It sounds like you folks are well into this, but FWIW we at Limelight are seeing the same, and we would be happy to sacrifice some machines to the debugging altar should it be needed. Everything has been TIME_WAIT so far, but I've only analyzed a handful of cores. The rest of the CB is still populated.