Use ADDR_IS_KERNEL.
Jul 16 2025
I also have an upcoming change in this area. AMD Ryzen processors have long supported a subset of the invpcid instruction’s functionality, even though they don’t support PCID. Specifically, they support the functionality to invalidate PG_G mappings, and not surprisingly this is supposed to be faster than toggling the PGE bit in CR4.
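Since the trade-off here is between two flush mechanisms, a rough sketch of both paths follows. The wrapper function is hypothetical; invpcid(), struct invpcid_descr, INVPCID_CTXGLOB, rcr4(), load_cr4(), and CR4_PGE are, to the best of my knowledge, the existing amd64 kernel names, with the header locations recalled from memory.

#include <sys/param.h>
#include <sys/systm.h>

#include <machine/cpufunc.h>
#include <machine/specialreg.h>

/*
 * Hypothetical helper: invalidate all global (PG_G) TLB entries, either
 * with an INVPCID "all contexts, including globals" invalidation or by
 * the classic CR4.PGE toggle, which flushes the entire TLB.
 */
static void
flush_global_mappings(int have_invpcid)
{
	struct invpcid_descr d;
	uint64_t cr4;

	if (have_invpcid) {
		/* Zero the descriptor, matching how the kernel issues this type. */
		bzero(&d, sizeof(d));
		invpcid(&d, INVPCID_CTXGLOB);
	} else {
		/* Clearing and re-setting CR4.PGE flushes all TLB entries. */
		cr4 = rcr4();
		load_cr4(cr4 & ~CR4_PGE);
		load_cr4(cr4 | CR4_PGE);
	}
}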
Jul 15 2025
This can be abandoned.
Jul 14 2025
Jul 13 2025
Jul 12 2025
In D51220#1170373, @markj wrote:
Do you have some local modifications to test the ADDR_IS_KERNEL(va) code path in pmap_enter_l2()?
See inline comment for an explanation.
Jul 11 2025
Jul 10 2025
Jul 9 2025
Rename remove_pt to demote_kl2e to better reflect what it controls.
Jul 7 2025
Jul 6 2025
In D51180#1168328, @kib wrote:
Should the pmap_demote_pde() in pmap_unwire() get the same treatment? As I understand it, wiring should have saved the pt page in the radix trie.
In D51179#1168325, @alc wrote:
This lookup originated here:

commit 87b646630c4892e21446cd096bea6bcaecea33ac
Author: Mark Johnston <markj@FreeBSD.org>
Date:   Mon Nov 15 11:35:44 2021 -0500

    vm_page: Consolidate page busy sleep mechanisms

    - Modify vm_page_busy_sleep() and vm_page_busy_sleep_unlocked() to
      take a VM_ALLOC_* flag indicating whether to sleep on shared-busy,
      and fix up callers.
    - Modify vm_page_busy_sleep() to return a status indicating whether
      the object lock was dropped, and fix up callers.
    - Convert callers of vm_page_sleep_if_busy() to use
      vm_page_busy_sleep() instead.
    - Remove vm_page_sleep_if_(x)busy().

    No functional change intended.

    Obtained from:	jeff (object_concurrency patches)
    Reviewed by:	kib
    MFC after:	2 weeks
    Differential Revision:	https://reviews.freebsd.org/D32947
Jul 5 2025
Rebase. Add lockp KASSERT.
I am rather concerned that the pathological case of having to walk up to the root and then back down will be commonplace. For example, consider a memory-mapped file that is read sequentially. The first access, when the file is not yet memory resident, will leave the cursor at the end. Subsequent accesses will then have to walk all the way up, and all the way down, to get to the first page.
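To make the cost concrete, here is a small standalone toy (not the pctrie code) that models how many node visits a leaf cursor needs to move between two keys in a radix tree, assuming a hypothetical WIDTH-bit span per level.

#include <stdio.h>
#include <stdint.h>

#define	WIDTH	4	/* hypothetical: bits of the key consumed per tree level */

/*
 * Number of node visits needed to move a leaf cursor from key "cur" to
 * key "next": walk up to the lowest common ancestor, then back down.
 */
static int
cursor_walk_steps(uint64_t cur, uint64_t next)
{
	uint64_t diff = cur ^ next;
	int levels = 0;

	/*
	 * The lowest common ancestor sits just above the most significant
	 * WIDTH-bit digit in which the two keys differ.
	 */
	while (diff != 0) {
		levels++;
		diff >>= WIDTH;
	}
	return (2 * levels);	/* "levels" up plus "levels" down */
}

int
main(void)
{
	/* Nearby pages are cheap to reach from the cursor... */
	printf("page 7 -> page 8:    %d steps\n", cursor_walk_steps(7, 8));
	/*
	 * ...but after a sequential scan leaves the cursor at the end of a
	 * large file, returning to the first page walks nearly the full
	 * height of the tree twice.
	 */
	printf("last page -> page 0: %d steps\n",
	    cursor_walk_steps(0xffffful, 0));
	return (0);
}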
Jul 4 2025
Jul 3 2025
Jul 1 2025
In D51093#1166331, @kib wrote:
It would be useful to provide a reasoning why the setting is safe.
From my understanding, there are (at least) two situations where TCE would be unsafe:
- Recursive pt mapping. But when we modify the kernel page table in a way that changes the paging structure above the lowest level, we also explicitly invalidate the recursive mapping, in pmap_remove_kernel_pde(), pmap_demote_pde(), and pmap_demote_pdpe() (see the sketch below).
- Sharing page table pages, mostly relevant when the sharing occurs at levels other than the leaves of the page table radix tree. We do not do that at all.
Am I missing anything else?
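To make the first bullet concrete, the recursive-mapping invalidation follows the pattern below. This is only a sketch in the style of the amd64 pmap: the wrapper function is hypothetical, while vtopte() and pmap_invalidate_page() are the existing pmap names, and the fragment assumes it lives inside sys/amd64/amd64/pmap.c where those are visible.

/*
 * Sketch only (hypothetical wrapper): after a kernel 2MB mapping is
 * replaced by a page table page, drop any stale translation of that
 * page table page reachable through the recursive mapping.
 */
static void
demote_invalidate_recursive(pmap_t pmap, vm_offset_t va)
{
	/*
	 * vtopte(va) is the address of va's PTE within the recursive
	 * (self-referencing) mapping; invalidating it removes any cached
	 * translation of the page table page itself.
	 */
	pmap_invalidate_page(pmap, (vm_offset_t)vtopte(va));
}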
Jun 30 2025
dougm@ has been running stress on a Ryzen processor for more than 24 hours and has seen no ill effects.
In D51091#1166112, @kib wrote:
Or, do you want me to integrate this into the series of patches for D50970?
Jun 28 2025
Jun 27 2025
Jun 26 2025
Jun 25 2025
Jun 24 2025
Jun 23 2025
In D49442#1163665, @kib wrote:
In D49442#1163641, @alc wrote:
A more direct approach would be to change pmap_demote_pde_locked() to handle wired mappings when the PDE was never accessed:

diff --git a/sys/amd64/amd64/pmap.c b/sys/amd64/amd64/pmap.c
index 6d1c2d70d8c0..97ff9c67e8d5 100644
--- a/sys/amd64/amd64/pmap.c
+++ b/sys/amd64/amd64/pmap.c
@@ -6104,9 +6104,7 @@ pmap_demote_pde_locked(pmap_t pmap, pd_entry_t *pde, vm_offset_t va,
 	 * Invalidate the 2MB page mapping and return "failure" if the
 	 * mapping was never accessed.
 	 */
-	if ((oldpde & PG_A) == 0) {
-		KASSERT((oldpde & PG_W) == 0,
-		    ("pmap_demote_pde: a wired mapping is missing PG_A"));
+	if ((oldpde & (PG_W | PG_A)) == 0) {

Just for my understanding, do you mean

	if ((oldpde & (PG_W | PG_A)) == PG_W) {

?
A more direct approach would be to change pmap_demote_pde_locked() to handle wired mappings when the PDE was never accessed:

diff --git a/sys/amd64/amd64/pmap.c b/sys/amd64/amd64/pmap.c
index 6d1c2d70d8c0..97ff9c67e8d5 100644
--- a/sys/amd64/amd64/pmap.c
+++ b/sys/amd64/amd64/pmap.c
@@ -6104,9 +6104,7 @@ pmap_demote_pde_locked(pmap_t pmap, pd_entry_t *pde, vm_offset_t va,
 	 * Invalidate the 2MB page mapping and return "failure" if the
 	 * mapping was never accessed.
 	 */
-	if ((oldpde & PG_A) == 0) {
-		KASSERT((oldpde & PG_W) == 0,
-		    ("pmap_demote_pde: a wired mapping is missing PG_A"));
+	if ((oldpde & (PG_W | PG_A)) == 0) {
 		pmap_demote_pde_abort(pmap, va, pde, oldpde, lockp);
 		return (false);
 	}
@@ -6164,7 +6162,7 @@ pmap_demote_pde_locked(pmap_t pmap, pd_entry_t *pde, vm_offset_t va,
 	 * have PG_A set in every PTE, then fill it.  The new PTEs will all
 	 * have PG_A set.
 	 */
-	if (!vm_page_all_valid(mpte))
+	if (vm_page_all_valid(mpte) ^ (oldpde & PG_A) != 0)
 		pmap_fill_ptp(firstpte, newpte);
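For reference, the two predicates in the exchange above differ only in which PG_W/PG_A combinations they accept. The standalone snippet below simply tabulates them; the PG_W/PG_A values are placeholders, not the real amd64 definitions, and it makes no claim about which form the patch intends.

#include <stdio.h>
#include <stdint.h>

#define	PG_A	0x1	/* placeholder for the "accessed" bit */
#define	PG_W	0x2	/* placeholder for the "wired" pseudoflag */

int
main(void)
{
	uint64_t oldpde;

	/* Walk all four PG_W/PG_A combinations and evaluate both tests. */
	for (oldpde = 0; oldpde <= (PG_W | PG_A); oldpde++)
		printf("W=%d A=%d:  (pde & (PG_W|PG_A)) == 0 -> %d,"
		    "  == PG_W -> %d\n",
		    (oldpde & PG_W) != 0, (oldpde & PG_A) != 0,
		    (oldpde & (PG_W | PG_A)) == 0,
		    (oldpde & (PG_W | PG_A)) == PG_W);
	return (0);
}

The output shows that "== 0" is true only for an unwired, unaccessed mapping, while "== PG_W" is true only for a wired mapping that was never accessed.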
Jun 20 2025
Introduce VM_ALLOC_COMMON.
Jun 19 2025
Jun 18 2025
Jun 17 2025
In D49442#1161281, @markj wrote:
@alc did you have any thoughts on this patch?
Jun 16 2025
Jun 15 2025
You should add an entry to ObsoleteFiles.inc.
Jun 14 2025
Jun 13 2025
Jun 12 2025
Jun 11 2025
Jun 10 2025
Use busy style synchronization in linux emulation.
Jun 9 2025
Jun 8 2025
Jun 7 2025
In D50515#1157527, @kib wrote:
BTW, did you consider only marking the page for free if it is on the page queue, and doing the real free when processing the batch?
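To illustrate the suggestion, here is a small standalone sketch of the "mark now, free when the batch is processed" idea; every name in it is hypothetical and it is not the code under review in D50515.

#include <stdbool.h>
#include <stdio.h>

#define	BATCH_MAX	8

struct toy_page {
	int	id;
	bool	on_queue;	/* currently on a paging queue */
	bool	defer_free;	/* marked; the actual free is deferred */
};

struct toy_batch {
	struct toy_page	*pages[BATCH_MAX];
	int		 count;
};

static void
toy_free(struct toy_page *p)
{
	printf("page %d freed\n", p->id);
}

/* Mark queued pages and defer their free until the batch is drained. */
static void
batch_mark_free(struct toy_batch *b, struct toy_page *p)
{
	if (p->on_queue && b->count < BATCH_MAX) {
		p->defer_free = true;
		b->pages[b->count++] = p;
	} else
		toy_free(p);	/* not queued (or batch full): free now */
}

/* Drain the batch: dequeue each page and perform the deferred frees. */
static void
batch_process(struct toy_batch *b)
{
	for (int i = 0; i < b->count; i++) {
		struct toy_page *p = b->pages[i];

		p->on_queue = false;		/* stand-in for "dequeue" */
		if (p->defer_free)
			toy_free(p);
	}
	b->count = 0;
}

int
main(void)
{
	struct toy_batch b = { .count = 0 };
	struct toy_page queued = { .id = 1, .on_queue = true };
	struct toy_page unqueued = { .id = 2, .on_queue = false };

	batch_mark_free(&b, &queued);	/* deferred until batch_process() */
	batch_mark_free(&b, &unqueued);	/* freed immediately */
	batch_process(&b);		/* page 1 is freed here */
	return (0);
}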
Jun 6 2025
Jun 5 2025
@kib Do you have any comments?
Should I bump __FreeBSD_version after this change?