Short: The code suffered from an off-by-one bug (an inconsistency between an inclusive and an exclusive end address), which could lead to freeing one page beyond the initialization code. On x64 this freed part of the kernel's import section. Now the aligned/exclusive end address is used consistently.
Long:
* Initialization sections are freed both for boot-loaded images and for drivers that are loaded later. The second mechanism obviously needs to be able to run at any time, so it is not initialization code itself. For some reason, though, someone decided it would be a smart idea to implement the code twice, once for boot-loaded images and once for drivers, and concluded that the former was initialization code itself and had to be freed.
* Since the code that frees the initialization sections cannot free itself while it is running, it uses a "smart trick": it initially skips that range, returns its start and end to the caller, and has the caller free it afterwards.
* The code used the end address inconsistently: sometimes aligned to the start of the following section, sometimes pointing to the last byte that should be freed. The function that freed each chunk assumed the latter (i.e. that the end was included in the range) and thus freed the page containing the end address. The end address of the range returned to the caller was aligned to the start of the next section, so the caller freed a range that included the following page. On x64 this was the start of the import section of ntoskrnl. How that worked on x86 I don't even want to know. (See the sketch below.)
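
To make the two conventions concrete, here is a minimal user-mode sketch (not the actual ReactOS code; PAGE_SIZE, the addresses and the names are illustrative) of how an inclusive-end helper over-frees by one page when handed a section-aligned, exclusive end address:

    #include <stdio.h>
    #include <stdint.h>

    #define PAGE_SIZE 0x1000
    #define PAGE_ALIGN(Va) ((uintptr_t)(Va) & ~(uintptr_t)(PAGE_SIZE - 1))

    /* Stand-in for the per-chunk freeing helper: it treats 'End' as the last
     * byte of the range (inclusive), so it also frees the page containing End. */
    static void FreeRangeInclusive(uintptr_t Start, uintptr_t End)
    {
        uintptr_t Page;
        for (Page = PAGE_ALIGN(Start); Page <= PAGE_ALIGN(End); Page += PAGE_SIZE)
            printf("freeing page 0x%zx\n", (size_t)Page);
    }

    int main(void)
    {
        uintptr_t SectionStart = 0x10000;
        uintptr_t SectionEnd   = 0x13000; /* exclusive: start of the next section */

        /* BUG: passing the section-aligned (exclusive) end to an inclusive-end
         * helper also frees the page at 0x13000, i.e. one page of the following
         * section (the import section of ntoskrnl on x64). */
        FreeRangeInclusive(SectionStart, SectionEnd);

        /* Consistent: with an exclusive end, stop at the last byte inside it. */
        FreeRangeInclusive(SectionStart, SectionEnd - 1);
        return 0;
    }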
Kernel stacks that are freed can be placed on an SLIST for quick reuse. The old code used a member of the PFN of the last stack page as the SLIST_ENTRY. This relies on the following (non-portable) assumptions:
- A stack always has a PTE associated with it.
- This PTE has a PFN associated with it.
- The PFN has an empty field that can be re-used as an SLIST_ENTRY.
- The PFN has another field that points back to the PTE, which then can be used to get the stack base.
Specifically: on x64 the PFN field is not 16-byte aligned, so it cannot be used as an SLIST_ENTRY. (In a "usermode kernel" the other assumptions are invalid as well.)
The new code does what Windows does (and what seems absolutely obvious to do): place the SLIST_ENTRY directly on the stack.
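
A minimal user-mode sketch of the idea, using the Win32 SLIST API (names are illustrative; this is not the ReactOS implementation):

    #include <windows.h>

    /* Free-stack cache; initialized once elsewhere with
     * InitializeSListHead(&FreeStackList). */
    static SLIST_HEADER FreeStackList;

    /* Freeing a stack: overlay the SLIST_ENTRY on the stack memory itself.
     * The allocation is page-aligned, so the 16-byte alignment SLIST_ENTRY
     * requires on x64 always holds, and no PTE or PFN is involved at all. */
    static void FreeStackSketch(void *StackAllocationBase)
    {
        InterlockedPushEntrySList(&FreeStackList,
                                  (PSLIST_ENTRY)StackAllocationBase);
    }

    /* Reusing a stack: the popped entry *is* the stack allocation base, so
     * no PFN-to-PTE back-pointer is needed. NULL means the cache is empty
     * and a fresh stack has to be allocated. */
    static void *AllocateStackSketch(void)
    {
        return InterlockedPopEntrySList(&FreeStackList);
    }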
The size is in bytes, not in pages! On x86 we got away with it, since the PEB and TEB each require only a single page, so the 1 passed to MiInsertVadEx() was aligned up to PAGE_SIZE and happened to be enough. On x64 this doesn't work, since the size is 2 pages.
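
A worked illustration of the rounding that masked the bug on x86 (PAGE_SIZE and the macro are stand-ins, not MiInsertVadEx() internals):

    #include <stdio.h>
    #include <stddef.h>

    #define PAGE_SIZE 0x1000
    #define ROUND_TO_PAGES(Size) \
        ((((size_t)(Size)) + PAGE_SIZE - 1) & ~(size_t)(PAGE_SIZE - 1))

    int main(void)
    {
        /* Passing 1 where bytes are expected still rounds up to one page,
         * which happens to be enough for a single-page structure on x86 ... */
        printf("0x%zx\n", ROUND_TO_PAGES(1));             /* 0x1000: 1 page  */

        /* ... but a two-page structure must be passed as a byte count, or
         * the allocation silently ends up one page short. */
        printf("0x%zx\n", ROUND_TO_PAGES(2 * PAGE_SIZE)); /* 0x2000: 2 pages */
        return 0;
    }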
This means that MmSystemCacheStart, MmSystemCacheEnd and MmSizeOfSystemCacheInPages now have valid values.
The system cache is not used at the moment, though. MmMapViewInSystemCache() is yet to be implemented, and Cc has to be made aware of it.
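
A minimal sketch of how the three values relate (the bounds are placeholders and an exclusive end is assumed for simplicity):

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SHIFT 12

    int main(void)
    {
        /* Placeholder bounds; the real ones come from the kernel's VA layout
         * (see MI_SYSTEM_CACHE_START/END below). */
        uint64_t MmSystemCacheStart = 0xFFFFF98000000000ull;
        uint64_t MmSystemCacheEnd   = 0xFFFFFA8000000000ull;

        uint64_t MmSizeOfSystemCacheInPages =
            (MmSystemCacheEnd - MmSystemCacheStart) >> PAGE_SHIFT;

        printf("%llu pages\n", (unsigned long long)MmSizeOfSystemCacheInPages);
        return 0;
    }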
CORE-14259
- Change MM_SYSTEM_SPACE_START to 0xFFFFF88000000000
- Move MI_DEBUG_MAPPING to the end of the system PTE range
- Add MI_SYSTEM_CACHE_START and MI_SYSTEM_CACHE_END, which is in the range that Vista uses as dynamic VA space for cache and other allocations
- Wrap x86-specific code that makes now-invalid assumptions about the address space layout in #ifdef _M_IX86 (see the sketch below)
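
Roughly, the pattern looks like this; MM_SYSTEM_SPACE_START carries the value from this commit, while the cache bounds shown are placeholders, not the committed values:

    /* New x64 system space base, as set by this commit: */
    #define MM_SYSTEM_SPACE_START 0xFFFFF88000000000ULL

    /* System cache range inside the region Vista treats as dynamic VA space
     * (these two values are placeholders, not the committed ones): */
    #define MI_SYSTEM_CACHE_START 0xFFFFF98000000000ULL
    #define MI_SYSTEM_CACHE_END   0xFFFFFA8000000000ULL

    #ifdef _M_IX86
        /* Only x86 keeps code that assumes the old layout, e.g. code that
         * derives one region's bounds from its (x86-only) neighbor. */
    #endif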
[REACTOS] Misc 64-bit fixes
* [NTOS:MM] Allow MEM_DOS_LIM in NtMapViewOfSection on x64 as well
* [NTOS:MM] Implement x64 version of MmIsDisabledPage
* [HAL] Remove obsolete code
* [NTOS:KE] Fix amd64 version of KeContextToTrapFrame and KeTrapFrameToContext
* [XDK] Fix CONTEXT_XSTATE definition
* [PCNET] Convert physical address types from pointers to PHYSICAL_ADDRESS (see the sketch below)
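
A hypothetical before/after for the PCNET item (struct and field names are illustrative): a physical address stored in a pointer cannot exceed 4 GB on 32-bit builds, and on x64 physical addresses are not valid pointers in the first place.

    #include <stdint.h>

    /* PHYSICAL_ADDRESS is a 64-bit LARGE_INTEGER in the DDK; modeled here so
     * the sketch is self-contained. */
    typedef union _PHYSICAL_ADDRESS_SKETCH
    {
        int64_t QuadPart;
    } PHYSICAL_ADDRESS_SKETCH;

    typedef struct _DMA_BUFFER_SKETCH
    {
        void *VirtualAddress;
        /* before: void *PhysicalAddress; -- pointer-sized, so it truncates
         * physical addresses above 4 GB on 32-bit builds and conflates two
         * unrelated address types on x64 */
        PHYSICAL_ADDRESS_SKETCH PhysicalAddress; /* after: full 64-bit value */
    } DMA_BUFFER_SKETCH;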
Its purpose is to dump the non-paged pool consumption, tag by tag, to allow tracking down a potentially faulting driver in case ReactOS starts running out of memory. The output will look like that of !poolused, even though it doesn't deal with the paged pool.
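
In spirit, the dump boils down to something like this sketch (the struct and all names are hypothetical, not the kernel's actual pool tracker):

    #include <stdio.h>
    #include <stddef.h>

    /* Hypothetical mirror of a per-tag tracking entry; the kernel keeps
     * similar counters for each pool tag. */
    typedef struct _TAG_ENTRY_SKETCH
    {
        unsigned long Tag;           /* four-character tag, e.g. 'None' */
        unsigned long NonPagedAllocs;
        unsigned long NonPagedFrees;
        size_t        NonPagedBytes; /* bytes currently held */
    } TAG_ENTRY_SKETCH;

    /* Walk the table and print one line per tag that still holds non-paged
     * memory, in the spirit of !poolused. */
    static void DumpNonPagedConsumers(const TAG_ENTRY_SKETCH *Table, size_t Count)
    {
        size_t i;
        printf("Tag      Allocs    Frees     Used\n");
        for (i = 0; i < Count; i++)
        {
            const TAG_ENTRY_SKETCH *e = &Table[i];
            if (e->NonPagedBytes == 0)
                continue;
            printf("%c%c%c%c %9lu %9lu %9zu\n",
                   (int)(e->Tag & 0xFF), (int)((e->Tag >> 8) & 0xFF),
                   (int)((e->Tag >> 16) & 0xFF), (int)((e->Tag >> 24) & 0xFF),
                   e->NonPagedAllocs, e->NonPagedFrees, e->NonPagedBytes);
        }
    }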
Thanks to Thomas for his kind review and improvement suggestions.
CORE-14048