- Implement ARM3 page fault handling.
- Paged pool PTEs are demand-zero PTEs until the memory is first accessed -- this is the only type of fault supported.
- Because paged pool PDEs are also demand-paged, added code to handle demand paging of PDEs as well.
- Also, because paged pool is non-resident but can be accessed from any process, we need a mechanism to sync up the kernel's page directory with the per-process one, on demand. This is done at startup, but other processes may have paged in paged pool that the faulting process knows nothing about.
- Similar to the hack ReactOS Mm uses, but done properly.
- This is what the shadow system page directory is finally being used for.
- Assert if we get a user-mode fault, a transition fault, or a soft fault, since these shouldn't happen.
- Disable APCs while dispatching faults, and pseudo-use the working set lock.
- Assert if we get write errors on read-only pages, since we don't use those in ARM3 yet.
- Assert if we have a paged-out PTE; this shouldn't happen yet.
- Enable a test to see if we can touch a paged pool allocation.
svn path=/trunk/; revision=43507
2009-10-15 22:08:26 +00:00
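The on-demand PDE sync described above can be sketched as a toy user-space model. Everything here is illustrative -- the flat arrays, the names, and the entry count are stand-ins, not the real ARM3 structures:

```c
#include <stdint.h>

#define PDE_COUNT 8  /* toy page directory size, illustrative only */

/* The "shadow" system page directory: the canonical copy of kernel PDEs */
static uint32_t SystemPd[PDE_COUNT];

/* On a kernel-space fault in some process: if the per-process PDE is empty
 * but the system copy is valid, pull it in on demand so the access can be
 * retried; if the system copy is also empty, this is a genuine fault. */
static int ResolvePdeFault(uint32_t *ProcessPd, int PdeIndex)
{
    if (ProcessPd[PdeIndex] != 0) return 1;   /* already in sync */
    if (SystemPd[PdeIndex] == 0) return 0;    /* genuinely unmapped */
    ProcessPd[PdeIndex] = SystemPd[PdeIndex]; /* lazy copy from the shadow PD */
    return 1;
}
```

This mirrors the idea that another process may have populated a paged-pool PDE that the faulting process has never seen: the fault handler repairs the per-process directory from the shadow copy instead of failing.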

/*
 * PROJECT:     ReactOS Kernel
 * LICENSE:     BSD - See COPYING.ARM in the top level directory
 * FILE:        ntoskrnl/mm/ARM3/pagfault.c
 * PURPOSE:     ARM Memory Manager Page Fault Handling
 * PROGRAMMERS: ReactOS Portable Systems Group
 */

/* INCLUDES *******************************************************************/

#include <ntoskrnl.h>
#define NDEBUG
#include <debug.h>

#define MODULE_INVOLVED_IN_ARM3
#include <mm/ARM3/miarm.h>

/* GLOBALS ********************************************************************/

#define HYDRA_PROCESS (PEPROCESS)1

#if MI_TRACE_PFNS
BOOLEAN UserPdeFault = FALSE;
#endif

/* PRIVATE FUNCTIONS **********************************************************/

static
NTSTATUS
NTAPI
MiCheckForUserStackOverflow(IN PVOID Address,
                            IN PVOID TrapInformation)
{
    PETHREAD CurrentThread = PsGetCurrentThread();
    PTEB Teb = CurrentThread->Tcb.Teb;
    PVOID StackBase, DeallocationStack, NextStackAddress;
    SIZE_T GuaranteedSize;
    NTSTATUS Status;

    /* Do we own the address space lock? */
    if (CurrentThread->AddressSpaceOwner == 1)
    {
        /* This isn't valid */
        DPRINT1("Process owns address space lock\n");
        ASSERT(KeAreAllApcsDisabled() == TRUE);
        return STATUS_GUARD_PAGE_VIOLATION;
    }

    /* Are we attached? */
    if (KeIsAttachedProcess())
    {
        /* This isn't valid */
        DPRINT1("Process is attached\n");
        return STATUS_GUARD_PAGE_VIOLATION;
    }

    /* Read the current settings */
    StackBase = Teb->NtTib.StackBase;
    DeallocationStack = Teb->DeallocationStack;
    GuaranteedSize = Teb->GuaranteedStackBytes;
    DPRINT("Handling guard page fault with Stacks Addresses 0x%p and 0x%p, guarantee: %lx\n",
           StackBase, DeallocationStack, GuaranteedSize);

    /* Guarantees make this code harder, for now, assume there aren't any */
    ASSERT(GuaranteedSize == 0);

    /* So allocate only the minimum guard page size */
    GuaranteedSize = PAGE_SIZE;

    /* Does this faulting stack address actually exist in the stack? */
    if ((Address >= StackBase) || (Address < DeallocationStack))
    {
        /* That's odd... */
        DPRINT1("Faulting address outside of stack bounds. Address=%p, StackBase=%p, DeallocationStack=%p\n",
                Address, StackBase, DeallocationStack);
        return STATUS_GUARD_PAGE_VIOLATION;
    }

    /* This is where the stack will start now */
    NextStackAddress = (PVOID)((ULONG_PTR)PAGE_ALIGN(Address) - GuaranteedSize);

    /* Do we have at least one page between here and the end of the stack? */
    if (((ULONG_PTR)NextStackAddress - PAGE_SIZE) <= (ULONG_PTR)DeallocationStack)
    {
        /* We don't -- Trying to make this guard page valid now */
        DPRINT1("Close to our death...\n");

        /* Calculate the next memory address */
        NextStackAddress = (PVOID)((ULONG_PTR)PAGE_ALIGN(DeallocationStack) + GuaranteedSize);

        /* Allocate the memory */
        Status = ZwAllocateVirtualMemory(NtCurrentProcess(),
                                         &NextStackAddress,
                                         0,
                                         &GuaranteedSize,
                                         MEM_COMMIT,
                                         PAGE_READWRITE);
        if (NT_SUCCESS(Status))
        {
            /* Success! */
            Teb->NtTib.StackLimit = NextStackAddress;
        }
        else
        {
            DPRINT1("Failed to allocate memory\n");
        }

        return STATUS_STACK_OVERFLOW;
    }

    /* Don't handle this flag yet */
    ASSERT((PsGetCurrentProcess()->Peb->NtGlobalFlag & FLG_DISABLE_STACK_EXTENSION) == 0);

    /* Update the stack limit */
    Teb->NtTib.StackLimit = (PVOID)((ULONG_PTR)NextStackAddress + GuaranteedSize);

    /* Now move the guard page to the next page */
    Status = ZwAllocateVirtualMemory(NtCurrentProcess(),
                                     &NextStackAddress,
                                     0,
                                     &GuaranteedSize,
                                     MEM_COMMIT,
                                     PAGE_READWRITE | PAGE_GUARD);
    if ((NT_SUCCESS(Status)) || (Status == STATUS_ALREADY_COMMITTED))
    {
        /* We did it! */
        DPRINT("Guard page handled successfully for %p\n", Address);
        return STATUS_PAGE_FAULT_GUARD_PAGE;
    }

    /* Fail, we couldn't move the guard page */
    DPRINT1("Guard page failure: %lx\n", Status);
    ASSERT(FALSE);
    return STATUS_STACK_OVERFLOW;
}

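The stack-growth address arithmetic in MiCheckForUserStackOverflow can be checked in isolation. This user-space sketch mirrors the two computations (the page size, the sample addresses, and the `Toy` names are assumptions for illustration):

```c
#include <stdint.h>

#define TOY_PAGE_SIZE 0x1000u
#define TOY_PAGE_ALIGN(p) ((uintptr_t)(p) & ~((uintptr_t)TOY_PAGE_SIZE - 1))

/* Compute where the next committed stack page starts (one page below the
 * faulting guard page) and report whether fewer than one free page would
 * remain above DeallocationStack -- the "close to our death" case, where
 * the handler commits the last page without a new guard. */
static uintptr_t ToyNextStackAddress(uintptr_t FaultAddress,
                                     uintptr_t DeallocationStack,
                                     int *NearDeath)
{
    uintptr_t Next = TOY_PAGE_ALIGN(FaultAddress) - TOY_PAGE_SIZE;
    *NearDeath = (Next - TOY_PAGE_SIZE) <= DeallocationStack;
    return Next;
}
```

Note that the stack grows downward, so "the next page" has a lower address than the faulting one.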
[NTOSKRNL]
Windows / ReactOS uses a software protection field called the protection mask, which is stored inside invalid (software) PTEs to describe the desired protection when a page is made valid by the page fault handler. The mask consists of the values 0-7 specifying the read/write/execute rights, 0 being inaccessible aka MM_ZERO_ACCESS, plus 2 flag-like bits for uncached and writecombine memory respectively. Both flags together don't make sense, so this combination is used to mark guard pages. Since these flags only make sense together with a proper access (i.e. not MM_ZERO_ACCESS), their combination with MM_ZERO_ACCESS was given special meaning: MM_DECOMMIT, which equals MM_GUARDPAGE | MM_ZERO_ACCESS, is for decommitted pages that have not yet been erased to zero; MM_NOACCESS is the mask for pages that are mapped with PAGE_NOACCESS (this makes sure that a software PTE of a committed page is never completely 0, which it could be if MM_ZERO_ACCESS were used); and finally MM_OUTSWAPPED_KSTACK is for outswapped kernel stacks. See also https://www.reactos.org/wiki/Techwiki:Memory_Protection_constants.
The next thing to know is that the number of non-null PTEs is counted for each PDE. Once a page gets committed, a software PTE is written and the reference count is incremented. When the page is made valid by the fault handler, the count is not changed; when the page is decommitted, the MM_DECOMMIT software PTE is written, so the PTE stays non-null and again nothing is changed. Only when the range is cleaned up completely are the PTEs erased and the reference count decremented. Now, our page fault handler failed to validate the access rights encoded in the protection constants. The problem this caused is a major one: since a decommitted page is a software PTE with MM_DECOMMIT as the protection mask (which, remember, has the MM_GUARDPAGE bit set), the fault handler treated faults on decommitted PTEs as faults on guard pages and simply removed the guard page flag, leaving a completely empty PTE behind! So the decommitted page got erased without decrementing the reference count. This led to CORE-7445.
- Add protection flags (MM_GUARDPAGE, MM_WRITECOMBINE, MM_OUTSWAPPED_KSTACK)
- Instead of writing 0 to a PTE, use MI_WRITE_INVALID_PTE with MmZeroPte
- Implement MiIsAccessAllowed that checks for read/write/execute access and use it in MiAccessCheck
- Add some more ASSERTs
CORE-7445 #resolve
svn path=/trunk/; revision=61095
2013-11-25 00:18:33 +00:00
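The encoding described above can be sketched as a compilable fragment. The concrete bit positions below are assumptions chosen for this sketch -- the real constants live in miarm.h and on the Techwiki page linked above:

```c
/* Access field: 3 bits, values 0-7; 0 = MM_ZERO_ACCESS (inaccessible) */
#define MM_ZERO_ACCESS     0u
#define MM_PROTECT_ACCESS  7u

/* Two flag bits for cache behaviour (bit positions assumed for this sketch) */
#define MM_NOCACHE         0x08u
#define MM_WRITECOMBINE    0x10u
#define MM_PROTECT_SPECIAL (MM_NOCACHE | MM_WRITECOMBINE)

/* Both cache flags together make no sense for real memory, so that pattern
 * marks guard pages, and guard-page-with-zero-access means "decommitted". */
#define MM_GUARDPAGE       MM_PROTECT_SPECIAL
#define MM_DECOMMIT        (MM_GUARDPAGE | MM_ZERO_ACCESS)

/* The CORE-7445 trap in one line: a naive guard-page test also matches
 * MM_DECOMMIT, so decommitted PTEs must be recognized before the guard-page
 * path runs (hence the ASSERT against MM_DECOMMIT in MiAccessCheck). */
static int LooksLikeGuardPage(unsigned ProtectionMask)
{
    return (ProtectionMask & MM_PROTECT_SPECIAL) == MM_GUARDPAGE;
}
```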

FORCEINLINE
BOOLEAN
MiIsAccessAllowed(
    _In_ ULONG ProtectionMask,
    _In_ BOOLEAN Write,
    _In_ BOOLEAN Execute)
{
#define _BYTE_MASK(Bit0, Bit1, Bit2, Bit3, Bit4, Bit5, Bit6, Bit7) \
    (Bit0) | ((Bit1) << 1) | ((Bit2) << 2) | ((Bit3) << 3) | \
    ((Bit4) << 4) | ((Bit5) << 5) | ((Bit6) << 6) | ((Bit7) << 7)
    static const UCHAR AccessAllowedMask[2][2] =
    {
        { // Protect 0  1  2  3  4  5  6  7
            _BYTE_MASK(0, 1, 1, 1, 1, 1, 1, 1), // READ
            _BYTE_MASK(0, 0, 1, 1, 0, 0, 1, 1), // EXECUTE READ
        },
        {
            _BYTE_MASK(0, 0, 0, 0, 1, 1, 1, 1), // WRITE
            _BYTE_MASK(0, 0, 0, 0, 0, 0, 1, 1), // EXECUTE WRITE
        }
    };

    /* We want only the lower access bits */
    ProtectionMask &= MM_PROTECT_ACCESS;

    /* Look it up in the table */
    return (AccessAllowedMask[Write != 0][Execute != 0] >> ProtectionMask) & 1;
}
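Since MiIsAccessAllowed is a pure table lookup, its logic can be exercised entirely in user space. This sketch restates the table with the byte values precomputed from the _BYTE_MASK rows above (the `Demo` names are hypothetical; the protection-value meanings in the comments are inferred from the table's row labels):

```c
#define DEMO_PROTECT_ACCESS 7u

/* Row = write?, column = execute?; bit N of each byte answers "is protection
 * value N compatible with this kind of access?" (bit 0 = LSB). */
static const unsigned char DemoAccessAllowedMask[2][2] =
{
    { 0xFE,   /* READ:          _BYTE_MASK(0,1,1,1,1,1,1,1) */
      0xCC }, /* EXECUTE READ:  _BYTE_MASK(0,0,1,1,0,0,1,1) */
    { 0xF0,   /* WRITE:         _BYTE_MASK(0,0,0,0,1,1,1,1) */
      0xC0 }, /* EXECUTE WRITE: _BYTE_MASK(0,0,0,0,0,0,1,1) */
};

static int DemoIsAccessAllowed(unsigned ProtectionMask, int Write, int Execute)
{
    /* Keep only the 3-bit access field, then test the corresponding bit */
    return (DemoAccessAllowedMask[Write != 0][Execute != 0]
            >> (ProtectionMask & DEMO_PROTECT_ACCESS)) & 1;
}
```

Packing each row of the truth table into one byte lets the whole permission check compile down to an index, a shift, and a mask, with no branching.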
|
|
|
|
|
2018-01-02 10:31:37 +00:00
|
|
|
static
|
2013-08-29 07:33:10 +00:00
|
|
|
NTSTATUS
|
|
|
|
NTAPI
|
|
|
|
MiAccessCheck(IN PMMPTE PointerPte,
|
|
|
|
IN BOOLEAN StoreInstruction,
|
|
|
|
IN KPROCESSOR_MODE PreviousMode,
|
[NTOSKRNL]
Windows / ReactOS uses a software protection field called protection mask, which is stored inside invalid (Software) PTEs to provide information about the desired protection, when a page is made valid by the page fault handler. The mask consists of the values 0-7 specifying the read/write/execute rights, 0 being inaccessible aka MM_ZERO_ACCESS, plus 2 flag-like bits, for uncached and writecombine memory respectively. Both flags together don't make sense, so this combination is used to mark guard pages. Since all these flags only make sense when used together with a proper access (i.e. not MM_ZERO_ACCESS), the combination of these flags together with MM_ZERO_ACCESS was given special meaning: MM_DECOMMIT, which equals MM_GUARDPAGE | MM_ZERO_ACCESS is for decommitted pages, that are not yet erased to zero, MM_NOACCESS, which is the mask for pages that are mapped with PAGE_NOACCESS (this is to make sure that a software PTE of a committed page is never completely 0, which it could be, when MM_ZERO_ACCESS was used), and finally MM_OUTSWAPPED_KSTACK for outswapped kernel stacks. See also https://www.reactos.org/wiki/Techwiki:Memory_Protection_constants.
The next thing to know is that the number of PTEs that are not null is counted for each PDE. So once a page gets committed, a software PTE is written and the reference count is incremented. When the page is made valid by the fault handler, the count is not changed, when the page is decommitted, the MM_DECOMMIT software PTE is written and again the PTE stays non-null and nothing is changed. Only when the range is cleaned up totally, the PTEs get erased and the reference count is decremented. Now it happened that our page fault handler missed to validate the access rights of protection constants. The problem that came up with this is a major one: since a decommitted page is a software PTE with MM_DECOMMIT as the protection mask (which we remember has the MM_GUARDPAGE bit set), the fault handler considered faults on decommitted PTEs as faults on guard pages and simply removed the guard page flag, leaving a completely empty PTE behind! So the decommitted page got erased without decrementing the reference count. This lead to CORE-7445.
- Add protection flags (MM_GUARDPAGE, MM_WRITECOMBINE, MM_OUTSWAPPED_KSTACK)
- Instead of writing 0 to a PTE, use MI_WRITE_INVALID_PTE with MmZeroPte
- Implement MiIsAccessAllowed that checks for read/write/execute access and use it in MiAccessCheck
- Add some more ASSERTs
CORE-7445 #resolve
svn path=/trunk/; revision=61095
2013-11-25 00:18:33 +00:00
|
|
|
IN ULONG_PTR ProtectionMask,
|
2013-08-29 07:33:10 +00:00
|
|
|
IN PVOID TrapFrame,
|
|
|
|
IN BOOLEAN LockHeld)
|
|
|
|
{
|
|
|
|
MMPTE TempPte;
|
|
|
|
|
|
|
|
/* Check for invalid user-mode access */
|
|
|
|
if ((PreviousMode == UserMode) && (PointerPte > MiHighestUserPte))
|
|
|
|
{
|
|
|
|
return STATUS_ACCESS_VIOLATION;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Capture the PTE -- is it valid? */
|
|
|
|
TempPte = *PointerPte;
|
|
|
|
if (TempPte.u.Hard.Valid)
|
|
|
|
{
|
|
|
|
/* Was someone trying to write to it? */
|
|
|
|
if (StoreInstruction)
|
|
|
|
{
|
|
|
|
/* Is it writable?*/
|
2015-05-10 19:35:24 +00:00
|
|
|
if (MI_IS_PAGE_WRITEABLE(&TempPte) ||
|
|
|
|
MI_IS_PAGE_COPY_ON_WRITE(&TempPte))
|
2013-08-29 07:33:10 +00:00
|
|
|
{
|
|
|
|
/* Then there's nothing to worry about */
|
|
|
|
return STATUS_SUCCESS;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Oops! This isn't allowed */
|
|
|
|
return STATUS_ACCESS_VIOLATION;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Someone was trying to read from a valid PTE, that's fine too */
|
|
|
|
return STATUS_SUCCESS;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Check if the protection on the page allows what is being attempted */
|
[NTOSKRNL]
Windows / ReactOS uses a software protection field called protection mask, which is stored inside invalid (Software) PTEs to provide information about the desired protection, when a page is made valid by the page fault handler. The mask consists of the values 0-7 specifying the read/write/execute rights, 0 being inaccessible aka MM_ZERO_ACCESS, plus 2 flag-like bits, for uncached and writecombine memory respectively. Both flags together don't make sense, so this combination is used to mark guard pages. Since all these flags only make sense when used together with a proper access (i.e. not MM_ZERO_ACCESS), the combination of these flags together with MM_ZERO_ACCESS was given special meaning: MM_DECOMMIT, which equals MM_GUARDPAGE | MM_ZERO_ACCESS is for decommitted pages, that are not yet erased to zero, MM_NOACCESS, which is the mask for pages that are mapped with PAGE_NOACCESS (this is to make sure that a software PTE of a committed page is never completely 0, which it could be, when MM_ZERO_ACCESS was used), and finally MM_OUTSWAPPED_KSTACK for outswapped kernel stacks. See also https://www.reactos.org/wiki/Techwiki:Memory_Protection_constants.
The next thing to know is that the number of PTEs that are not null is counted for each PDE. So once a page gets committed, a software PTE is written and the reference count is incremented. When the page is made valid by the fault handler, the count is not changed, when the page is decommitted, the MM_DECOMMIT software PTE is written and again the PTE stays non-null and nothing is changed. Only when the range is cleaned up totally, the PTEs get erased and the reference count is decremented. Now it happened that our page fault handler missed to validate the access rights of protection constants. The problem that came up with this is a major one: since a decommitted page is a software PTE with MM_DECOMMIT as the protection mask (which we remember has the MM_GUARDPAGE bit set), the fault handler considered faults on decommitted PTEs as faults on guard pages and simply removed the guard page flag, leaving a completely empty PTE behind! So the decommitted page got erased without decrementing the reference count. This lead to CORE-7445.
- Add protection flags (MM_GUARDPAGE, MM_WRITECOMBINE, MM_OUTSWAPPED_KSTACK)
- Instead of writing 0 to a PTE, use MI_WRITE_INVALID_PTE with MmZeroPte
- Implement MiIsAccessAllowed that checks for read/write/execute access and use it in MiAccessCheck
- Add some more ASSERTs
CORE-7445 #resolve
svn path=/trunk/; revision=61095
2013-11-25 00:18:33 +00:00
|
|
|
if (!MiIsAccessAllowed(ProtectionMask, StoreInstruction, FALSE))
|
2013-08-29 07:33:10 +00:00
|
|
|
{
|
|
|
|
return STATUS_ACCESS_VIOLATION;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Check if this is a guard page */
|
2013-11-27 00:04:26 +00:00
|
|
|
if ((ProtectionMask & MM_PROTECT_SPECIAL) == MM_GUARDPAGE)
|
2013-08-29 07:33:10 +00:00
|
|
|
{
|
2015-09-01 01:45:59 +00:00
|
|
|
ASSERT(ProtectionMask != MM_DECOMMIT);
|
[NTOSKRNL]
Windows / ReactOS uses a software protection field called protection mask, which is stored inside invalid (Software) PTEs to provide information about the desired protection, when a page is made valid by the page fault handler. The mask consists of the values 0-7 specifying the read/write/execute rights, 0 being inaccessible aka MM_ZERO_ACCESS, plus 2 flag-like bits, for uncached and writecombine memory respectively. Both flags together don't make sense, so this combination is used to mark guard pages. Since all these flags only make sense when used together with a proper access (i.e. not MM_ZERO_ACCESS), the combination of these flags together with MM_ZERO_ACCESS was given special meaning: MM_DECOMMIT, which equals MM_GUARDPAGE | MM_ZERO_ACCESS is for decommitted pages, that are not yet erased to zero, MM_NOACCESS, which is the mask for pages that are mapped with PAGE_NOACCESS (this is to make sure that a software PTE of a committed page is never completely 0, which it could be, when MM_ZERO_ACCESS was used), and finally MM_OUTSWAPPED_KSTACK for outswapped kernel stacks. See also https://www.reactos.org/wiki/Techwiki:Memory_Protection_constants.
The next thing to know is that the number of PTEs that are not null is counted for each PDE. So once a page gets committed, a software PTE is written and the reference count is incremented. When the page is made valid by the fault handler, the count is not changed, when the page is decommitted, the MM_DECOMMIT software PTE is written and again the PTE stays non-null and nothing is changed. Only when the range is cleaned up totally, the PTEs get erased and the reference count is decremented. Now it happened that our page fault handler missed to validate the access rights of protection constants. The problem that came up with this is a major one: since a decommitted page is a software PTE with MM_DECOMMIT as the protection mask (which we remember has the MM_GUARDPAGE bit set), the fault handler considered faults on decommitted PTEs as faults on guard pages and simply removed the guard page flag, leaving a completely empty PTE behind! So the decommitted page got erased without decrementing the reference count. This lead to CORE-7445.
- Add protection flags (MM_GUARDPAGE, MM_WRITECOMBINE, MM_OUTSWAPPED_KSTACK)
- Instead of writing 0 to a PTE, use MI_WRITE_INVALID_PTE with MmZeroPte
- Implement MiIsAccessAllowed that checks for read/write/execute access and use it in MiAccessCheck
- Add some more ASSERTs
CORE-7445 #resolve
svn path=/trunk/; revision=61095
2013-11-25 00:18:33 +00:00
|
|
|
|
2013-08-29 07:33:10 +00:00
|
|
|
/* Attached processes can't expand their stack */
|
|
|
|
if (KeIsAttachedProcess()) return STATUS_ACCESS_VIOLATION;
|
|
|
|
|
2016-08-28 19:59:13 +00:00
|
|
|
/* No support for prototype PTEs yet */
|
|
|
|
ASSERT(TempPte.u.Soft.Prototype == 0);
|
2013-08-29 07:33:10 +00:00
|
|
|
|
|
|
|
/* Remove the guard page bit, and return a guard page violation */
|
[NTOSKRNL]
Windows / ReactOS uses a software protection field called protection mask, which is stored inside invalid (Software) PTEs to provide information about the desired protection, when a page is made valid by the page fault handler. The mask consists of the values 0-7 specifying the read/write/execute rights, 0 being inaccessible aka MM_ZERO_ACCESS, plus 2 flag-like bits, for uncached and writecombine memory respectively. Both flags together don't make sense, so this combination is used to mark guard pages. Since all these flags only make sense when used together with a proper access (i.e. not MM_ZERO_ACCESS), the combination of these flags together with MM_ZERO_ACCESS was given special meaning: MM_DECOMMIT, which equals MM_GUARDPAGE | MM_ZERO_ACCESS is for decommitted pages, that are not yet erased to zero, MM_NOACCESS, which is the mask for pages that are mapped with PAGE_NOACCESS (this is to make sure that a software PTE of a committed page is never completely 0, which it could be, when MM_ZERO_ACCESS was used), and finally MM_OUTSWAPPED_KSTACK for outswapped kernel stacks. See also https://www.reactos.org/wiki/Techwiki:Memory_Protection_constants.
The next thing to know is that the number of PTEs that are not null is counted for each PDE. So once a page gets committed, a software PTE is written and the reference count is incremented. When the page is made valid by the fault handler, the count is not changed, when the page is decommitted, the MM_DECOMMIT software PTE is written and again the PTE stays non-null and nothing is changed. Only when the range is cleaned up totally, the PTEs get erased and the reference count is decremented. Now it happened that our page fault handler missed to validate the access rights of protection constants. The problem that came up with this is a major one: since a decommitted page is a software PTE with MM_DECOMMIT as the protection mask (which we remember has the MM_GUARDPAGE bit set), the fault handler considered faults on decommitted PTEs as faults on guard pages and simply removed the guard page flag, leaving a completely empty PTE behind! So the decommitted page got erased without decrementing the reference count. This lead to CORE-7445.
- Add protection flags (MM_GUARDPAGE, MM_WRITECOMBINE, MM_OUTSWAPPED_KSTACK)
- Instead of writing 0 to a PTE, use MI_WRITE_INVALID_PTE with MmZeroPte
- Implement MiIsAccessAllowed that checks for read/write/execute access and use it in MiAccessCheck
- Add some more ASSERTs
CORE-7445 #resolve
svn path=/trunk/; revision=61095
2013-11-25 00:18:33 +00:00
        TempPte.u.Soft.Protection = ProtectionMask & ~MM_GUARDPAGE;
        ASSERT(TempPte.u.Long != 0);
        MI_WRITE_INVALID_PTE(PointerPte, TempPte);
        return STATUS_GUARD_PAGE_VIOLATION;
    }

    /* Nothing to do */
    return STATUS_SUCCESS;
}
static
PMMPTE
NTAPI
MiCheckVirtualAddress(IN PVOID VirtualAddress,
                      OUT PULONG ProtectCode,
                      OUT PMMVAD *ProtoVad)
{
    PMMVAD Vad;
    PMMPTE PointerPte;

    /* No prototype/section support for now */
    *ProtoVad = NULL;

    /* User or kernel fault? */
    if (VirtualAddress <= MM_HIGHEST_USER_ADDRESS)
    {
        /* Special case for shared data */
        if (PAGE_ALIGN(VirtualAddress) == (PVOID)MM_SHARED_USER_DATA_VA)
        {
            /* It's a read-only page */
            *ProtectCode = MM_READONLY;
            return MmSharedUserDataPte;
        }

        /* Find the VAD; it might not exist if the address is bogus */
        Vad = MiLocateAddress(VirtualAddress);
        if (!Vad)
        {
            /* Bogus virtual address */
            *ProtectCode = MM_NOACCESS;
            return NULL;
        }

        /* ReactOS does not handle physical memory VADs yet */
        ASSERT(Vad->u.VadFlags.VadType != VadDevicePhysicalMemory);

        /* Check if it's a section, or just an allocation */
        if (Vad->u.VadFlags.PrivateMemory)
        {
            /* ReactOS does not handle AWE VADs yet */
            ASSERT(Vad->u.VadFlags.VadType != VadAwe);

            /* This must be a TEB/PEB VAD */
            if (Vad->u.VadFlags.MemCommit)
            {
                /* It's committed, so return the VAD protection */
                *ProtectCode = (ULONG)Vad->u.VadFlags.Protection;
            }
            else
            {
                /* It has not yet been committed, so return no access */
                *ProtectCode = MM_NOACCESS;
            }

            /* In both cases, return no PTE */
            return NULL;
        }
        else
        {
            /* ReactOS does not support these VADs yet */
            ASSERT(Vad->u.VadFlags.VadType != VadImageMap);
            ASSERT(Vad->u2.VadFlags2.ExtendableFile == 0);

            /* Return the proto VAD */
            *ProtoVad = Vad;

            /* Get the prototype PTE for this page */
            PointerPte = (((ULONG_PTR)VirtualAddress >> PAGE_SHIFT) - Vad->StartingVpn) + Vad->FirstPrototypePte;
            ASSERT(PointerPte != NULL);
            ASSERT(PointerPte <= Vad->LastContiguousPte);

            /* Return the Prototype PTE and the protection for the page mapping */
[NTOS]: A few key changes to the page fault path:
1) MiCheckVirtualAddress should be called *after* determining if the PTE is a Demand Zero PTE. This is because when memory is allocated with MEM_RESERVE, and then MEM_COMMIT is called later, the VAD does not have the MemCommit flag set to TRUE. As such, MiCheckVirtualAddress returns MM_NOACCESS for the VAD (even though one is found) and the demand zero fault results in an access violation. Double-checked with Windows and this is the right behavior.
2) MiCheckVirtualAddress now supports non-committed reserve VADs (i.e. trying to access MEM_RESERVE memory). It used to ASSERT, now it returns MM_NOACCESS so an access violation is raised. Before change #1, this would also happen if MEM_COMMIT was later performed on the ranges, but this is now fixed.
3) When calling MiResolveDemandZeroFault, we should not make the PDE a demand zero PDE. This is senseless. The whole point is that the PDE does exist, and MiInitializePfn needs it to keep track of the page table allocation. Removed the nonsensical line of code which cleared the PDE during a demand-zero fault.
I am able to boot to 3rd stage with these changes, so I have seen no regressions. Additionally, with these changes, the as-of-yet-uncommitted VAD-based Virtual Memory code completes 1st stage setup successfully, instead of instantly crashing on boot.
svn path=/trunk/; revision=55894
2012-02-27 23:42:22 +00:00
            *ProtectCode = (ULONG)Vad->u.VadFlags.Protection;
            return PointerPte;
        }
    }
    else if (MI_IS_PAGE_TABLE_ADDRESS(VirtualAddress))
    {
        /* This should never happen, as these addresses are handled by the double-mapping */
        if (((PMMPTE)VirtualAddress >= MiAddressToPte(MmPagedPoolStart)) &&
            ((PMMPTE)VirtualAddress <= MmPagedPoolInfo.LastPteForPagedPool))
        {
            /* Fail such access */
            *ProtectCode = MM_NOACCESS;
            return NULL;
        }

        /* Return full access rights */
        *ProtectCode = MM_EXECUTE_READWRITE;
        return NULL;
    }
    else if (MI_IS_SESSION_ADDRESS(VirtualAddress))
    {
        /* ReactOS does not have an image list yet, so bail out to the failure case */
        ASSERT(IsListEmpty(&MmSessionSpace->ImageList));
    }

    /* Default case -- failure */
    *ProtectCode = MM_NOACCESS;
    return NULL;
}
#if (_MI_PAGING_LEVELS == 2)
static
NTSTATUS
FASTCALL
MiCheckPdeForSessionSpace(IN PVOID Address)
{
    MMPTE TempPde;
    PMMPDE PointerPde;
    PVOID SessionAddress;
    ULONG Index;

    /* Is this a session PTE? */
    if (MI_IS_SESSION_PTE(Address))
    {
        /* Make sure the PDE for session space is valid */
        PointerPde = MiAddressToPde(MmSessionSpace);
        if (!PointerPde->u.Hard.Valid)
        {
            /* This means there's no valid session, bail out */
            DbgPrint("MiCheckPdeForSessionSpace: No current session for PTE %p\n",
                     Address);
            DbgBreakPoint();
            return STATUS_ACCESS_VIOLATION;
        }

        /* Now get the session-specific page table for this address */
        SessionAddress = MiPteToAddress(Address);
        PointerPde = MiAddressToPte(Address);
        if (PointerPde->u.Hard.Valid) return STATUS_WAIT_1;

        /* It's not valid, so find it in the page table array */
        Index = ((ULONG_PTR)SessionAddress - (ULONG_PTR)MmSessionBase) >> 22;
        TempPde.u.Long = MmSessionSpace->PageTables[Index].u.Long;
        if (TempPde.u.Hard.Valid)
        {
            /* The copy is valid, so swap it in */
            InterlockedExchange((PLONG)PointerPde, TempPde.u.Long);
            return STATUS_WAIT_1;
        }

        /* We don't seem to have allocated a page table for this address yet? */
        DbgPrint("MiCheckPdeForSessionSpace: No Session PDE for PTE %p, %p\n",
                 PointerPde->u.Long, SessionAddress);
        DbgBreakPoint();
        return STATUS_ACCESS_VIOLATION;
    }

    /* Is the address also a session address? If not, we're done */
    if (!MI_IS_SESSION_ADDRESS(Address)) return STATUS_SUCCESS;

    /* It is, so again get the PDE for session space */
    PointerPde = MiAddressToPde(MmSessionSpace);
    if (!PointerPde->u.Hard.Valid)
    {
        /* This means there's no valid session, bail out */
        DbgPrint("MiCheckPdeForSessionSpace: No current session for VA %p\n",
                 Address);
        DbgBreakPoint();
        return STATUS_ACCESS_VIOLATION;
    }

    /* Now get the PDE for the address itself */
    PointerPde = MiAddressToPde(Address);
    if (!PointerPde->u.Hard.Valid)
    {
        /* Do the swap, we should be good to go */
        Index = ((ULONG_PTR)Address - (ULONG_PTR)MmSessionBase) >> 22;
        PointerPde->u.Long = MmSessionSpace->PageTables[Index].u.Long;
        if (PointerPde->u.Hard.Valid) return STATUS_WAIT_1;

        /* We had not allocated a page table for this session address yet, fail! */
        DbgPrint("MiCheckPdeForSessionSpace: No Session PDE for VA %p, %p\n",
                 PointerPde->u.Long, Address);
        DbgBreakPoint();
        return STATUS_ACCESS_VIOLATION;
    }

    /* It's valid, so there's nothing to do */
    return STATUS_SUCCESS;
}
- Implement ARM3 page fault handling.
- Paged pool PTEs are demand zero PTEs while the memory hasn't been accessed -- this is the only type of fault supported.
- Because paged pool PDEs are also demand-paged, added code to handle demand paging of PDEs as well.
- Also, because paged pool is non-resident, but can be accessed from any process, we need a mechanism to sync up the kernel's page directory with the per-process one, on demand. This is done at startup, but other processes may have paged in paged pool that another process knows nothing about when he faults.
- Similar to the hack ReactOS Mm uses, but done properly.
- This is what that shadow system page directory is finally being used for.
- Assert if we get a user-mode fault, a transition fault, or a soft fault, since these shouldn't happen.
- Disable APCs while dispatching faults, and pseudo-use the working set lock.
- Assert if we get write errors on read-only pages, since we don't use those in ARM3 yet.
- Assert if we have a paged out PTE, this shouldn't happen yet.
- Enable test to see if we can touch a paged pool allocation.
svn path=/trunk/; revision=43507
2009-10-15 22:08:26 +00:00
NTSTATUS
FASTCALL
MiCheckPdeForPagedPool(IN PVOID Address)
{
    PMMPDE PointerPde;
    NTSTATUS Status = STATUS_SUCCESS;

    /* Check session PDE */
    if (MI_IS_SESSION_ADDRESS(Address)) return MiCheckPdeForSessionSpace(Address);
    if (MI_IS_SESSION_PTE(Address)) return MiCheckPdeForSessionSpace(Address);

    //
    // Check if this is a fault while trying to access the page table itself
    //
    if (MI_IS_SYSTEM_PAGE_TABLE_ADDRESS(Address))
    {
        //
        // Send a hint to the page fault handler that this is only a valid fault
        // if we already detected this was access within the page table range
        //
        PointerPde = (PMMPDE)MiAddressToPte(Address);
        Status = STATUS_WAIT_1;
    }
    else if (Address < MmSystemRangeStart)
    {
        //
        // This is totally illegal
        //
        return STATUS_ACCESS_VIOLATION;
    }
    else
    {
        //
        // Get the PDE for the address
        //
        PointerPde = MiAddressToPde(Address);
    }

    //
    // Check if it's not valid
    //
    if (PointerPde->u.Hard.Valid == 0)
    {
        //
        // Copy it from our double-mapped system page directory
        //
        InterlockedExchangePte(PointerPde,
                               MmSystemPagePtes[((ULONG_PTR)PointerPde & (SYSTEM_PD_SIZE - 1)) / sizeof(MMPTE)].u.Long);
    }

    //
    // Return status
    //
    return Status;
}
#else
NTSTATUS
FASTCALL
MiCheckPdeForPagedPool(IN PVOID Address)
{
    return STATUS_ACCESS_VIOLATION;
}
#endif
VOID
NTAPI
MiZeroPfn(IN PFN_NUMBER PageFrameNumber)
{
    PMMPTE ZeroPte;
    MMPTE TempPte;
    PMMPFN Pfn1;
    PVOID ZeroAddress;

    /* Get the PFN for this page */
    Pfn1 = MiGetPfnEntry(PageFrameNumber);
    ASSERT(Pfn1);

    /* Grab a system PTE we can use to zero the page */
    ZeroPte = MiReserveSystemPtes(1, SystemPteSpace);
    ASSERT(ZeroPte);

    /* Initialize the PTE for it */
    TempPte = ValidKernelPte;
    TempPte.u.Hard.PageFrameNumber = PageFrameNumber;

    /* Setup caching */
    if (Pfn1->u3.e1.CacheAttribute == MiWriteCombined)
    {
        /* Write combining, no caching */
        MI_PAGE_DISABLE_CACHE(&TempPte);
        MI_PAGE_WRITE_COMBINED(&TempPte);
    }
    else if (Pfn1->u3.e1.CacheAttribute == MiNonCached)
    {
        /* Write through, no caching */
        MI_PAGE_DISABLE_CACHE(&TempPte);
        MI_PAGE_WRITE_THROUGH(&TempPte);
    }

    /* Make the system PTE valid with our PFN */
    MI_WRITE_VALID_PTE(ZeroPte, TempPte);

    /* Get the address it maps to, and zero it out */
    ZeroAddress = MiPteToAddress(ZeroPte);
    KeZeroPages(ZeroAddress, PAGE_SIZE);

    /* Now get rid of it */
    MiReleaseSystemPtes(ZeroPte, 1, SystemPteSpace);
}
|
|
|
|
|

VOID
NTAPI
MiCopyPfn(
    _In_ PFN_NUMBER DestPage,
    _In_ PFN_NUMBER SrcPage)
{
    PMMPTE SysPtes;
    MMPTE TempPte;
    PMMPFN DestPfn, SrcPfn;
    PVOID DestAddress;
    const VOID* SrcAddress;

    /* Get the PFNs */
    DestPfn = MiGetPfnEntry(DestPage);
    ASSERT(DestPfn);
    SrcPfn = MiGetPfnEntry(SrcPage);
    ASSERT(SrcPfn);

    /* Grab 2 system PTEs */
    SysPtes = MiReserveSystemPtes(2, SystemPteSpace);
    ASSERT(SysPtes);

    /* Initialize the destination PTE */
    TempPte = ValidKernelPte;
    TempPte.u.Hard.PageFrameNumber = DestPage;

    /* Setup caching */
    if (DestPfn->u3.e1.CacheAttribute == MiWriteCombined)
    {
        /* Write combining, no caching */
        MI_PAGE_DISABLE_CACHE(&TempPte);
        MI_PAGE_WRITE_COMBINED(&TempPte);
    }
    else if (DestPfn->u3.e1.CacheAttribute == MiNonCached)
    {
        /* Write through, no caching */
        MI_PAGE_DISABLE_CACHE(&TempPte);
        MI_PAGE_WRITE_THROUGH(&TempPte);
    }

    /* Make the system PTE valid with our PFN */
    MI_WRITE_VALID_PTE(&SysPtes[0], TempPte);

    /* Initialize the source PTE */
    TempPte = ValidKernelPte;
    TempPte.u.Hard.PageFrameNumber = SrcPage;

    /* Setup caching */
    if (SrcPfn->u3.e1.CacheAttribute == MiNonCached)
    {
        MI_PAGE_DISABLE_CACHE(&TempPte);
    }

    /* Make the system PTE valid with our PFN */
    MI_WRITE_VALID_PTE(&SysPtes[1], TempPte);

    /* Get the addresses and perform the copy */
    DestAddress = MiPteToAddress(&SysPtes[0]);
    SrcAddress = MiPteToAddress(&SysPtes[1]);
    RtlCopyMemory(DestAddress, SrcAddress, PAGE_SIZE);

    /* Now get rid of it */
    MiReleaseSystemPtes(SysPtes, 2, SystemPteSpace);
}

static
NTSTATUS
NTAPI
MiResolveDemandZeroFault(IN PVOID Address,
                         IN PMMPTE PointerPte,
                         IN ULONG Protection,
                         IN PEPROCESS Process,
                         IN KIRQL OldIrql)
{
    PFN_NUMBER PageFrameNumber = 0;
    MMPTE TempPte;
    BOOLEAN NeedZero = FALSE, HaveLock = FALSE;
    ULONG Color;
    PMMPFN Pfn1;
    DPRINT("ARM3 Demand Zero Page Fault Handler for address: %p in process: %p\n",
           Address,
           Process);

    /* Must currently only be called by paging path */
    if ((Process > HYDRA_PROCESS) && (OldIrql == MM_NOIRQL))
    {
        /* Sanity check */
        ASSERT(MI_IS_PAGE_TABLE_ADDRESS(PointerPte));

        /* No forking yet */
        ASSERT(Process->ForkInProgress == NULL);

        /* Get process color */
        Color = MI_GET_NEXT_PROCESS_COLOR(Process);
        ASSERT(Color != 0xFFFFFFFF);

        /* We'll need a zero page */
        NeedZero = TRUE;
    }
    else
    {
        /* Check if we need a zero page */
        NeedZero = (OldIrql != MM_NOIRQL);

        /* Session-backed image views must be zeroed */
        if ((Process == HYDRA_PROCESS) &&
            ((MI_IS_SESSION_IMAGE_ADDRESS(Address)) ||
             ((Address >= MiSessionViewStart) && (Address < MiSessionSpaceWs))))
        {
            NeedZero = TRUE;
        }

        /* Hardcode unknown color */
        Color = 0xFFFFFFFF;
    }

    /* Check if the PFN database should be acquired */
    if (OldIrql == MM_NOIRQL)
    {
        /* Acquire it and remember we should release it after */
        OldIrql = MiAcquirePfnLock();
        HaveLock = TRUE;
    }

    /* We either manually locked the PFN DB, or already came with it locked */
    MI_ASSERT_PFN_LOCK_HELD();
    ASSERT(PointerPte->u.Hard.Valid == 0);

    /* Assert we have enough pages */
    ASSERT(MmAvailablePages >= 32);

#if MI_TRACE_PFNS
    if (UserPdeFault) MI_SET_USAGE(MI_USAGE_PAGE_TABLE);
    if (!UserPdeFault) MI_SET_USAGE(MI_USAGE_DEMAND_ZERO);
#endif
    if (Process == HYDRA_PROCESS) MI_SET_PROCESS2("Hydra");
    else if (Process) MI_SET_PROCESS2(Process->ImageFileName);
    else MI_SET_PROCESS2("Kernel Demand 0");

    /* Do we need a zero page? */
    if (Color != 0xFFFFFFFF)
    {
        /* Try to get one, if we couldn't grab a free page and zero it */
        PageFrameNumber = MiRemoveZeroPageSafe(Color);
        if (!PageFrameNumber)
        {
            /* We'll need a free page and zero it manually */
            PageFrameNumber = MiRemoveAnyPage(Color);
            NeedZero = TRUE;
        }
        else
        {
            /* Page guaranteed to be zero-filled */
            NeedZero = FALSE;
        }
    }
    else
    {
        /* Get a color, and see if we should grab a zero or non-zero page */
        Color = MI_GET_NEXT_COLOR();
        if (!NeedZero)
        {
            /* Process or system doesn't want a zero page, grab anything */
            PageFrameNumber = MiRemoveAnyPage(Color);
        }
        else
        {
            /* System wants a zero page, obtain one */
            PageFrameNumber = MiRemoveZeroPage(Color);

            /* No need to zero-fill it */
            NeedZero = FALSE;
        }
    }

    /* Initialize it */
    MiInitializePfn(PageFrameNumber, PointerPte, TRUE);

    /* Increment demand zero faults */
    KeGetCurrentPrcb()->MmDemandZeroCount++;

    /* Do we have the lock? */
    if (HaveLock)
    {
        /* Release it */
        MiReleasePfnLock(OldIrql);

        /* Update performance counters */
        if (Process > HYDRA_PROCESS) Process->NumberOfPrivatePages++;
    }

    /* Zero the page if need be */
    if (NeedZero) MiZeroPfn(PageFrameNumber);

    /* Fault on user PDE, or fault on user PTE? */
    if (PointerPte <= MiHighestUserPte)
    {
        /* User fault, build a user PTE */
        MI_MAKE_HARDWARE_PTE_USER(&TempPte,
                                  PointerPte,
                                  Protection,
                                  PageFrameNumber);
    }
    else
    {
        /* This is a user-mode PDE, create a kernel PTE for it */
        MI_MAKE_HARDWARE_PTE(&TempPte,
                             PointerPte,
                             Protection,
                             PageFrameNumber);
    }

    /* Set it dirty if it's a writable page */
    if (MI_IS_PAGE_WRITEABLE(&TempPte)) MI_MAKE_DIRTY_PAGE(&TempPte);

    /* Write it */
    MI_WRITE_VALID_PTE(PointerPte, TempPte);

    /* Did we manually acquire the lock */
    if (HaveLock)
    {
        /* Get the PFN entry */
        Pfn1 = MI_PFN_ELEMENT(PageFrameNumber);

        /* Windows does these sanity checks */
        ASSERT(Pfn1->u1.Event == 0);
        ASSERT(Pfn1->u3.e1.PrototypePte == 0);
    }

    //
    // It's all good now
    //
    DPRINT("Demand zero page has now been paged in\n");
    return STATUS_PAGE_FAULT_DEMAND_ZERO;
}

static
NTSTATUS
NTAPI
MiCompleteProtoPteFault(IN BOOLEAN StoreInstruction,
                        IN PVOID Address,
                        IN PMMPTE PointerPte,
                        IN PMMPTE PointerProtoPte,
                        IN KIRQL OldIrql,
                        IN PMMPFN* LockedProtoPfn)
{
    MMPTE TempPte;
    PMMPTE OriginalPte, PageTablePte;
    ULONG_PTR Protection;
    PFN_NUMBER PageFrameIndex;
    PMMPFN Pfn1, Pfn2;
    BOOLEAN OriginalProtection, DirtyPage;

    /* Must be called with a valid prototype PTE, with the PFN lock held */
    MI_ASSERT_PFN_LOCK_HELD();
    ASSERT(PointerProtoPte->u.Hard.Valid == 1);

    /* Get the page */
    PageFrameIndex = PFN_FROM_PTE(PointerProtoPte);

    /* Get the PFN entry and set it as a prototype PTE */
    Pfn1 = MiGetPfnEntry(PageFrameIndex);
    Pfn1->u3.e1.PrototypePte = 1;

    /* Increment the share count for the page table */
    PageTablePte = MiAddressToPte(PointerPte);
    Pfn2 = MiGetPfnEntry(PageTablePte->u.Hard.PageFrameNumber);
    Pfn2->u2.ShareCount++;

    /* Check where we should be getting the protection information from */
    if (PointerPte->u.Soft.PageFileHigh == MI_PTE_LOOKUP_NEEDED)
    {
        /* Get the protection from the PTE, there's no real Proto PTE data */
        Protection = PointerPte->u.Soft.Protection;

        /* Remember that we did not use the proto protection */
        OriginalProtection = FALSE;
    }
    else
    {
        /* Get the protection from the original PTE link */
        OriginalPte = &Pfn1->OriginalPte;
        Protection = OriginalPte->u.Soft.Protection;

        /* Remember that we used the original protection */
        OriginalProtection = TRUE;

        /* Check if this was a write on a read only proto */
        if ((StoreInstruction) && !(Protection & MM_READWRITE))
        {
            /* Clear the flag */
            StoreInstruction = 0;
        }
    }

    /* Check if this was a write on a non-COW page */
    DirtyPage = FALSE;
    if ((StoreInstruction) && ((Protection & MM_WRITECOPY) != MM_WRITECOPY))
    {
        /* Then the page should be marked dirty */
        DirtyPage = TRUE;

        /* ReactOS check */
        ASSERT(Pfn1->OriginalPte.u.Soft.Prototype != 0);
    }

    /* Did we get a locked incoming PFN? */
    if (*LockedProtoPfn)
    {
        /* Drop a reference */
        ASSERT((*LockedProtoPfn)->u3.e2.ReferenceCount >= 1);
        MiDereferencePfnAndDropLockCount(*LockedProtoPfn);
        *LockedProtoPfn = NULL;
    }

    /* Release the PFN lock */
    MiReleasePfnLock(OldIrql);

    /* Remove special/caching bits */
    Protection &= ~MM_PROTECT_SPECIAL;

    /* Setup caching */
    if (Pfn1->u3.e1.CacheAttribute == MiWriteCombined)
    {
        /* Write combining, no caching */
        MI_PAGE_DISABLE_CACHE(&TempPte);
        MI_PAGE_WRITE_COMBINED(&TempPte);
    }
    else if (Pfn1->u3.e1.CacheAttribute == MiNonCached)
    {
        /* Write through, no caching */
        MI_PAGE_DISABLE_CACHE(&TempPte);
        MI_PAGE_WRITE_THROUGH(&TempPte);
    }

    /* Check if this is a kernel or user address */
    if (Address < MmSystemRangeStart)
|
|
|
|
{
|
|
|
|
/* Build the user PTE */
|
|
|
|
MI_MAKE_HARDWARE_PTE_USER(&TempPte, PointerPte, Protection, PageFrameIndex);
|
|
|
|
}
|
|
|
|
else
|
|
|
|
{
|
|
|
|
/* Build the kernel PTE */
|
|
|
|
MI_MAKE_HARDWARE_PTE(&TempPte, PointerPte, Protection, PageFrameIndex);
|
|
|
|
}
|
2010-12-26 15:23:03 +00:00
|
|
|
|
2012-07-31 07:11:52 +00:00
|
|
|
/* Set the dirty flag if needed */
|
2015-05-10 19:35:24 +00:00
|
|
|
if (DirtyPage) MI_MAKE_DIRTY_PAGE(&TempPte);
|
2012-07-31 07:11:52 +00:00
|
|
|
|
2010-07-22 20:52:23 +00:00
|
|
|
/* Write the PTE */
|
|
|
|
MI_WRITE_VALID_PTE(PointerPte, TempPte);
|
|
|
|
|
2012-07-31 07:11:52 +00:00
|
|
|
/* Reset the protection if needed */
|
|
|
|
if (OriginalProtection) Protection = MM_ZERO_ACCESS;
|
|
|
|
|
2010-07-22 20:52:23 +00:00
|
|
|
/* Return success */
|
2012-07-31 07:11:52 +00:00
|
|
|
ASSERT(PointerPte == MiAddressToPte(Address));
|
2010-07-22 20:52:23 +00:00
|
|
|
return STATUS_SUCCESS;
|
|
|
|
}
static
NTSTATUS
NTAPI
MiResolvePageFileFault(_In_ BOOLEAN StoreInstruction,
                       _In_ PVOID FaultingAddress,
                       _In_ PMMPTE PointerPte,
                       _In_ PEPROCESS CurrentProcess,
                       _Inout_ KIRQL *OldIrql)
{
    ULONG Color;
    PFN_NUMBER Page;
    NTSTATUS Status;
    MMPTE TempPte = *PointerPte;
    PMMPFN Pfn1;
    ULONG PageFileIndex = TempPte.u.Soft.PageFileLow;
    ULONG_PTR PageFileOffset = TempPte.u.Soft.PageFileHigh;
    ULONG Protection = TempPte.u.Soft.Protection;

    /* Things we don't support yet */
    ASSERT(CurrentProcess > HYDRA_PROCESS);
    ASSERT(*OldIrql != MM_NOIRQL);

    MI_SET_USAGE(MI_USAGE_PAGE_FILE);
    MI_SET_PROCESS(CurrentProcess);

    /* We must hold the PFN lock */
    MI_ASSERT_PFN_LOCK_HELD();

    /* Some sanity checks */
    ASSERT(TempPte.u.Hard.Valid == 0);
    ASSERT(TempPte.u.Soft.PageFileHigh != 0);
    ASSERT(TempPte.u.Soft.PageFileHigh != MI_PTE_LOOKUP_NEEDED);

    /* Get any page, it will be overwritten */
    Color = MI_GET_NEXT_PROCESS_COLOR(CurrentProcess);
    Page = MiRemoveAnyPage(Color);

    /* Initialize this PFN */
    MiInitializePfn(Page, PointerPte, StoreInstruction);

    /* Set the PFN as being in an IO operation */
    Pfn1 = MI_PFN_ELEMENT(Page);
    ASSERT(Pfn1->u1.Event == NULL);
    ASSERT(Pfn1->u3.e1.ReadInProgress == 0);
    ASSERT(Pfn1->u3.e1.WriteInProgress == 0);
    Pfn1->u3.e1.ReadInProgress = 1;

    /* We must write the PTE now, as the PFN lock will be released while performing the IO operation */
    MI_MAKE_TRANSITION_PTE(&TempPte, Page, Protection);
    MI_WRITE_INVALID_PTE(PointerPte, TempPte);

    /* Release the PFN lock while we proceed */
    MiReleasePfnLock(*OldIrql);

    /* Do the paging IO */
    Status = MiReadPageFile(Page, PageFileIndex, PageFileOffset);

    /* Lock the PFN database again */
    *OldIrql = MiAcquirePfnLock();

    /* Nobody should have changed that while we were not looking */
    ASSERT(Pfn1->u3.e1.ReadInProgress == 1);
    ASSERT(Pfn1->u3.e1.WriteInProgress == 0);

    if (!NT_SUCCESS(Status))
    {
        /* Malheur! */
        ASSERT(FALSE);
        Pfn1->u4.InPageError = 1;
        Pfn1->u1.ReadStatus = Status;
    }

    /* And the PTE can finally be valid */
    MI_MAKE_HARDWARE_PTE(&TempPte, PointerPte, Protection, Page);
    MI_WRITE_VALID_PTE(PointerPte, TempPte);

    Pfn1->u3.e1.ReadInProgress = 0;

    /* Did someone start to wait on us while we proceeded? */
    if (Pfn1->u1.Event)
    {
        /* Tell them we're done */
        KeSetEvent(Pfn1->u1.Event, IO_NO_INCREMENT, FALSE);
    }

    return Status;
}
static
NTSTATUS
NTAPI
MiResolveTransitionFault(IN BOOLEAN StoreInstruction,
                         IN PVOID FaultingAddress,
                         IN PMMPTE PointerPte,
                         IN PEPROCESS CurrentProcess,
                         IN KIRQL OldIrql,
                         OUT PKEVENT **InPageBlock)
{
    PFN_NUMBER PageFrameIndex;
    PMMPFN Pfn1;
    MMPTE TempPte;
    PMMPTE PointerToPteForProtoPage;
    DPRINT("Transition fault on 0x%p with PTE 0x%p in process %s\n",
           FaultingAddress, PointerPte, CurrentProcess->ImageFileName);

    /* Windows does this check */
    ASSERT(*InPageBlock == NULL);

    /* ARM3 doesn't support this path */
    ASSERT(OldIrql != MM_NOIRQL);

    /* Capture the PTE and make sure it's in transition format */
    TempPte = *PointerPte;
    ASSERT((TempPte.u.Soft.Valid == 0) &&
           (TempPte.u.Soft.Prototype == 0) &&
           (TempPte.u.Soft.Transition == 1));

    /* Get the PFN and the PFN entry */
    PageFrameIndex = TempPte.u.Trans.PageFrameNumber;
    DPRINT("Transition PFN: %lx\n", PageFrameIndex);
    Pfn1 = MiGetPfnEntry(PageFrameIndex);

    /* One more transition fault! */
    InterlockedIncrement(&KeGetCurrentPrcb()->MmTransitionCount);

    /* This is from ARM3 -- Windows normally handles this here */
    ASSERT(Pfn1->u4.InPageError == 0);

    /* See if we should wait before terminating the fault */
    if ((Pfn1->u3.e1.ReadInProgress == 1)
        || ((Pfn1->u3.e1.WriteInProgress == 1) && StoreInstruction))
    {
        DPRINT1("The page is currently in a page transition!\n");
        *InPageBlock = &Pfn1->u1.Event;
        if (PointerPte == Pfn1->PteAddress)
        {
            DPRINT1("And this is for this particular PTE.\n");
            /* The PTE will be made valid by the thread serving the fault */
            return STATUS_SUCCESS; // FIXME: Maybe something more descriptive
        }
    }

    /* Windows checks that there are free pages and this isn't an in-page error */
    ASSERT(MmAvailablePages > 0);
    ASSERT(Pfn1->u4.InPageError == 0);

    /* ReactOS checks for this */
    ASSERT(MmAvailablePages > 32);

    /* Was this a transition page in the valid list, or free/zero list? */
    if (Pfn1->u3.e1.PageLocation == ActiveAndValid)
    {
        /* All Windows does here is a bunch of sanity checks */
        DPRINT("Transition in active list\n");
        ASSERT((Pfn1->PteAddress >= MiAddressToPte(MmPagedPoolStart)) &&
               (Pfn1->PteAddress <= MiAddressToPte(MmPagedPoolEnd)));
        ASSERT(Pfn1->u2.ShareCount != 0);
        ASSERT(Pfn1->u3.e2.ReferenceCount != 0);
    }
    else
    {
        /* Otherwise, the page is removed from its list */
        DPRINT("Transition page in free/zero list\n");
        MiUnlinkPageFromList(Pfn1);
        MiReferenceUnusedPageAndBumpLockCount(Pfn1);
    }

    /* At this point, there should no longer be any in-page errors */
    ASSERT(Pfn1->u4.InPageError == 0);

    /* Check if this was a PFN with no more share references */
    if (Pfn1->u2.ShareCount == 0) MiDropLockCount(Pfn1);

    /* Bump the share count and make the page valid */
    Pfn1->u2.ShareCount++;
    Pfn1->u3.e1.PageLocation = ActiveAndValid;

    /* Prototype PTEs are in paged pool, which itself might be in transition */
    if (FaultingAddress >= MmSystemRangeStart)
    {
        /* Check if this is a paged pool PTE in transition state */
        PointerToPteForProtoPage = MiAddressToPte(PointerPte);
        TempPte = *PointerToPteForProtoPage;
        if ((TempPte.u.Hard.Valid == 0) && (TempPte.u.Soft.Transition == 1))
        {
            /* This isn't yet supported */
            DPRINT1("Double transition fault not yet supported\n");
            ASSERT(FALSE);
        }
    }

    /* Build the final PTE */
    ASSERT(PointerPte->u.Hard.Valid == 0);
    ASSERT(PointerPte->u.Trans.Prototype == 0);
    ASSERT(PointerPte->u.Trans.Transition == 1);
    TempPte.u.Long = (PointerPte->u.Long & ~0xFFF) |
                     (MmProtectToPteMask[PointerPte->u.Trans.Protection]) |
                     MiDetermineUserGlobalPteMask(PointerPte);

    /* Is the PTE writeable? */
    if ((Pfn1->u3.e1.Modified) &&
        MI_IS_PAGE_WRITEABLE(&TempPte) &&
        !MI_IS_PAGE_COPY_ON_WRITE(&TempPte))
    {
        /* Make it dirty */
        MI_MAKE_DIRTY_PAGE(&TempPte);
    }
    else
    {
        /* Make it clean */
        MI_MAKE_CLEAN_PAGE(&TempPte);
    }

    /* Write the valid PTE */
    MI_WRITE_VALID_PTE(PointerPte, TempPte);

    /* Return success */
    return STATUS_PAGE_FAULT_TRANSITION;
}
static
NTSTATUS
NTAPI
MiResolveProtoPteFault(IN BOOLEAN StoreInstruction,
                       IN PVOID Address,
                       IN PMMPTE PointerPte,
                       IN PMMPTE PointerProtoPte,
                       IN OUT PMMPFN *OutPfn,
                       OUT PVOID *PageFileData,
                       OUT PMMPTE PteValue,
                       IN PEPROCESS Process,
                       IN KIRQL OldIrql,
                       IN PVOID TrapInformation)
{
    MMPTE TempPte, PteContents;
    PMMPFN Pfn1;
    PFN_NUMBER PageFrameIndex;
    NTSTATUS Status;
    PKEVENT* InPageBlock = NULL;
    ULONG Protection;

    /* Must be called with an invalid, prototype PTE, with the PFN lock held */
    MI_ASSERT_PFN_LOCK_HELD();
    ASSERT(PointerPte->u.Hard.Valid == 0);
    ASSERT(PointerPte->u.Soft.Prototype == 1);

    /* Read the prototype PTE and check if it's valid */
    TempPte = *PointerProtoPte;
    if (TempPte.u.Hard.Valid == 1)
    {
        /* One more user of this mapped page */
        PageFrameIndex = PFN_FROM_PTE(&TempPte);
        Pfn1 = MiGetPfnEntry(PageFrameIndex);
        Pfn1->u2.ShareCount++;

        /* Call it a transition */
        InterlockedIncrement(&KeGetCurrentPrcb()->MmTransitionCount);

        /* Complete the prototype PTE fault -- this will release the PFN lock */
        return MiCompleteProtoPteFault(StoreInstruction,
                                       Address,
                                       PointerPte,
                                       PointerProtoPte,
                                       OldIrql,
                                       OutPfn);
    }

    /* Make sure there's some protection mask */
    if (TempPte.u.Long == 0)
    {
        /* Release the lock */
        DPRINT1("Access on reserved section?\n");
        MiReleasePfnLock(OldIrql);
        return STATUS_ACCESS_VIOLATION;
    }

    /* There is no such thing as a decommitted prototype PTE */
    ASSERT(TempPte.u.Long != MmDecommittedPte.u.Long);

    /* Check for access rights on the PTE proper */
    PteContents = *PointerPte;
    if (PteContents.u.Soft.PageFileHigh != MI_PTE_LOOKUP_NEEDED)
    {
        if (!PteContents.u.Proto.ReadOnly)
        {
            Protection = TempPte.u.Soft.Protection;
        }
        else
        {
            Protection = MM_READONLY;
        }
        /* Check for page access in software */
        Status = MiAccessCheck(PointerProtoPte,
                               StoreInstruction,
                               KernelMode,
                               TempPte.u.Soft.Protection,
                               TrapInformation,
                               TRUE);
        ASSERT(Status == STATUS_SUCCESS);
    }
    else
    {
        Protection = PteContents.u.Soft.Protection;
    }

    /* Check for writing to a copy-on-write page */
    if (((Protection & MM_WRITECOPY) == MM_WRITECOPY) && StoreInstruction)
    {
        PFN_NUMBER PageFrameIndex, ProtoPageFrameIndex;
        ULONG Color;

        /* Resolve the proto fault as if it was a read operation */
        Status = MiResolveProtoPteFault(FALSE,
                                        Address,
                                        PointerPte,
                                        PointerProtoPte,
                                        OutPfn,
                                        PageFileData,
                                        PteValue,
                                        Process,
                                        OldIrql,
                                        TrapInformation);

        if (!NT_SUCCESS(Status))
        {
            return Status;
        }

        /* Lock the PFN lock again, MiResolveProtoPteFault unlocked it */
        OldIrql = MiAcquirePfnLock();

        /* And re-read the proto PTE */
        TempPte = *PointerProtoPte;
        ASSERT(TempPte.u.Hard.Valid == 1);
        ProtoPageFrameIndex = PFN_FROM_PTE(&TempPte);

        MI_SET_USAGE(MI_USAGE_COW);
        MI_SET_PROCESS(Process);

        /* Get a new page for the private copy */
        if (Process > HYDRA_PROCESS)
            Color = MI_GET_NEXT_PROCESS_COLOR(Process);
        else
            Color = MI_GET_NEXT_COLOR();

        PageFrameIndex = MiRemoveAnyPage(Color);

        /* Perform the copy */
        MiCopyPfn(PageFrameIndex, ProtoPageFrameIndex);

        /* This will drop everything MiResolveProtoPteFault referenced */
        MiDeletePte(PointerPte, Address, Process, PointerProtoPte);

        /* Because now we use this */
        Pfn1 = MI_PFN_ELEMENT(PageFrameIndex);
        MiInitializePfn(PageFrameIndex, PointerPte, TRUE);

        /* Fix the protection */
        Protection &= ~MM_WRITECOPY;
        Protection |= MM_READWRITE;
        if (Address < MmSystemRangeStart)
        {
            /* Build the user PTE */
            MI_MAKE_HARDWARE_PTE_USER(&PteContents, PointerPte, Protection, PageFrameIndex);
        }
        else
        {
            /* Build the kernel PTE */
            MI_MAKE_HARDWARE_PTE(&PteContents, PointerPte, Protection, PageFrameIndex);
        }

        /* And finally, write the valid PTE */
        MI_WRITE_VALID_PTE(PointerPte, PteContents);

        /* The caller expects us to release the PFN lock */
        MiReleasePfnLock(OldIrql);
        return Status;
    }

    /* Check for clone PTEs */
    if (PointerPte <= MiHighestUserPte) ASSERT(Process->CloneRoot == NULL);

    /* We don't support mapped files yet */
    ASSERT(TempPte.u.Soft.Prototype == 0);

    /* We might however have transition PTEs */
    if (TempPte.u.Soft.Transition == 1)
    {
        /* Resolve the transition fault */
        ASSERT(OldIrql != MM_NOIRQL);
        Status = MiResolveTransitionFault(StoreInstruction,
                                          Address,
                                          PointerProtoPte,
                                          Process,
                                          OldIrql,
                                          &InPageBlock);
        ASSERT(NT_SUCCESS(Status));
    }
    else
    {
        /* We also don't support paged out pages */
        ASSERT(TempPte.u.Soft.PageFileHigh == 0);

        /* Resolve the demand zero fault */
        Status = MiResolveDemandZeroFault(Address,
                                          PointerProtoPte,
                                          (ULONG)TempPte.u.Soft.Protection,
                                          Process,
                                          OldIrql);
#if MI_TRACE_PFNS
        /* Update debug info */
        if (TrapInformation)
            MiGetPfnEntry(PointerProtoPte->u.Hard.PageFrameNumber)->CallSite = (PVOID)((PKTRAP_FRAME)TrapInformation)->Eip;
        else
            MiGetPfnEntry(PointerProtoPte->u.Hard.PageFrameNumber)->CallSite = _ReturnAddress();
#endif

        ASSERT(NT_SUCCESS(Status));
    }

    /* Complete the prototype PTE fault -- this will release the PFN lock */
    ASSERT(PointerPte->u.Hard.Valid == 0);
    return MiCompleteProtoPteFault(StoreInstruction,
                                   Address,
                                   PointerPte,
                                   PointerProtoPte,
                                   OldIrql,
                                   OutPfn);
}
- Implement ARM3 page fault handling.
- Paged pool PTEs are demand zero PTEs while the memory hasn't been accessed -- this is the only type of fault supported.
- Because paged pool PDEs are also demand-paged, added code to handle demand paging of PDEs as well.
- Also, because paged pool is non-resident, but can be accessed from any process, we need a mechanism to sync up the kernel's page directory with the per-process one, on demand. This is done at startup, but other processes may have paged in paged pool that another process knows nothing about when he faults.
- Similar to the hack ReactOS Mm uses, but done properly.
- This is what that shadow system page directory is finally being used for.
- Assert if we get a user-mode fault, a transition fault, or a soft fault, since these shouldn't happen.
- Disable APCs while dispatching faults, and pseudo-use the working set lock.
- Assert if we get write errors on read-only pages, since we don't use those in ARM3 yet.
- Assert if we have a paged out PTE, this shouldn't happen yet.
- Enable test to see if we can touch a paged pool allocation.
svn path=/trunk/; revision=43507
2009-10-15 22:08:26 +00:00
|
|
|
NTSTATUS
|
|
|
|
NTAPI
|
2018-01-01 21:40:43 +00:00
|
|
|
MiDispatchFault(IN ULONG FaultCode,
|
- Implement ARM3 page fault handling.
- Paged pool PTEs are demand zero PTEs while the memory hasn't been accessed -- this is the only type of fault supported.
- Because paged pool PDEs are also demand-paged, added code to handle demand paging of PDEs as well.
- Also, because paged pool is non-resident, but can be accessed from any process, we need a mechanism to sync up the kernel's page directory with the per-process one, on demand. This is done at startup, but other processes may have paged in paged pool that another process knows nothing about when he faults.
- Similar to the hack ReactOS Mm uses, but done properly.
- This is what that shadow system page directory is finally being used for.
- Assert if we get a user-mode fault, a transition fault, or a soft fault, since these shouldn't happen.
- Disable APCs while dispatching faults, and pseudo-use the working set lock.
- Assert if we get write errors on read-only pages, since we don't use those in ARM3 yet.
- Assert if we have a paged out PTE, this shouldn't happen yet.
- Enable test to see if we can touch a paged pool allocation.
svn path=/trunk/; revision=43507
2009-10-15 22:08:26 +00:00
|
|
|
IN PVOID Address,
|
|
|
|
IN PMMPTE PointerPte,
|
2010-07-22 20:52:23 +00:00
|
|
|
IN PMMPTE PointerProtoPte,
|
- Implement ARM3 page fault handling.
- Paged pool PTEs are demand zero PTEs while the memory hasn't been accessed -- this is the only type of fault supported.
- Because paged pool PDEs are also demand-paged, added code to handle demand paging of PDEs as well.
- Also, because paged pool is non-resident, but can be accessed from any process, we need a mechanism to sync up the kernel's page directory with the per-process one, on demand. This is done at startup, but other processes may have paged in paged pool that another process knows nothing about when he faults.
- Similar to the hack ReactOS Mm uses, but done properly.
- This is what that shadow system page directory is finally being used for.
- Assert if we get a user-mode fault, a transition fault, or a soft fault, since these shouldn't happen.
- Disable APCs while dispatching faults, and pseudo-use the working set lock.
- Assert if we get write errors on read-only pages, since we don't use those in ARM3 yet.
- Assert if we have a paged out PTE, this shouldn't happen yet.
- Enable test to see if we can touch a paged pool allocation.
svn path=/trunk/; revision=43507
2009-10-15 22:08:26 +00:00
|
|
|
IN BOOLEAN Recursive,
|
|
|
|
IN PEPROCESS Process,
|
|
|
|
IN PVOID TrapInformation,
|
2012-07-31 07:32:19 +00:00
|
|
|
IN PMMVAD Vad)
|
- Implement ARM3 page fault handling.
- Paged pool PTEs are demand zero PTEs while the memory hasn't been accessed -- this is the only type of fault supported.
- Because paged pool PDEs are also demand-paged, added code to handle demand paging of PDEs as well.
{
    MMPTE TempPte;
    KIRQL OldIrql, LockIrql;
    NTSTATUS Status;
    PMMPTE SuperProtoPte;
    PMMPFN Pfn1, OutPfn = NULL;
    PFN_NUMBER PageFrameIndex;
    PFN_COUNT PteCount, ProcessedPtes;
    DPRINT("ARM3 Page Fault Dispatcher for address: %p in process: %p\n",
           Address,
           Process);

    /* Make sure the addresses are ok */
    ASSERT(PointerPte == MiAddressToPte(Address));

    //
    // Make sure APCs are off and we're not at dispatch
    //
    OldIrql = KeGetCurrentIrql();
    ASSERT(OldIrql <= APC_LEVEL);
    ASSERT(KeAreAllApcsDisabled() == TRUE);

    //
    // Grab a copy of the PTE
    //
    TempPte = *PointerPte;

    /* Do we have a prototype PTE? */
    if (PointerProtoPte)
    {
        /* This should never happen */
        ASSERT(!MI_IS_PHYSICAL_ADDRESS(PointerProtoPte));

        /* Check if this is a kernel-mode address */
        SuperProtoPte = MiAddressToPte(PointerProtoPte);
        if (Address >= MmSystemRangeStart)
        {
            /* Lock the PFN database */
            LockIrql = MiAcquirePfnLock();

            /* Has the PTE been made valid yet? */
            if (!SuperProtoPte->u.Hard.Valid)
            {
                ASSERT(FALSE);
            }
            else if (PointerPte->u.Hard.Valid == 1)
            {
                ASSERT(FALSE);
            }

            /* Resolve the fault -- this will release the PFN lock */
            Status = MiResolveProtoPteFault(!MI_IS_NOT_PRESENT_FAULT(FaultCode),
                                            Address,
                                            PointerPte,
                                            PointerProtoPte,
                                            &OutPfn,
                                            NULL,
                                            NULL,
                                            Process,
                                            LockIrql,
                                            TrapInformation);
            ASSERT(Status == STATUS_SUCCESS);

            /* Complete this as a transition fault */
            ASSERT(OldIrql == KeGetCurrentIrql());
            ASSERT(OldIrql <= APC_LEVEL);
            ASSERT(KeAreAllApcsDisabled() == TRUE);
            return Status;
        }
        else
        {
            /* We only handle the lookup path */
            ASSERT(PointerPte->u.Soft.PageFileHigh == MI_PTE_LOOKUP_NEEDED);

            /* Is there a non-image VAD? */
            if ((Vad) &&
                (Vad->u.VadFlags.VadType != VadImageMap) &&
                !(Vad->u2.VadFlags2.ExtendableFile))
            {
                /* One day, ReactOS will cluster faults */
                ASSERT(Address <= MM_HIGHEST_USER_ADDRESS);
                DPRINT("Should cluster fault, but won't\n");
            }

            /* Only one PTE to handle for now */
            PteCount = 1;
            ProcessedPtes = 0;

            /* Lock the PFN database */
            LockIrql = MiAcquirePfnLock();

            /* We only handle the valid path */
            ASSERT(SuperProtoPte->u.Hard.Valid == 1);

            /* Capture the PTE */
            TempPte = *PointerProtoPte;

            /* Loop to handle future case of clustered faults */
            while (TRUE)
            {
                /* For our current usage, this should be true */
                if (TempPte.u.Hard.Valid == 1)
                {
                    /* Bump the share count on the PTE */
                    PageFrameIndex = PFN_FROM_PTE(&TempPte);
                    Pfn1 = MI_PFN_ELEMENT(PageFrameIndex);
                    Pfn1->u2.ShareCount++;
                }
                else if ((TempPte.u.Soft.Prototype == 0) &&
                         (TempPte.u.Soft.Transition == 1))
                {
                    /* This is a standby page, bring it back from the cache */
                    PageFrameIndex = TempPte.u.Trans.PageFrameNumber;
                    DPRINT("oooh, shiny, a soft fault! 0x%lx\n", PageFrameIndex);
                    Pfn1 = MI_PFN_ELEMENT(PageFrameIndex);
                    ASSERT(Pfn1->u3.e1.PageLocation != ActiveAndValid);

                    /* Should not yet happen in ReactOS */
                    ASSERT(Pfn1->u3.e1.ReadInProgress == 0);
                    ASSERT(Pfn1->u4.InPageError == 0);

                    /* Get the page */
                    MiUnlinkPageFromList(Pfn1);

                    /* Bump its reference count */
                    ASSERT(Pfn1->u2.ShareCount == 0);
                    InterlockedIncrement16((PSHORT)&Pfn1->u3.e2.ReferenceCount);
                    Pfn1->u2.ShareCount++;

                    /* Make it valid again */
                    /* This looks like another macro.... */
                    Pfn1->u3.e1.PageLocation = ActiveAndValid;
                    ASSERT(PointerProtoPte->u.Hard.Valid == 0);
                    ASSERT(PointerProtoPte->u.Trans.Prototype == 0);
                    ASSERT(PointerProtoPte->u.Trans.Transition == 1);
                    TempPte.u.Long = (PointerProtoPte->u.Long & ~0xFFF) |
                                     MmProtectToPteMask[PointerProtoPte->u.Trans.Protection];
                    TempPte.u.Hard.Valid = 1;
                    MI_MAKE_ACCESSED_PAGE(&TempPte);

                    /* Is the PTE writeable? */
                    if ((Pfn1->u3.e1.Modified) &&
                        MI_IS_PAGE_WRITEABLE(&TempPte) &&
                        !MI_IS_PAGE_COPY_ON_WRITE(&TempPte))
                    {
                        /* Make it dirty */
                        MI_MAKE_DIRTY_PAGE(&TempPte);
                    }
                    else
                    {
                        /* Make it clean */
                        MI_MAKE_CLEAN_PAGE(&TempPte);
                    }

                    /* Write the valid PTE */
                    MI_WRITE_VALID_PTE(PointerProtoPte, TempPte);
                    ASSERT(PointerPte->u.Hard.Valid == 0);
                }
                else
                {
                    /* Page is invalid, get out of the loop */
                    break;
                }

                /* One more done, was it the last? */
                if (++ProcessedPtes == PteCount)
                {
                    /* Complete the fault */
                    MiCompleteProtoPteFault(!MI_IS_NOT_PRESENT_FAULT(FaultCode),
                                            Address,
                                            PointerPte,
                                            PointerProtoPte,
                                            LockIrql,
                                            &OutPfn);

                    /* THIS RELEASES THE PFN LOCK! */
                    break;
                }

                /* No clustered faults yet */
                ASSERT(FALSE);
            }

            /* Did we resolve the fault? */
            if (ProcessedPtes)
            {
                /* Bump the transition count */
                InterlockedExchangeAddSizeT(&KeGetCurrentPrcb()->MmTransitionCount, ProcessedPtes);
                ProcessedPtes--;

                /* Loop all the processing we did */
                ASSERT(ProcessedPtes == 0);

                /* Complete this as a transition fault */
                ASSERT(OldIrql == KeGetCurrentIrql());
                ASSERT(OldIrql <= APC_LEVEL);
                ASSERT(KeAreAllApcsDisabled() == TRUE);
                return STATUS_PAGE_FAULT_TRANSITION;
            }

            /* We did not -- PFN lock is still held, prepare to resolve prototype PTE fault */
            OutPfn = MI_PFN_ELEMENT(SuperProtoPte->u.Hard.PageFrameNumber);
            MiReferenceUsedPageAndBumpLockCount(OutPfn);
            ASSERT(OutPfn->u3.e2.ReferenceCount > 1);
            ASSERT(PointerPte->u.Hard.Valid == 0);

            /* Resolve the fault -- this will release the PFN lock */
            Status = MiResolveProtoPteFault(!MI_IS_NOT_PRESENT_FAULT(FaultCode),
                                            Address,
                                            PointerPte,
                                            PointerProtoPte,
                                            &OutPfn,
                                            NULL,
                                            NULL,
                                            Process,
                                            LockIrql,
                                            TrapInformation);
            //ASSERT(Status != STATUS_ISSUE_PAGING_IO);
            //ASSERT(Status != STATUS_REFAULT);
            //ASSERT(Status != STATUS_PTE_CHANGED);

            /* Did the routine clean out the PFN or should we? */
            if (OutPfn)
            {
                /* We had a locked PFN, so acquire the PFN lock to dereference it */
                ASSERT(PointerProtoPte != NULL);
                OldIrql = MiAcquirePfnLock();

                /* Dereference the locked PFN */
                MiDereferencePfnAndDropLockCount(OutPfn);
                ASSERT(OutPfn->u3.e2.ReferenceCount >= 1);

                /* And now release the lock */
                MiReleasePfnLock(OldIrql);
            }

            /* Complete this as a transition fault */
            ASSERT(OldIrql == KeGetCurrentIrql());
            ASSERT(OldIrql <= APC_LEVEL);
            ASSERT(KeAreAllApcsDisabled() == TRUE);
            return Status;
        }
    }

    /* Is this a transition PTE? */
    if (TempPte.u.Soft.Transition)
    {
        PKEVENT* InPageBlock = NULL;
        PKEVENT PreviousPageEvent;
        KEVENT CurrentPageEvent;

        /* Lock the PFN database */
        LockIrql = MiAcquirePfnLock();

        /* Resolve */
        Status = MiResolveTransitionFault(!MI_IS_NOT_PRESENT_FAULT(FaultCode),
                                          Address,
                                          PointerPte,
                                          Process,
                                          LockIrql,
                                          &InPageBlock);

        ASSERT(NT_SUCCESS(Status));

        if (InPageBlock != NULL)
        {
            /* Another thread is reading or writing this page. Put us into the waiting queue. */
            KeInitializeEvent(&CurrentPageEvent, NotificationEvent, FALSE);
            PreviousPageEvent = *InPageBlock;
            *InPageBlock = &CurrentPageEvent;
        }

        /* And now release the lock and leave */
        MiReleasePfnLock(LockIrql);

        if (InPageBlock != NULL)
        {
            KeWaitForSingleObject(&CurrentPageEvent, WrPageIn, KernelMode, FALSE, NULL);

            /* Let the chain go on */
            if (PreviousPageEvent)
            {
                KeSetEvent(PreviousPageEvent, IO_NO_INCREMENT, FALSE);
            }
        }

        ASSERT(OldIrql == KeGetCurrentIrql());
        ASSERT(OldIrql <= APC_LEVEL);
        ASSERT(KeAreAllApcsDisabled() == TRUE);
        return Status;
    }

    /* Should we page the data back in? */
    if (TempPte.u.Soft.PageFileHigh != 0)
    {
        /* Lock the PFN database */
        LockIrql = MiAcquirePfnLock();

        /* Resolve */
        Status = MiResolvePageFileFault(!MI_IS_NOT_PRESENT_FAULT(FaultCode),
                                        Address,
                                        PointerPte,
                                        Process,
                                        &LockIrql);

        /* And now release the lock and leave */
        MiReleasePfnLock(LockIrql);

        ASSERT(OldIrql == KeGetCurrentIrql());
        ASSERT(OldIrql <= APC_LEVEL);
        ASSERT(KeAreAllApcsDisabled() == TRUE);
        return Status;
    }

    //
    // The PTE must be invalid but not completely empty. It must also not be a
    // prototype, a transition, or a paged-out PTE, as those scenarios should've
    // been handled above. These are all Windows checks.
    //
    ASSERT(TempPte.u.Hard.Valid == 0);
    ASSERT(TempPte.u.Soft.Prototype == 0);
    ASSERT(TempPte.u.Soft.Transition == 0);
    ASSERT(TempPte.u.Soft.PageFileHigh == 0);
    ASSERT(TempPte.u.Long != 0);

    //
    // If we got this far, the PTE can only be a demand zero PTE, which is what
    // we want. Go handle it!
    //
    Status = MiResolveDemandZeroFault(Address,
                                      PointerPte,
                                      (ULONG)TempPte.u.Soft.Protection,
                                      Process,
                                      MM_NOIRQL);
    ASSERT(KeAreAllApcsDisabled() == TRUE);
    if (NT_SUCCESS(Status))
    {
#if MI_TRACE_PFNS
        /* Update debug info */
        if (TrapInformation)
            MiGetPfnEntry(PointerPte->u.Hard.PageFrameNumber)->CallSite = (PVOID)((PKTRAP_FRAME)TrapInformation)->Eip;
        else
            MiGetPfnEntry(PointerPte->u.Hard.PageFrameNumber)->CallSite = _ReturnAddress();
#endif

        //
        // Make sure we're returning in a sane state and pass the status down
        //
        ASSERT(OldIrql == KeGetCurrentIrql());
        ASSERT(KeGetCurrentIrql() <= APC_LEVEL);
        return Status;
    }

    //
    // Generate an access fault
    //
    return STATUS_ACCESS_VIOLATION;
}

NTSTATUS
NTAPI
MmArmAccessFault(IN ULONG FaultCode,
                 IN PVOID Address,
                 IN KPROCESSOR_MODE Mode,
                 IN PVOID TrapInformation)
{
|
|
|
|
KIRQL OldIrql = KeGetCurrentIrql(), LockIrql;
|
2012-02-05 17:19:58 +00:00
|
|
|
PMMPTE ProtoPte = NULL;
|
|
|
|
PMMPTE PointerPte = MiAddressToPte(Address);
|
|
|
|
PMMPDE PointerPde = MiAddressToPde(Address);
|
|
|
|
#if (_MI_PAGING_LEVELS >= 3)
|
|
|
|
PMMPDE PointerPpe = MiAddressToPpe(Address);
|
|
|
|
#if (_MI_PAGING_LEVELS == 4)
|
|
|
|
PMMPDE PointerPxe = MiAddressToPxe(Address);
|
|
|
|
#endif
|
|
|
|
#endif
|
- Implement ARM3 page fault handling.
- Paged pool PTEs are demand zero PTEs while the memory hasn't been accessed -- this is the only type of fault supported.
- Because paged pool PDEs are also demand-paged, added code to handle demand paging of PDEs as well.
- Also, because paged pool is non-resident, but can be accessed from any process, we need a mechanism to sync up the kernel's page directory with the per-process one, on demand. This is done at startup, but other processes may have paged in paged pool that another process knows nothing about when he faults.
- Similar to the hack ReactOS Mm uses, but done properly.
- This is what that shadow system page directory is finally being used for.
- Assert if we get a user-mode fault, a transition fault, or a soft fault, since these shouldn't happen.
- Disable APCs while dispatching faults, and pseudo-use the working set lock.
- Assert if we get write errors on read-only pages, since we don't use those in ARM3 yet.
- Assert if we have a paged out PTE, this shouldn't happen yet.
- Enable test to see if we can touch a paged pool allocation.
svn path=/trunk/; revision=43507
2009-10-15 22:08:26 +00:00
|
|
|
MMPTE TempPte;
|
|
|
|
PETHREAD CurrentThread;
|
2010-07-22 18:26:04 +00:00
|
|
|
PEPROCESS CurrentProcess;
|
- Implement ARM3 page fault handling.
- Paged pool PTEs are demand zero PTEs while the memory hasn't been accessed -- this is the only type of fault supported.
- Because paged pool PDEs are also demand-paged, added code to handle demand paging of PDEs as well.
- Also, because paged pool is non-resident, but can be accessed from any process, we need a mechanism to sync up the kernel's page directory with the per-process one, on demand. This is done at startup, but other processes may have paged in paged pool that another process knows nothing about when he faults.
- Similar to the hack ReactOS Mm uses, but done properly.
- This is what that shadow system page directory is finally being used for.
- Assert if we get a user-mode fault, a transition fault, or a soft fault, since these shouldn't happen.
- Disable APCs while dispatching faults, and pseudo-use the working set lock.
- Assert if we get write errors on read-only pages, since we don't use those in ARM3 yet.
- Assert if we have a paged out PTE, this shouldn't happen yet.
- Enable test to see if we can touch a paged pool allocation.
svn path=/trunk/; revision=43507
2009-10-15 22:08:26 +00:00
|
|
|
NTSTATUS Status;
|
2010-07-22 18:26:04 +00:00
|
|
|
PMMSUPPORT WorkingSet;
|
2010-07-22 18:37:27 +00:00
|
|
|
ULONG ProtectionCode;
|
2014-10-15 22:03:50 +00:00
|
|
|
PMMVAD Vad = NULL;
|
2010-07-22 18:37:27 +00:00
|
|
|
PFN_NUMBER PageFrameIndex;
|
2010-09-29 01:10:28 +00:00
|
|
|
ULONG Color;
|
2012-07-21 19:07:11 +00:00
|
|
|
BOOLEAN IsSessionAddress;
|
|
|
|
PMMPFN Pfn1;
|
2012-02-05 17:19:58 +00:00
|
|
|
DPRINT("ARM3 FAULT AT: %p\n", Address);
|
2010-07-26 21:45:42 +00:00
|
|
|
|
2012-02-06 09:26:23 +00:00
|
|
|
/* Check for page fault on high IRQL */
|
- Implement ARM3 page fault handling.
- Paged pool PTEs are demand zero PTEs while the memory hasn't been accessed -- this is the only type of fault supported.
- Because paged pool PDEs are also demand-paged, added code to handle demand paging of PDEs as well.
- Also, because paged pool is non-resident, but can be accessed from any process, we need a mechanism to sync up the kernel's page directory with the per-process one, on demand. This is done at startup, but other processes may have paged in paged pool that another process knows nothing about when he faults.
- Similar to the hack ReactOS Mm uses, but done properly.
- This is what that shadow system page directory is finally being used for.
- Assert if we get a user-mode fault, a transition fault, or a soft fault, since these shouldn't happen.
- Disable APCs while dispatching faults, and pseudo-use the working set lock.
- Assert if we get write errors on read-only pages, since we don't use those in ARM3 yet.
- Assert if we have a paged out PTE, this shouldn't happen yet.
- Enable test to see if we can touch a paged pool allocation.
svn path=/trunk/; revision=43507
2009-10-15 22:08:26 +00:00
|
|
|
if (OldIrql > APC_LEVEL)
|
|
|
|
{
|
2012-07-21 19:07:11 +00:00
|
|
|
#if (_MI_PAGING_LEVELS < 3)
|
|
|
|
/* Could be a page table for paged pool, which we'll allow */
|
|
|
|
if (MI_IS_SYSTEM_PAGE_TABLE_ADDRESS(Address)) MiSynchronizeSystemPde((PMMPDE)PointerPte);
|
|
|
|
MiCheckPdeForPagedPool(Address);
|
|
|
|
#endif
|
|
|
|
/* Check if any of the top-level pages are invalid */
|
|
|
|
if (
|
|
|
|
#if (_MI_PAGING_LEVELS == 4)
|
|
|
|
(PointerPxe->u.Hard.Valid == 0) ||
|
|
|
|
#endif
|
|
|
|
#if (_MI_PAGING_LEVELS >= 3)
|
|
|
|
(PointerPpe->u.Hard.Valid == 0) ||
|
|
|
|
#endif
|
2013-11-22 12:23:11 +00:00
|
|
|
(PointerPde->u.Hard.Valid == 0) ||
|
|
|
|
(PointerPte->u.Hard.Valid == 0))
|
2012-07-21 19:07:11 +00:00
|
|
|
{
|
2013-11-22 12:23:11 +00:00
|
|
|
/* This fault is not valid, print out some debugging help */
|
2012-07-21 19:07:11 +00:00
|
|
|
DbgPrint("MM:***PAGE FAULT AT IRQL > 1 Va %p, IRQL %lx\n",
|
|
|
|
Address,
|
|
|
|
OldIrql);
|
|
|
|
if (TrapInformation)
|
|
|
|
{
|
|
|
|
PKTRAP_FRAME TrapFrame = TrapInformation;
|
2012-12-30 11:54:40 +00:00
|
|
|
#ifdef _M_IX86
|
2012-07-21 19:07:11 +00:00
|
|
|
DbgPrint("MM:***EIP %p, EFL %p\n", TrapFrame->Eip, TrapFrame->EFlags);
|
|
|
|
DbgPrint("MM:***EAX %p, ECX %p EDX %p\n", TrapFrame->Eax, TrapFrame->Ecx, TrapFrame->Edx);
|
|
|
|
DbgPrint("MM:***EBX %p, ESI %p EDI %p\n", TrapFrame->Ebx, TrapFrame->Esi, TrapFrame->Edi);
|
2012-12-30 11:54:40 +00:00
|
|
|
#elif defined(_M_AMD64)
|
|
|
|
DbgPrint("MM:***RIP %p, EFL %p\n", TrapFrame->Rip, TrapFrame->EFlags);
|
|
|
|
DbgPrint("MM:***RAX %p, RCX %p RDX %p\n", TrapFrame->Rax, TrapFrame->Rcx, TrapFrame->Rdx);
|
|
|
|
DbgPrint("MM:***RBX %p, RSI %p RDI %p\n", TrapFrame->Rbx, TrapFrame->Rsi, TrapFrame->Rdi);
|
2015-05-14 22:31:58 +00:00
|
|
|
#elif defined(_M_ARM)
|
|
|
|
DbgPrint("MM:***PC %p\n", TrapFrame->Pc);
|
|
|
|
DbgPrint("MM:***R0 %p, R1 %p R2 %p, R3 %p\n", TrapFrame->R0, TrapFrame->R1, TrapFrame->R2, TrapFrame->R3);
|
|
|
|
DbgPrint("MM:***R11 %p, R12 %p SP %p, LR %p\n", TrapFrame->R11, TrapFrame->R12, TrapFrame->Sp, TrapFrame->Lr);
|
2012-12-30 11:54:40 +00:00
|
|
|
#endif
|
2012-07-21 19:07:11 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/* Tell the trap handler to fail */
|
|
|
|
return STATUS_IN_PAGE_ERROR | 0x10000000;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Not yet implemented in ReactOS */
|
|
|
|
ASSERT(MI_IS_PAGE_LARGE(PointerPde) == FALSE);
|
2018-01-01 22:03:56 +00:00
|
|
|
ASSERT((!MI_IS_NOT_PRESENT_FAULT(FaultCode) && MI_IS_PAGE_COPY_ON_WRITE(PointerPte)) == FALSE);
|
2012-07-21 19:07:11 +00:00
|
|
|
|
|
|
|
/* Check if this was a write */
|
2018-01-01 22:03:56 +00:00
|
|
|
if (MI_IS_WRITE_ACCESS(FaultCode))
|
2012-07-21 19:07:11 +00:00
|
|
|
{
|
|
|
|
/* Was it to a read-only page? */
|
|
|
|
Pfn1 = MI_PFN_ELEMENT(PointerPte->u.Hard.PageFrameNumber);
|
|
|
|
if (!(PointerPte->u.Long & PTE_READWRITE) &&
|
|
|
|
!(Pfn1->OriginalPte.u.Soft.Protection & MM_READWRITE))
|
|
|
|
{
|
|
|
|
/* Crash with distinguished bugcheck code */
|
|
|
|
KeBugCheckEx(ATTEMPTED_WRITE_TO_READONLY_MEMORY,
|
|
|
|
(ULONG_PTR)Address,
|
|
|
|
PointerPte->u.Long,
|
|
|
|
(ULONG_PTR)TrapInformation,
|
|
|
|
10);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Nothing is actually wrong */
|
2013-11-22 12:23:11 +00:00
|
|
|
DPRINT1("Fault at IRQL %u is ok (%p)\n", OldIrql, Address);
|
2012-07-21 19:07:11 +00:00
|
|
|
return STATUS_SUCCESS;
|
- Implement ARM3 page fault handling.
- Paged pool PTEs are demand zero PTEs while the memory hasn't been accessed -- this is the only type of fault supported.
- Because paged pool PDEs are also demand-paged, added code to handle demand paging of PDEs as well.
- Also, because paged pool is non-resident, but can be accessed from any process, we need a mechanism to sync up the kernel's page directory with the per-process one, on demand. This is done at startup, but other processes may have paged in paged pool that another process knows nothing about when he faults.
- Similar to the hack ReactOS Mm uses, but done properly.
- This is what that shadow system page directory is finally being used for.
- Assert if we get a user-mode fault, a transition fault, or a soft fault, since these shouldn't happen.
- Disable APCs while dispatching faults, and pseudo-use the working set lock.
- Assert if we get write errors on read-only pages, since we don't use those in ARM3 yet.
- Assert if we have a paged out PTE, this shouldn't happen yet.
- Enable test to see if we can touch a paged pool allocation.
svn path=/trunk/; revision=43507
2009-10-15 22:08:26 +00:00
|
|
|
}
|
2010-12-26 15:23:03 +00:00
|
|
|
|
2012-02-06 09:26:23 +00:00
|
|
|
/* Check for kernel fault address */
|
2012-02-06 14:32:07 +00:00
|
|
|
if (Address >= MmSystemRangeStart)
|
- Implement ARM3 page fault handling.
- Paged pool PTEs are demand zero PTEs while the memory hasn't been accessed -- this is the only type of fault supported.
- Because paged pool PDEs are also demand-paged, added code to handle demand paging of PDEs as well.
- Also, because paged pool is non-resident, but can be accessed from any process, we need a mechanism to sync up the kernel's page directory with the per-process one, on demand. This is done at startup, but other processes may have paged in paged pool that another process knows nothing about when he faults.
- Similar to the hack ReactOS Mm uses, but done properly.
- This is what that shadow system page directory is finally being used for.
- Assert if we get a user-mode fault, a transition fault, or a soft fault, since these shouldn't happen.
- Disable APCs while dispatching faults, and pseudo-use the working set lock.
- Assert if we get write errors on read-only pages, since we don't use those in ARM3 yet.
- Assert if we have a paged out PTE, this shouldn't happen yet.
- Enable test to see if we can touch a paged pool allocation.
svn path=/trunk/; revision=43507
2009-10-15 22:08:26 +00:00
|
|
|
{
|
2012-02-06 09:26:23 +00:00
|
|
|
/* Bail out, if the fault came from user mode */
|
- Implement ARM3 page fault handling.
- Paged pool PTEs are demand zero PTEs while the memory hasn't been accessed -- this is the only type of fault supported.
- Because paged pool PDEs are also demand-paged, added code to handle demand paging of PDEs as well.
- Also, because paged pool is non-resident, but can be accessed from any process, we need a mechanism to sync up the kernel's page directory with the per-process one, on demand. This is done at startup, but other processes may have paged in paged pool that another process knows nothing about when he faults.
- Similar to the hack ReactOS Mm uses, but done properly.
- This is what that shadow system page directory is finally being used for.
- Assert if we get a user-mode fault, a transition fault, or a soft fault, since these shouldn't happen.
- Disable APCs while dispatching faults, and pseudo-use the working set lock.
- Assert if we get write errors on read-only pages, since we don't use those in ARM3 yet.
- Assert if we have a paged out PTE, this shouldn't happen yet.
- Enable test to see if we can touch a paged pool allocation.
svn path=/trunk/; revision=43507
2009-10-15 22:08:26 +00:00
|
|
|
if (Mode == UserMode) return STATUS_ACCESS_VIOLATION;
|
2010-12-26 15:23:03 +00:00
|
|
|
|
2012-07-21 19:07:11 +00:00
|
|
|
#if (_MI_PAGING_LEVELS == 2)
|
|
|
|
if (MI_IS_SYSTEM_PAGE_TABLE_ADDRESS(Address)) MiSynchronizeSystemPde((PMMPDE)PointerPte);
|
|
|
|
MiCheckPdeForPagedPool(Address);
|
|
|
|
#endif
|
2012-02-06 14:32:07 +00:00
|
|
|
|
2013-11-27 00:04:26 +00:00
|
|
|
/* Check if the higher page table entries are invalid */
|
|
|
|
if (
|
|
|
|
#if (_MI_PAGING_LEVELS == 4)
|
|
|
|
/* AMD64 system, check if PXE is invalid */
|
|
|
|
(PointerPxe->u.Hard.Valid == 0) ||
|
|
|
|
#endif
|
|
|
|
#if (_MI_PAGING_LEVELS >= 3)
|
|
|
|
/* PAE/AMD64 system, check if PPE is invalid */
|
|
|
|
(PointerPpe->u.Hard.Valid == 0) ||
|
|
|
|
#endif
|
|
|
|
/* Always check if the PDE is valid */
|
|
|
|
(PointerPde->u.Hard.Valid == 0))
|
- Implement ARM3 page fault handling.
- Paged pool PTEs are demand zero PTEs while the memory hasn't been accessed -- this is the only type of fault supported.
- Because paged pool PDEs are also demand-paged, added code to handle demand paging of PDEs as well.
- Also, because paged pool is non-resident, but can be accessed from any process, we need a mechanism to sync up the kernel's page directory with the per-process one, on demand. This is done at startup, but other processes may have paged in paged pool that another process knows nothing about when he faults.
- Similar to the hack ReactOS Mm uses, but done properly.
- This is what that shadow system page directory is finally being used for.
- Assert if we get a user-mode fault, a transition fault, or a soft fault, since these shouldn't happen.
- Disable APCs while dispatching faults, and pseudo-use the working set lock.
- Assert if we get write errors on read-only pages, since we don't use those in ARM3 yet.
- Assert if we have a paged out PTE, this shouldn't happen yet.
- Enable test to see if we can touch a paged pool allocation.
svn path=/trunk/; revision=43507
2009-10-15 22:08:26 +00:00
|
|
|
{
|
2013-11-27 00:04:26 +00:00
|
|
|
/* PXE/PPE/PDE (still) not valid, kill the system */
|
2012-07-21 19:07:11 +00:00
|
|
|
KeBugCheckEx(PAGE_FAULT_IN_NONPAGED_AREA,
|
|
|
|
(ULONG_PTR)Address,
|
2018-01-01 22:03:56 +00:00
|
|
|
FaultCode,
|
2012-07-21 19:07:11 +00:00
|
|
|
(ULONG_PTR)TrapInformation,
|
|
|
|
2);
|
- Implement ARM3 page fault handling.
- Paged pool PTEs are demand zero PTEs while the memory hasn't been accessed -- this is the only type of fault supported.
- Because paged pool PDEs are also demand-paged, added code to handle demand paging of PDEs as well.
- Also, because paged pool is non-resident, but can be accessed from any process, we need a mechanism to sync up the kernel's page directory with the per-process one, on demand. This is done at startup, but other processes may have paged in paged pool that another process knows nothing about when he faults.
- Similar to the hack ReactOS Mm uses, but done properly.
- This is what that shadow system page directory is finally being used for.
- Assert if we get a user-mode fault, a transition fault, or a soft fault, since these shouldn't happen.
- Disable APCs while dispatching faults, and pseudo-use the working set lock.
- Assert if we get write errors on read-only pages, since we don't use those in ARM3 yet.
- Assert if we have a paged out PTE, this shouldn't happen yet.
- Enable test to see if we can touch a paged pool allocation.
svn path=/trunk/; revision=43507
2009-10-15 22:08:26 +00:00
|
|
|
}
|
2010-12-26 15:23:03 +00:00
|
|
|
|
2012-07-21 19:07:11 +00:00
|
|
|
/* Not handling session faults yet */
|
|
|
|
IsSessionAddress = MI_IS_SESSION_ADDRESS(Address);
|
|
|
|
|
2012-02-06 09:26:23 +00:00
|
|
|
/* The PDE is valid, so read the PTE */
|
- Implement ARM3 page fault handling.
- Paged pool PTEs are demand zero PTEs while the memory hasn't been accessed -- this is the only type of fault supported.
- Because paged pool PDEs are also demand-paged, added code to handle demand paging of PDEs as well.
- Also, because paged pool is non-resident, but can be accessed from any process, we need a mechanism to sync up the kernel's page directory with the per-process one, on demand. This is done at startup, but other processes may have paged in paged pool that another process knows nothing about when he faults.
- Similar to the hack ReactOS Mm uses, but done properly.
- This is what that shadow system page directory is finally being used for.
- Assert if we get a user-mode fault, a transition fault, or a soft fault, since these shouldn't happen.
- Disable APCs while dispatching faults, and pseudo-use the working set lock.
- Assert if we get write errors on read-only pages, since we don't use those in ARM3 yet.
- Assert if we have a paged out PTE, this shouldn't happen yet.
- Enable test to see if we can touch a paged pool allocation.
svn path=/trunk/; revision=43507
2009-10-15 22:08:26 +00:00
|
|
|
TempPte = *PointerPte;
|
|
|
|
if (TempPte.u.Hard.Valid == 1)
|
|
|
|
{
|
2012-07-21 19:07:11 +00:00
|
|
|
/* Check if this was system space or session space */
|
|
|
|
if (!IsSessionAddress)
|
|
|
|
{
|
|
|
|
/* Check if the PTE is still valid under PFN lock */
|
2017-11-21 22:33:42 +00:00
|
|
|
OldIrql = MiAcquirePfnLock();
|
2012-07-21 19:07:11 +00:00
|
|
|
TempPte = *PointerPte;
|
|
|
|
if (TempPte.u.Hard.Valid)
|
|
|
|
{
|
|
|
|
/* Check if this was a write */
|
2018-01-01 22:03:56 +00:00
|
|
|
if (MI_IS_WRITE_ACCESS(FaultCode))
|
2012-07-21 19:07:11 +00:00
|
|
|
{
|
|
|
|
/* Was it to a read-only page? */
|
|
|
|
Pfn1 = MI_PFN_ELEMENT(PointerPte->u.Hard.PageFrameNumber);
|
|
|
|
if (!(PointerPte->u.Long & PTE_READWRITE) &&
|
|
|
|
!(Pfn1->OriginalPte.u.Soft.Protection & MM_READWRITE))
|
|
|
|
{
|
|
|
|
/* Crash with distinguished bugcheck code */
|
|
|
|
KeBugCheckEx(ATTEMPTED_WRITE_TO_READONLY_MEMORY,
|
|
|
|
(ULONG_PTR)Address,
|
|
|
|
PointerPte->u.Long,
|
|
|
|
(ULONG_PTR)TrapInformation,
|
|
|
|
11);
|
|
|
|
}
|
|
|
|
}
|
2018-01-02 10:22:22 +00:00
|
|
|
|
|
|
|
/* Check for execution of non-executable memory */
|
|
|
|
if (MI_IS_INSTRUCTION_FETCH(FaultCode) &&
|
|
|
|
!MI_IS_PAGE_EXECUTABLE(&TempPte))
|
|
|
|
{
|
|
|
|
KeBugCheckEx(ATTEMPTED_EXECUTE_OF_NOEXECUTE_MEMORY,
|
|
|
|
(ULONG_PTR)Address,
|
|
|
|
(ULONG_PTR)TempPte.u.Long,
|
|
|
|
(ULONG_PTR)TrapInformation,
|
|
|
|
1);
|
|
|
|
}
|
2012-07-21 19:07:11 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/* Release PFN lock and return all good */
|
2017-11-21 22:33:42 +00:00
|
|
|
MiReleasePfnLock(OldIrql);
|
2012-07-21 19:07:11 +00:00
|
|
|
return STATUS_SUCCESS;
|
|
|
|
}
|
- Implement ARM3 page fault handling.
- Paged pool PTEs are demand zero PTEs while the memory hasn't been accessed -- this is the only type of fault supported.
- Because paged pool PDEs are also demand-paged, added code to handle demand paging of PDEs as well.
- Also, because paged pool is non-resident, but can be accessed from any process, we need a mechanism to sync up the kernel's page directory with the per-process one, on demand. This is done at startup, but other processes may have paged in paged pool that another process knows nothing about when he faults.
- Similar to the hack ReactOS Mm uses, but done properly.
- This is what that shadow system page directory is finally being used for.
- Assert if we get a user-mode fault, a transition fault, or a soft fault, since these shouldn't happen.
- Disable APCs while dispatching faults, and pseudo-use the working set lock.
- Assert if we get write errors on read-only pages, since we don't use those in ARM3 yet.
- Assert if we have a paged out PTE, this shouldn't happen yet.
- Enable test to see if we can touch a paged pool allocation.
svn path=/trunk/; revision=43507
2009-10-15 22:08:26 +00:00
|
|
|
}
|
2013-09-22 21:19:40 +00:00
|
|
|
#if (_MI_PAGING_LEVELS == 2)
|
2012-07-21 19:07:11 +00:00
|
|
|
/* Check if this was a session PTE that needs to remap the session PDE */
|
|
|
|
if (MI_IS_SESSION_PTE(Address))
|
|
|
|
{
|
2012-08-01 07:54:37 +00:00
|
|
|
/* Do the remapping */
|
|
|
|
Status = MiCheckPdeForSessionSpace(Address);
|
|
|
|
if (!NT_SUCCESS(Status))
|
|
|
|
{
|
|
|
|
/* It failed, this address is invalid */
|
|
|
|
KeBugCheckEx(PAGE_FAULT_IN_NONPAGED_AREA,
|
|
|
|
(ULONG_PTR)Address,
|
2018-01-01 22:03:56 +00:00
|
|
|
FaultCode,
|
2012-08-01 07:54:37 +00:00
|
|
|
(ULONG_PTR)TrapInformation,
|
|
|
|
6);
|
|
|
|
}
|
2012-07-21 19:07:11 +00:00
|
|
|
}
|
2013-09-22 21:19:40 +00:00
|
|
|
#else
|
|
|
|
|
|
|
|
_WARN("Session space stuff is not implemented yet!")
|
|
|
|
|
|
|
|
#endif
|
2012-02-06 14:35:09 +00:00
|
|
|
|
2012-02-29 23:11:21 +00:00
|
|
|
/* Check for a fault on the page table or hyperspace */
|
2012-07-21 19:07:11 +00:00
|
|
|
if (MI_IS_PAGE_TABLE_OR_HYPER_ADDRESS(Address))
|
|
|
|
{
|
|
|
|
#if (_MI_PAGING_LEVELS < 3)
|
|
|
|
/* Windows does this check but I don't understand why -- it's done above! */
|
|
|
|
ASSERT(MiCheckPdeForPagedPool(Address) != STATUS_WAIT_1);
|
|
|
|
#endif
|
|
|
|
/* Handle this as a user mode fault */
|
|
|
|
goto UserFault;
|
|
|
|
}
|
2012-02-06 14:32:07 +00:00
|
|
|
|
2012-07-21 19:07:11 +00:00
|
|
|
/* Get the current thread */
|
|
|
|
CurrentThread = PsGetCurrentThread();
|
|
|
|
|
|
|
|
/* What kind of address is this */
|
|
|
|
if (!IsSessionAddress)
|
|
|
|
{
|
|
|
|
/* Use the system working set */
|
|
|
|
WorkingSet = &MmSystemCacheWs;
|
|
|
|
CurrentProcess = NULL;
|
|
|
|
|
|
|
|
/* Make sure we don't have a recursive working set lock */
|
|
|
|
if ((CurrentThread->OwnsProcessWorkingSetExclusive) ||
|
|
|
|
(CurrentThread->OwnsProcessWorkingSetShared) ||
|
|
|
|
(CurrentThread->OwnsSystemWorkingSetExclusive) ||
|
|
|
|
(CurrentThread->OwnsSystemWorkingSetShared) ||
|
|
|
|
(CurrentThread->OwnsSessionWorkingSetExclusive) ||
|
|
|
|
(CurrentThread->OwnsSessionWorkingSetShared))
|
|
|
|
{
|
|
|
|
/* Fail */
|
|
|
|
return STATUS_IN_PAGE_ERROR | 0x10000000;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
else
|
|
|
|
{
|
2012-08-01 07:54:37 +00:00
|
|
|
/* Use the session process and working set */
|
|
|
|
CurrentProcess = HYDRA_PROCESS;
|
|
|
|
WorkingSet = &MmSessionSpace->GlobalVirtualAddress->Vm;
|
|
|
|
|
|
|
|
/* Make sure we don't have a recursive working set lock */
|
|
|
|
if ((CurrentThread->OwnsSessionWorkingSetExclusive) ||
|
|
|
|
(CurrentThread->OwnsSessionWorkingSetShared))
|
|
|
|
{
|
|
|
|
/* Fail */
|
|
|
|
return STATUS_IN_PAGE_ERROR | 0x10000000;
|
|
|
|
}
|
2012-07-21 19:07:11 +00:00
|
|
|
}
|
2010-12-26 15:23:03 +00:00
|
|
|
|
2012-02-06 14:32:07 +00:00
|
|
|
/* Acquire the working set lock */
|
2010-07-22 18:26:04 +00:00
|
|
|
KeRaiseIrql(APC_LEVEL, &LockIrql);
|
|
|
|
MiLockWorkingSet(CurrentThread, WorkingSet);
|
2010-12-26 15:23:03 +00:00
|
|
|
|
2012-02-06 09:26:23 +00:00
|
|
|
/* Re-read PTE now that we own the lock */
|
- Implement ARM3 page fault handling.
- Paged pool PTEs are demand zero PTEs while the memory hasn't been accessed -- this is the only type of fault supported.
- Because paged pool PDEs are also demand-paged, added code to handle demand paging of PDEs as well.
- Also, because paged pool is non-resident, but can be accessed from any process, we need a mechanism to sync up the kernel's page directory with the per-process one, on demand. This is done at startup, but other processes may have paged in paged pool that another process knows nothing about when he faults.
- Similar to the hack ReactOS Mm uses, but done properly.
- This is what that shadow system page directory is finally being used for.
- Assert if we get a user-mode fault, a transition fault, or a soft fault, since these shouldn't happen.
- Disable APCs while dispatching faults, and pseudo-use the working set lock.
- Assert if we get write errors on read-only pages, since we don't use those in ARM3 yet.
- Assert if we have a paged out PTE, this shouldn't happen yet.
- Enable test to see if we can touch a paged pool allocation.
svn path=/trunk/; revision=43507
2009-10-15 22:08:26 +00:00
|
|
|
TempPte = *PointerPte;
|
|
|
|
if (TempPte.u.Hard.Valid == 1)
|
|
|
|
{
|
2012-07-21 19:07:11 +00:00
|
|
|
/* Check if this was a write */
|
2018-01-01 22:03:56 +00:00
|
|
|
if (MI_IS_WRITE_ACCESS(FaultCode))
|
2012-07-21 19:07:11 +00:00
|
|
|
{
|
|
|
|
/* Was it to a read-only page that is not copy on write? */
|
|
|
|
Pfn1 = MI_PFN_ELEMENT(PointerPte->u.Hard.PageFrameNumber);
|
|
|
|
if (!(TempPte.u.Long & PTE_READWRITE) &&
|
|
|
|
!(Pfn1->OriginalPte.u.Soft.Protection & MM_READWRITE) &&
|
2015-05-10 19:35:24 +00:00
|
|
|
!MI_IS_PAGE_COPY_ON_WRITE(&TempPte))
|
2012-07-21 19:07:11 +00:00
|
|
|
{
|
|
|
|
/* Case not yet handled */
|
|
|
|
ASSERT(!IsSessionAddress);
|
|
|
|
|
|
|
|
/* Crash with distinguished bugcheck code */
|
|
|
|
KeBugCheckEx(ATTEMPTED_WRITE_TO_READONLY_MEMORY,
|
|
|
|
(ULONG_PTR)Address,
|
|
|
|
TempPte.u.Long,
|
|
|
|
(ULONG_PTR)TrapInformation,
|
|
|
|
12);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2018-01-02 10:22:22 +00:00
|
|
|
/* Check for execution of non-executable memory */
|
|
|
|
if (MI_IS_INSTRUCTION_FETCH(FaultCode) &&
|
|
|
|
!MI_IS_PAGE_EXECUTABLE(&TempPte))
|
|
|
|
{
|
|
|
|
KeBugCheckEx(ATTEMPTED_EXECUTE_OF_NOEXECUTE_MEMORY,
|
|
|
|
(ULONG_PTR)Address,
|
|
|
|
(ULONG_PTR)TempPte.u.Long,
|
|
|
|
(ULONG_PTR)TrapInformation,
|
|
|
|
2);
|
|
|
|
}
|
|
|
|
|
2012-08-01 07:54:37 +00:00
|
|
|
/* Check for read-only write in session space */
|
|
|
|
if ((IsSessionAddress) &&
|
2018-01-01 22:03:56 +00:00
|
|
|
MI_IS_WRITE_ACCESS(FaultCode) &&
|
2015-05-10 19:35:24 +00:00
|
|
|
!MI_IS_PAGE_WRITEABLE(&TempPte))
|
2012-08-01 07:54:37 +00:00
|
|
|
{
|
|
|
|
/* Sanity check */
|
|
|
|
ASSERT(MI_IS_SESSION_IMAGE_ADDRESS(Address));
|
|
|
|
|
|
|
|
/* Was this COW? */
|
2015-05-10 19:35:24 +00:00
|
|
|
if (!MI_IS_PAGE_COPY_ON_WRITE(&TempPte))
|
2012-08-01 07:54:37 +00:00
|
|
|
{
|
|
|
|
/* Then this is not allowed */
|
|
|
|
KeBugCheckEx(ATTEMPTED_WRITE_TO_READONLY_MEMORY,
|
|
|
|
(ULONG_PTR)Address,
|
|
|
|
(ULONG_PTR)TempPte.u.Long,
|
|
|
|
(ULONG_PTR)TrapInformation,
|
|
|
|
13);
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Otherwise, handle COW */
|
|
|
|
ASSERT(FALSE);
|
|
|
|
}
|
2010-12-26 15:23:03 +00:00
|
|
|
|
2010-07-22 18:26:04 +00:00
|
|
|
/* Release the working set */
|
|
|
|
MiUnlockWorkingSet(CurrentThread, WorkingSet);
|
|
|
|
KeLowerIrql(LockIrql);
|
2010-12-26 15:23:03 +00:00
|
|
|
|
2012-07-21 19:07:11 +00:00
|
|
|
/* Otherwise, the PDE was probably invalid, and all is good now */
|
- Implement ARM3 page fault handling.
- Paged pool PTEs are demand zero PTEs while the memory hasn't been accessed -- this is the only type of fault supported.
- Because paged pool PDEs are also demand-paged, added code to handle demand paging of PDEs as well.
- Also, because paged pool is non-resident, but can be accessed from any process, we need a mechanism to sync up the kernel's page directory with the per-process one, on demand. This is done at startup, but other processes may have paged in paged pool that another process knows nothing about when he faults.
- Similar to the hack ReactOS Mm uses, but done properly.
- This is what that shadow system page directory is finally being used for.
- Assert if we get a user-mode fault, a transition fault, or a soft fault, since these shouldn't happen.
- Disable APCs while dispatching faults, and pseudo-use the working set lock.
- Assert if we get write errors on read-only pages, since we don't use those in ARM3 yet.
- Assert if we have a paged out PTE, this shouldn't happen yet.
- Enable test to see if we can touch a paged pool allocation.
svn path=/trunk/; revision=43507
2009-10-15 22:08:26 +00:00
|
|
|
return STATUS_SUCCESS;
|
|
|
|
}
|
2010-12-26 15:23:03 +00:00
|
|
|
|
2010-08-29 19:13:08 +00:00
|
|
|
/* Check one kind of prototype PTE */
|
|
|
|
if (TempPte.u.Soft.Prototype)
|
|
|
|
{
|
|
|
|
/* Make sure protected pool is on, and that this is a pool address */
|
|
|
|
if ((MmProtectFreedNonPagedPool) &&
|
|
|
|
(((Address >= MmNonPagedPoolStart) &&
|
|
|
|
(Address < (PVOID)((ULONG_PTR)MmNonPagedPoolStart +
|
|
|
|
MmSizeOfNonPagedPoolInBytes))) ||
|
|
|
|
((Address >= MmNonPagedPoolExpansionStart) &&
|
|
|
|
(Address < MmNonPagedPoolEnd))))
|
|
|
|
{
|
|
|
|
/* Bad boy, bad boy, whatcha gonna do, whatcha gonna do when ARM3 comes for you! */
|
|
|
|
KeBugCheckEx(DRIVER_CAUGHT_MODIFYING_FREED_POOL,
|
|
|
|
(ULONG_PTR)Address,
|
2018-01-01 22:03:56 +00:00
|
|
|
FaultCode,
|
2010-08-29 19:13:08 +00:00
|
|
|
Mode,
|
|
|
|
4);
|
|
|
|
}
|
2010-12-26 15:23:03 +00:00
|
|
|
|
2010-10-04 18:51:07 +00:00
|
|
|
/* Get the prototype PTE! */
|
|
|
|
ProtoPte = MiProtoPteToPte(&TempPte);
|
2012-07-21 19:07:11 +00:00
|
|
|
|
2012-08-01 07:54:37 +00:00
|
|
|
/* Do we need to locate the prototype PTE in session space? */
|
|
|
|
if ((IsSessionAddress) &&
|
|
|
|
(TempPte.u.Soft.PageFileHigh == MI_PTE_LOOKUP_NEEDED))
|
|
|
|
{
|
|
|
|
/* Yep, go find it as well as the VAD for it */
|
|
|
|
ProtoPte = MiCheckVirtualAddress(Address,
|
|
|
|
&ProtectionCode,
|
|
|
|
&Vad);
|
|
|
|
ASSERT(ProtoPte != NULL);
|
|
|
|
}
|
2010-08-29 19:13:08 +00:00
|
|
|
}
|
2010-10-19 18:57:30 +00:00
|
|
|
else
|
2010-12-26 15:23:03 +00:00
|
|
|
{
|
2012-02-06 09:26:23 +00:00
|
|
|
/* We don't implement transition PTEs */
|
2010-10-19 18:57:30 +00:00
|
|
|
ASSERT(TempPte.u.Soft.Transition == 0);
|
2010-12-26 15:23:03 +00:00
|
|
|
|
2010-10-19 18:57:30 +00:00
|
|
|
/* Check for no-access PTE */
|
|
|
|
if (TempPte.u.Soft.Protection == MM_NOACCESS)
|
|
|
|
{
|
2012-02-06 09:26:23 +00:00
|
|
|
/* Bugcheck the system! */
|
2010-10-19 18:57:30 +00:00
|
|
|
KeBugCheckEx(PAGE_FAULT_IN_NONPAGED_AREA,
|
|
|
|
(ULONG_PTR)Address,
|
2018-01-01 22:03:56 +00:00
|
|
|
FaultCode,
|
2010-10-19 18:57:30 +00:00
|
|
|
(ULONG_PTR)TrapInformation,
|
|
|
|
1);
|
|
|
|
}
|
2014-10-22 12:29:31 +00:00
|
|
|
|
|
|
|
/* Check for no protecton at all */
|
|
|
|
if (TempPte.u.Soft.Protection == MM_ZERO_ACCESS)
|
|
|
|
{
|
|
|
|
/* Bugcheck the system! */
|
|
|
|
KeBugCheckEx(PAGE_FAULT_IN_NONPAGED_AREA,
|
|
|
|
(ULONG_PTR)Address,
|
2018-01-01 22:03:56 +00:00
|
|
|
FaultCode,
|
2014-10-22 12:29:31 +00:00
|
|
|
(ULONG_PTR)TrapInformation,
|
|
|
|
0);
|
|
|
|
}
|
2012-07-21 19:07:11 +00:00
|
|
|
}
|
2010-12-26 15:23:03 +00:00
|
|
|
|
2012-07-21 19:07:11 +00:00
|
|
|
/* Check for demand page */
|
2018-01-01 22:03:56 +00:00
|
|
|
if (MI_IS_WRITE_ACCESS(FaultCode) &&
|
2012-07-21 19:07:11 +00:00
|
|
|
!(ProtoPte) &&
|
|
|
|
!(IsSessionAddress) &&
|
|
|
|
!(TempPte.u.Hard.Valid))
|
|
|
|
{
|
|
|
|
/* Get the protection code */
|
|
|
|
ASSERT(TempPte.u.Soft.Transition == 0);
|
|
|
|
if (!(TempPte.u.Soft.Protection & MM_READWRITE))
|
2010-10-04 18:51:07 +00:00
|
|
|
{
|
2012-07-21 19:07:11 +00:00
|
|
|
/* Bugcheck the system! */
|
|
|
|
KeBugCheckEx(ATTEMPTED_WRITE_TO_READONLY_MEMORY,
|
|
|
|
(ULONG_PTR)Address,
|
|
|
|
TempPte.u.Long,
|
|
|
|
(ULONG_PTR)TrapInformation,
|
|
|
|
14);
|
2010-10-04 18:51:07 +00:00
|
|
|
}
|
|
|
|
}

        /* Now do the real fault handling */
        Status = MiDispatchFault(FaultCode,
                                 Address,
                                 PointerPte,
                                 ProtoPte,
                                 FALSE,
                                 CurrentProcess,
                                 TrapInformation,
                                 NULL);

        /* Release the working set */
        ASSERT(KeAreAllApcsDisabled() == TRUE);
        MiUnlockWorkingSet(CurrentThread, WorkingSet);
        KeLowerIrql(LockIrql);

        /* We are done! */
        DPRINT("Fault resolved with status: %lx\n", Status);
        return Status;
    }

    /* This is a user fault */
UserFault:
    CurrentThread = PsGetCurrentThread();
    CurrentProcess = (PEPROCESS)CurrentThread->Tcb.ApcState.Process;

    /* Lock the working set */
    MiLockProcessWorkingSet(CurrentProcess, CurrentThread);

    ProtectionCode = MM_INVALID_PROTECTION;

#if (_MI_PAGING_LEVELS == 4)
    /* Check if the PXE is valid */
    if (PointerPxe->u.Hard.Valid == 0)
    {
        /* Right now, we only handle scenarios where the PXE is totally empty */
        ASSERT(PointerPxe->u.Long == 0);

        /* This is only possible for user mode addresses! */
        ASSERT(PointerPte <= MiHighestUserPte);

        /* Check if we have a VAD */
        MiCheckVirtualAddress(Address, &ProtectionCode, &Vad);
        if (ProtectionCode == MM_NOACCESS)
        {
            MiUnlockProcessWorkingSet(CurrentProcess, CurrentThread);
            return STATUS_ACCESS_VIOLATION;
        }

        /* Resolve a demand zero fault */
        MiResolveDemandZeroFault(PointerPpe,
                                 PointerPxe,
                                 MM_EXECUTE_READWRITE,
                                 CurrentProcess,
                                 MM_NOIRQL);

        /* We should come back with a valid PXE */
        ASSERT(PointerPxe->u.Hard.Valid == 1);
    }
#endif

#if (_MI_PAGING_LEVELS >= 3)
    /* Check if the PPE is valid */
    if (PointerPpe->u.Hard.Valid == 0)
    {
        /* Right now, we only handle scenarios where the PPE is totally empty */
        ASSERT(PointerPpe->u.Long == 0);

        /* This is only possible for user mode addresses! */
        ASSERT(PointerPte <= MiHighestUserPte);

        /* Check if we have a VAD, unless we did this already */
        if (ProtectionCode == MM_INVALID_PROTECTION)
        {
            MiCheckVirtualAddress(Address, &ProtectionCode, &Vad);
        }

        if (ProtectionCode == MM_NOACCESS)
        {
            MiUnlockProcessWorkingSet(CurrentProcess, CurrentThread);
            return STATUS_ACCESS_VIOLATION;
        }

        /* Resolve a demand zero fault */
        MiResolveDemandZeroFault(PointerPde,
                                 PointerPpe,
                                 MM_EXECUTE_READWRITE,
                                 CurrentProcess,
                                 MM_NOIRQL);

        /* We should come back with a valid PPE */
        ASSERT(PointerPpe->u.Hard.Valid == 1);

        MiIncrementPageTableReferences(PointerPde);
    }
#endif

    /* Check if the PDE is invalid */
    if (PointerPde->u.Hard.Valid == 0)
    {
        /* Right now, we only handle scenarios where the PDE is totally empty */
        ASSERT(PointerPde->u.Long == 0);

        /* And go dispatch the fault on the PDE. This should handle the demand-zero */
#if MI_TRACE_PFNS
        UserPdeFault = TRUE;
#endif
        /* Check if we have a VAD, unless we did this already */
        if (ProtectionCode == MM_INVALID_PROTECTION)
        {
            MiCheckVirtualAddress(Address, &ProtectionCode, &Vad);
        }

        if (ProtectionCode == MM_NOACCESS)
        {
#if (_MI_PAGING_LEVELS == 2)
            /* Could be a page table for paged pool */
            MiCheckPdeForPagedPool(Address);
#endif
            /* Has the code above changed anything -- is this now a valid PTE? */
            Status = (PointerPde->u.Hard.Valid == 1) ? STATUS_SUCCESS : STATUS_ACCESS_VIOLATION;

            /* Either this was a bogus VA or we've fixed up a paged pool PDE */
            MiUnlockProcessWorkingSet(CurrentProcess, CurrentThread);
            return Status;
        }

        /* Resolve a demand zero fault */
        MiResolveDemandZeroFault(PointerPte,
                                 PointerPde,
                                 MM_EXECUTE_READWRITE,
                                 CurrentProcess,
                                 MM_NOIRQL);
#if _MI_PAGING_LEVELS >= 3
        MiIncrementPageTableReferences(PointerPte);
#endif

#if MI_TRACE_PFNS
        UserPdeFault = FALSE;
        /* Update debug info */
        if (TrapInformation)
            MiGetPfnEntry(PointerPde->u.Hard.PageFrameNumber)->CallSite = (PVOID)((PKTRAP_FRAME)TrapInformation)->Eip;
        else
            MiGetPfnEntry(PointerPde->u.Hard.PageFrameNumber)->CallSite = _ReturnAddress();
#endif

        /* We should come back with APCs enabled, and with a valid PDE */
        ASSERT(KeAreAllApcsDisabled() == TRUE);
        ASSERT(PointerPde->u.Hard.Valid == 1);
    }
    else
    {
        /* Not yet implemented in ReactOS */
        ASSERT(MI_IS_PAGE_LARGE(PointerPde) == FALSE);
    }

    /* Now capture the PTE. */
    TempPte = *PointerPte;

    /* Check if the PTE is valid */
    if (TempPte.u.Hard.Valid)
    {
        /* Check if this is a write on a readonly PTE */
        if (MI_IS_WRITE_ACCESS(FaultCode))
        {
            /* Is this a copy on write PTE? */
            if (MI_IS_PAGE_COPY_ON_WRITE(&TempPte))
            {
                PFN_NUMBER PageFrameIndex, OldPageFrameIndex;
                PMMPFN Pfn1;

                LockIrql = MiAcquirePfnLock();

                ASSERT(MmAvailablePages > 0);

                MI_SET_USAGE(MI_USAGE_COW);
                MI_SET_PROCESS(CurrentProcess);

                /* Allocate a new page and copy it */
                PageFrameIndex = MiRemoveAnyPage(MI_GET_NEXT_PROCESS_COLOR(CurrentProcess));
                OldPageFrameIndex = PFN_FROM_PTE(&TempPte);

                MiCopyPfn(PageFrameIndex, OldPageFrameIndex);

                /* Dereference whatever this PTE is referencing */
                Pfn1 = MI_PFN_ELEMENT(OldPageFrameIndex);
                ASSERT(Pfn1->u3.e1.PrototypePte == 1);
                ASSERT(!MI_IS_PFN_DELETED(Pfn1));
                ProtoPte = Pfn1->PteAddress;
                MiDeletePte(PointerPte, Address, CurrentProcess, ProtoPte);

                /* And make a new shiny one with our page */
                MiInitializePfn(PageFrameIndex, PointerPte, TRUE);
                TempPte.u.Hard.PageFrameNumber = PageFrameIndex;
                TempPte.u.Hard.Write = 1;
                TempPte.u.Hard.CopyOnWrite = 0;

                MI_WRITE_VALID_PTE(PointerPte, TempPte);

                MiReleasePfnLock(LockIrql);

                /* Return the status */
                MiUnlockProcessWorkingSet(CurrentProcess, CurrentThread);
                return STATUS_PAGE_FAULT_COPY_ON_WRITE;
            }

            /* Is this a read-only PTE? */
            if (!MI_IS_PAGE_WRITEABLE(&TempPte))
            {
                /* Return the status */
                MiUnlockProcessWorkingSet(CurrentProcess, CurrentThread);
                return STATUS_ACCESS_VIOLATION;
            }
        }

        /* Check for execution of non-executable memory */
        if (MI_IS_INSTRUCTION_FETCH(FaultCode) &&
            !MI_IS_PAGE_EXECUTABLE(&TempPte))
        {
            /* Return the status */
            MiUnlockProcessWorkingSet(CurrentProcess, CurrentThread);
            return STATUS_ACCESS_VIOLATION;
        }

        /* The fault has already been resolved by a different thread */
        MiUnlockProcessWorkingSet(CurrentProcess, CurrentThread);
        return STATUS_SUCCESS;
    }

    /* Quick check for demand-zero */
    if ((TempPte.u.Long == (MM_READWRITE << MM_PTE_SOFTWARE_PROTECTION_BITS)) ||
        (TempPte.u.Long == (MM_EXECUTE_READWRITE << MM_PTE_SOFTWARE_PROTECTION_BITS)))
    {
        /* Resolve the fault */
        MiResolveDemandZeroFault(Address,
                                 PointerPte,
                                 TempPte.u.Soft.Protection,
                                 CurrentProcess,
                                 MM_NOIRQL);

#if MI_TRACE_PFNS
        /* Update debug info */
        if (TrapInformation)
            MiGetPfnEntry(PointerPte->u.Hard.PageFrameNumber)->CallSite = (PVOID)((PKTRAP_FRAME)TrapInformation)->Eip;
        else
            MiGetPfnEntry(PointerPte->u.Hard.PageFrameNumber)->CallSite = _ReturnAddress();
#endif

        /* Return the status */
        MiUnlockProcessWorkingSet(CurrentProcess, CurrentThread);
        return STATUS_PAGE_FAULT_DEMAND_ZERO;
    }
|
2010-12-26 15:23:03 +00:00
|
|
|
|
2012-07-21 19:07:11 +00:00
|
|
|
/* Check for zero PTE */
|
|
|
|
if (TempPte.u.Long == 0)
|
[NTOS]: A few key changes to the page fault path:
1) MiCheckVirtualAddress should be called *after* determining if the PTE is a Demand Zero PTE. This is because when memory is allocated with MEM_RESERVE, and then MEM_COMMIT is called later, the VAD does not have the MemCommit flag set to TRUE. As such, MiCheckVirtualAddress returns MM_NOACCESS for the VAD (even though one is found) and the demand zero fault results in an access violation. Double-checked with Windows and this is the right behavior.
2) MiCheckVirtualAddress now supports non-commited reserve VADs (ie: trying to access MEM_RESERVE memory). It used to ASSERT, now it returns MM_NOACCESS so an access violation is raised. Before change #1, this would also happen if MEM_COMMIT was later performed on the ranges, but this is now fixed.
3) When calling MiResolveDemandZeroFault, we should not make the PDE a demand zero PDE. This is senseless. The whole point is that the PDE does exist, and MiInitializePfn needs it to keep track of the page table allocation. Removed the nonsensical line of code which performed cleard the PDE during a demand-zero fault.
I am able to boot to 3rd stage with these changes, so I have seen no regressions. Additionally, with these changes, the as-of-yet-uncommitted VAD-based Virtual Memory code completes 1st stage setup successfully, instead of instantly crashing on boot.
svn path=/trunk/; revision=55894
2012-02-27 23:42:22 +00:00
|
|
|
{
|
2012-07-21 19:07:11 +00:00
|
|
|
/* Check if this address range belongs to a valid allocation (VAD) */
|
|
|
|
ProtoPte = MiCheckVirtualAddress(Address, &ProtectionCode, &Vad);
|
|
|
|
if (ProtectionCode == MM_NOACCESS)
|
|
|
|
{
|
2012-02-29 23:11:21 +00:00
|
|
|
#if (_MI_PAGING_LEVELS == 2)
|
2012-07-21 19:07:11 +00:00
|
|
|
/* Could be a page table for paged pool */
|
|
|
|
MiCheckPdeForPagedPool(Address);
|
2012-02-29 23:11:21 +00:00
|
|
|
#endif
|
2012-07-21 19:07:11 +00:00
|
|
|
/* Has the code above changed anything -- is this now a valid PTE? */
|
|
|
|
Status = (PointerPte->u.Hard.Valid == 1) ? STATUS_SUCCESS : STATUS_ACCESS_VIOLATION;
|
2012-02-29 23:11:21 +00:00
|
|
|
|
2012-07-21 19:07:11 +00:00
|
|
|
/* Either this was a bogus VA or we've fixed up a paged pool PDE */
|
|
|
|
MiUnlockProcessWorkingSet(CurrentProcess, CurrentThread);
|
|
|
|
return Status;
|
|
|
|
}
|
2010-12-26 15:23:03 +00:00
|
|
|
|
2012-07-21 19:07:11 +00:00
|
|
|
/*
|
|
|
|
* Check if this is a real user-mode address or actually a kernel-mode
|
|
|
|
* page table for a user mode address
|
|
|
|
*/
|
2021-06-07 12:29:37 +00:00
|
|
|
if (Address <= MM_HIGHEST_USER_ADDRESS
|
|
|
|
#if _MI_PAGING_LEVELS >= 3
|
|
|
|
|| MiIsUserPte(Address)
|
|
|
|
#if _MI_PAGING_LEVELS == 4
|
|
|
|
|| MiIsUserPde(Address)
|
|
|
|
#endif
|
|
|
|
#endif
|
|
|
|
)
|
2012-07-21 19:07:11 +00:00
|
|
|
{
|
|
|
|
/* Add an additional page table reference */
|
2012-09-03 16:29:31 +00:00
|
|
|
MiIncrementPageTableReferences(Address);
|
2012-07-21 19:07:11 +00:00
|
|
|
}
|
2010-12-26 15:23:03 +00:00
|
|
|
|
2013-08-29 07:33:10 +00:00
|
|
|
/* Is this a guard page? */
|
2013-11-27 00:04:26 +00:00
|
|
|
if ((ProtectionCode & MM_PROTECT_SPECIAL) == MM_GUARDPAGE)
|
2013-08-29 07:33:10 +00:00
|
|
|
{
|
[NTOSKRNL]
Windows / ReactOS uses a software protection field called protection mask, which is stored inside invalid (Software) PTEs to provide information about the desired protection, when a page is made valid by the page fault handler. The mask consists of the values 0-7 specifying the read/write/execute rights, 0 being inaccessible aka MM_ZERO_ACCESS, plus 2 flag-like bits, for uncached and writecombine memory respectively. Both flags together don't make sense, so this combination is used to mark guard pages. Since all these flags only make sense when used together with a proper access (i.e. not MM_ZERO_ACCESS), the combination of these flags together with MM_ZERO_ACCESS was given special meaning: MM_DECOMMIT, which equals MM_GUARDPAGE | MM_ZERO_ACCESS is for decommitted pages, that are not yet erased to zero, MM_NOACCESS, which is the mask for pages that are mapped with PAGE_NOACCESS (this is to make sure that a software PTE of a committed page is never completely 0, which it could be, when MM_ZERO_ACCESS was used), and finally MM_OUTSWAPPED_KSTACK for outswapped kernel stacks. See also https://www.reactos.org/wiki/Techwiki:Memory_Protection_constants.
The next thing to know is that the number of PTEs that are not null is counted for each PDE. So once a page gets committed, a software PTE is written and the reference count is incremented. When the page is made valid by the fault handler, the count is not changed, when the page is decommitted, the MM_DECOMMIT software PTE is written and again the PTE stays non-null and nothing is changed. Only when the range is cleaned up totally, the PTEs get erased and the reference count is decremented. Now it happened that our page fault handler missed to validate the access rights of protection constants. The problem that came up with this is a major one: since a decommitted page is a software PTE with MM_DECOMMIT as the protection mask (which we remember has the MM_GUARDPAGE bit set), the fault handler considered faults on decommitted PTEs as faults on guard pages and simply removed the guard page flag, leaving a completely empty PTE behind! So the decommitted page got erased without decrementing the reference count. This lead to CORE-7445.
- Add protection flags (MM_GUARDPAGE, MM_WRITECOMBINE, MM_OUTSWAPPED_KSTACK)
- Instead of writing 0 to a PTE, use MI_WRITE_INVALID_PTE with MmZeroPte
- Implement MiIsAccessAllowed that checks for read/write/execute access and use it in MiAccessCheck
- Add some more ASSERTs
CORE-7445 #resolve
svn path=/trunk/; revision=61095
2013-11-25 00:18:33 +00:00
            /* The VAD protection cannot be MM_DECOMMIT! */
            ASSERT(ProtectionCode != MM_DECOMMIT);

            /* Remove the bit */
            TempPte.u.Soft.Protection = ProtectionCode & ~MM_GUARDPAGE;
            MI_WRITE_INVALID_PTE(PointerPte, TempPte);

            /* Not supported */
            ASSERT(ProtoPte == NULL);
            ASSERT(CurrentThread->ApcNeeded == 0);

            /* Drop the working set lock */
            MiUnlockProcessWorkingSet(CurrentProcess, CurrentThread);
            ASSERT(KeGetCurrentIrql() == OldIrql);

            /* Handle stack expansion */
            return MiCheckForUserStackOverflow(Address, TrapInformation);
        }

        /* Did we get a prototype PTE back? */
        if (!ProtoPte)
        {
            /* Is this PTE actually part of the PDE-PTE self-mapping directory? */
            if (PointerPde == MiAddressToPde(PTE_BASE))
            {
                /* Then it's really a demand-zero PDE (on behalf of user-mode) */
#ifdef _M_ARM
                _WARN("This is probably completely broken!");
                MI_WRITE_INVALID_PDE((PMMPDE)PointerPte, DemandZeroPde);
#else
                MI_WRITE_INVALID_PDE(PointerPte, DemandZeroPde);
#endif
            }
            else
            {
                /* No, create a new PTE. First, write the protection */
                TempPte.u.Soft.Protection = ProtectionCode;
                MI_WRITE_INVALID_PTE(PointerPte, TempPte);
            }

            /* Lock the PFN database since we're going to grab a page */
            OldIrql = MiAcquirePfnLock();

            /* Make sure we have enough pages */
            ASSERT(MmAvailablePages >= 32);

            /* Try to get a zero page */
            MI_SET_USAGE(MI_USAGE_PEB_TEB);
            MI_SET_PROCESS2(CurrentProcess->ImageFileName);
            Color = MI_GET_NEXT_PROCESS_COLOR(CurrentProcess);
            PageFrameIndex = MiRemoveZeroPageSafe(Color);
            if (!PageFrameIndex)
            {
                /* Grab a page out of there. Later we should grab a colored zero page */
                PageFrameIndex = MiRemoveAnyPage(Color);
                ASSERT(PageFrameIndex);

                /* Release the lock since we need to do some zeroing */
                MiReleasePfnLock(OldIrql);

                /* Zero out the page, since it's for user-mode */
                MiZeroPfn(PageFrameIndex);

                /* Grab the lock again so we can initialize the PFN entry */
                OldIrql = MiAcquirePfnLock();
            }

            /* Initialize the PFN entry now */
            MiInitializePfn(PageFrameIndex, PointerPte, 1);

            /* Increment the count of pages in the process */
            CurrentProcess->NumberOfPrivatePages++;

            /* One more demand-zero fault */
            KeGetCurrentPrcb()->MmDemandZeroCount++;

            /* And we're done with the lock */
            MiReleasePfnLock(OldIrql);

            /* Fault on user PDE, or fault on user PTE? */
            if (PointerPte <= MiHighestUserPte)
            {
                /* User fault, build a user PTE */
                MI_MAKE_HARDWARE_PTE_USER(&TempPte,
                                          PointerPte,
                                          PointerPte->u.Soft.Protection,
                                          PageFrameIndex);
            }
            else
            {
                /* This is a user-mode PDE, create a kernel PTE for it */
                MI_MAKE_HARDWARE_PTE(&TempPte,
                                     PointerPte,
                                     PointerPte->u.Soft.Protection,
                                     PageFrameIndex);
            }

            /* Write the dirty bit for writeable pages */
            if (MI_IS_PAGE_WRITEABLE(&TempPte)) MI_MAKE_DIRTY_PAGE(&TempPte);

            /* And now write down the PTE, making the address valid */
            MI_WRITE_VALID_PTE(PointerPte, TempPte);
            Pfn1 = MI_PFN_ELEMENT(PageFrameIndex);
            ASSERT(Pfn1->u1.Event == NULL);

            /* Demand zero */
            ASSERT(KeGetCurrentIrql() <= APC_LEVEL);
            MiUnlockProcessWorkingSet(CurrentProcess, CurrentThread);
            return STATUS_PAGE_FAULT_DEMAND_ZERO;
        }

        /* We should have a valid protection here */
        ASSERT(ProtectionCode != 0x100);

        /* Write the prototype PTE */
        TempPte = PrototypePte;
        TempPte.u.Soft.Protection = ProtectionCode;
        ASSERT(TempPte.u.Long != 0);
        MI_WRITE_INVALID_PTE(PointerPte, TempPte);
    }
    else
    {
        /* Get the protection code and check if this is a proto PTE */
        ProtectionCode = (ULONG)TempPte.u.Soft.Protection;
        if (TempPte.u.Soft.Prototype)
        {
            /* Do we need to go find the real PTE? */
            if (TempPte.u.Soft.PageFileHigh == MI_PTE_LOOKUP_NEEDED)
            {
                /* Get the prototype pte and VAD for it */
                ProtoPte = MiCheckVirtualAddress(Address,
                                                 &ProtectionCode,
                                                 &Vad);
                if (!ProtoPte)
                {
                    ASSERT(KeGetCurrentIrql() <= APC_LEVEL);
                    MiUnlockProcessWorkingSet(CurrentProcess, CurrentThread);
                    return STATUS_ACCESS_VIOLATION;
                }
            }
            else
            {
                /* Get the prototype PTE! */
                ProtoPte = MiProtoPteToPte(&TempPte);

                /* Is it read-only? */
                if (TempPte.u.Proto.ReadOnly)
                {
                    /* Set read-only code */
                    ProtectionCode = MM_READONLY;
                }
                else
                {
                    /* Set unknown protection */
                    ProtectionCode = 0x100;
                    ASSERT(CurrentProcess->CloneRoot != NULL);
                }
            }
        }
    }

    /* Do we have a valid protection code? */
    if (ProtectionCode != 0x100)
    {
        /* Run a software access check first, including to detect guard pages */
        Status = MiAccessCheck(PointerPte,
                               !MI_IS_NOT_PRESENT_FAULT(FaultCode),
                               Mode,
                               ProtectionCode,
                               TrapInformation,
                               FALSE);
        if (Status != STATUS_SUCCESS)
        {
            /* Not supported */
            ASSERT(CurrentThread->ApcNeeded == 0);

            /* Drop the working set lock */
            MiUnlockProcessWorkingSet(CurrentProcess, CurrentThread);
            ASSERT(KeGetCurrentIrql() == OldIrql);

            /* Did we hit a guard page? */
            if (Status == STATUS_GUARD_PAGE_VIOLATION)
            {
                /* Handle stack expansion */
                return MiCheckForUserStackOverflow(Address, TrapInformation);
            }

            /* Otherwise, fail back to the caller directly */
            return Status;
        }
    }

    /* Dispatch the fault */
    Status = MiDispatchFault(FaultCode,
                             Address,
                             PointerPte,
                             ProtoPte,
                             FALSE,
                             CurrentProcess,
                             TrapInformation,
                             Vad);

    /* Return the status */
    ASSERT(KeGetCurrentIrql() <= APC_LEVEL);
    MiUnlockProcessWorkingSet(CurrentProcess, CurrentThread);
    return Status;
- Implement ARM3 page fault handling.
- Paged pool PTEs are demand zero PTEs while the memory hasn't been accessed -- this is the only type of fault supported.
- Because paged pool PDEs are also demand-paged, added code to handle demand paging of PDEs as well.
- Also, because paged pool is non-resident, but can be accessed from any process, we need a mechanism to sync up the kernel's page directory with the per-process one, on demand. This is done at startup, but other processes may have paged in paged pool that another process knows nothing about when it faults.
- Similar to the hack ReactOS Mm uses, but done properly.
- This is what that shadow system page directory is finally being used for.
- Assert if we get a user-mode fault, a transition fault, or a soft fault, since these shouldn't happen.
- Disable APCs while dispatching faults, and pseudo-use the working set lock.
- Assert if we get write errors on read-only pages, since we don't use those in ARM3 yet.
- Assert if we have a paged out PTE, this shouldn't happen yet.
- Enable test to see if we can touch a paged pool allocation.
svn path=/trunk/; revision=43507
2009-10-15 22:08:26 +00:00
}

NTSTATUS
NTAPI
MmGetExecuteOptions(IN PULONG ExecuteOptions)
{
    PKPROCESS CurrentProcess = &PsGetCurrentProcess()->Pcb;
    ASSERT(KeGetCurrentIrql() == PASSIVE_LEVEL);

    *ExecuteOptions = 0;

    if (CurrentProcess->Flags.ExecuteDisable)
    {
        *ExecuteOptions |= MEM_EXECUTE_OPTION_DISABLE;
    }

    if (CurrentProcess->Flags.ExecuteEnable)
    {
        *ExecuteOptions |= MEM_EXECUTE_OPTION_ENABLE;
    }

    if (CurrentProcess->Flags.DisableThunkEmulation)
    {
        *ExecuteOptions |= MEM_EXECUTE_OPTION_DISABLE_THUNK_EMULATION;
    }

    if (CurrentProcess->Flags.Permanent)
    {
        *ExecuteOptions |= MEM_EXECUTE_OPTION_PERMANENT;
    }

    if (CurrentProcess->Flags.ExecuteDispatchEnable)
    {
        *ExecuteOptions |= MEM_EXECUTE_OPTION_EXECUTE_DISPATCH_ENABLE;
    }

    if (CurrentProcess->Flags.ImageDispatchEnable)
    {
        *ExecuteOptions |= MEM_EXECUTE_OPTION_IMAGE_DISPATCH_ENABLE;
    }

    return STATUS_SUCCESS;
}

NTSTATUS
NTAPI
MmSetExecuteOptions(IN ULONG ExecuteOptions)
{
    PKPROCESS CurrentProcess = &PsGetCurrentProcess()->Pcb;
    KLOCK_QUEUE_HANDLE ProcessLock;
    NTSTATUS Status = STATUS_ACCESS_DENIED;
    ASSERT(KeGetCurrentIrql() == PASSIVE_LEVEL);

    /* Only accept valid flags */
    if (ExecuteOptions & ~MEM_EXECUTE_OPTION_VALID_FLAGS)
    {
        /* Fail */
        DPRINT1("Invalid no-execute options\n");
        return STATUS_INVALID_PARAMETER;
    }

    /* Change the NX state in the process lock */
    KiAcquireProcessLockRaiseToSynch(CurrentProcess, &ProcessLock);

    /* Don't change anything if the permanent flag was set */
    if (!CurrentProcess->Flags.Permanent)
    {
        /* Start by assuming it's not disabled */
        CurrentProcess->Flags.ExecuteDisable = FALSE;

        /* Now process each flag and turn the equivalent bit on */
        if (ExecuteOptions & MEM_EXECUTE_OPTION_DISABLE)
        {
            CurrentProcess->Flags.ExecuteDisable = TRUE;
        }
        if (ExecuteOptions & MEM_EXECUTE_OPTION_ENABLE)
        {
            CurrentProcess->Flags.ExecuteEnable = TRUE;
        }
        if (ExecuteOptions & MEM_EXECUTE_OPTION_DISABLE_THUNK_EMULATION)
        {
            CurrentProcess->Flags.DisableThunkEmulation = TRUE;
        }
        if (ExecuteOptions & MEM_EXECUTE_OPTION_PERMANENT)
        {
            CurrentProcess->Flags.Permanent = TRUE;
        }
        if (ExecuteOptions & MEM_EXECUTE_OPTION_EXECUTE_DISPATCH_ENABLE)
        {
            CurrentProcess->Flags.ExecuteDispatchEnable = TRUE;
        }
        if (ExecuteOptions & MEM_EXECUTE_OPTION_IMAGE_DISPATCH_ENABLE)
        {
            CurrentProcess->Flags.ImageDispatchEnable = TRUE;
        }

        /* These are turned on by default if no-execution is also enabled */
        if (CurrentProcess->Flags.ExecuteEnable)
        {
            CurrentProcess->Flags.ExecuteDispatchEnable = TRUE;
            CurrentProcess->Flags.ImageDispatchEnable = TRUE;
        }

        /* All good */
        Status = STATUS_SUCCESS;
    }

    /* Release the lock and return status */
    KiReleaseProcessLock(&ProcessLock);
    return Status;
}

/* EOF */