This could happen if the BCB was marked dirty previously.
Marking the VACB dirty on unpin could lead to a double write of
the VACB, even if it was already clean.
Indeed, now that setting a BCB dirty also marks its VACB dirty,
the VACB can be flushed in between by the lazy writer.
Unlike the VACB state, the BCB state is not reset on VACB flush.
Thus, on unpin, even if the VACB had already been flushed, we were
setting the dirty state again, causing the VACB to be flushed once more.
This could bring a small performance penalty, though it remains
limited since this path is mostly used for FS metadata.
It could possibly also lead to metadata corruption, but that is
less likely.
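A minimal sketch of the corrected unpin path, assuming illustrative names
(CcUnpinInternalBcb, INTERNAL_BCB, CcRosMarkDirtyVacb, CcRosReleaseVacb)
rather than the exact ReactOS internals:

    /* Illustrative only: the dirty state was already propagated to the VACB
     * when CcSetDirtyPinnedData() was called, so the unpin path must not
     * re-apply it. */
    VOID
    CcUnpinInternalBcb(PINTERNAL_BCB iBcb)
    {
        /* Do NOT do "if (iBcb->Dirty) CcRosMarkDirtyVacb(iBcb->Vacb);" here:
         * the lazy writer may already have flushed (and cleaned) the VACB,
         * and re-marking it dirty would only make it be written out twice. */
        CcRosReleaseVacb(iBcb->Vacb->SharedCacheMap,
                         iBcb->Vacb,
                         TRUE,    /* Valid */
                         FALSE,   /* Dirty: not forced back on unpin */
                         FALSE);  /* Mapped */
    }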
CORE-15954
* Add an NDK header to define INIT_FUNCTION/INIT_SECTION globally
* Use __declspec(allocate(x)) and __declspec(code_seg(x)) on MSVC versions that support it (see the sketch after this list)
* Use INIT_FUNCTION on functions only and INIT_SECTION on data only (required by MSVC)
* Place INIT_FUNCTION before the return type (required by MSVC)
* Make sure declarations and implementations share the same modifiers (required by MSVC)
* Add a global linker option to suppress warnings about defined but unused INIT section
* Merge INIT section into .text in freeldr
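A hedged sketch of what such a global header could define; the section name,
version check, pragma attributes and the ExpInitSomething example are
assumptions, not the actual NDK header:

    /* Hypothetical illustration only -- not the actual NDK header. */
    #if defined(_MSC_VER) && (_MSC_VER >= 1800) /* assumed cutoff for __declspec(code_seg) */
      /* The section must be declared before __declspec(allocate) can use it. */
      #pragma section("INIT", read, write, discard)
      #define INIT_SECTION  __declspec(allocate("INIT"))   /* data only */
      #define INIT_FUNCTION __declspec(code_seg("INIT"))   /* functions only */
    #elif defined(__GNUC__)
      #define INIT_SECTION  __attribute__((section("INIT")))
      #define INIT_FUNCTION __attribute__((section("INIT")))
    #else
      #define INIT_SECTION
      #define INIT_FUNCTION
    #endif

    /* On MSVC, INIT_FUNCTION must come before the return type, and the
     * declaration must carry the same modifiers as the implementation: */
    INIT_FUNCTION VOID NTAPI ExpInitSomething(VOID);
    INIT_FUNCTION VOID NTAPI ExpInitSomething(VOID) { /* ... */ }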
For now, this is just a split between the scan and the flush, which
were both done during the lazy scan previously.
The lazy scan shouldn't perform any write operation, but only
queue a write-behind operation.
Our implementation is far from the original: it seems
our lazy scan should queue one write-behind operation per
shared cache map, whereas right now we only perform a global
operation.
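A rough sketch of the split described above, assuming a plain kernel work
item is used; CcLazyWriteScan and CcWriteBehind are illustrative names, not
necessarily the real routines:

    static WORK_QUEUE_ITEM CcWriteBehindWorkItem;

    /* The actual flushing of dirty data happens in the worker... */
    VOID
    NTAPI
    CcWriteBehind(PVOID Context)
    {
        UNREFERENCED_PARAMETER(Context);
        /* ... walk the dirty VACBs / shared cache maps and flush them ... */
    }

    /* ...while the lazy scan itself only decides that a write-behind is
     * needed and queues it (ideally one work item per shared cache map). */
    VOID
    CcLazyWriteScan(VOID)
    {
        ExInitializeWorkItem(&CcWriteBehindWorkItem, CcWriteBehind, NULL);
        ExQueueWorkItem(&CcWriteBehindWorkItem, CriticalWorkQueue);
    }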
This avoids corruption when a file is slightly grown and then read afterwards.
Up to now, FSDs were reading zeroes instead of the expected data, causing corruption.
This fixes MS FastFAT being unable to mount a FAT volume in ReactOS and corrupting
the FAT.
This also fixes the CcSetFileSizes kmtest tests.
This is based on a patch by Thomas Faber.
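For context, a hedged driver-side sketch of the path being fixed: an FSD
slightly grows a cached file and tells Cc about it, expecting later cached
reads to return real data. FsdExtendFile is a placeholder; only
CcSetFileSizes and CC_FILE_SIZES are the documented API.

    #include <ntifs.h>

    VOID
    FsdExtendFile(
        _In_ PFILE_OBJECT FileObject,
        _In_ LONGLONG NewFileSize)
    {
        CC_FILE_SIZES FileSizes;

        FileSizes.AllocationSize.QuadPart = ROUND_TO_PAGES(NewFileSize);
        FileSizes.FileSize.QuadPart = NewFileSize;
        FileSizes.ValidDataLength.QuadPart = NewFileSize;

        /* Tell Cc the file grew: cached reads within the new size must now
         * return the data that is actually on disk, not zeroes. */
        CcSetFileSizes(FileObject, &FileSizes);
    }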
CORE-11819
This incorrect behavior was leading to a call at too high an IRQL for paged code.
This was triggered by MS FastFAT.
The ReleaseFromLazyWrite call was already being made correctly in that regard.
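To illustrate why the IRQL matters, a hedged sketch of FSD-side lazy-write
callbacks (placeholder names, an ERESOURCE passed as context); such routines
are typically paged code, so Cc must call them at low IRQL:

    #include <ntifs.h>

    BOOLEAN
    NTAPI
    FsdAcquireForLazyWrite(
        _In_ PVOID Context,
        _In_ BOOLEAN Wait)
    {
        /* Paged code: being called at DISPATCH_LEVEL (e.g. with a spinlock
         * held) is exactly the kind of bug described above. */
        PAGED_CODE();
        return ExAcquireResourceSharedLite((PERESOURCE)Context, Wait);
    }

    VOID
    NTAPI
    FsdReleaseFromLazyWrite(
        _In_ PVOID Context)
    {
        PAGED_CODE();
        ExReleaseResourceLite((PERESOURCE)Context);
    }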
CORE-11819
This avoids performing a double free (even though that is hidden by the
fact that we use lookaside allocations for VACBs), and it avoids freeing
memory at an uninitialized address.
We don't care about references here: the VACB was just allocated, never
linked, and we are its only user.
CORE-15413
Now we make sure that we update the reference count and the BCB list
membership together, with the BCB lock held.
This avoids race conditions where the BCB was removed from the
list and then referenced again, leading to inconsistencies in memory
and crashes later on.
This could notably be triggered while building ReactOS on ReactOS
(one would call this a regression).
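A hedged sketch of the pattern now enforced; the types, field names and pool
tag below (PROS_SHARED_CACHE_MAP, PINTERNAL_BCB, BcbSpinLock, ...) are
placeholders, not the actual Cc internals:

    VOID
    CcRosDereferenceBcb(PROS_SHARED_CACHE_MAP SharedCacheMap, PINTERNAL_BCB Bcb)
    {
        KIRQL OldIrql;
        BOOLEAN Delete = FALSE;

        KeAcquireSpinLock(&SharedCacheMap->BcbSpinLock, &OldIrql);
        if (--Bcb->RefCount == 0)
        {
            /* Unlink while still holding the lock, so no other thread can
             * look the BCB up and re-reference it after this point. */
            RemoveEntryList(&Bcb->BcbEntry);
            Delete = TRUE;
        }
        KeReleaseSpinLock(&SharedCacheMap->BcbSpinLock, OldIrql);

        if (Delete)
        {
            ExFreePoolWithTag(Bcb, 'cBcC');
        }
    }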
CORE-15235
We won't reuse a BCB created for mapping; we now have
our own dedicated BCB.
This allows a somewhat cleaner implementation of CcPinMappedData().
We now handle race conditions when creating BCBs, to avoid
having duplicated BCBs per shared cache map.
Also, we now specify, already at BCB creation time, whether the
memory will be pinned, to avoid potential duplication or
BCB misuse.
If a BCB already exists for the mapped range, return it instead of
creating a new one. This will (at some point) allow us to be more
consistent in the case of concurrent mappings.
This fixes a few CcMapData tests.
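A hedged sketch of that lookup-before-create behavior; every name below
(CcRosGetBcb, CcRosLookupBcb, CcRosCreateBcb, BcbSpinLock, ...) is a
placeholder for whatever the real internal helpers are:

    static
    PINTERNAL_BCB
    CcRosGetBcb(
        PROS_SHARED_CACHE_MAP SharedCacheMap,
        PROS_VACB Vacb,
        PLARGE_INTEGER FileOffset,
        ULONG Length,
        BOOLEAN Pinned)
    {
        PINTERNAL_BCB Bcb;
        KIRQL OldIrql;

        KeAcquireSpinLock(&SharedCacheMap->BcbSpinLock, &OldIrql);

        /* Reuse an existing BCB covering this range, so that concurrent or
         * repeated mappings share a single BCB... */
        Bcb = CcRosLookupBcb(SharedCacheMap, FileOffset, Length);
        if (Bcb != NULL)
        {
            Bcb->RefCount++;
        }
        else
        {
            /* ...and only create one, already knowing whether it will be
             * pinned, if none exists. Creating it under the same lock keeps
             * two racing threads from producing duplicate BCBs. */
            Bcb = CcRosCreateBcb(SharedCacheMap, Vacb, FileOffset, Length, Pinned);
        }

        KeReleaseSpinLock(&SharedCacheMap->BcbSpinLock, OldIrql);
        return Bcb;
    }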
Given the current ReactOS implementation, a VACB can be pinned
several times, each time with a different BCB. In the next commits, a single
BCB will be able to be pinned several times; that would
lead to severe inconsistencies in counting and thus to corruption.
This could be triggered when attempting to read/write really big
files. It was causing an attempt to read 0 bytes in Cc, leading to
assertion failures in the kernel (and a corrupted file).
CORE-15067
Currently, our CcMapData() behavior (the same goes for CcPinRead()) is broken
and is the total opposite of what the Windows kernel does. By default, the latter
will let you map a view in memory without even attempting to bring its
data into memory. On first access, there will be a fault and the data will
be read from the hardware and brought into memory. If you want to force a read
on mapping/pinning, you have to set the MAP_NO_READ (or PIN_NO_READ) flag,
in which case the kernel will fault on your behalf (hence the need for MAP_WAIT/PIN_WAIT).
On ReactOS, by default, on mapping (and thus pinning), we force a view
read so that the data is in memory. The way our cache memory is managed at the
moment does not seem to allow faulting on invalid access, and if we don't force
the read, the memory content will just be zeroed.
So, trying to match Windows behavior, CcMapData() will now enforce
the MAP_NO_READ flag by default and warn once about this behavior change.
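For reference, a driver-side usage sketch of the API being discussed;
FsdReadMetadata is a placeholder, and only the documented
CcMapData/CcUnpinData calls and the MAP_WAIT flag are assumed:

    #include <ntifs.h>

    NTSTATUS
    FsdReadMetadata(
        _In_ PFILE_OBJECT FileObject,
        _In_ LONGLONG Offset,
        _In_ ULONG Length)
    {
        LARGE_INTEGER FileOffset;
        PVOID Bcb;
        PVOID Buffer;

        FileOffset.QuadPart = Offset;

        /* MAP_WAIT: block until the view is mapped. Whether the data is
         * actually read here or only faulted in on first access is exactly
         * the behavioral difference described above. */
        if (!CcMapData(FileObject, &FileOffset, Length, MAP_WAIT, &Bcb, &Buffer))
        {
            return STATUS_UNSUCCESSFUL;
        }

        /* ... read from Buffer; the first access may fault the data in ... */

        CcUnpinData(Bcb);
        return STATUS_SUCCESS;
    }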
It's based on the code that was in the CcPinRead() implementation. It
made no sense to have CcPinMappedData() do nothing while implementing
everything in CcPinRead(). Indeed, drivers (starting with MS drivers)
can map data first and pin it afterwards with CcPinMappedData(); this was
leading to incorrect behavior with our previous no-op implementation.
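A sketch of the map-then-pin pattern this makes work; FsdUpdateRecord is a
placeholder name, while the Cc calls are the documented ones:

    #include <ntifs.h>

    VOID
    FsdUpdateRecord(
        _In_ PFILE_OBJECT FileObject,
        _In_ PLARGE_INTEGER FileOffset,
        _In_ ULONG Length)
    {
        PVOID Bcb;
        PVOID Buffer;

        /* Map first... */
        if (!CcMapData(FileObject, FileOffset, Length, MAP_WAIT, &Bcb, &Buffer))
        {
            return;
        }

        /* ...then pin the previously mapped range before modifying it.
         * With the old no-op implementation this did nothing, which broke
         * drivers relying on it. */
        if (CcPinMappedData(FileObject, FileOffset, Length, PIN_WAIT, &Bcb))
        {
            /* ... modify Buffer ... */
            CcSetDirtyPinnedData(Bcb, NULL);
        }

        CcUnpinData(Bcb);
    }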
kmtest:NtCreateSection calls CcInitializeCacheMap with a
NULL value for SectionObjectPointers. This causes an exception when
trying to access it, which can be handled gracefully in Windows.
However, accessing it while holding the ViewLock means the lock will not be
released, leading to an APC_INDEX_MISMATCH bugcheck.
This solves the problem by allocating the SharedCacheMap outside the lock,
then freeing it again under the lock if another thread has updated the SharedCacheMap
in the meantime. This is also What Windows Does(TM).
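A hedged sketch of that allocate-outside-the-lock scheme; the structure,
lock, pool tag and error handling are placeholders, not the actual Cc
internals:

    /* Placeholders standing in for the real internals: */
    typedef struct _ROS_SHARED_CACHE_MAP
    {
        LIST_ENTRY CacheMapVacbListHead; /* real fields omitted */
    } ROS_SHARED_CACHE_MAP, *PROS_SHARED_CACHE_MAP;

    static KSPIN_LOCK ViewLock; /* assumed initialized elsewhere */

    NTSTATUS
    CcRosInitializeSharedCacheMap(
        _In_ PFILE_OBJECT FileObject)
    {
        PSECTION_OBJECT_POINTERS Pointers;
        PROS_SHARED_CACHE_MAP NewMap;
        BOOLEAN Published = FALSE;
        KIRQL OldIrql;

        /* Touch SectionObjectPointer before taking ViewLock: if it is NULL
         * (as with kmtest:NtCreateSection), the exception is raised here,
         * with no lock held, and can be handled gracefully. */
        Pointers = FileObject->SectionObjectPointer;
        if (Pointers->SharedCacheMap != NULL)
        {
            return STATUS_SUCCESS;
        }

        /* Allocate outside the lock as well. */
        NewMap = ExAllocatePoolWithTag(NonPagedPool, sizeof(*NewMap), 'mScC');
        if (NewMap == NULL)
        {
            return STATUS_INSUFFICIENT_RESOURCES;
        }

        KeAcquireSpinLock(&ViewLock, &OldIrql);
        if (Pointers->SharedCacheMap == NULL)
        {
            /* We won the race: publish our map. */
            Pointers->SharedCacheMap = NewMap;
            Published = TRUE;
        }
        KeReleaseSpinLock(&ViewLock, OldIrql);

        if (!Published)
        {
            /* Another thread set up the shared cache map in the meantime:
             * free ours again. */
            ExFreePoolWithTag(NewMap, 'mScC');
        }

        return STATUS_SUCCESS;
    }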
This avoids a really nasty race condition in our cache controller where
two concurrent callers could try to initialize the cache on the same file.
This had two nasty effects: the first shared cache map was simply leaked, erased
by the second one, and the private cache map, allocated on top of the first shared
cache map, could not be freed, leading to an Mm BSOD (a free in the middle of
a block).
This was often triggered while building ReactOS on ReactOS (with multiple threads).
With this patch, I can no longer crash while building ReactOS.
CORE-14634
In the lazy writer run, first post the items that are queued for it.
Only then start executing deferred writes, if any.
If there were any, immediately reschedule a lazy writer run, to keep
Cc warm and to make it dequeue writes faster in high-IO situations.
To make the second lazy writer run happen faster, we keep our state active and
use the short delay (1s) instead of the standard idle one (3s).
Recent changes seem to show that it is not
required to own the VACB exclusively to be able
to flush it.
This commit goes with f2c44aa and fixes the
last issues seen when copying huge files.
There are no longer BSODs (be it in Mm or Cc).
With 750MB of RAM, I could extract a 2GB file from
a 53MB archive and copy a 2.5GB file from a VBox
share to the disk. Note that writes are often
deferred, so while the copy works, it's not that fast for now.
Note that it also brings some beloved behavior from
Windows: copy times are now totally unreliable when
writes are deferred. Short remaining times when
actively copying, high remaining times when deferred
writes are in action. And it goes back and forth between both... Sorry! ;-)
https://xkcd.com/612/
CORE-9696
CORE-11175
The same mechanism exists in Windows (even though their Cc
is way different from ours...): when Cc is
out of memory (in their case, out of VACBs), it
starts scavenging old & unused VACBs to free
some of the memory.
It's useful when we're performing operations on big
files: we may run out of memory in which to map
VACBs for them, so we start scavenging VACBs to free
some of that memory.
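A purely illustrative sketch of such a scavenging pass, assuming placeholder
names and fields (ROS_VACB, VacbLruListHead, VacbLruListLock,
CcRosInternalFreeVacb, ...) rather than the real internals:

    ULONG
    CcRosScavengeVacbs(
        _In_ ULONG Target)
    {
        ULONG Freed = 0;
        LIST_ENTRY ToFree;
        PLIST_ENTRY Entry;
        PROS_VACB Vacb;
        KIRQL OldIrql;

        InitializeListHead(&ToFree);

        /* Walk the LRU list and unlink old, unreferenced, clean VACBs. */
        KeAcquireSpinLock(&VacbLruListLock, &OldIrql);
        Entry = VacbLruListHead.Flink;
        while (Entry != &VacbLruListHead && Freed < Target)
        {
            Vacb = CONTAINING_RECORD(Entry, ROS_VACB, VacbLruListEntry);
            Entry = Entry->Flink;

            if (Vacb->ReferenceCount == 0 && !Vacb->Dirty)
            {
                RemoveEntryList(&Vacb->VacbLruListEntry);
                InsertTailList(&ToFree, &Vacb->VacbLruListEntry);
                Freed++;
            }
        }
        KeReleaseSpinLock(&VacbLruListLock, OldIrql);

        /* Unmap and free the collected VACBs outside the spinlock. */
        while (!IsListEmpty(&ToFree))
        {
            Entry = RemoveHeadList(&ToFree);
            Vacb = CONTAINING_RECORD(Entry, ROS_VACB, VacbLruListEntry);
            CcRosInternalFreeVacb(Vacb);
        }

        return Freed;
    }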
With this, I am able to install Qt 4.8.6 with 2.5GB of RAM,
with scavenging kicking in when needed!
CORE-12081
CORE-14582
Adjusting the refcount and enabling lazy-write for pinned
VACBs actually makes it more efficient, often purging
data to disk and reducing memory stress on the system.
This is required for deferring writes.
This commit unfortunately (?) reverts a previous revert.
CORE-12081
CORE-14582
CORE-14313
When no name is set in the file object, try to read the name
from the FCB. We only support our own FastFAT FCBs for now.
This is clearly a hack, but it's for a kdbg command, so ;-)
It seems that when a process is killed, some VACBs may be deleted while
still mapped. With the current reference counting, they will actually
not be deleted but leaked, and an ASSERT will be triggered.
CORE-14578