gamemode is set according to the name of the main wad (cf. d_main.c), i.e.:
- doom1.wad: (shareware doom1, ep1 only) gamemode == shareware
- doom.wad: (registered doom1, ep1-3) gamemode == registered
- doomu.wad: (ultimate doom, ep1-4) gamemode == retail
- doom2.wad, plutonia.wad, tnt.wad: gamemode == commercial
most doom.wads distributed online are, in fact, ultimate doom.
if your ultimate doom wad is correctly named doomu.wad, some switches in
episodes 2-4 won't swap their texture when toggled, because
p_switch.c:P_InitSwitchList() is only checking for registered doom1.
easy way to test: play demo2 in either registered or ultimate doom. the player
flips a switch right at the beginning of the demo; if the main wad is called
doomu.wad, the switch won't change its texture.
% games/doom -playdemo demo2
if you rename the wad to doom.wad or alter d_main.c:IdentifyVersion, the switch
will swap its texture like it should.
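a minimal sketch of the kind of change needed in P_InitSwitchList(), following
the episode-selection logic in the stock source (illustrative, not the exact
patch):

	episode = 1;
	if(gamemode == registered || gamemode == retail)	/* was: registered only */
		episode = 2;
	else if(gamemode == commercial)
		episode = 3;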
this function is used when playing demos from external lumps; without this
patch, the game just exits.
to test this, download a demo lump from somewhere, and play it with -playdemo %s
where %s is the file's name, without the .lmp extension:
(note that this one is a doom 2 demo, so it requires doom2.wad)
% hget http://doomedsda.us/lmps/945/3/30nm2939.zip | unzip -sv
extracting 30nm2939.LMP
extracting 30nm2939.txt
% mv 30nm2939.LMP 30nm2939.lmp # the lump filename check is case-sensitive
% games/doom -playdemo 30nm2939
the game exits when the demo ends. also, note that this demo will desync on
map06 (the crusher), because of an unrelated bug (that's another patch :>)
note: filelength() returns vlong, but file lengths for doom lumps are ints.
however, this might be used elsewhere (networking), so i'd leave it this way.
when a cookie has domain=example.com, we implicitly add a dot to the domain
name, which made us reject the cookie because the request domain
"example.com" != ".example.com". fix by making isdomainmatch() skip the
implicit dot in the pattern before the string comparison.
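a simplified sketch of the fix (the real isdomainmatch() also does suffix
matching for subdomains, elided here):

	static int
	isdomainmatch(char *name, char *pattern)
	{
		if(cistrcmp(name, pattern) == 0)
			return 1;
		/* skip the implicitly added dot so "example.com" matches ".example.com" */
		if(pattern[0] == '.' && cistrcmp(name, pattern+1) == 0)
			return 1;
		/* ... suffix match for subdomains elided ... */
		return 0;
	}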
when there were multiple syscalls returning out of order, it would print
blank lines between the exits. avoid this by remembering whether the last
char written was a newline and conditionally inserting a newline on an
out-of-order return.
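roughly the idea, as a sketch (outoforder, msg and lastc are illustrative
names, not the actual ratrace code; output written to stderr here for
concreteness):

	static int lastc = '\n';

	/* only break the line for an out-of-order return if the
	 * previous write didn't already end in a newline */
	if(outoforder && lastc != '\n')
		fprint(2, "\n");
	fprint(2, "%s", msg);
	if(*msg != 0)
		lastc = msg[strlen(msg)-1];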
sometimes, ratrace would return before all messages had been printed. make
the writer process the parent so ratrace won't exit until all readers are
finished, avoiding the problem.
we already export mntauth() and mntversion(), so why not stop being sneaky
and just export mntattach(), so that bindmount() and devshr can call it
directly with proper argument checking.
we can also avoid handling #M attach specially in namec() by having devmnt's
attach function do error(Enoattach).
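a sketch of what the #M attach entry point then looks like (the function
name is illustrative):

	static Chan*
	noattach(char*)
	{
		error(Enoattach);
		return nil;	/* not reached */
	}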
to avoid double caching, attachimage() and setswapchan() clear the CCACHE
flag on the channel, but this keeps the read ahead state of the cache around
(until the chan gets closed), so also call cclunk() to detach the mcp and
free the read ahead state.
avoid the call to cread() when the CCACHE flag is clear.
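the gist, as a sketch (the surrounding attachimage()/setswapchan() code is
elided):

	/* stop caching on this channel and also drop the cache state
	 * (mcp, read ahead) it has already accumulated */
	c->flag &= ~CCACHE;
	cclunk(c);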
use the actual iounit returned from Ropen/Rcreate to chunk reads and writes
instead of c->mux->msize-IOHDRSZ.
don't preallocate the rpc buffers to msize; most 9p requests are rather small
(except Twrite, of course), so we allocate the buffer on demand in mountio(),
with some rounding to avoid frequent reallocations.
avoid malloc()/free() while holding the mntalloc lock.
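a sketch of the chunking part, assuming the negotiated iounit ends up on the
Chan and the usual loop in mntrdwr():

	/* cap each 9p message at the iounit negotiated in Ropen/Rcreate
	 * instead of at the transport's msize-IOHDRSZ */
	nr = n;
	if(nr > c->iounit)
		nr = c->iounit;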
this changes devmnt, adding a mntrahread() function and some helpers for it
to do pipelined sequential read ahead for the mount cache.
basically, cread() calls mntrahread() with a Mntrah structure, which figures
out whether we were reading sequentially and, if that's the case, issues
reads of c->iounit size in advance.
the read ahead state (Mntrah) is kept in the mount cache so we can handle
(read ahead) cache invalidation in the presence of writes.
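the shape of the heuristic, sketched (rah, nextoff and readahead() are
illustrative names, not the kernel's):

	/* a sequential reader lands exactly on the offset we predicted;
	 * in that case pipeline the next c->iounit sized chunk */
	if(off == rah->nextoff)
		readahead(c, off + n, c->iounit);
	rah->nextoff = off + n;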
Tflush handling was wrong: we cannot respond to the old request if we have
not actually removed the req from the in-progress block queue.
when reads are issued concurrently, we have to set b->len before the block is
inserted into the inprogress list; otherwise findblock() is unable to find it
and no requests can be queued on the block. this caused the same offset to be
downloaded multiple times.
set the errstr in getrange() so that, in case of an error, we don't get some
random previous error string.
as the Fgrp can be shared with other processes, we have to
recheck the fd index after locking the Fgrp in fdclose()
to make sure not to read beyond the bounds of the fd array.
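sketched, with f being the Fgrp and c the Chan (a minimal version of the
recheck):

	lock(f);
	/* the fd table may have shrunk since the caller checked,
	 * so re-validate the index under the lock */
	if(fd < 0 || fd > f->maxfd || (c = f->fd[fd]) == nil){
		unlock(f);
		return;
	}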
using the user buffer has a race where the user can modify
the buffer from another process before it is copied into the cache.
this allows poisoning the cache for every file the user has read access to.
instead, we update the cache from kernel memory.
*after* writing, the directory tree gets alphabetically sorted for the path
table. this causes the data to not be in the same order as it was written,
causing seeks when tarring up the filesystem.
so instead, write the files in alphabetical order as well, to better match
the directory sorting.
data on the disk is laid out sequentially and directory information is at the
end of the disk. we want to keep data and metadata separated so that reading
large sequential files will not evict the directory information from the
cache, causing long seeks.
for that, we tag the clusters (an eighth for metadata, and the rest for data)
and getbuf() will only evict clusters of the same tag.
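conceptually, the eviction in getbuf() becomes tag-aware (a sketch; p, lru
and tag are illustrative names, not the actual code):

	/* only recycle a cluster carrying the requested tag, so a long
	 * sequential data read can never push directory clusters out */
	for(p = lru; p != nil; p = p->next)
		if(p->tag == tag)
			break;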
tests tarring up 9front.iso show the following: lowering the cluster size
back to 128k avoids over half the reads (837888 sectors read with 512k
clusters vs. 347712 sectors with 128k).
foo.c includes bar/bar.h, which includes "baz.h"; it wants bar/baz.h
meanwhile, it also includes meh/quux.h, which includes "baz.h"; it wants meh/baz.h
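as a concrete layout (illustrative):

	foo.c:		#include "bar/bar.h"
			#include "meh/quux.h"
	bar/bar.h:	#include "baz.h"	/* must resolve to bar/baz.h */
	meh/quux.h:	#include "baz.h"	/* must resolve to meh/baz.h */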