this makes the virtual "memdisk" set up by SYSLINUX accessible to
the kernel, allowing the iso to be loaded via TFTP and
started without any ethernet or disk drivers available.
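on the SYSLINUX side this is the usual MEMDISK setup; a pxelinux.cfg
entry along these lines (the label and image name are made up) loads
the iso over TFTP and keeps it in memory for the kernel to find:

	LABEL 9front
		KERNEL memdisk
		INITRD 9front.iso
		APPEND iso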
this driver makes regions of physical memory accessible as a disk.
to use it, ramdiskinit() has to be called before confinit(), so
that conf.mem[] banks can be reserved. currently, only the pc and pc64
kernels use it, but otherwise the implementation is portable.
ramdisks are not zeroed when allocated, so that the contents are
preserved across warm reboots.
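as a minimal sketch, the ordering in a port's main() looks like this
(everything else is elided):

	void
	main(void)
	{
		/* ... early machine setup and memory bank discovery ... */
		ramdiskinit();	/* claim conf.mem[] banks for ramdisks first */
		confinit();	/* then size kernel/user pools from the remaining banks */
		/* ... rest of kernel startup ... */
	}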
to avoid wasting memory, physical segments no longer allocate Page
structures or populate the segment's ptes. there is also a new SG_CACHED
attribute.
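a hedged sketch of how a driver could describe such a segment; Physseg,
addphysseg() and SG_PHYSICAL are the existing kernel interfaces, SG_CACHED
is the new attribute, and the name, address and size are made up:

	static Physseg seg;

	seg.attr = SG_PHYSICAL|SG_CACHED;	/* cached mapping of raw memory */
	seg.name = "ramdisk0";			/* hypothetical attach name */
	seg.pa = 0x40000000;			/* hypothetical base address */
	seg.size = (64*1024*1024)/BY2PG;	/* size in pages */
	if(addphysseg(&seg) < 0)
		panic("addphysseg");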
fpurestore() unconditionally changed fpstate to FPinactive after the
kernel had used the FPU. but in the FPinit case the registers are not
saved by mathemu(), so all-zero registers would be loaded once userspace
used the FPU again, leaving the process with a wrong MXCSR value.
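a sketch of the intended behaviour; the signature is an assumption
(fpurestore() taking the state its fpusave() counterpart returned),
the rest follows the description above:

	void
	fpurestore(int state)
	{
		if(state == FPactive){
			/* the interrupted process had live fpu registers, saved
			 * before the kernel touched the fpu, so they can be
			 * reloaded lazily on the next fpu trap */
			up->fpstate = FPinactive;
		}else{
			/* FPinit: mathemu() saved nothing, so forcing FPinactive
			 * would later reload all-zero registers and a bogus MXCSR;
			 * keep the previous state instead */
			up->fpstate = state;
		}
	}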
the index overflow check was wrong because it used the shifted value.
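as a generic illustration (names are made up, not the actual code): the
shift can wrap before the comparison happens, so the check has to be
done on the unshifted index.

	if((blkno << shift) >= limit)	/* wrong: high bits already lost */
		error(Ebadarg);

	if(blkno >= (limit >> shift))	/* right: compare the unshifted index */
		error(Ebadarg);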
due to linux omitting the final Z(4) in the NTLMv2 reply, and the
need for the windom for LMv2 authentication, there is now a new
AuthNTLM ticket request with length and dom fields.
in ntlmv2, the client will retry the challenge response with a bunch
of different domain names while reusing the same server challenge, so we
have to make retries work with factotum and the auth server.
also, windows 7 with compatlevel=4 sends an all-zeros LM response.
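for reference, the reason the windom matters: the v2 hash is keyed on
the user name and the domain, so factotum cannot compute the LMv2
response without the dom field. a rough sketch using libsec's
hmac_md5(); the toutf16upper() helper and the buffer sizes are
assumptions:

	uchar v2hash[MD5dlen], resp[MD5dlen];
	uchar id[2*256], chal[8+8];
	int n;

	/* v2 hash = HMAC-MD5(nt hash, utf-16le(upper(user) + windom)) */
	n = toutf16upper(id, sizeof(id), user, windom);
	hmac_md5(id, n, nthash, MD4dlen, v2hash, nil);

	/* LMv2 response = HMAC-MD5(v2hash, server challenge || client nonce) */
	memmove(chal, serverchal, 8);
	memmove(chal+8, clientnonce, 8);
	hmac_md5(chal, sizeof(chal), v2hash, MD5dlen, resp, nil);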
the dnsquery() library function should not start mounting /srv/dns on
its own. this problem arises only for ndb/cs, as it is started before
ndb/dns.
the issue with mounting /srv/dns before /net is that when ndb/cs attempts
to read the list of interfaces it accesses /net/ipifc, which triggers
an rpc to ndb/dns as it sits on top of the mount. this can yield a deadlock
when ndb/dns blocks its 9p loop waiting for requests to complete on
a refresh while those requests are stuck waiting for ndb/cs to translate
a dial string for announce().
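a minimal sketch of the resulting dnsquery() behaviour (the helper
name and the buffer size are illustrative only):

	/* open the dns service file under the network root; if it is not
	 * there, fail instead of mounting /srv/dns ourselves - that is
	 * left to the caller's namespace */
	static int
	opendns(char *netroot)
	{
		char buf[64];

		snprint(buf, sizeof(buf), "%s/dns", netroot);
		return open(buf, ORDWR);
	}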
dblookup() used to return only the first matching entry. in
the case of ipv6, we want all entries returned to get both v4
and v6 addresses... and these are not necessarily in
the same entry (see /lib/ndb/common). note also that this makes
it behave the same as in cachedb mode, which reads in the
whole database.
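the gist of the change, sketched with libndb's ndbsearch(), ndbsnext()
and ndbconcatenate() (this is a simplification, not the actual
dblookup()):

	/* return every entry matching attr=val, not just the first one */
	static Ndbtuple*
	lookupall(Ndb *db, char *attr, char *val)
	{
		Ndbs s;
		Ndbtuple *t, *all;

		all = nil;
		for(t = ndbsearch(db, &s, attr, val); t != nil; t = ndbsnext(&s, attr, val))
			all = ndbconcatenate(all, t);
		return all;
	}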
we do not know whether v4 or v6 routing works, so the simplest
approach is just to query v4 and v6 nameservers in parallel. this is
done by changing serveraddrs() to return one address type at a time,
and by making sure we get at least one v4 and one v6 address
each round.
get rid of the weighted timeout code... there were too many
assumptions. instead, we give each round a 500ms timeout (or 1 second
in patient mode) and honor the maximum query time.
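roughly, the query loop now behaves like the sketch below; everything
except serveraddrs() is a made-up name, and the real code drives this
through the dns request machinery rather than a plain loop:

	enum {
		Roundms		= 500,		/* normal per-round timeout */
		Patientms	= 1000,		/* per-round timeout in patient mode */
	};

	timeout = patient? Patientms: Roundms;
	deadline = nowms() + maxquerytm;	/* still honor the total query time */
	while(nowms() < deadline && !answered(qp)){
		a4 = serveraddrs(qp, V4);	/* at least one v4 address ... */
		a6 = serveraddrs(qp, V6);	/* ... and one v6 address per round */
		sendquery(qp, a4);		/* query both families in parallel */
		sendquery(qp, a6);
		waitreply(qp, timeout);		/* one flat timeout per round */
	}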