this can happen on the final sync when the mailbox is being freed:
freembox -> mboxdecref -> syncmbox -> wridxfile -> pridx -> insurecache -> msgincref
syncmbox() used to enter the mailbox into the hash tree to
update the qid.vers. this is wrong when we are doing the final
sync before freeing the mailbox as the hash reference has already
been removed by freemailbox().
also avoid adding hash entries in cachehash() for mails of the
about-to-be-freed mailbox.
add a function to check the refcounts for Mailbox and Message on a fid
use the full 64 bits of the qid.path, so we can use the full 32 bits for the id
instead of only maintaining a refcount for the top message, msgincref() now
adds a reference to all of a message's parents, including the message itself
and the top message. then the recursive delmessage() can check that all the
parts have a zero refcount.
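a minimal sketch of the idea in illustrative Plan 9 C; the names only mirror
the description above, not the actual upas/fs declarations:

#include <u.h>
#include <libc.h>

typedef struct Message Message;
struct Message
{
	Message	*whole;	/* enclosing message; nil for the top message */
	Message	*part;	/* first sub-part */
	Message	*next;	/* next sibling part */
	int	refs;
};

void
msgincref(Message *m)
{
	for(; m != nil; m = m->whole)	/* self, all parents, up to the top message */
		m->refs++;
}

int
partsunreferenced(Message *m)	/* check used by a recursive delete */
{
	Message *p;

	if(m->refs != 0)
		return 0;
	for(p = m->part; p != nil; p = p->next)
		if(!partsunreferenced(p))
			return 0;
	return 1;
}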
remove the Fid.mtop field; it was never used.
make sure deletion and flag changes only affect the top message.
cachefree(): only look for the top message in the lru; sub-parts are never
added to the cache.
use the nparts field when reading a sub-part of an existing message, so
that we parse the index correctly in case the number of parts somehow
changed.
messages marked as Deleted but still in inbox should be written
to the index.
instead of having the application processors spin in mpshutdown()
with the mmu on, subject to reboot() overwriting kernel text
and modifying page tables, park the application processors in the
rebootcode idle loop with the mmu off.
to reproduce:
u8int x, y;
x = 0xff;
y = 0xc0;
if((s8int)(x & y) >= 0)
	print("help\n");
this compiles correctly, but the compiler prints a warning:
warning: test.c:11 useless or misleading comparison: UINT >= 0
the issue is that compar() unconditionally skipped over
all left casts, ignoring the case where a cast would sign
extend the value.
the new code only skips over a cast when the original
type's width is smaller than the cast result's, or when they
have equal width and the same signedness. the effective
left-hand-side type is therefore that of the last truncation
or sign extension.
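a hedged illustration of that rule; the function below is not the actual
compar() code, just the stated condition written out:

int
canskipcast(int origwidth, int origsigned, int castwidth, int castsigned)
{
	if(origwidth < castwidth)
		return 1;	/* widening cast: skip it */
	if(origwidth == castwidth && origsigned == castsigned)
		return 1;	/* same width and signedness: nothing changes */
	return 0;	/* truncation or sign extension: the cast's type wins */
}

in the reproducer above, the operand of the cast is 8 bits wide and unsigned
while s8int is 8 bits wide and signed: equal width but different signedness,
so the cast is not skipped, and (s8int)0xc0 is -64, making the >= 0 test
meaningful rather than useless.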
to get $"1 right, remove Xqdol() and instead implement it in
terms of the Xdol() instruction, using the new Xqw() instruction
to quote the resulting list.
a deadlock has been observed with samterm (thanks burnzez):
the mouse ioproc is stuck sending on the resize channel,
while the mouse consumer is stuck in a readmouse() loop
waiting for the user to sweep a rectangle:
recv(v=0x42df50)+0x28 /sys/src/libthread/channel.c:321
readmouse(mc=0x42df50)+0x54 /sys/src/libdraw/mouse.c:34
getrect(.ret=0x41bce0,but=0x4,mc=0x42df50)+0x62 /sys/src/libdraw/getrect.c:49
r=0x41bc70
rc=0x41bc70
getr(rp=0x41bce0)+0x24 /sys/src/cmd/samterm/main.c:244
p=0x6b000004f6
r=0x2
sweeptext(new=0x0,tag=0x2d)+0x12 /sys/src/cmd/samterm/menu.c:208
r=0x2
t=0x42df50
inmesg(type=0x2,count=0x2)+0x1ab /sys/src/cmd/samterm/mesg.c:136
m=0x10000002d
l=0x2d00001b00
i=0x43829000000001
t=0x438290
lp=0x42e050
rcv()+0x7a /sys/src/cmd/samterm/mesg.c:77
threadmain(argv=0x7ffffeffef90)+0x173 /sys/src/cmd/samterm/main.c:63
so avoid blocking in the mouse ioproc by using nbsend()
instead of send() for writing to the resize channel.
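a hedged sketch of the change, assuming the usual ioproc shape in
/sys/src/libdraw/mouse.c (variable names illustrative):

	if(resized){
		resized = 0;
		/* was: send(mc->resizec, &one), which can block forever when
		 * the consumer is itself waiting in readmouse(), as in the
		 * getrect() loop above */
		nbsend(mc->resizec, &one);	/* best effort: drop the wakeup if nobody listens */
	}
	send(mc->c, &m);	/* mouse events themselves still use a blocking send */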
linux will send small, unpadded arp packets, which may arrive over
wifi, so allow small packets into the bridge and pad any packets that
are too small when going out.
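an illustrative helper, not the actual bridge code, assuming the usual
60-byte ethernet minimum (FCS excluded):

#include <u.h>
#include <libc.h>

enum { Ethermintu = 60 };	/* minimum frame length on the wire, without FCS */

int
padframe(uchar *frame, int len, int cap)
{
	if(len >= Ethermintu)
		return len;
	if(cap < Ethermintu)
		return -1;	/* no room in the buffer to pad */
	memset(frame+len, 0, Ethermintu-len);	/* zero fill the tail */
	return Ethermintu;
}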
touchscreens signal multiple contact points (X/Y) in the hid
descriptor by nesting them in separate collections. a contact
point is identified by an optional contact id; if it is omitted,
we use the collection index and report id.
so we collect all the items (X/Y, buttons, wheel) from the
separate collections into Hidslot structures and, at the end,
combine all the slots together.
buttons are or'ed together, absolute X/Y is applied only when it
changed, and relative X/Y deltas are added together.
thanks to kivik and Glats for testing.
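a hypothetical sketch of the per-contact state described above; the field
names are illustrative, not the actual nusb/kb declarations:

typedef struct Hidslot Hidslot;
struct Hidslot
{
	int	valid;	/* this slot saw items in the current report */
	int	id;	/* contact id, or collection index + report id when omitted */
	int	abs;	/* absolute X/Y present in this collection */
	int	x;	/* absolute X, applied only when it changed */
	int	y;	/* absolute Y, applied only when it changed */
	int	b;	/* buttons, or'ed together across collections */
	int	z;	/* wheel / relative deltas, summed across collections */
};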
respond to MSG_GLOBAL_REQUEST with MSG_REQUEST_FAILURE,
as required by rfc4254, when the server wants a reply.
failing to do so breaks some proprietary keep-alive schemes.
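a hedged sketch; the message numbers are the standard rfc4254 ones, but
globalrequest() and sendbyte() are hypothetical stand-ins for the real
packet handling:

enum {
	MSG_GLOBAL_REQUEST	= 80,
	MSG_REQUEST_SUCCESS	= 81,
	MSG_REQUEST_FAILURE	= 82,
};

static void
globalrequest(int wantreply)
{
	/* we implement no global requests (openssh's keepalive@openssh.com
	 * is a typical one), so when a reply is requested the answer has to
	 * be MSG_REQUEST_FAILURE rather than silence. */
	if(wantreply)
		sendbyte(MSG_REQUEST_FAILURE);	/* hypothetical packet helper */
}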
coproc.c generated the instructions anew each time,
requiring an i+d cache flush for each operation.
instead, we can speed this up as follows:
given that the coprocessor registers are per cpu, we can
assume that interrupts have already been disabled by
the caller to prevent a process switch to another cpu.
we cache the generated instructions in a static, append-only
buffer and maintain separate end pointers for each cpu.
the cache flushes only need to be done when new
operations have been added to the buffer.
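a hypothetical sketch of that scheme in Plan 9 style C; all the names are
illustrative, not the actual arm coproc.c:

enum { Maxops = 64, Maxcpus = 8 };

static ulong	opbuf[Maxops];		/* append-only buffer of generated instructions */
static int	opend;			/* words used so far */
static int	opflushed[Maxcpus];	/* per cpu: value of opend at its last i+d flush */

static ulong*
getop(int cpu, ulong instr)
{
	int i;

	/* the caller runs with interrupts off, so cpu cannot change under us */
	for(i = 0; i < opend; i++)
		if(opbuf[i] == instr)
			break;
	if(i == opend){
		/* a real implementation would check opend < Maxops here */
		opbuf[opend++] = instr;	/* append only, never modify existing entries */
	}
	if(opflushed[cpu] != opend){
		/* new instructions since this cpu last flushed:
		 * flush the i+d caches over opbuf here */
		opflushed[cpu] = opend;
	}
	return &opbuf[i];
}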
- make the frame base pointer variable
- in rwreg(), save/restore the interpreter state and allocate a Frame* on the stack
- add overflow checks for frame base pointer to xec() and amleval()
- gc() scans the whole stack from FP to the *real* bottom F0
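a hypothetical sketch of an overflow check like the one the list above
describes for xec()/amleval(); the names are illustrative, not the actual
aml interpreter:

enum { Maxstack = 64 };

typedef struct Frame Frame;
struct Frame
{
	int	op;	/* interpreter state for one evaluation level */
};

static Frame	stack[Maxstack];
static Frame	*FP = stack;	/* frame base pointer kept in a variable */

static Frame*
pushframe(void)
{
	if(FP+1 >= stack+Maxstack)	/* refuse instead of running off the end */
		return nil;
	return ++FP;
}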