| |
ugh
|
| |
still not really great, because inline background is a mess too
|
| |
std/selectors uses OS-specific selector APIs, which sounds good in
theory (faster than poll!), but sucks for portability in practice.
Sure, you can fix portability bugs, but who knows how many there are
on untested platforms... poll is standard, so if it works on one
computer it should work on all other ones. (I hope.)
As a bonus, I rewrote the timeout API for poll, which incidentally
fixes setTimeout across forks. Also, SIGWINCH should now work on all
platforms (as we self-pipe instead of signalfd/kqueue magic).
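For reference, the self-pipe trick boils down to something like this (a
minimal sketch; the pipe variable and proc names are illustrative, not the
actual Chawan symbols):

import std/posix

var SIGWINCH {.importc, header: "<signal.h>".}: cint
var winchPipe: array[2, cint] # [0] is polled, [1] is written by the handler

proc handleWinch(sig: cint) {.noconv.} =
  # async-signal-safe: just poke the pipe; the poll loop does the real work
  var c = '\0'
  discard write(winchPipe[1], addr c, 1)

proc setupWinch() =
  discard pipe(winchPipe)
  discard signal(SIGWINCH, handleWinch)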
|
| |
* fix broken int conversion in dynstream
* fix EPIPE handling in forkserver
* merge fdmap and connectingContainers into loader map
|
| |
* refactor parseHeader
* optimize response blob()
* add direct "to cache" mode for loader requests which sets stdout to a
file, and use it for image processing
* move image resizing into a separate process
* mmap cache files in between processing steps when possible
At last, resize is no longer a part of image decoding. Also, it feels
much nicer to keep encoded image data in the same cache as everything
else.
The mmap operations *should* be more efficient than copying the whole
RGBA data through a pipe. In practice, it only makes a difference for
loading (well, now just mmapping) the encoded image into the pager,
where it singlehandedly speeds up image display by 10x on my test image.
For the other steps, the unfortunate fact that "tocache" must delay the
next fork/exec in the pipeline until the entire image is processed seems
to cancel out any wins we might have gotten from skipping a single raw
RGBA copy.
I have tried moving the delay before the exec (it's possible with yet
another pipe), but it didn't help much and made the code much
uglier. (Not that tocache didn't, but I can live with this...)
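The read side of the mmap path is essentially this (sketch; the proc name
and error handling are illustrative, and fd/size come from the cache
entry):

import std/posix

proc mapCacheEntry(fd: cint; size: int): pointer =
  # map the finished cache file read-only and hand the mapping to the next
  # step, instead of copying the data through a pipe
  result = mmap(nil, size, PROT_READ, MAP_PRIVATE, fd, 0)
  if result == cast[pointer](-1): # MAP_FAILED
    raise newException(IOError, "mmap failed")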
|
| |
+ be a bit more paranoid about double closes
|
| |
* stream: passFd is now client-based, and accessible for buffers
* Bitmap's width & height are now int, not uint64
* no more non-network Bitmap special case in the pager for canvas
I just shoehorned it into the static image model, so it still doesn't
render changes after page load. But at least now it doesn't crash the
browser.
|
| |
* fix header case sensitivity issues (see the sketch below)
-> probably still wrong, as it discards the original casing;
better than nothing, anyway
* fix fulfill on generic promises
* support standard open() async parameter weirdness
* refactor loader response body reading (so bodyRead is no longer
mandatory)
* actually read response body
still missing: response body getters
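For the record, one way to get case-insensitive lookups without
normalizing the stored names would be something like this (illustrative
sketch, not the code from this commit, which normalizes and therefore
loses the original casing):

import std/strutils

proc getHeader(headers: seq[(string, string)]; name: string): string =
  # compare header names case-insensitively; stored casing stays untouched
  for (k, v) in headers:
    if cmpIgnoreCase(k, name) == 0:
      return v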
|
| |
* xmlhttprequest: fix missing import
* painter: generic tuple workaround
* dynstream: merge module with implementations (so it will work with
vtables)
Not enabling vtables yet since it doesn't work with refc.
|
| |
it was never implemented
|
| |
* resize images with stb_image_resize
* use tee for output handle redirection (redirectToFile blocks)
* cache original image files
* accept lseek in sandbox
* misc stbi fixes
For now, I just pulled in stb_image_resize v1. v2 is an extra 150K in
size, not sure if it's worth the cost. (Either way, we can always switch
later if needed, since the API is almost the same.)
Next step: move sixel/kitty encoders to CGI, and cache their output in
memory instead of the intermediate RGBA representation.
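For reference, v1's simple entry point looks like this from Nim (binding
sketch; the actual wrapper in Chawan may use one of the more specific
stbir variants, and the header path is an assumption):

# stb_image_resize v1's simple API; returns non-zero on success. For RGBA
# bitmaps numChannels is 4. (The implementation must be compiled into one
# C file with STB_IMAGE_RESIZE_IMPLEMENTATION defined.)
proc stbir_resize_uint8(inputPixels: ptr uint8;
    inputW, inputH, inputStrideBytes: cint; outputPixels: ptr uint8;
    outputW, outputH, outputStrideBytes: cint; numChannels: cint): cint {.
  importc, header: "stb_image_resize.h".}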
|
| |
* multi-processed and sandboxed PNG decoding & encoding (through local
CGI)
* improved request body passing (including support for output id as
response body)
* simplified & faster blob()/text() - now every request starts
suspended, and OngoingData.buf has been replaced with loader's
buffering capability
* image caching: we no longer pull bitmaps from the container after
every single getLines call
Next steps: replace our bespoke PNG decoder with something more usable,
add other decoders, and make them stream.
|
| |
naturally, it's opt-in
|
| |
Operation "modularize Chawan somewhat" part 3
|
| |
* fix enctype not getting picked up
* fix form data constructor requiring open() syscall (which gets blocked
by our seccomp filter)
* add closing boundary to multipart end (see the sketch below)
* pass fds instead of path names through WebFile/Blob and send those
through bufwriter/bufreader
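The closing-boundary part amounts to appending the closing delimiter once
serialization finishes, roughly (illustrative helper, not the actual
formdata code):

proc finishMultipart(body: var string; boundary: string) =
  # the closing delimiter: the boundary with two trailing dashes and CRLF
  body &= "--" & boundary & "--\r\n"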
|
| |
* prefix to-be-separated modules with js
* remove dynstreams dependency
* untangle from EmptyPromise
* move typeptr into tojs
|
| |
no point in having identical overloads
|
| |
Previously we didn't actually free the main JS runtime, probably because
you can't do this without first waiting for JS to unwind the stack.
(This has the unfortunate effect that code now *can* run after quit().
TODO: find a fix for this.)
This isn't a huge problem per se; we only have one of these and the OS
can clean it up. However, it also disabled the JS_FreeRuntime leak
check, which resulted in sieve-like behavior (manual refcounting is
a pain).
So now we choose the other tradeoff: quit no longer runs exitnow, but
it waits for the event loop to run to the end and only then exits the
browser. Then, before exit we free the JS context & runtime, and also
all JS values allocated by config.
Fixes:
* fix `ad' flag not being set for just one siteconf/omnirule
* fix various leaks (since leak check is enabled now)
* use ptr UncheckedArray[JSValue] for QJS bindings that take an array
* allow JSAtom in jsgetprop etc., also disallow int types other than
uint32
* do not set a destructor for globals
|
| |
* png: add missing filters, various decoder fixes
* term: fix kitty response interpretation, add support for kitty image
detection
* buffer, pager: initial image display support
Emphasis on "initial"; it only "works" with kitty output and PNG input.
Also, it's excruciatingly slow, and repaints images way too often.
Intentionally left undocumented for now, until it actually becomes
useful. In the meantime, adventurous users can find out for themselves why:
[[siteconf]]
url = "https://.*"
images = true
|
| |
We use libseccomp, which is now a semi-mandatory dependency on Linux.
(You can still build without it, but only if you pass a scary long flag
to make.)
For this to work I had to disable getTimezoneOffset, which would
otherwise call localtime_r which in turn reads in some files from
/usr/share/zoneinfo. To allow this we would have to give unrestricted
openat(2) access to buffer processes, which is unacceptable.
(Giving websites access to the local timezone is a fingerprinting vector,
so if this ever gets fixed, it should be an opt-in config setting.)
This patch also includes misc fixes to buffer cloning, and fixes the
LIBEXECDIR override in the makefile so that it is actually useful.
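The shape of the libseccomp setup, for reference (a trimmed-down sketch;
the bindings, default action and allowlist here are illustrative, not
Chawan's actual policy, and the real filter is much longer):

{.passl: "-lseccomp".}

type ScmpFilterCtx = pointer

const
  SCMP_ACT_TRAP = 0x00030000'u32  # default action: trap anything unlisted
  SCMP_ACT_ALLOW = 0x7fff0000'u32

proc seccomp_init(defAction: uint32): ScmpFilterCtx {.importc, header: "<seccomp.h>".}
proc seccomp_syscall_resolve_name(name: cstring): cint {.importc, header: "<seccomp.h>".}
proc seccomp_rule_add(ctx: ScmpFilterCtx; action: uint32; syscall: cint;
  argCnt: cuint): cint {.importc, header: "<seccomp.h>", varargs.}
proc seccomp_load(ctx: ScmpFilterCtx): cint {.importc, header: "<seccomp.h>".}

let ctx = seccomp_init(SCMP_ACT_TRAP)
for name in ["read", "write", "lseek", "exit_group"]: # tiny example allowlist
  discard seccomp_rule_add(ctx, SCMP_ACT_ALLOW,
    seccomp_syscall_resolve_name(cstring(name)), 0)
discard seccomp_load(ctx)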
|
| |
* separate params with ; (semicolon) instead of , (comma)
* reduce screaming snake case use
* wrap long lines
|
| |
it's needed for memcpy; it usually compiles without the include, but
that's not guaranteed.
|
| |
It's the sandboxing system of FreeBSD. Quite pleasant to work with.
(Just trying to figure out the basics with this one before tackling the
abomination that is seccomp.)
Indeed, the only non-trivial part was getting newSelector to work with
Capsicum. Long story short it doesn't, so we use an ugly pointer cast +
assignment. But even that is stdlib's "fault", not Capsicum's.
This also gets rid of that ugly SocketPath global.
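Entering capability mode itself is a one-liner (sketch; the per-fd rights
limiting is omitted here):

proc cap_enter(): cint {.importc, header: "<sys/capsicum.h>".}

# after this call the process can no longer open files or create sockets;
# it is limited to the descriptors it already holds
if cap_enter() != 0:
  quit("cap_enter failed")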
|
| |
This way they are no longer compatible, but we no longer need them to
be compatible anyway.
(This also forces us to throw out the old serialize module, and use
packet writers everywhere.)
|
| |
analogous to bufwriter
|
| |
Since we know the length of packets, we can also read each one in with a
single call. Though I really wish we could do this without the StringStream.
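In other words, instead of deserializing field by field from the socket,
we read the whole payload up front, roughly like this (sketch; names and
framing are illustrative, and the StringStream then wraps the payload):

import std/posix

proc readPacket(fd: cint): string =
  # read the length prefix, then the whole payload at once (the loop only
  # matters for short reads)
  var len = 0
  doAssert read(fd, addr len, csize_t(sizeof(len))) == sizeof(len)
  result = newString(len)
  var n = 0
  while n < len:
    let r = read(fd, addr result[n], csize_t(len - n))
    doAssert r > 0, "unexpected EOF"
    n += r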
|
| |
* move mouse handling to term
* do not use File for input just to disable buffering anyway
|
| |
Unsurprisingly enough, calling `write` a million times is never going to
be very fast.
BufferedWriter basically does the same thing as serialize.swrite did,
but queues up writes in batches before sending them.
TODO: give sread a similar treatment
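The idea in miniature (sketch; the type and proc names are illustrative,
and the real BufferedWriter also has to deal with partial writes):

import std/posix

type BufferedWriter = object
  fd: cint
  buf: string

proc writeData(w: var BufferedWriter; p: pointer; len: int) =
  # queue the bytes instead of issuing a write() per field
  if len > 0:
    let oldLen = w.buf.len
    w.buf.setLen(oldLen + len)
    copyMem(addr w.buf[oldLen], p, len)

proc flush(w: var BufferedWriter) =
  # a single write() for the whole batch
  if w.buf.len > 0:
    discard posix.write(w.fd, addr w.buf[0], csize_t(w.buf.len))
    w.buf.setLen(0)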
|
| |
* extern -> gone, runproc absorbed by pager, others moved into io/
* display -> local/ (where else would we display?)
* xhr -> html/
* move WindowAttributes out of term, so we don't depend on local
from server
|
| |
a new abstraction that we derive posixstream from; hopefully with time
we can get rid of std/streams
|
| |
Originally we had several loader processes so that the loader did not
need asynchrony for loading several buffers at once. Since then, the
scope of what loader does has been reduced significantly, and with
that, loader has become mostly asynchronous.
This patch finishes the above work as follows:
* We only fork a single loader process for the browser. It is a waste of
resources to do otherwise, and would have made future work on a
download manager very difficult.
* loader becomes (almost) fully async. Now the only sync part is a)
processing commands and b) waiting for clients to consume responses.
b) is a bit more problematic than a), but should not cause problems
unless some other horrible bug exists in a client. (TODO: make it
fully async.)
This gives us a noticeable improvement in CSS loading speed, since all
resources can now be queried at once (even before the previous ones
are connected).
* Buffers now only get processes when the *connection* is finished. So
headers, status code, etc. are handled by the client, and the buffer
is forked when the loader starts streaming the response body.
As a result, mailcap entries can simply dup2 the first UNIX domain
socket connection as their stdin. This allows us to remove the ugly
(and slow) `canredir' hack, which required us to send file handles on
a tour across the entire codebase.
* The "cache" has been reworked somewhat:
- Since canredir is gone, buffer-level requests usually start
in a suspended state, and are explicitly resumed only after
the client has decided whether it wants to cache the response.
- Instead of a flag on Request and the URL as the cache key,
we now use a global counter and the special `cache:' scheme.
* misc fixes: referer_from is now actually respected by buffers (not
just the pager), load info display should work slightly better, etc.
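The mailcap side of this now amounts to pointing stdin at the response
socket before running the entry, roughly (illustrative sketch; the real
code forks and handles errors):

import std/[osproc, posix]

proc runMailcapEntry(sockFd: cint; cmd: string): int =
  # the response socket simply becomes stdin, so the entry's command reads
  # the body from fd 0; no canredir-style fd shuffling needed
  discard dup2(sockFd, 0)
  result = execCmd(cmd)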
|
| |
slightly more efficient, but more importantly does not choke on NUL and
weird \r\n
|
| |
This is an ancient bug, but it got much easier to trigger with mouse
scrolling support, so it's time to fix it.
(The bug itself was that both the client and buffer ends of the
controlling stream are blocking, so they could get stuck when both were
trying to send() data to the other end while the socket buffers were
full. So now we set the client end to non-blocking.)
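Concretely, the client end gets switched to non-blocking mode with fcntl,
along these lines (sketch; the proc name is illustrative):

import std/posix

proc setBlocking(fd: cint; blocking: bool) =
  # toggle O_NONBLOCK on the client end of the controlling stream
  let ofl = fcntl(fd, F_GETFL, 0)
  if blocking:
    discard fcntl(fd, F_SETFL, ofl and not O_NONBLOCK)
  else:
    discard fcntl(fd, F_SETFL, ofl or O_NONBLOCK)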
|
| |
* Get rid of sostream hack
This is no longer needed, and was in fact causing loadStream to get
stuck with redirects on regular files (i.e. the common case of receiving
<file on stdin without a -T content type override).
* Unify loading from cache and stdin regular file code paths
Until now, loadFromCache was completely sync. This is not a huge
problem, but it's better to make it async *and* not have two separate
procedures for reading regular files. (In fact, loadFromCache had
*another* bug related to its output fd not being added to outputMap.)
* Extra: remove ansi2html select error handling
It was broken, because it didn't handle read events before the
error. Also unnecessary, since recvData breaks from the loop on n == 0.
|
| |
The API is horrid :( but at least it copies less.
TODO: think of a better API.
|
| |
* disallow Stream interface usage on non-blocking PosixStreams
* do not read estream of forkserver byte-by-byte (it's slow)
* do not call writeData with a zero len in formdata
* do not quote numbers in mailcap quoteFile
* remove some unused stuff
|
| |
unused (hopefully forever)
|
| |
At last all BufferSources are unified.
To achieve the same effect as the previous CLONE source type, we now
use the "fromcache" flag in Request. This *forces* the document to be
streamed from the disk; if the file no longer exists for some reason,
an error is returned (i.e. the document is not re-downloaded).
For a document to be cached, it has to be the main document of the
buffer (i.e. no additional resources requested with fetch()), and
also not an x-htmloutput HTML file (for those, the original source is
saved). The result is that toggleSource now always returns the actual
source for e.g. markdown files, not the HTML-transformed version.
Also, it is now possible to view the source of a document that is
still being downloaded.
buffer.sstream has almost been eliminated; it still exists, but only as
a pseudo-buffer to interface with EncoderStream and DecoderStream. It no
longer holds the entire source of a buffer at any point, and is cleared
as soon as the buffer is completely loaded.
|
| |
Instead, use a stream: scheme and associate hostnames with file
descriptors directly from the pager.
|