| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
|
|
|
| |
* resolve empty promises without params, not a single undefined param
(we can't just take unsafeAddr of constants on Nim 1.6.14, and I don't
see why we would need to pass undefined anyway?)
* same in client with InternalError
* explicitly convert static `asglobal' to bool (Nim 1.6.14 turns
unquoted enums into ints...)
|
|
|
|
| |
no point in having identical overloads
|
|
|
|
|
|
|
| |
* unwind the QJS stack with an uncatchable exception when quit is called
* clean up JS references in JSRuntime free even when the Nim
counterparts are still alive
* simplify some tests
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Previously we didn't actually free the main JS runtime, probably because
you can't do this without first waiting for JS to unwind the stack.
(This has the unfortunate effect that code now *can* run after quit().
TODO: find a fix for this.)
This isn't a huge problem per se; we only have one of these and the OS
can clean it up. However, it also disabled the JS_FreeRuntime leak
check, which resulted in sieve-like behavior (manual refcounting is
a pain).
So now we choose the other tradeoff: quit no longer runs exitnow, but
it waits for the event loop to run to the end and only then exits the
browser. Then, before exit we free the JS context & runtime, and also
all JS values allocated by config.
Fixes:
* fix `ad' flag not being set for just one siteconf/omnirule
* fix various leaks (since leak check is enabled now)
* use ptr UncheckedArray[JSValue] for QJS bindings that take an array
  (see the sketch below)
* allow JSAtom in jsgetprop etc., also disallow int types other than
uint32
* do not set a destructor for globals
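To illustrate the UncheckedArray bullet, a generic Nim sketch of the
binding style (fillArgs is a made-up stand-in for a QJS call that takes
an argument array):

  # made-up C-style callee: takes (pointer, length) like JS_Call's argv
  proc fillArgs(p: ptr UncheckedArray[int32]; n: int) =
    for i in 0 ..< n:
      p[i] = int32(i)

  var args = newSeq[int32](4)
  # a seq's backing storage converts directly, so no unsafeAddr on a
  # temporary (or constant) is needed at the call site
  fillArgs(cast[ptr UncheckedArray[int32]](addr args[0]), args.len)
  echo args  # @[0, 1, 2, 3]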
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
We use libseccomp, which is now a semi-mandatory dependency on Linux.
(You can still build without it, but only if you pass a scary long flag
to make.)
For this to work I had to disable getTimezoneOffset, which would
otherwise call localtime_r, which in turn reads some files from
/usr/share/zoneinfo. To allow this we would have to give unrestricted
openat(2) access to buffer processes, which is unacceptable.
(Giving websites access to the local timezone is a fingerprinting vector
so if this ever gets fixed then it should be an opt-in config setting.)
This patch also includes misc fixes to buffer cloning, and fixes the
LIBEXECDIR override in the makefile so that it is actually useful.
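For reference, a minimal allow-list with libseccomp looks roughly like
this (hedged Nim sketch over the stock libseccomp C API; the proc name
and syscall list are illustrative, not Chawan's actual policy):

  {.passL: "-lseccomp".}

  type scmp_filter_ctx = pointer

  var SCMP_ACT_ALLOW {.importc, header: "<seccomp.h>".}: cuint
  var SCMP_ACT_TRAP {.importc, header: "<seccomp.h>".}: cuint

  proc seccomp_init(defAction: cuint): scmp_filter_ctx
    {.importc, header: "<seccomp.h>".}
  proc seccomp_syscall_resolve_name(name: cstring): cint
    {.importc, header: "<seccomp.h>".}
  proc seccomp_rule_add(ctx: scmp_filter_ctx; action: cuint;
      syscall: cint; argCnt: cuint): cint
    {.importc, header: "<seccomp.h>", varargs.}
  proc seccomp_load(ctx: scmp_filter_ctx): cint
    {.importc, header: "<seccomp.h>".}

  proc enterBufferSandbox() =
    # anything not explicitly allowed traps (SIGSYS) instead of running
    let ctx = seccomp_init(SCMP_ACT_TRAP)
    doAssert ctx != nil
    for name in ["read", "write", "exit_group", "futex", "mmap", "munmap"]:
      let nr = seccomp_syscall_resolve_name(cstring(name))
      doAssert seccomp_rule_add(ctx, SCMP_ACT_ALLOW, nr, 0) == 0
    doAssert seccomp_load(ctx) == 0
    # note the absence of openat(2): that is exactly why localtime_r
    # (and thus getTimezoneOffset) cannot run in the buffer process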
|
|
|
|
|
|
| |
* separate params with ; (semicolon) instead of , (comma)
* reduce screaming snake case use
* wrap long lines
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
It's the sandboxing system of FreeBSD. Quite pleasant to work with.
(Just trying to figure out the basics with this one before tackling the
abomination that is seccomp.)
Indeed, the only non-trivial part was getting newSelector to work with
Capsicum. Long story short it doesn't, so we use an ugly pointer cast +
assignment. But even that is stdlib's "fault", not Capsicum's.
This also gets rid of that ugly SocketPath global.
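For comparison, entering capability mode is about this much code (Nim
sketch over the FreeBSD C API; the wrapper proc is made up):

  proc cap_enter(): cint {.importc, header: "<sys/capsicum.h>".}
  proc cap_getmode(modep: ptr cuint): cint
    {.importc, header: "<sys/capsicum.h>".}

  proc enterCapsicumSandbox() =
    # after cap_enter, the process can no longer open new global
    # resources (paths, sockets, ...); already-open fds keep working
    doAssert cap_enter() == 0
    var mode: cuint
    doAssert cap_getmode(addr mode) == 0 and mode == 1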
|
|
|
|
| |
as described in <https://todo.sr.ht/~bptato/chawan/6>
|
| |
|
| |
|
|
|
|
|
|
|
|
| |
This way they are no longer compatible, but we no longer need them to
be compatible anyway.
(This also forces us to throw out the old serialize module, and use
packet writers everywhere.)
|
|
|
|
| |
analogous to bufwriter
|
|
|
|
| |
this is buffer reading from pager
|
|
|
|
| |
it wouldn't start dump mode if stdout was not a tty but stdin was.
|
|
|
|
|
|
|
|
|
| |
* `s{Enter}' now saves link, and `sS' saves source.
* Changed ;, +, @ to g0, g$, gc so that it's somewhat consistent with
vim (and won't conflict with ; for "repeat jump to char")
* Changed (, ) to -, + so that it doesn't conflict with vi's
"previous/next sentence" (once we have it...)
* Add previously missing keybindings to about:chawan
|
|
|
|
|
| |
* move mouse handling to term
* do not use File for input when we just disable its buffering anyway
|
|
|
|
|
|
| |
Better (and simpler) than storing them all over the place.
extra: change lmDownload text to match w3m
|
| |
|
|
|
|
|
|
|
|
|
| |
* Parse the default config at runtime. There's no significant
performance difference, but this makes it much less painful to write
config code.
* Add better error reporting
* Make fromJS2 easier to use
* Unquote ChaPaths while parsing config
|
|
|
|
| |
it's an unintended side effect that we do not want
|
|
|
|
|
|
|
|
|
|
| |
Unsurprisingly enough, calling `write` a million times is never going to
be very fast.
BufferedWriter basically does the same thing as serialize.swrite did,
but queues up writes in batches before sending them.
TODO: give sread a similar treatment
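The idea, stripped to its core (illustrative Nim only; the real
BufferedWriter frames packets and flushes to a socket rather than a
callback):

  type BatchWriter = object
    buf: string
    sink: proc (s: string)   # stands in for a single send() on the socket

  proc swrite(w: var BatchWriter; field: string) =
    w.buf.add(field)         # queue the serialized field, no write yet

  proc flush(w: var BatchWriter) =
    if w.buf.len > 0:
      w.sink(w.buf)          # one write for the whole batch
      w.buf.setLen(0)

  var w = BatchWriter(sink: proc (s: string) = stdout.write(s))
  for i in 0 ..< 100_000:
    w.swrite($i & "\n")      # cheap in-memory appends...
  w.flush()                  # ...and a single write at the end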
|
|
|
|
|
|
| |
* do not immediately quit when all containers are gone
* fix double saving bug
* fix wrong "save to" string
|
|
|
|
| |
useful for debugging
|
|
|
|
|
|
|
|
| |
It was defined in the wrong module, and unnecessarily included
LoaderClientConfig.
Also, referrerPolicy was not being propagated to loader clients because
it was (incorrectly) in BufferConfig instead of LoaderClientConfig.
|
|
|
|
|
|
| |
Containers may also be deleted without a connection. More specifically: by
mailcap, when it launches an external process without opening the output
in a buffer.
|
|
|
|
|
| |
It can happen that a container is deleted before it acquires a buffer
process; add it to the `unreg' array in this case too.
|
|
|
|
|
|
|
|
| |
* extern -> gone, runproc absorbed by pager, others moved into io/
* display -> local/ (where else would we display?)
* xhr -> html/
* move out WindowAttributes from term, so we don't depend on local
from server
|
|
|
|
|
|
| |
only for source for now; rendered document is a bit more complicated
(also, get rid of useless extern/editor module)
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Sometimes, headers take a while to reach us even after the result has
been sent, e.g.:
echo 'Cha-Control: Connected'
sleep 5
echo 'Cha-Control: ControlDone'
^ this froze the UI for 5 seconds, which is certainly not what we want.
Since we don't have a proper buffered reader yet, and I don't want to
write another disgusting hack like BufStream, we just use a state
machine to figure out how much we can read. Sounds bad, but in practice
it works just fine since loader's response patterns are very simple.
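The general shape of that kind of incremental parsing, as a hedged Nim
sketch (not the actual loader protocol handler):

  import std/strutils

  type LineAccumulator = object
    buf: string   # bytes received so far that don't yet form a full line

  # feed() is called with whatever happens to be readable right now;
  # full lines (e.g. "Cha-Control: Connected") are returned, a trailing
  # partial line is kept for the next read, and we never block waiting
  proc feed(a: var LineAccumulator; chunk: string): seq[string] =
    a.buf.add(chunk)
    var i = a.buf.find('\n')
    while i != -1:
      result.add(a.buf[0 ..< i])
      a.buf = a.buf[i + 1 .. ^1]
      i = a.buf.find('\n')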
|
|
|
|
|
|
|
| |
Better compute the values we need on-demand at the call sites; this way,
we can pass through content type attributes to mailcap too.
(Also, fix a bug where applyResponse was called twice.)
|
|
|
|
|
|
|
|
|
| |
This is what the original replacement logic was supposed to do, except
it was broken. The previous fix might have been worse than the original
bug. Now we do it like this:
* if needed, replace buffer in gotoURL
* deleteContainer swaps back the buffer it replaced, if it still exists
* on connection success, kill the buffer we replaced
|
|
|
|
|
| |
a new abstraction that we derive posixstream from; hopefully with time
we can get rid of std/streams
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Originally we had several loader processes so that the loader did not
need asynchrony for loading several buffers at once. Since then, the
scope of what loader does has been reduced significantly, and with
that loader has become mostly asynchronous.
This patch finishes the above work as follows:
* We only fork a single loader process for the browser. It is a waste of
resources to do otherwise, and would have made future work on a
download manager very difficult.
* loader becomes (almost) fully async. Now the only sync part is a)
processing commands and b) waiting for clients to consume responses.
b) is a bit more problematic than a), but should not cause problems
unless some other horrible bug exists in a client. (TODO: make it
fully async.)
This gives us a noticeable improvement in CSS loading speed, since all
resources can now be queried at once (even before the previous ones
are connected).
* Buffers now only get processes when the *connection* is finished. So
headers, status code, etc. are handled by the client, and the buffer
is forked when the loader starts streaming the response body.
  As a result, mailcap entries can simply dup2 the first UNIX domain
  socket connection as their stdin (sketched below). This allows us to
  remove the ugly (and slow) `canredir' hack, which required us to send
  file handles on a tour across the entire codebase.
* The "cache" has been reworked somewhat:
- Since canredir is gone, buffer-level requests usually start
in a suspended state, and are explicitly resumed only after
the client could decide whether it wants to cache the response.
- Instead of a flag on Request and the URL as the cache key,
we now use a global counter and the special `cache:' scheme.
* misc fixes: referer_from is now actually respected by buffers (not
just the pager), load info display should work slightly better, etc.
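A hedged sketch of the dup2 trick from the buffer/mailcap bullet above
(runMailcapEntry and the shell invocation are made up; the real code
differs):

  import std/posix

  proc runMailcapEntry(responseFd: cint; cmd: string) =
    let pid = fork()
    if pid == 0:
      # child: the loader's response socket simply becomes stdin, so the
      # mailcap command reads the body directly; no canredir-style
      # file handle passing is needed anymore
      discard dup2(responseFd, 0)
      discard close(responseFd)
      discard execvp("sh", allocCStringArray(["sh", "-c", cmd]))
      quit(1)
    discard close(responseFd)  # parent keeps only its own end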
|
|
|
|
|
|
|
|
| |
The 0x40 bitmask implies one more state than the 0 bitmask, since state
3 without 0x40 is unused[0]; so we must add 7, not 6.
[0]: it is reserved for "move", but movement is indicated differently
in the protocol we use, so it goes unused.
|
|
|
|
|
|
| |
middle button to close is from w3m
btn5/6 is normally a horizontal scroll wheel, so scrollLeft/Right makes
more sense than prev/next
|
| |
|
|
|
|
|
|
|
|
|
| |
Only report when bytesRead has changed; otherwise we get unnecessary
load requests. (This means the -2 return value no longer exists; it did
not work correctly anyway.)
Also, fix the race condition that broke onload returns when onload
happened before the client requested a load.
|
|
|
|
|
|
| |
* the uint8array thing is probably from txiki.js, but we never used it
* upstream now has JS_GetClassID, importing that instead... (so this
commit won't build :/)
|
|
|
|
|
|
|
|
|
|
| |
This is an ancient bug, but it got much easier to trigger with mouse
scrolling support, so it's time to fix it.
(The bug itself was that since both the client and buffer ends of the
controlling stream are blocking, they could get stuck when both were
trying to send() data to the other end but the buffer was full. So now
we set the client end to non-blocking.)
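The mechanism of the fix, as a generic Nim sketch (setNonBlocking is a
stand-in, not the actual pager code):

  import std/posix

  proc setNonBlocking(fd: cint) =
    let flags = fcntl(fd, F_GETFL, 0)
    doAssert flags != -1
    doAssert fcntl(fd, F_SETFL, flags or O_NONBLOCK) != -1
    # with this, a full socket buffer makes send() on the client end
    # fail with EAGAIN instead of blocking, so the client can drain
    # incoming data and retry rather than deadlocking against the buffer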
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
| |
* reduce onload result size to a single int
* clean up the mess that was the container onload handler
This fixes automatic refresh in console. Before, the client would
only request a screen update after receiving the number of bytes read,
but before the screen was actually reshaped (which obviously resulted
in a race condition). Now, "I've reshaped the document" is a separate
response (and is the only occasion where the screen is updated before
the final render).
|
|
|
|
|
|
|
|
|
| |
Some terminal emulators (read: vte) refuse to set ws_xpixel and ws_ypixel
in the TIOCGWINSZ ioctl, so we now query for CSI 14 t as well. (Also CSI
18 t for good measure, just in case we can't ioctl for some reason.)
Also added some fallback (optionally forced) config values for width,
height, ppc, and ppl. (This is especially useful in dump mode.)
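The replies come back as CSI 4 ; height ; width t (for CSI 14 t) and
CSI 8 ; rows ; cols t (for CSI 18 t); a hedged Nim sketch of parsing the
former (sending the query and reading the raw-mode reply is elided):

  import std/strutils

  # parse a "\e[4;<height>;<width>t" reply to CSI 14 t; the query itself
  # is stdout.write("\e[14t"), and the answer is read back in raw mode
  proc parsePixelReply(s: string): tuple[w, h: int] =
    doAssert s.len > 4 and s[0] == '\e' and s[1] == '[' and s[^1] == 't'
    let parts = s[2 .. ^2].split(';')
    doAssert parts.len == 3 and parts[0] == "4"
    result = (w: parseInt(parts[2]), h: parseInt(parts[1]))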
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Handling text/plain as ANSI colored text was problematic for two
reasons:
* You couldn't actually look at the real source of HTML pages or text
files that used ANSI colors in the source. In general, I only want
ANSI colors when piping something into my pager, not when viewing any
random file.
* More importantly, it introduced a separate rendering mode for
plaintext documents, which resulted in the problem that only some
buffers had DOMs. This made it impossible to add functionality
that would operate on the buffer's DOM, to e.g. implement w3m's
MARK_URL. Also, it locked us into the horribly inefficient line-based
rendering model of entire documents.
Now we solve the problem in two separate parts:
* text/x-ansi is used automatically for documents received through
stdin. A text/x-ansi handler ansi2html converts ANSI formatting to
HTML. text/x-ansi is also used for .ans, .asc file extensions.
* text/plain is a separate input mode in buffer, which places all text
in a single <plaintext> tag. Crucially, this does not invoke the HTML
parser; that would eat NUL characters, which we should avoid.
One blind spot still remains: copiousoutput used to display ANSI colors,
and now it doesn't. To solve this, users can add the x-ansioutput
extension field to their mailcap entries, which behaves like
x-htmloutput except it first pipes the output into ansi2html.
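For example, an existing copiousoutput entry can opt back into colors
with something along these lines (hypothetical entry, not taken from the
docs):

  # hypothetical: a filter whose output uses ANSI SGR colors
  text/x-whatever; some-ansi-filter %s; copiousoutput; x-ansioutput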
|
|
|
|
|
|
| |
The API is horrid :( but at least it copies less.
TODO: think of a better API.
|
| |
|
|
|
|
|
|
|
|
| |
* disallow Stream interface usage on non-blocking PosixStreams
* do not read estream of forkserver byte-by-byte (it's slow)
* do not call writeData with a zero len in formdata
* do not quote numbers in mailcap quoteFile
* remove some unused stuff
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
At last all BufferSources are unified.
To achieve the same effect as the previous CLONE source type, we now
use the "fromcache" flag in Request. This *forces* the document to be
streamed from the disk; if the file no longer exists for some reason,
an error is returned (i.e. the document is not re-downloaded).
For a document to be cached, it has to be the main document of the
buffer (i.e. no additional resources requested with fetch()), and
also not an x-htmloutput HTML file (for those, the original source is
saved). The result is that toggleSource now always returns the actual
source for e.g. markdown files, not the HTML-transformed version.
Also, it is now possible to view the source of a document that is
still being downloaded.
buffer.sstream has almost been eliminated; it still exists, but only as
a pseudo-buffer to interface with EncoderStream and DecoderStream. It no
longer holds the entire source of a buffer at any point, and is cleared
as soon as the buffer is completely loaded.
|
|
|
|
|
| |
Instead, use a stream: scheme and associate hostnames with file
descriptors directly from the pager.
|
|
|
|
|
| |
Move forkBuffer into forkserver (why was it in container anyway), remove
unused mainproc variable, etc.
|
|
|
|
|
| |
* pass 0 so that e.g. git does not hang
* use SIGTSTP so that e.g. CGI scripts can clean up if needed
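In Nim terms, one reading of the two points above (sketch only; the
terminal save/restore around the stop is omitted):

  import std/posix

  proc suspend() =
    # pid 0 signals the whole process group, so children such as an
    # external git process get stopped too instead of being left to
    # hang; SIGTSTP (unlike SIGSTOP) can be caught, so CGI scripts get
    # a chance to clean up first
    discard kill(0, SIGTSTP)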
|