Commit messages
| |
* png: add missing filters, various decoder fixes
* term: fix kitty response interpretation, add support for kitty image
detection
* buffer, pager: initial image display support
Emphasis on "initial"; it only "works" with kitty output and PNG input.
Also, it's excruciatingly slow, and repaints images way too often.
Intentionally left it undocumented for now, until it actually becomes
useful. In the meantime, adventurous users can find out for themselves why:
[[siteconf]]
url = "https://.*"
images = true
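
For context, the kitty image detection mentioned above generally works like
the probe described in the kitty graphics protocol documentation (this is an
illustration of the general approach, not necessarily what Chawan sends):
emit a tiny a=q graphics query followed by a primary device attributes (DA1)
query. A terminal that supports the protocol answers the graphics query
(with something like ESC _ G i=31;OK ESC \) before the DA1 reply; one that
does not only answers DA1.

#include <stdio.h>

/* Write the two probes; the caller then reads the terminal's replies
 * from the tty and checks which one arrives first. */
static void send_kitty_probe(FILE *tty)
{
    /* 1x1 direct-transmission RGB query with image id 31, as in the
     * protocol documentation. */
    fputs("\x1b_Gi=31,s=1,v=1,a=q,t=d,f=24;AAAA\x1b\\", tty);
    fputs("\x1b[c", tty); /* DA1: every terminal answers this one */
    fflush(tty);
}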
|
| |
We use libseccomp, which is now a semi-mandatory dependency on Linux.
(You can still build without it, but only if you pass a scary long flag
to make.)
For this to work I had to disable getTimezoneOffset, which would
otherwise call localtime_r which in turn reads in some files from
/usr/share/zoneinfo. To allow this we would have to give unrestricted
openat(2) access to buffer processes, which is unacceptable.
(Giving websites access to the local timezone is a fingerprinting vector,
so if this ever gets fixed, it should be an opt-in config setting.)
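
For reference, the libseccomp side boils down to installing an allow-list
before the buffer process starts doing real work. A minimal sketch, assuming
a kill-by-default policy and an intentionally incomplete syscall set (this is
not Chawan's actual filter):

#include <seccomp.h>
#include <stdlib.h>

static void install_filter(void)
{
    /* Anything not explicitly allowed kills the process. */
    scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_KILL_PROCESS);
    if (ctx == NULL)
        abort();
    /* A few obviously needed syscalls; a real filter needs more. */
    seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(read), 0);
    seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(write), 0);
    seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(exit_group), 0);
    /* Note that there is no openat(2) rule: this is why localtime_r
     * (and thus getTimezoneOffset) had to go. */
    if (seccomp_load(ctx) != 0)
        abort();
    seccomp_release(ctx);
}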
This patch also includes misc fixes to buffer cloning, and fixes the
LIBEXECDIR override in the makefile so that it is actually useful.
|
| |
* separate params with ; (semicolon) instead of , (comma)
* reduce screaming snake case use
* wrap long lines
|
| |
For some reason, halfPageDown decremented height instead of incrementing
it, which caused some rather weird behavior where halfPageUp +
halfPageDown would put the cursor in a different position than it was
before.
Also, we must increment *before* dividing to mimic vi behavior properly.
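
In other words, the half-page distance should be computed roughly like this
(a sketch; half_page is an illustrative name, not the actual function):

static int half_page(int height)
{
    /* round up like vi: add one *before* dividing (and certainly do
     * not subtract one from height) */
    return (height + 1) / 2;
}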
|
| |
* fix mismatch between return value & read value that would either crash
or freeze the browser depending on its mood
* add an assertion to detect the above footgun
* fix some resource leaks
* fix an iteration in buffer's cancel() that called a function which
  altered the table being iterated over
* if user cancels before anything is loaded, destroy the container too
|
| |
This way they are no longer compatible, but we no longer need them to
be compatible anyway.
(This also forces us to throw out the old serialize module, and use
packet writers everywhere.)
|
| |
* send title to pager as soon as it's available
* expose `title' to DOM
* rename undocumented `getTitle' js function to `title' getter in
Container
|
| |
This is the buffer reading from the pager.
|
| |
* `s{Enter}' now saves link, and `sS' saves source.
* Changed ;, +, @ to g0, g$, gc so that it's somewhat consistent with
vim (and won't conflict with ; for "repeat jump to char")
* Changed (, ) to -, + so that it doesn't conflict with vi's
"previous/next sentence" (once we have it...)
* Add previously missing keybindings to about:chawan
|
| |
Useful when browsing plaintext files; w3m has it too.
|
| |
It's an unintended side effect that we do not want.
|
| |
This has its own problems, but application/octet-stream has the horrible
consequence that opening any local file with an unrecognized type
automatically quits the browser.
(FWIW, w3m also falls back to text/plain, so it's not such an unreasonable
default.)
The proper solution would be to a) fix the bug that makes the browser
auto-quit and b) show a "what to do" prompt for unrecognized file types
(and allow users to override it, preferably on a per-protocol basis).
|
| |
* also set fromX to corrected target x if target x is less than corrected x;
this is mainly so that setCursorX(-1) works as expected
* return w from cursorFirstX() even if cursorx is <= the last character
|
| |
It was defined in the wrong module, and unnecessarily included
LoaderClientConfig.
Also, referrerPolicy was not being propagated to loader clients because
it was (incorrectly) in BufferConfig instead of LoaderClientConfig.
|
| |
It can happen that a container is deleted before it acquires a buffer
process; add it to the `unreg' array in this case too.
|
| |
* extern -> gone, runproc absorbed by pager, others moved into io/
* display -> local/ (where else would we display?)
* xhr -> html/
* move WindowAttributes out of term, so we don't depend on local
  from server
|
| |
Only for the source for now; the rendered document is a bit more
complicated. (Also, get rid of the useless extern/editor module.)
|
| |
Sometimes, headers take a while to reach us even after the result has
been sent, e.g.:
echo 'Cha-Control: Connected'
sleep 5
echo 'Cha-Control: ControlDone'
^ this froze the UI for 5 seconds, which is certainly not what we want.
Since we don't have a proper buffered reader yet, and I don't want to
write another disgusting hack like BufStream, we just use a state
machine to figure out how much we can read. Sounds bad, but in practice
it works just fine since loader's response patterns are very simple.
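
The state machine is nothing fancy; the idea, sketched below with
illustrative names (this is not the actual Chawan code), is to buffer
whatever bytes happen to be available and only act on complete lines, so a
slow writer can never stall the UI.

#include <stddef.h>
#include <stdio.h>

struct hdr_parser {
    char line[512];
    size_t len;
    int done; /* set once the empty terminating line has been seen */
};

/* Feed n bytes read from a non-blocking fd; returns how many were consumed. */
static size_t hdr_feed(struct hdr_parser *p, const char *buf, size_t n)
{
    size_t i;
    for (i = 0; i < n && !p->done; i++) {
        char c = buf[i];
        if (c == '\n') {
            p->line[p->len] = '\0';
            if (p->len == 0)
                p->done = 1; /* headers finished; the body follows */
            else
                printf("header: %s\n", p->line); /* hand off upstream */
            p->len = 0;
        } else if (p->len + 1 < sizeof(p->line)) {
            p->line[p->len++] = c;
        }
    }
    return i;
}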
|
| |
Better to compute the values we need on demand at the call sites; this
way, we can pass content type attributes through to mailcap too.
(Also, fix a bug where applyResponse was called twice.)
|
| |
cetStatus is only called for soft status updates, not alerts (we have
cetAlert for that).
|
| |
Originally we had several loader processes so that the loader did not
need asynchronicity for loading several buffers at once. Since then, the
scope of what loader does has been reduced significantly, and with that,
loader has become mostly asynchronous.
This patch finishes the above work as follows:
* We only fork a single loader process for the browser. It is a waste of
resources to do otherwise, and would have made future work on a
download manager very difficult.
* loader becomes (almost) fully async. Now the only sync part is a)
processing commands and b) waiting for clients to consume responses.
b) is a bit more problematic than a), but should not cause problems
unless some other horrible bug exists in a client. (TODO: make it
fully async.)
  This gives us a noticeable improvement in CSS loading speed, since all
  resources can now be queried at once (even before the previous ones
  are connected).
* Buffers now only get processes when the *connection* is finished. So
headers, status code, etc. are handled by the client, and the buffer
is forked when the loader starts streaming the response body.
  As a result, mailcap entries can simply dup2 the first UNIX domain
  socket connection as their stdin (see the sketch after this list).
  This allows us to remove the ugly (and slow) `canredir' hack, which
  required us to send file handles on a tour across the entire codebase.
* The "cache" has been reworked somewhat:
  - Since canredir is gone, buffer-level requests usually start
    in a suspended state, and are explicitly resumed only after
    the client has decided whether it wants to cache the response.
- Instead of a flag on Request and the URL as the cache key,
we now use a global counter and the special `cache:' scheme.
* misc fixes: referer_from is now actually respected by buffers (not
just the pager), load info display should work slightly better, etc.
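
The dup2 hand-off mentioned in the list above can be sketched like this
(illustrative names, not the actual pager code): the child running the
mailcap command simply inherits the response socket as its stdin.

#include <sys/types.h>
#include <unistd.h>

static void run_mailcap_entry(int sockfd, const char *cmd)
{
    pid_t pid = fork();
    if (pid == 0) {
        dup2(sockfd, STDIN_FILENO); /* the response body becomes stdin */
        close(sockfd);
        execl("/bin/sh", "sh", "-c", cmd, (char *)NULL);
        _exit(127); /* exec failed */
    }
    close(sockfd); /* the parent keeps no copy of the socket */
}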
|
| |
Only report when bytesRead has changed; otherwise we get unnecessary
load requests. (This means the -2 return value no longer exists; it did
not work correctly anyway.)
Also, fix the race condition that broke onload returns when onload
happened before the client requested the load.
|
| |
setCursorX only moves the screen backwards if the intended X position is
lower than the actual X position. Pass it -1 so that this is true even
with zero-width lines.
|
| |
This is an ancient bug, but it got much easier to trigger with mouse
scrolling support so it's time to fix it.
(The bug itself was that since both the client and buffer ends of the
controlling stream are blocking, they could get stuck when both were
trying to send() data to the other end while the socket buffer was full.
So now we set the client end to non-blocking.)
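
The client-side fix amounts to something like this (a sketch, not the actual
code); with O_NONBLOCK set, a full socket buffer makes send() fail with
EAGAIN instead of blocking, so the two ends can no longer deadlock each
other:

#include <fcntl.h>

static int set_nonblocking(int fd)
{
    int flags = fcntl(fd, F_GETFL, 0);
    if (flags < 0)
        return -1;
    return fcntl(fd, F_SETFL, flags | O_NONBLOCK);
}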
|
| |
* fix cursor jumping back to the start of the line (instead of the end
of the line) when it is outside the viewport and a leftwards update is
requested
* save setxsave too when line is not loaded yet
* always set needslines in onMatch when hlon (this was causing a blank
screen when incremental search was jumping around in large documents)
|
| |
* rename buffer enums
* fix isAscii for char 0x80 (see the sketch below)
* remove dead code from URL
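
The isAscii fix is presumably the classic boundary bug; something along
these lines (is_ascii here is an illustrative C rendition, not the actual
code):

#include <stdbool.h>

static bool is_ascii(unsigned char c)
{
    /* 0x00..0x7f only; a <= comparison would wrongly accept 0x80 */
    return c < 0x80;
}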
|
| |
reshape must do a render from zero, as it's a last resort for users to
fix up the page on a rendering bug.
switchCharset must reset prevStyled for obvious reasons (it refers to
a dead document).
|
| |
* reduce onload result size to a single int
* clean up mess that was the container onload handler
This fixes automatic refresh in console. Before, the client would
only request a screen update after receiving the number of bytes read,
but before the screen was actually reshaped (which obviously resulted
in a race condition). Now, "I've reshaped the document" is a separate
response (and is the only occasion where the screen is updated before
the final render).
|
| |
Some terminal emulators (AKA vte) refuse to set ws_xpixel and ws_ypixel
in the TIOCGWINSZ ioctl, so we now query for CSI 14 t as well. (Also CSI
18 t for good measure, just in case we can't ioctl for some reason.)
Also added some fallback (optionally forced) config values for width,
height, ppc, and ppl. (This is especially useful in dump mode.)
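
The fallback order described above looks roughly like this (a sketch, not
the actual term code). The terminal answers CSI 14 t with
"ESC [ 4 ; height ; width t" (pixels) and CSI 18 t with
"ESC [ 8 ; rows ; cols t" (cells):

#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

static void query_window_size(void)
{
    struct winsize ws;
    if (ioctl(STDOUT_FILENO, TIOCGWINSZ, &ws) == 0 &&
        ws.ws_xpixel > 0 && ws.ws_ypixel > 0) {
        printf("%ux%u cells, %ux%u pixels\n",
               ws.ws_col, ws.ws_row, ws.ws_xpixel, ws.ws_ypixel);
        return;
    }
    /* vte & co.: ask the terminal itself, then parse its replies. */
    fputs("\x1b[14t", stdout); /* text area size in pixels */
    fputs("\x1b[18t", stdout); /* text area size in cells */
    fflush(stdout);
}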
|
| |
Aside from being a wrapper of Request, it was just storing the -I
charset, except even that didn't actually work. Whoops.
This fixes -I effectively not doing anything; now it's a forced override
that even disables BOM sniffing. (If the user wants to decode a file
using a certain encoding, it seems wise to assume that they really
meant it.)
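
To illustrate what "disables BOM sniffing" means (a sketch with illustrative
names, not the actual charset code): without a forced charset we peek at the
first bytes for a byte order mark; with -I we do not even look.

#include <stddef.h>
#include <string.h>

static const char *sniff_bom(const unsigned char *buf, size_t n)
{
    if (n >= 3 && memcmp(buf, "\xef\xbb\xbf", 3) == 0)
        return "UTF-8";
    if (n >= 2 && memcmp(buf, "\xfe\xff", 2) == 0)
        return "UTF-16BE";
    if (n >= 2 && memcmp(buf, "\xff\xfe", 2) == 0)
        return "UTF-16LE";
    return NULL; /* no BOM */
}

static const char *pick_charset(const unsigned char *buf, size_t n,
                                const char *forced) /* from -I, or NULL */
{
    if (forced != NULL)
        return forced; /* forced override: skip sniffing entirely */
    const char *bom = sniff_bom(buf, n);
    return bom != NULL ? bom : "UTF-8"; /* default here is illustrative */
}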
|
| |
This fixes a bug where setContentType would call setHTML twice, which
messed up charsets and probably a couple more things. As a bonus, it
allows us to pass around the content type less.
In fact, buffer does not have to know its exact content type, just
whether it is in HTML mode or not. So that's all we tell it now;
only container still keeps track of the content type (as it should).
|
| |
The API is horrid :( but at least it copies less.
TODO: think of a better API.
|
| |
* set loaderPid in clones too
* handle URL in container the same way as in buffer
|
| |
No need for every new buffer to query the window size.
|
| |
This is required by the standard. (Without this, lots of websites have
incorrect background colors, because they set the body height to 100%
of the viewport.)
|
| |
* disallow Stream interface usage on non-blocking PosixStreams
* do not read estream of forkserver byte-by-byte (it's slow)
* do not call writeData with a zero len in formdata
* do not quote numbers in mailcap quoteFile
* remove some unused stuff
|
| |
Also spawn fewer processes in some cases.
|
| |
It broke line info in the console, since it's never fully loaded.
|
| |
Useful for filtering stuff through commands like rdrview.
|
| |
At last all BufferSources are unified.
To achieve the same effect as the previous CLONE source type, we now
use the "fromcache" flag in Request. This *forces* the document to be
streamed from the disk; if the file no longer exists for some reason,
an error is returned (i.e. the document is not re-downloaded).
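
The "forces" part can be pictured like this (a sketch with illustrative
names, not the actual loader code): the cached body is opened directly, and
a missing cache file becomes an error rather than a reason to refetch.

#include <fcntl.h>
#include <stdio.h>

static int open_cached_body(const char *cache_path)
{
    int fd = open(cache_path, O_RDONLY);
    if (fd < 0) {
        /* e.g. the cache file has been cleaned up: report an error,
         * do not fall back to the network */
        fprintf(stderr, "cache entry gone: %s\n", cache_path);
        return -1;
    }
    return fd; /* the document is streamed from disk */
}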
For a document to be cached, it has to be the main document of the
buffer (i.e. no additional resources requested with fetch()), and
also not an x-htmloutput HTML file (for those, the original source is
saved). The result is that toggleSource now always returns the actual
source for e.g. markdown files, not the HTML-transformed version.
Also, it is now possible to view the source of a document that is
still being downloaded.
buffer.sstream has almost been eliminated; it still exists, but only as
a pseudo-buffer to interface with EncoderStream and DecoderStream. It no
longer holds the entire source of a buffer at any point, and is cleared
as soon as the buffer is completely loaded.
|