| Commit message |
| |
* actually download & compile modules (but don't run them yet)
* fix a bug in XHR (on some older Nim versions, move() doesn't
actually move)
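A minimal, hypothetical sketch of the move() pitfall in the second item
(illustrative names, not Chawan's actual XHR code): on some older Nim
versions and memory managers, move() degrades to a copy and leaves its
source intact, so code that relies on the source being emptied has to
reset it explicitly.

proc takeResponseBody(buf: var string): string =
  # move() should steal buf's contents and leave it empty, but on some
  # older Nim versions it is effectively just a copy.
  result = move(buf)
  buf.setLen(0)  # make the "source is now empty" assumption hold everywhere

var body = "partial response data"
let chunk = takeResponseBody(body)
assert body.len == 0 and chunk.len > 0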
|
| |
Dispatch manually with fromJS instead.
|
| |
* xmlhttprequest: fix missing import
* painter: generic tuple workaround
* dynstream: merge module with implementations (so it will work with
vtables)
Not enabling vtables yet since it doesn't work with refc.
|
| |
* buffer, pager, config: add meta-refresh value, which makes it possible
to follow http-equiv=refresh META tags (see the sketch after this list).
* config: clean up redundant format mode parser
* timeout: accept varargs for params to pass on to functions
* pager: add "options" dict to JS gotoURL
* twtstr: remove redundant startsWithNoCase
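To illustrate the meta-refresh item above: the content attribute of such a
META tag looks like "5; url=https://example.org/", i.e. a delay in seconds
optionally followed by a target URL. A rough, hypothetical parsing sketch
(not the parser this commit adds):

import std/strutils

proc parseRefresh(content: string): tuple[delay: int, url: string] =
  let parts = content.split({';', ','}, maxsplit = 1)
  try:
    result.delay = parseInt(parts[0].strip())
  except ValueError:
    result.delay = -1  # invalid delay; the directive would be ignored
  if parts.len > 1:
    var u = parts[1].strip()
    if u.toLowerAscii().startsWith("url="):
      u = u[4 .. ^1].strip(chars = {' ', '\'', '"'})
    result.url = u

let (delay, url) = parseRefresh("3; url=https://example.org/")
assert delay == 3 and url == "https://example.org/"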
|
| |
Fixes the following bug:
* click a link that redirects somewhere
* go back
* discard the buffer (that had the link)
* discard the new buffer
After this, you would find yourself in a zombie buffer that you had
previously discarded.
|
| |
This fixes a bug where cloning buffers with images would crash the
browser.
|
| |
Fixes the bug where getting redirected to a buffer that the pager then
deleted (e.g. image display, site no longer available, etc.) would land
you in a buffer detached from the main tree.
|
| |
Saves bandwidth; it's especially useful over SSH. Still not sure if this
is the right solution, since it now needs two select cycles instead of
one, and it makes yet another copy of the image. (Unnecessarily, because
stbi cannot stream its output, and stbiw cannot stream its input.)
Also, to save memory, we now discard decoded images of buffers that are
not being viewed.
|
| |
With many limitations:
* slightly randomized expiry, so it's harder to fingerprint (see the
sketch after this list)
* only images, so e.g. CSS is still left uncached
* it's per-buffer and non-persistent, so images are still redownloaded
for every new page load
So it's more image sharing between placements than true caching.
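A small, hypothetical sketch of the randomized-expiry idea (made-up names;
the real cache is more involved): a per-entry random offset on top of the
base TTL makes exact cache lifetimes harder to probe.

import std/[random, sequtils, times]

randomize()  # seed the global RNG once

type CachedImage = object
  url: string
  data: string
  expiry: float  # unix time, in seconds

proc put(cache: var seq[CachedImage]; url, data: string; baseTtl = 600.0) =
  let jitter = rand(60.0)  # up to a minute of per-entry jitter
  cache.add(CachedImage(url: url, data: data,
                        expiry: epochTime() + baseTtl + jitter))

proc sweep(cache: var seq[CachedImage]) =
  let now = epochTime()
  cache.keepItIf(it.expiry > now)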
|
| |
* merge select into container
* avoid unnecessary redraws in draw() for parts of the screen that
haven't been updated
* various image redraw fixes
|
| |
* basic repaint algorithm for sixel (instead of brute force "clear the
whole screen")
* do not re-send kitty images already on the screen
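A rough sketch of the second point, with assumed names and simplified
escape sequences (not the actual term module): remember which kitty image
IDs the terminal already holds, and re-place those instead of
re-transmitting the payload.

import std/sets

var sentImages: HashSet[int]  # image IDs the terminal already knows

proc drawKittyImage(id: int; pngBase64: string) =
  if id notin sentImages:
    # a=T: transmit and display; only needed the first time (f=100: PNG)
    stdout.write("\e_Gf=100,a=T,i=" & $id & ";" & pngBase64 & "\e\\")
    sentImages.incl(id)
  else:
    # a=p: re-display an image the terminal already stores under this ID
    stdout.write("\e_Ga=p,i=" & $id & "\e\\")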
|
| |
* multi-processed and sandboxed PNG decoding & encoding (through local
CGI)
* improved request body passing (including support for output id as
response body)
* simplified & faster blob()/text() - now every request starts
suspended, and OngoingData.buf has been replaced with loader's
buffering capability
* image caching: we no longer pull bitmaps from the container after
every single getLines call
Next steps: replace our bespoke PNG decoder with something more usable,
add other decoders, and make them stream.
|
| |
naturally, it's opt-in
|
| |
* refactor form submission
* add options to specify form handling per protocol
* block cross-protocol POST requests
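A hedged sketch of the last item (hypothetical helper, not the real form
submission code): a POST whose action URL uses a different scheme than the
submitting document is rejected outright.

import std/strutils

proc scheme(url: string): string =
  url.split(':', maxsplit = 1)[0].toLowerAscii()

proc allowSubmit(documentUrl, actionUrl, httpMethod: string): bool =
  # non-POST submissions are left to the per-protocol form options;
  # cross-protocol POSTs are always blocked
  if httpMethod.toUpperAscii() != "POST":
    return true
  return scheme(documentUrl) == scheme(actionUrl)

assert not allowSubmit("https://example.org/form",
                       "mailto:user@example.org", "POST")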
|
| |
Operation "modularize Chawan somewhat" part 3
|
| |
* make Client an instance of Window (for less special casing)
* misc work on Request & fetch
* improve origin comparison (opaque origins of the same URL are now
considered the same)
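A hedged sketch of the origin comparison in the last item (simplified; the
real Origin type is more involved): an opaque origin keeps the serialized
URL it was derived from, so two opaque origins derived from the same URL
compare as equal.

type
  OriginKind = enum okTuple, okOpaque
  Origin = object
    case kind: OriginKind
    of okTuple:
      scheme, host: string
      port: int
    of okOpaque:
      source: string  # serialized URL the opaque origin came from

proc isSameOrigin(a, b: Origin): bool =
  if a.kind != b.kind:
    return false
  case a.kind
  of okTuple:
    return a.scheme == b.scheme and a.host == b.host and a.port == b.port
  of okOpaque:
    return a.source == b.source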
|
| |
* fix enctype not getting picked up
* fix the form data constructor requiring an open() syscall (which gets
blocked by our seccomp filter)
* add the closing boundary to the end of multipart bodies (see the sketch
below)
* pass fds instead of path names through WebFile/Blob and send those
through bufwriter/bufreader
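The closing-boundary item refers to the final delimiter of a
multipart/form-data body, which is the boundary wrapped in two leading and
two trailing dashes; without it, some servers hang or reject the request.
A hypothetical sketch:

proc multipartBody(boundary: string; fields: seq[(string, string)]): string =
  for field in fields:
    let (name, value) = field
    result.add("--" & boundary & "\r\n")
    result.add("Content-Disposition: form-data; name=\"" & name & "\"\r\n\r\n")
    result.add(value & "\r\n")
  # the closing delimiter: "--boundary--"
  result.add("--" & boundary & "--\r\n")

echo multipartBody("chawanBoundary123", @[("q", "cats"), ("lang", "en")])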
|
| |
Use a LUContext to only load required CharRanges once per pager.
Also, add kana & hangul vi word break categories for convenience.
|
| |
Instead of using the built-in (and outdated, and buggy) tables, we now
use libunicode from QJS. This shaves some bytes off the executable,
though far less than I had imagined it would.
Also, a surprising effect of this change: because libunicode's tables
aren't glitched out, kanji properly gets classified as alpha. I found
this greatly annoying because `w' in Japanese text would now jump
through whole sentences. As a band-aid solution I added an extra
Han category, but I wish we had a more robust solution that could
differentiate between *all* scripts.
TODO: I suspect that separately loading the tables for every rune in
breaksViWordCat is rather inefficient. Using some context object (at
least per operation) would probably be beneficial.
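A hedged, heavily simplified sketch of the band-aid described above
(hard-coded ranges instead of libunicode lookups, hypothetical names):
giving Han its own vi word break class keeps `w' from skipping across
whole Japanese sentences, since a category change is what ends a word.

import std/unicode

type BreakCategory = enum bcSpace, bcAlpha, bcHan, bcKana, bcHangul, bcSymbol

proc breakCategory(r: Rune): BreakCategory =
  let u = int(r)
  if u == 0x20 or u == 0x09: return bcSpace
  if u in 0x3040 .. 0x30FF: return bcKana     # hiragana & katakana
  if u in 0x4E00 .. 0x9FFF: return bcHan      # CJK unified ideographs
  if u in 0xAC00 .. 0xD7A3: return bcHangul   # hangul syllables
  if u < 0x80 and char(u) in {'a'..'z', 'A'..'Z', '0'..'9', '_'}:
    return bcAlpha
  return bcSymbol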
|
| |
* prefix to-be-separated modules with js
* remove dynstreams dependency
* untangle from EmptyPromise
* move typeptr into tojs
|
| |
no point in having identical overloads
|
| |
* png: add missing filters, various decoder fixes
* term: fix kitty response interpretation, add support for kitty image
detection
* buffer, pager: initial image display support
Emphasis on "initial"; it only "works" with kitty output and PNG input.
Also, it's excruciatingly slow, and repaints images way too often.
Intentionally left undocumented for now, until it actually becomes
useful. In the meantime, adventurous users can find out for themselves why:
[[siteconf]]
url = "https://.*"
images = true
|
| |
We use libseccomp, which is now a semi-mandatory dependency on Linux.
(You can still build without it, but only if you pass a scary long flag
to make.)
For this to work I had to disable getTimezoneOffset, which would
otherwise call localtime_r, which in turn reads some files from
/usr/share/zoneinfo. To allow this we would have to give buffer processes
unrestricted openat(2) access, which is unacceptable.
(Giving websites access to the local timezone is a fingerprinting vector,
so if this ever gets fixed, it should be an opt-in config setting.)
This patch also includes misc fixes to buffer cloning, and fixes the
LIBEXECDIR override in the makefile so that it is actually useful.
|
| |
* separate params with ; (semicolon) instead of , (comma)
* reduce screaming snake case use
* wrap long lines
|
| |
For some reason, halfPageDown decremented height instead of incrementing
it, which caused some rather weird behavior where halfPageUp +
halfPageDown would put the cursor in a different position than it was
before.
Also, we must increment *before* dividing to mimic vi behavior properly.
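To make the arithmetic concrete, a hedged sketch (not the actual pager
code): both half-page movements must use the same delta, and the +1 has to
happen before the integer division.

proc halfPageDelta(height: int): int =
  (height + 1) div 2  # increment first, then divide, as vi does

proc halfPageDown(cursorY: var int; height, numLines: int) =
  cursorY = min(cursorY + halfPageDelta(height), numLines - 1)

proc halfPageUp(cursorY: var int; height: int) =
  cursorY = max(cursorY - halfPageDelta(height), 0)

# With height = 24, the old code moved down by (24 - 1) div 2 = 11 lines
# but up by (24 + 1) div 2 = 12, so an up/down pair drifted by one line.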
|
| |
* fix mismatch between return value & read value that would either crash
or freeze the browser depending on its mood
* add an assertion to detect the above footgun
* fix some resource leaks
* fix cancel() in buffer iterating over a table while calling a function
that altered the same table
* if the user cancels before anything is loaded, destroy the container too
|
| |
This way they are no longer compatible, but we no longer need them to
be compatible anyway.
(This also forces us to throw out the old serialize module, and use
packet writers everywhere.)
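A hedged sketch of the packet idea with a made-up wire layout (the real
bufwriter/bufreader differ): every message carries an explicit length
prefix, so the reader always knows how many bytes belong to the current
packet and never reads past it.

import std/streams

proc writePacket(s: Stream; payload: string) =
  s.write(uint32(payload.len))  # fixed-size length prefix
  s.write(payload)

proc readPacket(s: Stream): string =
  let n = s.readUint32()
  result = s.readStr(int(n))

let ms = newStringStream()
ms.writePacket("serialized buffer message")
ms.setPosition(0)
assert ms.readPacket() == "serialized buffer message"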
|
| |
* send title to pager as soon as it's available
* expose `title' to DOM
* rename undocumented `getTitle' js function to `title' getter in
Container
|
| |
this is buffer reading from pager
|
| |
* `s{Enter}' now saves link, and `sS' saves source.
* Changed ;, +, @ to g0, g$, gc so that it's somewhat consistent with
vim (and won't conflict with ; for "repeat jump to char")
* Changed (, ) to -, + so that it doesn't conflict with vi's
"previous/next sentence" (once we have it...)
* Add previously missing keybindings to about:chawan
|
| |
Useful when browsing plaintext files; w3m has it too.
|
| |
it's an unintended side effect that we do not want
|
| |
This has its own problems, but application/octet-stream has the horrible
consequence that opening any local file with an unrecognized type
automatically quits the browser.
(FWIW, w3m also falls back to text/plain, so it's not such an unreasonable
default.)
The proper solution would be to a) fix the bug that makes the browser
auto-quit and b) show a "what to do" prompt for unrecognized file types
(and allow users to override it, preferably on a per-protocol basis).
|
| |
* also set fromX to corrected target x if target x is less than corrected x;
this is mainly so that setCursorX(-1) works as expected
* return w from cursorFirstX() even if cursorx is <= the last character
|
| |
It was defined in the wrong module, and unnecessarily included
LoaderClientConfig.
Also, referrerPolicy was not being propagated to loader clients because
it was (incorrectly) in BufferConfig instead of LoaderClientConfig.
|
| |
It can happen that a container is deleted before it acquires a buffer
process; add it to the `unreg' array in this case too.
|
| |
* extern -> gone, runproc absorbed by pager, others moved into io/
* display -> local/ (where else would we display?)
* xhr -> html/
* move out WindowAttributes from term, so we don't depend on local
from server
|
| |
Only for the source for now; the rendered document is a bit more
complicated. (Also, get rid of the useless extern/editor module.)
|
| |
Sometimes, headers take a while to reach us even after the result has
been sent, e.g.:
echo 'Cha-Control: Connected'
sleep 5
echo 'Cha-Control: ControlDone'
^ this froze the UI for 5 seconds, which is certainly not what we want.
Since we don't have a proper buffered reader yet, and I don't want to
write another disgusting hack like BufStream, we just use a state
machine to figure out how much we can read. Sounds bad, but in practice
it works just fine since loader's response patterns are very simple.
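A hedged sketch of that state machine, with hypothetical names: whatever
bytes have arrived are appended to a buffer, and only complete Cha-Control
lines are acted on, so a slow CGI script can no longer stall the UI.

import std/strutils

type
  ControlState = enum csWaitConnected, csWaitDone, csDone
  ControlParser = object
    state: ControlState
    partial: string  # bytes of an incomplete line, kept for the next read

proc feed(p: var ControlParser; data: string) =
  p.partial.add(data)
  while p.state != csDone:
    let nl = p.partial.find('\n')
    if nl < 0:
      break  # no complete line yet; go back to the select loop
    let line = p.partial[0 ..< nl].strip(chars = {'\r'})
    p.partial = p.partial[nl + 1 .. ^1]
    if line == "Cha-Control: Connected":
      p.state = csWaitDone
    elif line == "Cha-Control: ControlDone":
      p.state = csDone
    # other control/header lines would be handled here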
|
| |
Better to compute the values we need on demand at the call sites; this
way, we can pass content type attributes through to mailcap too.
(Also, fix a bug where applyResponse was called twice.)
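A hedged sketch of what "content type attributes" means here (hypothetical
helper): the parameters after the media type, e.g. charset=UTF-8, which a
mailcap entry can then receive through %{charset}-style substitutions.

import std/strutils

type ContentType = tuple[mediaType: string, attrs: seq[(string, string)]]

proc splitContentType(value: string): ContentType =
  let parts = value.split(';')
  result.mediaType = parts[0].strip().toLowerAscii()
  for i in 1 ..< parts.len:
    let kv = parts[i].split('=', maxsplit = 1)
    if kv.len == 2:
      result.attrs.add((kv[0].strip().toLowerAscii(),
                        kv[1].strip(chars = {' ', '"'})))

let (mediaType, attrs) = splitContentType("text/html; charset=UTF-8")
assert mediaType == "text/html"
assert attrs == @[("charset", "UTF-8")]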
|