Commit messages

used on 32-bit platforms
Now we use QuickJS-NG, which is better maintained than QJS and has
column tracking.
Also, kill twidth and its friends; we haven't been using it for a
while now. (In the future, a solution with PUA chars might be worth
exploring.)
required for poll
std/selectors uses OS-specific selector APIs, which sounds good in
theory (faster than poll!), but sucks for portability in practice.
Sure, you can fix portability bugs, but who knows how many there are
on untested platforms... poll is standard, so if it works on one
computer it should work on all other ones. (I hope.)
As a bonus, I rewrote the timeout API for poll, which incidentally
fixes setTimeout across forks. Also, SIGWINCH should now work on all
platforms (as we self-pipe instead of signalfd/kqueue magic).
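
The self-pipe trick mentioned here can be sketched roughly as follows; this
is a minimal, hypothetical example on top of std/posix, not Chawan's actual
event loop (SIGWINCH is imported manually, as it is not a standard POSIX
signal). The handler only writes a byte to a pipe, and the pipe's read end
is polled like any other fd.
```nim
import std/posix

# SIGWINCH is not part of POSIX proper, so pull it in from <signal.h>.
var SIGWINCH {.importc, header: "<signal.h>".}: cint

var sigPipe: array[2, cint] # [0] = read end, [1] = write end

proc onSigwinch(sig: cint) {.noconv.} =
  # Only async-signal-safe work here: write one byte to wake up poll().
  var c = '\0'
  discard posix.write(sigPipe[1], addr c, 1)

proc main() =
  discard pipe(sigPipe)
  var sa, oldSa: Sigaction
  sa.sa_handler = onSigwinch
  discard sigemptyset(sa.sa_mask)
  sa.sa_flags = 0
  discard sigaction(SIGWINCH, sa, oldSa)
  var fds: array[1, TPollfd]
  fds[0].fd = sigPipe[0]
  fds[0].events = POLLIN
  while true:
    # A real event loop would also register terminal/loader fds and derive
    # the timeout argument from the earliest pending setTimeout.
    if poll(addr fds[0], Tnfds(1), -1) > 0 and
        (fds[0].revents and POLLIN) != 0:
      var buf: array[64, char]
      discard posix.read(sigPipe[0], addr buf[0], 64)
      echo "terminal was resized"

main()
```
Because the handler does nothing but write(), it stays async-signal-safe;
the main loop handles the resize at its leisure.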
* reduce copies & allocations
* simplify SGR generation
* simplify uint parser
* use uint parser for signed ints too (to simplify overflow handling;
see the sketch below)
* use openArray[char] where possible
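
A sketch of the signed-via-unsigned idea; the names and the Option-based
interface are made up for illustration and are not the actual twtstr API.
```nim
import std/options

# Parse an unsigned integer with a single explicit overflow check.
proc parseUInt32(s: openArray[char]): Option[uint32] =
  if s.len == 0:
    return none(uint32)
  var n = 0'u32
  for c in s:
    if c notin '0'..'9':
      return none(uint32)
    let d = uint32(c) - uint32('0')
    if n > (high(uint32) - d) div 10:
      return none(uint32) # would overflow
    n = n * 10 + d
  return some(n)

# The signed parser reuses it, so the overflow logic lives in one place.
proc parseInt32(s: openArray[char]): Option[int32] =
  var start = 0
  var neg = false
  if s.len > 0 and s[0] in {'+', '-'}:
    neg = s[0] == '-'
    start = 1
  if start >= s.len:
    return none(int32)
  let u = parseUInt32(s.toOpenArray(start, s.high))
  if u.isNone:
    return none(int32)
  let x = u.get
  if neg:
    if x > uint32(high(int32)) + 1:
      return none(int32)
    return some(cast[int32](0'u32 - x)) # two's-complement negation
  if x > uint32(high(int32)):
    return none(int32)
  return some(int32(x))
```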
looks like it's also necessary for musl
fcntl has some cursed commands that we really don't want to allow
Make sure U+FFFD is returned when a bounds check fails.
std/unicode has the following issues:
* Rune is an int32, which implies overflow checking. Also, it is
distinct, so you have to convert it manually to do arithmetic.
* QJS libunicode and Chagashi work with uint32, so interfacing with
them required pointless type conversions.
* fastRuneAt is a template, meaning it's pasted into every call
site. Also, it decodes to UCS-4, so it generates two branches that
aren't even used. Overall, this led to quite some code bloat.
* fastRuneAt and lastRune have frustratingly different
interfaces. Writing code to handle both cases is error prone.
* On older Nim versions that we still support, std/unicode takes
strings, not openArray[char]s.
Replace it with "twtuni", which includes improved versions of the few
procedures from std/unicode that we actually use.
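
To illustrate the kind of decoder this implies (hypothetical name and
signature, not necessarily twtuni's), here is a UTF-8 reader over
openArray[char] that yields uint32 code points and falls back to U+FFFD on
malformed or truncated input, which is also what the earlier bounds-check
fix asks for:
```nim
# Decode the next code point starting at s[i] (assumes i < s.len),
# advancing i past it.
proc nextCodePoint(s: openArray[char]; i: var int): uint32 =
  let b0 = uint32(s[i])
  inc i
  var n: int      # number of continuation bytes
  var cp: uint32  # accumulated code point
  if b0 < 0x80:
    return b0
  elif (b0 and 0xE0) == 0xC0: n = 1; cp = b0 and 0x1F
  elif (b0 and 0xF0) == 0xE0: n = 2; cp = b0 and 0x0F
  elif (b0 and 0xF8) == 0xF0: n = 3; cp = b0 and 0x07
  else:
    return 0xFFFD # invalid lead byte
  for _ in 0 ..< n:
    if i >= s.len or (uint32(s[i]) and 0xC0) != 0x80:
      return 0xFFFD # truncated or bad continuation byte
    cp = (cp shl 6) or (uint32(s[i]) and 0x3F)
    inc i
  # reject overlong encodings, surrogates and out-of-range values
  const minCp = [0x0'u32, 0x80'u32, 0x800'u32, 0x10000'u32]
  if cp < minCp[n] or cp > 0x10FFFF or (cp >= 0xD800 and cp <= 0xDFFF):
    return 0xFFFD
  return cp
```
A backwards (lastRune-style) variant can share the same validation, which
avoids the interface mismatch complained about above.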
Wait, why does std's fastRuneAt try to decode UCS-4?
Hmm...
WSL needs it. It was already allowed on Android, so this just makes the
sandboxes converge a little.
Do it like parseEnumNoCase0, so we no longer instantiate a gazillion
different binary searches for the same type.
While we're at it, make matchNameProduction's searchInMap use uint32
too.
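
The trick can be illustrated like this (names and signatures are only
illustrative): the binary search itself is non-generic and works on
(string, uint32) pairs, so only the thin wrapper that converts the found
value back to the enum gets instantiated per type.
```nim
import std/strutils

proc searchInMap0(m: openArray[(string, uint32)]; k: string): int =
  # one shared, non-generic binary search
  var lo = 0
  var hi = m.high
  while lo <= hi:
    let mid = (lo + hi) div 2
    let c = cmpIgnoreCase(m[mid][0], k)
    if c == 0:
      return mid
    elif c < 0:
      lo = mid + 1
    else:
      hi = mid - 1
  return -1

proc parseEnumNoCase[T: enum](m: openArray[(string, uint32)]; k: string;
    res: var T): bool =
  # only this tiny wrapper is instantiated for each enum type
  let i = searchInMap0(m, k)
  if i == -1:
    return false
  res = T(m[i][1])
  return true

type Color = enum cBlue, cGreen, cRed

const ColorMap = [("blue", uint32(cBlue)), ("green", uint32(cGreen)),
  ("red", uint32(cRed))] # must stay sorted by name

var c: Color
doAssert parseEnumNoCase(ColorMap, "GREEN", c) and c == cGreen
```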
Until recently, glibc implemented it as fstatat. So don't trap on fstatat
(and, for consistency, fstat); return EPERM instead.
Just to be sure, rewrite sixel & stbi to never call fread.
* fix header case sensitivity issues (see the sketch below)
-> probably still wrong, as it discards the original
casing; better than nothing, anyway
* fix fulfill on generic promises
* support standard open() async parameter weirdness
* refactor loader response body reading (so bodyRead is no longer
mandatory)
* actually read the response body
Still missing: response body getters.
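
The header-case trade-off from the first bullet looks roughly like this (a
simplified, hypothetical sketch, not the actual Headers type): keying the
table on the lower-cased name makes lookups case-insensitive, at the cost of
losing the sender's original casing.
```nim
import std/[tables, strutils]

type Headers = object
  table: Table[string, seq[string]]

proc add(h: var Headers; name, value: string) =
  # lower-cased key: lookups are case-insensitive, original casing is lost
  h.table.mgetOrPut(name.toLowerAscii(), @[]).add(value)

proc getFirst(h: Headers; name: string): string =
  let vs = h.table.getOrDefault(name.toLowerAscii())
  if vs.len > 0: vs[0] else: ""

var h = Headers(table: initTable[string, seq[string]]())
h.add("Content-Type", "text/html")
doAssert h.getFirst("content-TYPE") == "text/html"
```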
Dispatch manually with fromJS instead.
+ slightly optimize getContentType
Nim 1.6 does not like it.
called on armhf
was only used by the PNG decoder which got replaced by stbi
* cssvalues, twtstr: unify enum parsing code paths, parse enums by
bisearch instead of hash tables
* mediaquery: refactor (long overdue), fix range comparison syntax
parsing, make ident comparisons case-insensitive (as they should be;
see the sketch below)
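
For reference, CSS ident matching is ASCII case-insensitive only: it folds
A-Z to a-z and leaves everything else (including non-ASCII) alone. A minimal
sketch, with a made-up helper name:
```nim
proc equalsIgnoreCaseAscii(a, b: openArray[char]): bool =
  if a.len != b.len:
    return false
  for i in 0 ..< a.len:
    var x = a[i]
    var y = b[i]
    # fold only the ASCII uppercase range
    if x in 'A'..'Z': x = char(uint8(x) + 0x20)
    if y in 'A'..'Z': y = char(uint8(y) + 0x20)
    if x != y:
      return false
  return true

doAssert equalsIgnoreCaseAscii("Min-Width", "min-width")
doAssert not equalsIgnoreCaseAscii("screen", "print")
```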
* buffer, pager, config: add meta-refresh value, which makes it possible
to follow http-equiv=refresh META tags (value parsing sketched below).
* config: clean up redundant format mode parser
* timeout: accept varargs for params to pass on to functions
* pager: add "options" dict to JS gotoURL
* twtstr: remove redundant startsWithNoCase
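
A loose sketch of what parsing a refresh value such as "5; url=/next.html"
involves; this is simplified, and the real algorithm (per the HTML spec) is
more lenient about whitespace, quotes and separators.
```nim
import std/strutils

# Returns (delay in seconds, target URL); a delay of -1 means "invalid".
proc parseRefresh(s: string): tuple[delay: int, url: string] =
  result = (delay: -1, url: "")
  let parts = s.split(';', maxsplit = 1)
  let d = parts[0].strip()
  if d.len == 0 or d.len > 9 or not d.allCharsInSet(Digits):
    return
  result.delay = parseInt(d)
  if parts.len > 1:
    var u = parts[1].strip()
    if u.toLowerAscii().startsWith("url="):
      u = u[4 .. ^1].strip()
    result.url = u

let r = parseRefresh("5; url=/next.html")
doAssert r.delay == 5 and r.url == "/next.html"
```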
* fix various parsing bugs
* rewrite state machine
* other small optimizations
Still not perfect, because it crashes on a missing /tmp dir, so you have
to set it manually...
and enable it by default.
Saves bandwidth; it's especially useful over SSH. Still not sure if this
is the right solution, since it now needs two select cycles instead
of one, and it makes yet another copy of the image. (Unnecessarily,
because stbi cannot stream its output, and stbiw cannot stream its
input.)
Also, to save memory, we now discard decoded images of buffers that are
not being viewed.
* resize images with stb_image_resize
* use tee for output handle redirection (redirectToFile blocks)
* cache original image files
* accept lseek in sandbox
* misc stbi fixes
For now, I just pulled in stb_image_resize v1. v2 is an extra 150K in
size, not sure if it's worth the cost. (Either way, we can always switch
later if needed, since the API is almost the same.)
Next step: move sixel/kitty encoders to CGI, and cache their output in
memory instead of the intermediate RGBA representation.
Now we have decoders for GIF, JPEG and BMP. Also, the in-house PNG decoder
has been dropped in favor of the stbi implementation; this means we no
longer depend on zlib, since stbi comes with a built-in inflate
implementation.
* multi-processed and sandboxed PNG decoding & encoding (through local
CGI)
* improved request body passing (including support for output id as
response body)
* simplified & faster blob()/text() - now every request starts
suspended, and OngoingData.buf has been replaced with loader's
buffering capability
* image caching: we no longer pull bitmaps from the container after
every single getLines call
Next steps: replace our bespoke PNG decoder with something more usable,
add other decoders, and make them stream.
Operation "modularize Chawan somewhat" part 3
seems to get called for signal handlers
* make Client an instance of Window (for less special casing)
* misc work on Request & fetch
* improve origin comparison (opaque origins of the same URL are now
considered the same)
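
The origin comparison can be pictured like this (types and names are made
up): a tuple origin compares by scheme/host/port, while an opaque origin
carries the URL it was derived from. The HTML standard only considers an
opaque origin equal to itself, so treating opaque origins of the same URL as
equal is the deliberate deviation described above.
```nim
type
  OriginKind = enum okTuple, okOpaque
  Origin = object
    case kind: OriginKind
    of okTuple:
      scheme, host: string
      port: uint16
    of okOpaque:
      source: string # serialized URL the opaque origin was derived from

proc isSameOrigin(a, b: Origin): bool =
  if a.kind != b.kind:
    return false
  case a.kind
  of okTuple:
    return a.scheme == b.scheme and a.host == b.host and a.port == b.port
  of okOpaque:
    return a.source == b.source
```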
* add $LOGNAME to the tmp directory name, so that tmpdirs of separate
users don't conflict
* use separate directory for sockets, so that we do not have to give
buffers access to all cached pages
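
The resulting layout, sketched with illustrative paths (not necessarily the
exact names Chawan uses):
```nim
import std/os

proc getTmpDir(): string =
  # per-user temporary directory, so different users' dirs don't collide
  let user = getEnv("LOGNAME", "anonymous")
  getTempDir() / ("cha-tmp-" & user)

proc getSockDir(): string =
  # sockets get their own subdirectory, so buffer processes can be granted
  # access to it without also seeing the cached pages
  getTmpDir() / "sockets"

echo getTmpDir()  # e.g. /tmp/cha-tmp-alice
echo getSockDir() # e.g. /tmp/cha-tmp-alice/sockets
```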
Use a LUContext to only load required CharRanges once per pager.
Also, add kana & hangul vi word break categories for convenience.
Instead of using the built-in (and outdated, and buggy) tables, we now
use libunicode from QJS. This shaves some bytes off the executable,
though far less than I had imagined it would.
Also, a surprising effect of this change: because libunicode's tables
aren't glitched out, kanji properly gets classified as alpha. I found
this greatly annoying because `w' in Japanese text would now jump
through whole sentences. As a band-aid solution I added an extra
Han category, but I wish we had a more robust solution that could
differentiate between *all* scripts.
TODO: I suspect that separately loading the tables for every rune in
breaksViWordCat is rather inefficient. Using some context object (at
least per operation) would probably be beneficial.
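
The band-aid can be pictured like this (category names, ranges and logic are
purely illustrative; the real code asks libunicode): `w' stops wherever the
break category of adjacent characters changes, so giving Han, Kana and
Hangul their own categories keeps one motion from swallowing a whole
Japanese sentence.
```nim
type BreakCategory = enum
  bcSpace, bcSymbol, bcAlpha, bcDigit, bcHan, bcKana, bcHangul

proc breakCategory(u: uint32): BreakCategory =
  # a real implementation would consult libunicode's tables here
  if u in 0x3040'u32 .. 0x30FF'u32: bcKana     # hiragana + katakana
  elif u in 0x4E00'u32 .. 0x9FFF'u32: bcHan    # CJK unified ideographs
  elif u in 0xAC00'u32 .. 0xD7A3'u32: bcHangul # hangul syllables
  elif u in uint32('0') .. uint32('9'): bcDigit
  elif u in uint32('a') .. uint32('z') or u in uint32('A') .. uint32('Z'):
    bcAlpha
  elif u == uint32(' ') or u == uint32('\t') or u == uint32('\n'): bcSpace
  else: bcSymbol

proc breaksViWord(prev, next: uint32): bool =
  # a word motion stops where the category changes
  breakCategory(prev) != breakCategory(next)
```
A per-operation context (cf. the LUContext commit above) is the natural
place to cache whatever tables breakCategory needs.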
openssl needs it
Unsigned operations and conversions to unsigned types always wrap/narrow
without checks, so no need to manually mask/cast/etc. them.
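
Concretely, these are the semantics being relied on:
```nim
let x = 0xFFFFFFFF'u32
doAssert x + 1 == 0     # unsigned arithmetic wraps, no OverflowDefect
let i = 300
doAssert uint8(i) == 44 # narrowing to an unsigned type truncates, no check
```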
std's version is known to be broken on Nim versions we still support, and
it makes no sense to use different decoders anyway.
(This does introduce a bit of dependency hell, because js/base64 depends on
js/javascript, which tries to bring in the entire QuickJS runtime. So we
move the base64 code out into twtstr, and manually convert a
Result[string, string] to a DOMException in js/base64.)