It works fine AFAICT, just missing VP8 deblocking filters, so lossy
WebP images don't look great.
I have extended the API a bit to allow reading from stdin, not just
paths. Otherwise, it's the same as matanui159/jebp.
TODO: add loop filters
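
A minimal sketch of the stdin path (jebpDecodeMem is a hypothetical
name for an in-memory decode call, not the actual binding):

    # Hedged sketch: slurp the whole image from stdin, then decode from
    # memory instead of opening a path.
    let data = readAll(stdin)        # works for pipes, unlike a path API
    let image = jebpDecodeMem(data)  # hypothetical in-memory decode call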
|
Now we have decoders for gif, jpeg, and bmp. Also, the in-house PNG
decoder has been replaced with the stbi implementation; this means we
no longer depend on zlib, since stbi comes with a built-in inflate
implementation.
|
* multi-processed and sandboxed PNG decoding & encoding (through local
  CGI; see the sketch below)
* improved request body passing (including support for output id as
  response body)
* simplified & faster blob()/text(): now every request starts
  suspended, and OngoingData.buf has been replaced with the loader's
  buffering capability
* image caching: we no longer pull bitmaps from the container after
  every single getLines call
Next steps: replace our bespoke PNG decoder with something more usable,
add other decoders, and make them stream.
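
The decoders follow the usual local CGI shape: image bytes come in on
stdin, and a small header block plus the raw bitmap goes out on stdout.
A minimal sketch, assuming an illustrative header name and a
hypothetical decodePng call (this is not the actual protocol):

    # Hedged sketch of a decode adapter; the header name and decodePng
    # are illustrative, not Chawan's actual protocol or binding.
    let png = readAll(stdin)
    let (width, height, rgba) = decodePng(png)  # hypothetical binding
    stdout.write("Cha-Image-Dimensions: " & $width & "x" & $height & "\n\n")
    stdout.write(rgba)  # raw RGBA bytes follow the blank line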
|
Operation "modularize Chawan somewhat" part 3
|
We have a markdown converter, so why not use it?
|
* prefix to-be-separated modules with js
* remove dynstreams dependency
* untangle from EmptyPromise
* move typeptr into tojs
|
Thanks @onemoresuza for noticing.
|
The 100 KB or so this adds hurts far less than the previous situation:
no manual pages at all without pandoc, and no auto-updating them
through make all.
|
std's version is known to be broken on Nim versions we still support,
and it makes no sense to use different decoders anyway.
(This does introduce a bit of a dependency hell, because js/base64
depends on js/javascript, which tries to bring in the entire QuickJS
runtime. So we move the decoder out into twtstr, and manually convert
a Result[string, string] to a DOMException in js/base64.)
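
The conversion described above, as a shape sketch (atob0, JSResult and
errDOMException are illustrative stand-ins for the actual helpers, so
this is not compilable as-is):

    # Hedged sketch: twtstr does the pure decoding and returns
    # Result[string, string]; js/base64 maps the error branch onto a
    # DOMException, so twtstr never has to see QuickJS.
    proc atob(data: string): JSResult[string] =
      let r = twtstr.atob0(data)  # Result[string, string]
      if r.isErr:
        return errDOMException(r.error, "InvalidCharacterError")
      return ok(r.get)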
|
(Sadly some layout tests still fail.)
|
We use libseccomp, which is now a semi-mandatory dependency on Linux.
(You can still build without it, but only if you pass a scary long
flag to make.)
For this to work I had to disable getTimezoneOffset, which would
otherwise call localtime_r, which in turn reads files from
/usr/share/zoneinfo. Allowing that would mean giving buffer processes
unrestricted openat(2) access, which is unacceptable.
(Giving websites access to the local timezone is a fingerprinting
vector, so if this ever gets fixed, it should be an opt-in config
setting.)
This patch also includes misc fixes to buffer cloning, and fixes the
LIBEXECDIR override in the Makefile so that it is actually useful.
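
For flavor, this is roughly what setting up a libseccomp allowlist
looks like; a generic sketch of the library's C API through Nim's FFI,
with an illustrative syscall list, not Chawan's actual filter:

    # Hedged sketch: anything not explicitly allowed traps.
    {.passL: "-lseccomp".}
    const
      SCMP_ACT_TRAP = 0x00030000'u32
      SCMP_ACT_ALLOW = 0x7FFF0000'u32
    proc seccomp_init(defAction: uint32): pointer
      {.importc, header: "seccomp.h".}
    proc seccomp_syscall_resolve_name(name: cstring): cint
      {.importc, header: "seccomp.h".}
    proc seccomp_rule_add(ctx: pointer, action: uint32, syscall: cint,
      argCnt: cuint): cint {.importc, header: "seccomp.h", varargs.}
    proc seccomp_load(ctx: pointer): cint {.importc, header: "seccomp.h".}

    let ctx = seccomp_init(SCMP_ACT_TRAP)  # default action: trap
    for name in ["read", "write", "exit_group", "futex"]:  # illustrative
      discard seccomp_rule_add(ctx, SCMP_ACT_ALLOW,
        seccomp_syscall_resolve_name(cstring(name)), 0)
    discard seccomp_load(ctx)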
|
-O3 takes ages to compile on slower computers, and it makes roughly zero
difference in performance
|
Otherwise, if you set OBJDIR as a non-privileged user and then install
with sudo, it will fail to copy the manpages.
|
it's an unnecessary abstraction here
|
This way they are no longer compatible, but we no longer need them to
be compatible anyway.
(This also forces us to throw out the old serialize module, and use
packet writers everywhere.)
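
The packet-writer idea, reduced to a sketch (a generic length-prefixed
write; this is an illustration, not Chawan's actual packet format):

    # Hedged sketch: serialize into a buffer, then write it out in one
    # go with a length prefix so the reader knows how much to expect.
    proc writePacket(f: File, payload: string) =
      var plen = uint32(payload.len)
      discard f.writeBuffer(addr plen, sizeof(plen))  # length prefix
      f.write(payload)                                # then the payload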
|
Depending on Perl just for this is silly.
Now we use libregexp for filtering basically the same things as
w3mman2html did. This required another patch to QuickJS to avoid
pulling in the entire JS engine, but in return, we can now run regexes
without a dummy JS context global variable.
Also, man.nim now tries to find a man command on the system even if it's
not in /usr/bin/man.
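
The bulk of that filtering is undoing roff overstrikes, where "X\bX"
means bold and "_\bX" means underline. A simplified sketch using a
plain character loop instead of the actual libregexp patterns:

    # Hedged sketch: drop man(1) overstrike sequences, keeping only the
    # final character of each one.
    proc stripOverstrike(line: string): string =
      var i = 0
      while i < line.len:
        if i + 2 < line.len and line[i + 1] == '\b':
          result.add(line[i + 2])  # keep the overstruck character
          i += 3
        else:
          result.add(line[i])
          inc i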
|
Handling text/plain as ANSI colored text was problematic for two
reasons:
* You couldn't actually look at the real source of HTML pages or text
  files that contained ANSI colors. In general, I only want ANSI
  colors when piping something into my pager, not when viewing any
  random file.
* More importantly, it introduced a separate rendering mode for
  plaintext documents, which resulted in the problem that only some
  buffers had DOMs. This made it impossible to add functionality that
  operates on the buffer's DOM, e.g. to implement w3m's MARK_URL. It
  also locked us into the horribly inefficient model of rendering
  entire documents line by line.
Now we solve the problem in two separate parts:
* text/x-ansi is used automatically for documents received through
  stdin. A text/x-ansi handler, ansi2html, converts ANSI formatting to
  HTML (see the sketch below). text/x-ansi is also used for the .ans
  and .asc file extensions.
* text/plain is a separate input mode in buffer, which places all text
  in a single <plaintext> tag. Crucially, this does not invoke the HTML
  parser; that would eat NUL characters, which we should avoid.
One blind spot still remains: copiousoutput used to display ANSI
colors, and now it doesn't. To solve this, users can add the
x-ansioutput extension field to their mailcap entries, which behaves
like x-htmloutput except it first pipes the output into ansi2html.
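
A toy sketch of the ansi2html direction, handling only the basic SGR
foreground colors (the real converter covers much more, and must also
escape HTML metacharacters, which is omitted here):

    import std/strutils

    const colors = ["black", "red", "green", "yellow",
                    "blue", "magenta", "cyan", "white"]

    proc ansi2html(s: string): string =
      var open = false
      var i = 0
      while i < s.len:
        if s[i] == '\e' and i + 1 < s.len and s[i + 1] == '[':
          var j = i + 2
          while j < s.len and s[j] != 'm': inc j  # find end of sequence
          let param = s[i + 2 ..< j]
          if open:
            result.add("</span>")
            open = false
          if param.len > 0 and param.allCharsInSet(Digits):
            let n = parseInt(param)
            if n in 30 .. 37:  # basic foreground colors only
              result.add("<span style='color: " & colors[n - 30] & "'>")
              open = true
          i = j + 1
        else:
          result.add(s[i])
          inc i
      if open:
        result.add("</span>")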
|
We do not use threads at the moment, so there's no need to link to
pthreads either.
(Also, add nim.cfg to the cha target in the Makefile.)
|
Derived from w3mman2html.cgi; there are only a few minor differences:
* different man page opener command
* use man:, man-k:, man-l: instead of a query string to specify the
  action
* no form input (C-lC-uman:pageC-m is faster anyway)
TODO: rewrite in Nim so we don't have to depend on Perl...
|
* Fix incorrect internal definition of the fragment percent-encode set
* urlenc, urldec: simple utility programs mainly for use with shell
  local CGI scripts; see the sketch below. (Sadly the printf + xargs
  solution is not portable.)
* Pass the libexec directory as an env var to local CGI scripts
* Update trans.cgi to use urldec, and add an example for combining it
  with selections
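
A minimal sketch of the urlenc direction; the exact set of characters
left unencoded is an assumption here (RFC 3986 unreserved):

    import std/strutils

    const Unreserved = {'A'..'Z', 'a'..'z', '0'..'9', '-', '.', '_', '~'}

    proc percentEncode(s: string): string =
      for c in s:
        if c in Unreserved:
          result.add(c)
        else:
          result.add('%' & toHex(ord(c), 2))  # e.g. ' ' becomes %20

    echo percentEncode(readAll(stdin))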
|
Also for reducing compilation time.
|
Speeds up compilation somewhat. Included in the repository because
it's not that huge.
Misc changes:
* use seq, not set for UCS-16 sets (it takes up less space; see the
  note below)
* remove unnecessary noSideEffects casts
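
The space argument, concretely: a Nim set[uint16] is a fixed 8 KiB
bitmap no matter how sparse the set is, while a sorted seq stores only
its members and can still be queried in O(log n):

    import std/algorithm

    # membership test over a sorted seq instead of a set[uint16] bitmap
    proc contains(s: seq[uint16], u: uint16): bool =
      s.binarySearch(u) != -1

    let combining = @[0x0300'u16, 0x0301, 0x0302]  # illustrative members
    assert 0x0301'u16 in combining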
|
why not
|
* Rewrite in Nim
* This time, do not use a state machine (it was a very bad idea)
* Do not emit <br> for every line; use CSS instead
* Use CSS to avoid the double newline caused by margins
* Properly support list items
|
hopefully this works
|
also in ftp: clean up resources before exit
|
reimplementing it portably in Nim seems incredibly annoying, so we
just use C
|
as done in upstream
|
* Makefile: fix parallel build, add new binaries to install target
* twtstr: split out libunicode-related stuff to luwrap
* config: quote default gopher2html URL env var for unquote
* adapter/: get rid of types/url dependency, use CURL url in all cases
|
Avoid computing e.g. charwidth data for http, which does not need it
at all.
|
Now it is (technically) no longer mandatory to link to libcurl.
Also, Chawan is at last completely protocol and network backend
agnostic :)
* Implement multipart requests in local CGI
* Implement simultaneous download of CGI data
* Add REQUEST_HEADERS env var with all headers (see the sketch below)
* cssparser: add a missing check in consumeEscape
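
A sketch of consuming REQUEST_HEADERS from a local CGI script; the
serialization assumed here (one "Name: value" per line) is a guess, not
a documented format:

    import std/[os, strutils]

    for line in getEnv("REQUEST_HEADERS").split('\n'):
      let colon = line.find(':')
      if colon != -1:
        let name = line[0 ..< colon].strip()
        let value = line[colon + 1 .. ^1].strip()
        echo name, ": ", value  # a real script would match on names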
|
Also, move default urimethodmap config to res.
|
error codes are WIP, not final yet...
|
* Add MAPPED_URI_* as environment variables when a request comes from
  urimethodmap (see the sketch below)
  This costs us compatibility with w3m, but it seems to be a massive
  improvement over smuggling in the URL as a query string and then
  writing an ad-hoc parser for every single urimethodmap script.
  The variables are set for every urimethodmap request, to avoid
  accidentally leaking global environment variables.
* Move about: to adapters (an obvious improvement over the previous
  solution)
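
A sketch of the consumer side in a urimethodmap CGI script; the
component suffixes below are illustrative, only the MAPPED_URI_* prefix
itself is specified by this change:

    import std/os

    # hypothetical component names: SCHEME/PATH/QUERY
    let scheme = getEnv("MAPPED_URI_SCHEME")
    let path = getEnv("MAPPED_URI_PATH")
    let query = getEnv("MAPPED_URI_QUERY")
    stdout.write("Content-Type: text/plain\n\n")
    stdout.write(scheme & ":" & path &
      (if query != "": "?" & query else: "") & "\n")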