| Commit message | Author | Age | Files | Lines |
| |
Depending on Perl just for this is silly.
Now we use libregexp for filtering basically the same things as
w3mman2html did. This required another patch to QuickJS to avoid
pulling in the entire JS engine, but in return, we can now run regexes
without a dummy JS context global variable.
Also, man.nim now tries to find a man command on the system even if
it's not in /usr/bin/man (the idea is sketched below).
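A minimal sketch of such a lookup (the paths and fallback order are
assumptions for illustration, not the actual man.nim logic):

  import std/os

  proc findMan(): string =
    # check the usual locations first...
    for path in ["/usr/bin/man", "/bin/man", "/usr/local/bin/man"]:
      if fileExists(path):
        return path
    # ...then fall back to a PATH search; findExe returns "" on failure
    return findExe("man")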
|
| |
Handling text/plain as ANSI colored text was problematic for two
reasons:
* You couldn't actually look at the real source of HTML pages or text
files that used ANSI colors in the source. In general, I only want
ANSI colors when piping something into my pager, not when viewing any
random file.
* More importantly, it introduced a separate rendering mode for
plaintext documents, which meant that only some buffers had a DOM.
This made it impossible to add functionality that operates on the
buffer's DOM, e.g. to implement w3m's MARK_URL. It also locked us
into the horribly inefficient line-based rendering of entire
documents.
Now we solve the problem in two parts:
* text/x-ansi is used automatically for documents received through
stdin. A text/x-ansi handler, ansi2html, converts ANSI formatting to
HTML. text/x-ansi is also used for the .ans and .asc file extensions.
* text/plain is a separate input mode in buffer, which places all text
in a single <plaintext> tag. Crucially, this does not invoke the HTML
parser; that would eat NUL characters, which we want to avoid.
One blind spot remains: copiousoutput used to display ANSI colors, and
now it doesn't. To solve this, users can add the x-ansioutput extension
field to their mailcap entries; it behaves like x-htmloutput, except
that it first pipes the output through ansi2html (see the sketch
below).
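For example, a mailcap entry along these lines (hypothetical; assumes
the bat highlighter is installed) would filter the command's ANSI
output through ansi2html before display:

  # hypothetical entry: let bat colorize C sources, then convert to HTML
  text/x-csrc; bat --color=always --style=plain %s; x-ansioutput; copiousoutput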
|
| |
We do not use threads at the moment, so there's no need to link to
pthreads either.
(Also, add nim.cfg to the cha target in the Makefile.)
|
| |
Derived from w3mman2html.cgi; there are only a few minor differences:
* different man page opener command
* use man:, man-k:, man-l: instead of a query string to specify the
action (usage is sketched below)
* no form input (C-lC-uman:pageC-m is faster anyway)
TODO: rewrite in Nim so we don't have to depend on Perl...
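Illustrative usage of the three schemes (assuming cha is in PATH):

  cha man:printf       # view a man page
  cha man-k:regex      # search page descriptions, like man -k (apropos)
  cha man-l:./foo.1    # format a local man page file, like man -l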
|
| |
* Fix an incorrect internal definition of the fragment percent-encode
set
* urlenc, urldec: simple utility programs, mainly for use with shell
local CGI scripts. (Sadly the printf + xargs solution is not portable.)
A usage sketch follows below.
* Pass the libexec directory to local CGI scripts as an env var
* Update trans.cgi to use urldec, and add an example of combining it
with selections
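A sketch of the intended use from a local CGI script (the env var name
CHA_LIBEXEC_DIR is an assumption, not necessarily the documented one):

  #!/bin/sh
  # hypothetical script: percent-decode the query string and echo it back
  printf 'Content-Type: text/plain\n\n'
  printf '%s\n' "$QUERY_STRING" | "$CHA_LIBEXEC_DIR"/urldec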
|
| |
Also for reducing compilation time.
|
| |
Speeds up compilation somewhat. Included in the repository because
it's not that huge.
Misc changes:
* use a seq, not a set, for UCS-16 sets; it takes up less space (see
the sketch below)
* remove unnecessary noSideEffects casts
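The space trade-off, sketched in Nim: a set over 16-bit values is a
fixed 8 KiB bitmask, while a sorted seq of ranges with binary search
stays proportional to the number of ranges (illustrative, not the
actual implementation):

  type CharRange = tuple[lo, hi: uint16]

  func isInRanges(u: uint16; ranges: seq[CharRange]): bool =
    # binary search over sorted, non-overlapping ranges
    var lo = 0
    var hi = ranges.high
    while lo <= hi:
      let mid = (lo + hi) div 2
      if u < ranges[mid].lo:
        hi = mid - 1
      elif u > ranges[mid].hi:
        lo = mid + 1
      else:
        return true
    return false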
|
| |
why not
|
| |
* Rewrite in Nim
* This time, do not use a state machine (it was a very bad idea)
* Do not emit <br> for every line; use CSS instead (see the sketch
below)
* Avoid the double newline caused by margins, also using CSS
* Properly support list items
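A sketch of the CSS idea (selectors and rules are illustrative, not
the converter's actual stylesheet):

  <style>
  /* keep the source's own newlines instead of emitting <br> per line */
  body { white-space: pre-wrap }
  /* default margins would stack with the source newline, yielding a
     double blank line */
  ul, p { margin: 0 }
  </style>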
|
| |
hopefully this works
|
| |
Also in ftp: clean up resources before exit.
|
| |
Reimplementing it portably in Nim seems incredibly annoying, so we
just use C.
|
| |
as done in upstream
|
| |
* Makefile: fix parallel build, add new binaries to install target
* twtstr: split out libunicode-related stuff to luwrap
* config: quote default gopher2html URL env var for unquote
* adapter/: get rid of types/url dependency, use CURL url in all cases
|
| |
Avoid computing e.g. charwidth data for http, which does not need it
at all.
|
| |
Now it is (technically) no longer mandatory to link to libcurl.
Also, Chawan is at last completely protocol and network backend
agnostic :)
* Implement multipart requests in local CGI
* Implement simultaneous download of CGI data
* Add a REQUEST_HEADERS env var with all headers (see the sketch below)
* cssparser: add a missing check in consumeEscape
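For instance, a local CGI script can now inspect every request header
at once (sketch; the formatting inside the variable is assumed to be
one header per line):

  #!/bin/sh
  # hypothetical script: echo the raw request headers back to the client
  printf 'Content-Type: text/plain\n\n'
  printf '%s\n' "$REQUEST_HEADERS"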
|
| |
Also, move default urimethodmap config to res.
|
| |
error codes are WIP, not final yet...
|
| |
* Add MAPPED_URI_* environment variables to requests coming from
urimethodmap (a consumer script is sketched below).
It costs us compatibility with w3m, but it seems to be a massive
improvement over smuggling in the URL as a query string and then
writing an ad-hoc parser for every single urimethodmap script.
The variables are set for every urimethodmap request, to avoid
accidentally leaking global environment variables.
* Move about: to adapters (an obvious improvement over the previous
solution)
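A sketch of a urimethodmap CGI script consuming these variables (the
exact component names, e.g. MAPPED_URI_SCHEME, are assumptions based
on the usual URL parts):

  #!/bin/sh
  # hypothetical script: print the URL components that were mapped to us
  printf 'Content-Type: text/plain\n\n'
  printf 'scheme=%s host=%s port=%s path=%s query=%s\n' \
    "$MAPPED_URI_SCHEME" "$MAPPED_URI_HOST" "$MAPPED_URI_PORT" \
    "$MAPPED_URI_PATH" "$MAPPED_URI_QUERY"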
|
| |
No need to leave gemini support in the bonus folder.
Still TODO: proxy support.
|
| |
Now we use a (much simplified) gopher2html binary in libexec,
instead of converting gopher directories to HTML in loader/gopher.
This has two advantages:
* Less ugly conversion logic in the loader module; we can just
convert the file line by line. (The previous converter also had
some correctness issues; those are now fixed as well.)
* If the user desires, they can replace the gopher converter with
another binary using the mailcap mechanism (see the sketch below).
The disadvantages are:
* For now, source display is broken. This is a problem with all
mailcap filters in general, and should be fixed in the future. (That
said, the previous version also only displayed the converted HTML
source, which was not really useful anyway.)
* The proper directory structure is required for this to work; OTOH
plenty of work has been done to make this as frictionless as possible,
so it should not really be a problem.
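Replacing the converter could then look like this in mailcap
(hypothetical entry; the exact content type registered for gopher
directories is an assumption):

  # hypothetical: route gopher directories through a custom converter
  text/gopher; my-gopher2html %s; x-htmloutput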
|
| |
* Add a default urimethodmap that points finger: to cha-finger (see
the sketch below)
* Install cha-finger to /usr/local/libexec/cha/cgi-bin by default
* cha-finger: use ALL_PROXY if given, die if curl is not installed
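The mapping itself presumably looks something like this (w3m-style
urimethodmap syntax; the exact template is an assumption):

  finger:   cgi-bin:cha-finger?%s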
|
| |
We only use BigInt, so the flag is no longer necessary.
|
| |
* Get rid of useless targets
* Use real recipes instead of command runner targets
* When given, use environment variables
* Document Makefile stuff in doc/build.md
|
| |
* mkdir manpage directories too (not just prefix/bin)
* use 0644 file mode instead of the nonsensical 0655
See https://todo.sr.ht/~bptato/chawan/1
|
| |
yay
|
| |
Add w3m-style local CGI support.
It is not quite as powerful as w3m's local CGI, because it lacks an
equivalent to W3m-control. Not sure if that's worth adding; we
certainly shouldn't allow passing JS in headers, but a custom language
for headers does not sound like a great idea either...
eh, idk. Also, TODO: add multipart support.
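In this scheme a local CGI program is just an executable that writes
headers, a blank line, then the body; a minimal sketch:

  #!/bin/sh
  # smallest possible local CGI script: one header, blank line, body
  printf 'Content-Type: text/html\n\n'
  printf '<h1>hello from local CGI</h1>\n'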
|
| |
BigInt is standard and widely available in browsers. We have no
reason to exclude it.
(BigFloat/BigDecimal are not, so we do not add them for now.)
|
| |
pandoc can only generate manpage tables from markdown tables, but the
markdown pipe table syntax is horrible. So instead of rewriting our
markdown documentation to use that syntax, we just rewrite it
programmatically at build time.
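For reference, this is the pipe table syntax being generated (example
content is illustrative):

  | Key | Action      |
  | --- | ----------- |
  | j   | scroll down |
  | k   | scroll up   |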
|
| |
still needs some work
|