* move out half-width <-> full-width converters
* snake_case -> camelCase
* improve toScreamingSnakeCase slicing (see the sketch below)
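
For illustration only: a minimal toScreamingSnakeCase might behave like
the Nim sketch below (camelCase in, SCREAMING_SNAKE_CASE out). This is
an assumption about the function's purpose, not the repository's code.

    import std/strutils

    # Hypothetical sketch: insert '_' before each uppercase letter,
    # then uppercase everything.
    proc toScreamingSnakeCase(s: string): string =
      for i, c in s:
        if c in {'A'..'Z'} and i > 0:
          result &= '_'
        result &= toUpperAscii(c)

    assert toScreamingSnakeCase("snakeCase") == "SNAKE_CASE"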
Only ignore when the previous/next characters are not alphanumeric.
Derived from w3mman2html.cgi; there are only a few minor differences:
* different man page opener command
* use man:, man-k:, man-l: instead of a query string to specify the
  action (see the sketch below)
* no form input (C-lC-uman:pageC-m is faster anyway)
TODO: rewrite in Nim so we don't have to depend on Perl...
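
As a rough illustration of the scheme-based dispatch (and of what the
eventual Nim rewrite might start from), the sketch below maps each
scheme to a man(1) invocation. The exact commands and flags are
assumptions, not the script's actual logic.

    import std/[os, strutils]

    # Map a man:, man-k: or man-l: URL to a man(1) command line.
    proc manCommand(url: string): string =
      if url.startsWith("man-k:"):    # keyword (apropos) search
        "man -k " & quoteShell(url.substr("man-k:".len))
      elif url.startsWith("man-l:"):  # render a local man page file
        "man -l " & quoteShell(url.substr("man-l:".len))
      elif url.startsWith("man:"):    # look up a page by name
        "man " & quoteShell(url.substr("man:".len))
      else:
        ""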
* Fix the incorrect internal definition of the fragment percent-encode
  set (see the sketch below)
* urlenc, urldec: these are simple utility programs mainly for use
  with shell local CGI scripts. (Sadly the printf + xargs solution is
  not portable.)
* Pass the libexec directory as an env var to local CGI scripts
* Update trans.cgi to use urldec and add an example for combining
  it with selections
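
For reference, the WHATWG URL standard defines the fragment
percent-encode set as the C0 control percent-encode set (C0 controls
and code points above U+007E) plus space, '"', '<', '>' and '`'. A
minimal Nim sketch of encoding against that set (not the actual urlenc
code) could look like this:

    import std/strutils

    # Fragment percent-encode set, operating on UTF-8 bytes.
    const FragmentSet = {'\x00'..'\x1F', '\x7F'..'\xFF',
                         ' ', '"', '<', '>', '`'}

    proc percentEncode(s: string; unsafe: set[char]): string =
      for c in s:
        if c in unsafe:
          result &= '%' & toHex(ord(c), 2)
        else:
          result &= c

    assert percentEncode("a b<c", FragmentSet) == "a%20b%3Cc"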
why not
the "long-term goal" is already achieved :)
* Rewrite in Nim
* This time, do not use a state machine (it was a very bad idea)
* Do not emit <br> for every line; use CSS instead
* Avoid the double newline caused by margins, using CSS (see the
  sketch below)
* Properly support list items
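
An illustrative guess at what the two CSS-related points mean (not the
converter's actual output): emit one block element per line and zero
out its margins, so that no <br> is needed and the default margins
cannot render as blank lines.

    <style>.line { margin: 0 }</style>
    <p class=line>first line</p>
    <p class=line>second line</p>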
It was never reached anyway.
also in ftp: clean up resources before exit
Fixes an error on reloading stdin.
* Makefile: fix parallel build, add new binaries to install target
* twtstr: split out libunicode-related stuff to luwrap
* config: quote default gopher2html URL env var for unquote
* adapter/: get rid of the types/url dependency, use the CURL URL in
  all cases
Avoid computing e.g. charwidth data for http, which does not need it
at all.
Now it is (technically) no longer mandatory to link to libcurl.
Also, Chawan is at last completely protocol- and network-backend-
agnostic :)
* Implement multipart requests in local CGI
* Implement simultaneous download of CGI data
* Add a REQUEST_HEADERS env var with all headers (see the sketch
  below)
* cssparser: add a missing check in consumeEscape
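
A hedged sketch of a local CGI script consuming REQUEST_HEADERS: the
exact serialization is not documented in this message, so the
newline-separated "Name: value" format below is an assumption.

    import std/[os, strutils]

    # Iterate over the headers passed in by the browser.
    for line in getEnv("REQUEST_HEADERS").splitLines:
      let parts = line.split(':', maxsplit = 1)
      if parts.len == 2:
        echo parts[0].strip(), " = ", parts[1].strip()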
Also, move default urimethodmap config to res.
error codes are WIP, not final yet...
Much simpler & more efficient than the ugly regex parsing we used
to have.
* Add MAPPED_URI_* as environment variables when a request comes from
  urimethodmap (see the sketch after this list).
  It costs us compatibility with w3m, but it seems to be a massive
  improvement over smuggling in the URL as a query string and then
  writing an ad-hoc parser for every single urimethodmap script.
  The variables are set for every urimethodmap request, to avoid
  accidental leaking of global environment variables.
* Move about: to adapters (an obvious improvement over the previous
  solution)
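
A sketch of a urimethodmap CGI script reading the MAPPED_URI_*
variables. The component names used below (SCHEME, HOST, PATH) are
assumptions extrapolated from the MAPPED_URI_* prefix; check the
documentation for the real set.

    import std/os

    let
      scheme = getEnv("MAPPED_URI_SCHEME")
      host = getEnv("MAPPED_URI_HOST")
      path = getEnv("MAPPED_URI_PATH")

    echo "Content-Type: text/plain"
    echo ""
    echo "scheme=", scheme, " host=", host, " path=", path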
No need to leave gemini support in the bonus folder.
Still TODO: proxy support.
Now we use a (much simplified) gopher2html binary in libexec,
instead of converting gopher directories to HTML in loader/gopher.
This has two advantages:
* Less ugly conversion logic in the loader module; we can just
  convert the file line by line (see the sketch after this message).
  (The previous converter also had some correctness issues, which are
  fixed now as well.)
* If the user desires, they can replace the gopher converter with
  another binary using the mailcap mechanism.
The disadvantages are:
* For now, source display is broken. This is a problem with all
  mailcap filters in general, and should be fixed in the future. (That
  said, the previous version also only displayed the converted HTML
  source, which was not really useful anyway.)
* The proper directory structure is required for this to work; OTOH
  plenty of work has been done so that this is as frictionless as
  possible, so it should not really be a problem.
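
A hedged, minimal sketch of line-by-line gopher menu -> HTML
conversion in the spirit described above; HTML escaping and most
gopher item types are omitted, and this is not the actual gopher2html
code.

    import std/strutils

    for line in stdin.lines:
      if line == "." or line.len == 0:
        continue
      let fields = line.substr(1).split('\t')
      if fields.len < 4:
        continue
      let t = line[0] # the gopher item type character
      if t == 'i':    # informational line: text only, no link
        echo fields[0]
      else:
        echo "<a href=\"gopher://", fields[2], ":", fields[3], "/", t,
          fields[1], "\">", fields[0], "</a>"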
* Add a default urimethodmap that points finger: to cha-finger (see
  the sketch below)
* Install cha-finger to /usr/local/libexec/cha/cgi-bin by default
* cha-finger: use ALL_PROXY if given, die if curl is not installed
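
In the w3m-inherited urimethodmap format, an entry maps a URL scheme
to a template in which %s is replaced with the requested URL. The
exact line shipped in the default config is an assumption here; it
would look roughly like:

    finger:  cgi-bin:cha-finger?%s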