Commit message log

We must first check if there is really no node to attach the comment
to...

hopefully this works

This was documented, but not implemented until now.
Also, improve the loader module's protocol documentation.

also in ftp: clean up resources before exit

This way we can at least view HTML source of x-htmloutput filtered
buffers. TODO: make it render the actual source instead.

so we do not have to import unicode

It has roughly zero utility, but maybe it's a good demonstration
of local CGI?
(TODO: add libfetch FTP too, that might actually be useful.)

reimplementing it portably in Nim seems incredibly annoying, so we
just use C

It may fail if the buffer process could not successfully create a server
socket.

UMM (urimethodmap) resolution takes the first entry.

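First-entry-wins resolution can be pictured with a hypothetical urimethodmap fragment. This assumes the w3m-style format (scheme, colon, then a URL template where %s stands in for the request URL); the entries and handler names are made up for illustration:

```
# Two entries claim the gopher scheme; resolution picks the first,
# so requests would go to the (hypothetical) gopher2html handler.
gopher: cgi-bin:gopher2html?%s
gopher: cgi-bin:fallback-gopher?%s
```
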
(GitHub ticket #166)

'return' statement

as done in upstream

fixes error on reloading stdin

* better path handling done
* empirically, we no longer crash on / -> M-c
* LD_PRELOAD is good enough, especially now when the main binary no
  longer links to libcurl

multipart through local CGI is now supported as well.
(also, fix Cha-Control description inaccuracy)

We must save fd in the constructor, because the stream type may be
changed while loading.

* Makefile: fix parallel build, add new binaries to install target
* twtstr: split out libunicode-related stuff to luwrap
* config: quote default gopher2html URL env var for unquote
* adapter/: get rid of types/url dependency, use CURL url in all cases

Avoid computing e.g. charwidth data for http which does not need it
at all.

Now it is (technically) no longer mandatory to link to libcurl.
Also, Chawan is at last completely protocol and network backend
agnostic :)
* Implement multipart requests in local CGI
* Implement simultaneous download of CGI data
* Add REQUEST_HEADERS env var with all headers
* cssparser: add a missing check in consumeEscape

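The REQUEST_HEADERS variable mentioned above hands a local CGI script every request header in a single environment variable. A minimal Python sketch of a script consuming it, assuming one "Name: value" pair per line (the exact serialization is not specified in the entry above):

```python
import os

def parse_request_headers(raw: str) -> dict:
    """Split a REQUEST_HEADERS-style blob into a name -> value dict.
    Assumes one "Name: value" pair per line (an assumption for
    illustration; check the loader documentation for the real format)."""
    headers = {}
    for line in raw.splitlines():
        name, _, value = line.partition(":")
        if name.strip():
            headers[name.strip()] = value.strip()
    return headers

if __name__ == "__main__":
    # The loader would set REQUEST_HEADERS for the CGI process.
    raw = os.environ.get("REQUEST_HEADERS", "")
    headers = parse_request_headers(raw)
    # A CGI response: header block, blank line, then the body.
    print("Content-Type: text/plain")
    print()
    for name, value in headers.items():
        print(f"{name}: {value}")
```
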
Also, move default urimethodmap config to res.

error codes are WIP, not final yet...

Much simpler & more efficient than the ugly regex parsing we used
to have.

* Add MAPPED_URI_* as environment variables when a request is coming
  from urimethodmap
  It costs us compatibility with w3m, but it seems to be a massive
  improvement over smuggling in the URL as a query string and then
  writing an ad-hoc parser for every single urimethodmap script.
  The variables are set for every urimethodmap request, to avoid
  accidental leaking of global environment variables.
* Move about: to adapters (an obvious improvement over the previous
  solution)

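To make the MAPPED_URI_* idea concrete, here is a sketch of a urimethodmap CGI script rebuilding the request URL from per-component variables. Only the MAPPED_URI_ prefix comes from the entry above; the component names (SCHEME, HOST, PORT, PATH, QUERY) are assumptions for illustration:

```python
import os

def mapped_uri(env: dict) -> str:
    """Rebuild the request URL from MAPPED_URI_* parts.
    The component names used here are assumed, not taken from the
    commit message, which only specifies the MAPPED_URI_ prefix."""
    scheme = env.get("MAPPED_URI_SCHEME", "")
    host = env.get("MAPPED_URI_HOST", "")
    port = env.get("MAPPED_URI_PORT", "")
    path = env.get("MAPPED_URI_PATH", "")
    query = env.get("MAPPED_URI_QUERY", "")
    url = f"{scheme}://{host}"
    if port:
        url += f":{port}"
    url += path or "/"
    if query:
        url += f"?{query}"
    return url

if __name__ == "__main__":
    # A real script would read the components straight from os.environ.
    print(mapped_uri(os.environ))
```

The point of per-component variables, as the entry says, is that each script no longer needs its own ad-hoc URL parser.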
* start from 1
* divide by total - 1, since we are counting the rounding error
  between each line

Also case-sensitive, but for now that is the same as normal matching...

Probably not fully correct, but it's a good start.
Includes proprietary extension -cha-half-width, which converts
full-width characters to half-width ones.

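The effect of -cha-half-width can be approximated in a few lines. This sketch maps the full-width ASCII block (U+FF01..U+FF5E) and the ideographic space to their half-width counterparts; the real transform in Chawan may cover more characters:

```python
def half_width(s: str) -> str:
    """Rough sketch of a full-width -> half-width text transform."""
    out = []
    for c in s:
        o = ord(c)
        if 0xFF01 <= o <= 0xFF5E:
            # Full-width ASCII variants sit at a fixed offset of 0xFEE0.
            out.append(chr(o - 0xFEE0))
        elif o == 0x3000:
            # Ideographic space -> ordinary space.
            out.append(" ")
        else:
            out.append(c)
    return "".join(out)

print(half_width("ＡＢｃ"))  # → ABc
```
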
Instead, position them at the end of their block's layout pass.
Without this, they could be positioned too early, as the grandparent's
position being resolved does not guarantee that the parent's position
has already been resolved as well.
(Unlike the comment suggests, flushMargins is not appropriate there.)

* Actually calculate rounding error
* Skip a loop over lines by accumulating rounding error in finishLine
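The rounding-error bookkeeping from the last two entries (distribute a fractional per-line height over integer pixel rows, carrying the error from line to line instead of looping over the lines again) can be sketched generically. This illustrates the technique only; it is not Chawan's actual finishLine code:

```python
def layout_heights(line_height: float, n: int) -> list:
    """Assign integer heights to n lines of fractional height,
    accumulating the rounding error so the total stays exact."""
    heights = []
    error = 0.0
    for _ in range(n):
        h = line_height + error  # carry the error into this line
        rounded = round(h)
        error = h - rounded      # remember what we rounded away
        heights.append(rounded)
    return heights

print(layout_heights(16.4, 5))  # → [16, 17, 16, 17, 16], summing to 82
```

Accumulating the error inline means no second pass over the lines is needed, which is exactly the optimization the entry above describes.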
|