The "redirect stderr to stdout" scheme broke with groff/man-db, which
spits out warnings during execution. So now we handle stderr and
stdout separately.
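
A minimal sketch of the new shape, assuming an osproc-based runner
(names and arguments are illustrative, not the actual man.nim code):

    import std/[osproc, streams]

    # Spawn man with separate pipes instead of poStdErrToStdOut, so
    # groff/man-db warnings on stderr cannot corrupt the page text.
    # NB: a real implementation should poll both pipes to avoid a
    # deadlock when one fills up; this sketch reads them in order.
    let p = startProcess("man", args = ["ls"], options = {poUsePath})
    let page = p.outputStream().readAll()    # the formatted page
    let warnings = p.errorStream().readAll() # groff/man-db noise
    discard p.waitForExit()
    p.close()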
* do not skip the first 5 chars (this is a legacy of when we used
query strings)
* allow practically anything but control chars (so we can use
parameters)
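
A sketch of the resulting check (isValidName is a hypothetical name,
not necessarily what the adapter calls it):

    # Accept practically any man page name; only control characters
    # are rejected, so parameters can pass through unharmed.
    proc isValidName(s: string): bool =
      if s.len == 0:
        return false
      for c in s:
        if c in {'\0'..'\x1f', '\x7f'}:
          return false
      return true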
* run processBackspace on the first line, because groff likes to print
formatting there too
* check man references like SAMEPAGE(1) with isCommand, because they
are commonly found in footers
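
For reference, the core of a processBackspace pass looks roughly like
this (a simplified sketch that just strips the overstrikes; the real
pass turns "c\bc" into bold and "_\bc" into underline markup):

    proc processBackspace(line: string): string =
      result = ""
      for c in line:
        if c == '\b':
          if result.len > 0:
            result.setLen(result.len - 1) # drop the overstruck char
        else:
          result.add(c)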
This is horrible.
-s means completely different things on various systems. -l does not
exist on various systems. Nothing is standardized, except that man
should take at least one parameter and that -k should perform a search.
(Seriously, that's all.)
So what we do is:
* add a separate env var for overriding apropos
* for man:, never use -s to specify sections
* for man-k:, fall back to man, EXCEPT on FreeBSD which does not have a
working section specifier on man -k (neither -S nor MANSECT does
anything)
* for man-l:, just pass the path wholesale to man and hope it does
something useful.
Also, we now set MANCOLOR to 1 so FreeBSD man gives us formatting as
well.
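
The resulting dispatch is roughly this (a sketch: the env var names
and the helper are made up here, and the FreeBSD special-casing is
omitted):

    import std/os

    let man = getEnv("CHA_MAN", "man")       # hypothetical override
    let apropos = getEnv("CHA_APROPOS", man) # hypothetical override
    putEnv("MANCOLOR", "1") # so FreeBSD man emits formatting

    proc buildCmd(scheme, arg, section: string): seq[string] =
      case scheme
      of "man": # never -s; pass the section as a plain argument
        if section != "": result = @[man, section, arg]
        else: result = @[man, arg]
      of "man-k": # keyword search
        result = @[apropos, "-k", arg]
      else: # "man-l": pass the path wholesale and hope for the best
        result = @[man, arg]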
* do not use query string for arguments
* accept symlinks as man binaries
* improve error message reporting
* run all regexes on the original line
Depending on Perl just for this is silly.
Now we use libregexp for filtering basically the same things as
w3mman2html did. This required another patch to QuickJS to avoid
pulling in the entire JS engine, but in return, we can now run regexes
without a dummy JS context global variable.
Also, man.nim now tries to find a man command on the system even if it's
not in /usr/bin/man.
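
The lookup itself is simple (sketch):

    import std/os

    # Prefer the traditional location, then fall back to a PATH
    # search so systems with man elsewhere still work.
    proc findMan(): string =
      if fileExists("/usr/bin/man"):
        return "/usr/bin/man"
      return findExe("man") # "" if man is not in PATH either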
e.g. `man 2 -k blah' should not override the section
Originally we had several loader processes so that the loader did not
need asynchrony for loading several buffers at once. Since then, the
scope of what the loader does has been reduced significantly, and with
that, the loader has become mostly asynchronous.

This patch finishes the above work as follows:

* We only fork a single loader process for the browser. It is a waste
of resources to do otherwise, and it would have made future work on a
download manager very difficult.
* The loader becomes (almost) fully async. Now the only sync parts are
a) processing commands and b) waiting for clients to consume
responses. b) is a bit more problematic than a), but should not cause
problems unless some other horrible bug exists in a client. (TODO:
make it fully async.)
This gives us a noticeable improvement in CSS loading speed, since
all resources can now be queried at once (even before the previous
ones are connected).
* Buffers now only get a process once the *connection* is finished. So
headers, status code, etc. are handled by the client, and the buffer
is forked when the loader starts streaming the response body.
As a result, mailcap entries can simply dup2 the first UNIX domain
socket connection as their stdin (see the sketch below). This allows
us to remove the ugly (and slow) `canredir' hack, which required us
to send file handles on a tour across the entire codebase.
* The "cache" has been reworked somewhat:
  - Since canredir is gone, buffer-level requests usually start in a
    suspended state, and are explicitly resumed only after the client
    has decided whether it wants to cache the response.
  - Instead of a flag on Request and the URL as the cache key, we now
    use a global counter and the special `cache:' scheme.
* Misc fixes: referer_from is now actually respected by buffers (not
just the pager), load info display should work slightly better, etc.
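
A sketch of the mailcap hand-off mentioned above (hypothetical helper,
error handling omitted):

    import std/posix

    # The loader streams the response body over the first UNIX domain
    # socket, so the child just makes that socket its stdin and execs
    # the mailcap command; no canredir-style fd passing needed.
    proc execMailcapEntry(sockFd: cint; cmd: string) =
      discard dup2(sockFd, 0)
      discard close(sockFd)
      discard execv("/bin/sh", allocCStringArray(["/bin/sh", "-c", cmd]))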
or it breaks on BSD
Buffering output kind of defeats the purpose of the entire loader
select machinery. (We don't buffer streams either, for the same
reason.)
* Get rid of sostream hack
This is no longer needed, and was in fact causing loadStream to get
stuck with redirects on regular files (i.e. the common case of receiving
<file on stdin without a -T content type override).
* Unify loading from cache and stdin regular file code paths
Until now, loadFromCache was completely sync. This is not a huge
problem, but it's better to make it async *and* not have two separate
procedures for reading regular files. (In fact, loadFromCache had
*another* bug related to its output fd not being added to outputMap.)
* Extra: remove ansi2html select error handling
It was broken because it did not handle read events arriving before
the error. It was also unnecessary, since recvData breaks from the
loop on n == 0.
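
That is, the read loop can rely on the usual EOF convention alone
(sketch, with pump standing in for the actual recvData logic):

    import std/[os, posix]

    proc pump(fd: cint) =
      var buf: array[4096, char]
      while true:
        let n = read(fd, addr buf[0], csize_t(buf.len))
        if n == 0: break # EOF: peer closed, we are done
        if n < 0: raiseOSError(osLastError())
        discard write(1, addr buf[0], csize_t(n)) # forward to stdout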
whoops
Handling text/plain as ANSI colored text was problematic for two
reasons:

* You couldn't actually look at the real source of HTML pages or text
files that used ANSI colors in the source. In general, I only want
ANSI colors when piping something into my pager, not when viewing any
random file.
* More importantly, it introduced a separate rendering mode for
plaintext documents, which meant that only some buffers had a DOM.
This made it impossible to add functionality that would operate on
the buffer's DOM, e.g. to implement w3m's MARK_URL. It also locked us
into the horribly inefficient line-based rendering model of entire
documents.

Now we solve the problem in two separate parts:

* text/x-ansi is used automatically for documents received through
stdin. A text/x-ansi handler, ansi2html, converts ANSI formatting to
HTML. text/x-ansi is also used for the .ans and .asc file extensions.
* text/plain is a separate input mode in buffer, which places all
text in a single <plaintext> tag. Crucially, this does not invoke the
HTML parser; that would eat NUL characters, which we should avoid.

One blind spot still remains: copiousoutput used to display ANSI
colors, and now it doesn't. To solve this, users can add the
x-ansioutput extension field to their mailcap entries, which behaves
like x-htmloutput except it first pipes the output into ansi2html.
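
For example, a mailcap entry like this (md2ansi is a made-up filter,
for illustration only) keeps colored copiousoutput working:

    text/markdown; md2ansi; copiousoutput; x-ansioutput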
extract_hostname is no more, hooray.
+ add standard error reporting
* move out half width <-> full width converters
* snake_case -> camelCase
* improve toScreamingSnakeCase slicing
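
For context, the conversion in question has this shape (a sketch, not
the actual twtstr code):

    import std/strutils

    # camelCase / PascalCase -> SCREAMING_SNAKE_CASE
    proc toScreamingSnakeCase(s: string): string =
      result = ""
      for i, c in s:
        if c in {'A'..'Z'} and i > 0:
          result.add('_')
        result.add(toUpperAscii(c))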
Only ignore when prev/next chars are not alnum.
Derived from w3mman2html.cgi; there are only a few minor differences:
* different man page opener command
* use man:, man-k:, man-l: instead of a query string to specify the
action (examples below)
* no form input (C-lC-uman:pageC-m is faster anyway)
TODO: rewrite in Nim so we don't have to depend on Perl...
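
The resulting URLs look like this (illustrative examples):

    man:printf         open the printf man page
    man-k:string       apropos-style search for "string"
    man-l:/tmp/foo.1   format a local man page file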
* Fix incorrect internal definition of the fragment percent-encode set
* urlenc, urldec: simple utility programs mainly for use with shell
local CGI scripts; see the usage note below. (Sadly the printf +
xargs solution is not portable.)
* Pass libexec directory as an env var to local CGI scripts
* Update trans.cgi to use urldec and add an example for combining
it with selections
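
Usage note: a local CGI shell script can then do e.g.
`encoded=$(printf '%s' "$str" | urlenc)`, assuming (as their
description suggests) that the tools filter stdin to stdout.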
why not
the "long-term goal" is already achieved :)
* Rewrite in Nim
* This time, do not use a state machine (it was a very bad idea)
* Do not emit <br> for every line; use CSS instead
* Avoid double newlines caused by margins, again using CSS (see the
sketch below)
* Properly support list items
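
The CSS in question boils down to something like this (a sketch of
the approach, not the actual stylesheet):

    /* one block element per menu line: no <br> spam, and zero
       margins so consecutive blocks don't yield double newlines */
    div { margin: 0; white-space: pre-wrap }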
It was never reached anyway.
also in ftp: clean up resources before exit
fixes error on reloading stdin
* Makefile: fix parallel build, add new binaries to install target
* twtstr: split out libunicode-related stuff to luwrap
* config: quote default gopher2html URL env var for unquote
* adapter/: get rid of types/url dependency, use CURL url in all cases
Avoid computing e.g. charwidth data for http, which does not need it
at all.
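
One way to get this effect is lazy initialization, so adapters that
never measure text never build the tables (a self-contained sketch;
the real data and names differ):

    import std/tables

    var widths: Table[int, int] # codepoint -> width
    var loaded = false

    proc charwidth(cp: int): int =
      if not loaded:
        # stand-in for building the real width tables
        widths = {0x3042: 2}.toTable # U+3042 HIRAGANA A is wide
        loaded = true
      return widths.getOrDefault(cp, 1)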
Now it is (technically) no longer mandatory to link to libcurl.
Also, Chawan is at last completely protocol and network backend
agnostic :)
* Implement multipart requests in local CGI
* Implement simultaneous download of CGI data
* Add REQUEST_HEADERS env var with all headers (see the example below)
* cssparser: add a missing check in consumeEscape
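
A local CGI script written in Nim could consume the new env var like
this (sketch; assumes headers arrive as newline-separated
"Name: value" pairs, which is not specified here):

    import std/[os, strutils]

    for line in getEnv("REQUEST_HEADERS").split('\n'):
      if line.startsWith("User-Agent:"):
        echo "client is: ", line.split(':', 1)[1].strip()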
Also, move default urimethodmap config to res.
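
For reference, urimethodmap follows w3m's format and maps a URL scheme
to a handler; an illustrative entry (the handler URL here is made up)
looks like:

    man:    cgi-bin:man.cgi?%s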