This splits out sftp into a separate binary that *does* depend on
libcurl. However, ftp now uses the same socket code as gopher.
ftps is dropped, because I've never even tested it. Maybe I'll add
it back when we have working OpenSSL bindings.
This is still "doing the easy part first"; I now have no clue how to
handle sftp, because my initial plan ("just use the sftp binary")
doesn't work: sftp batch mode doesn't accept passwords. libssh2 remains
the sole candidate, but that's what libcurl wraps anyway.
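For the record, password authentication over libssh2 would look roughly
like this. (An untested sketch: the socket is assumed to be already
connected to the server, user/pass are placeholders, and all error
handling is collapsed into NULL returns.)

    #include <libssh2.h>
    #include <libssh2_sftp.h>

    /* sock: a TCP socket already connected to the SSH server. */
    static LIBSSH2_SFTP *open_sftp(int sock, const char *user,
                                   const char *pass)
    {
        LIBSSH2_SESSION *session;
        if (libssh2_init(0) != 0)
            return NULL;
        session = libssh2_session_init();
        if (session == NULL)
            return NULL;
        /* Key exchange and crypto setup on the existing socket. */
        if (libssh2_session_handshake(session, sock) != 0)
            return NULL;
        /* The part sftp -b can't do: non-interactive password auth. */
        if (libssh2_userauth_password(session, user, pass) != 0)
            return NULL;
        return libssh2_sftp_init(session);
    }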
I'm thinking of making libcurl entirely optional; let's start with the
easiest part.
I've added a SOCKS5 client for ALL_PROXY support; I know curl supported
other proxy protocols too, but whatever.
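The no-auth SOCKS5 CONNECT exchange from RFC 1928 is small enough to
sketch here. (Illustration only: it assumes the proxy replies with an
IPv4 bound address, and real code has to loop on short reads/writes.)

    #include <stdint.h>
    #include <string.h>
    #include <unistd.h>

    /* Ask the SOCKS5 proxy on sock to CONNECT to host:port (no auth).
     * Returns 0 on success, -1 on failure. */
    static int socks5_connect(int sock, const char *host, uint16_t port)
    {
        uint8_t buf[262];
        size_t len = strlen(host);
        if (len > 255)
            return -1;
        /* Greeting: version 5, one method, method 0 (no auth). */
        if (write(sock, "\x05\x01\x00", 3) != 3)
            return -1;
        if (read(sock, buf, 2) != 2 || buf[0] != 5 || buf[1] != 0)
            return -1;
        /* Request: CONNECT (1), reserved 0, domain name (ATYP 3). */
        buf[0] = 5; buf[1] = 1; buf[2] = 0; buf[3] = 3;
        buf[4] = (uint8_t)len;
        memcpy(&buf[5], host, len);
        buf[5 + len] = port >> 8;
        buf[6 + len] = port & 0xFF;
        if (write(sock, buf, 7 + len) != (ssize_t)(7 + len))
            return -1;
        /* Reply: VER REP RSV ATYP BND.ADDR BND.PORT; REP 0 = success.
         * (10 bytes when BND.ADDR is IPv4.) */
        if (read(sock, buf, 10) != 10 || buf[1] != 0)
            return -1;
        return 0;
    }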
data URIs can get megabytes long; however, you can only stuff so many
bytes into the envp. (This was thwarting my efforts to view pandoc-
generated standalone HTML in Chawan.) So put `data:' back into the
loader process.
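For context, the limit in question is the kernel's combined argv+envp
budget for execve(); exceeding it makes the exec fail with E2BIG. You
can check it like this (the value is platform-specific, often around
2 MiB on Linux):

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Combined size limit for argv + environment at execve() time;
         * a multi-megabyte data: URI in an env var can blow past it. */
        printf("ARG_MAX: %ld bytes\n", sysconf(_SC_ARG_MAX));
        return 0;
    }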
It still sucks, but it is at least slightly more usable.
This also fixes a bug in dirlist where sorting would mess up the
association between items and their names.
Depending on Perl just for this is silly.
Now we use libregexp for filtering basically the same things as
w3mman2html did. This required another patch to QuickJS to avoid
pulling in the entire JS engine, but in return, we can now run regexes
without a dummy JS context global variable.
Also, man.nim now tries to find a man command on the system even if it's
not in /usr/bin/man.
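Roughly, standalone libregexp usage now looks like this. (A sketch: the
callbacks the embedder has to provide, e.g. lre_realloc, depend on the
QuickJS version and on the patch mentioned above, and are omitted here.)

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include "libregexp.h"

    int main(void)
    {
        const char *pat = "^[A-Z0-9_-]+\\([0-9a-z]+\\)"; /* man page refs */
        const char *input = "PRINTF(3)";
        char errmsg[64];
        int len;
        uint8_t *capture[2]; /* 2 pointers per capture group (group 0 here) */
        /* Compile straight to libregexp bytecode; no JSContext involved. */
        uint8_t *bc = lre_compile(&len, errmsg, sizeof(errmsg),
                                  pat, strlen(pat), 0, NULL);
        if (bc == NULL) {
            fprintf(stderr, "regex error: %s\n", errmsg);
            return 1;
        }
        /* cbuf_type 0 selects 8-bit input; lre_exec returns 1 on match. */
        int ret = lre_exec(capture, bc, (const uint8_t *)input, 0,
                           (int)strlen(input), 0, NULL);
        printf(ret == 1 ? "match\n" : "no match\n");
        free(bc);
        return 0;
    }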
Originally we had several loader processes so that the loader did not
need asynchronicity for loading several buffers at once. Since then,
the scope of what loader does has been reduced significantly, and with
that, loader has become mostly asynchronous.
This patch finishes the above work as follows:
* We only fork a single loader process for the browser. It is a waste of
  resources to do otherwise, and it would have made future work on a
  download manager very difficult.
* loader becomes (almost) fully async. Now the only sync parts are a)
  processing commands and b) waiting for clients to consume responses.
  b) is a bit more problematic than a), but should not cause problems
  unless some other horrible bug exists in a client. (TODO: make it
  fully async.)
  This gives us a noticeable improvement in CSS loading speed, since all
  resources can now be queried at once (even before the previous ones
  are connected).
* Buffers now only get a process when the *connection* is finished. So
  headers, status code, etc. are handled by the client, and the buffer
  is forked when the loader starts streaming the response body.
  As a result, mailcap entries can simply dup2 the first UNIX domain
  socket connection as their stdin (see the sketch below). This allows
  us to remove the ugly (and slow) `canredir' hack, which required us
  to send file handles on a tour across the entire codebase.
* The "cache" has been reworked somewhat:
  - Since canredir is gone, buffer-level requests usually start in a
    suspended state, and are explicitly resumed only after the client
    has decided whether it wants to cache the response.
  - Instead of a flag on Request and the URL as the cache key, we now
    use a global counter and the special `cache:' scheme.
* Misc fixes: referer_from is now actually respected by buffers (not
  just the pager), load info display should work slightly better, etc.
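The mailcap side of that dup2 trick is the usual fork/exec pattern;
something like this (sketch only; sockfd stands for the connected UNIX
domain socket and cmd for the mailcap command line):

    #include <unistd.h>

    /* Run a mailcap entry with the response body as its stdin.
     * sockfd: the connected UNIX domain socket the loader streams
     * the body over; cmd: the mailcap command. */
    static void run_mailcap(int sockfd, const char *cmd)
    {
        if (fork() == 0) {
            /* Child: the socket becomes stdin; no pipe or temp file. */
            dup2(sockfd, STDIN_FILENO);
            close(sockfd);
            execl("/bin/sh", "sh", "-c", cmd, (char *)NULL);
            _exit(127); /* exec failed */
        }
        close(sockfd); /* parent drops its copy */
    }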
extract_hostname is no more, hooray.
+ add standard error reporting
Derived from w3mman2html.cgi; there are only a few minor differences:
* different man page opener command
* use man:, man-k:, man-l: instead of the query string to specify the
  action
* no form input (C-lC-uman:pageC-m is faster anyway)
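(That is: man:page opens a page, man-k:keyword runs a keyword search,
and man-l:file formats a local file; the latter two presumably mirror
man -k and man -l.)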
TODO: rewrite in Nim so we don't have to depend on Perl...
why not
hopefully this works