Useful for filtering stuff through commands like rdrview.
|
At last all BufferSources are unified.
To achieve the same effect as the previous CLONE source type, we now
use the "fromcache" flag in Request. This *forces* the document to be
streamed from the disk; if the file no longer exists for some reason,
an error is returned (i.e. the document is not re-downloaded).
For a document to be cached, it has to be the main document of the
buffer (i.e. no additional resources requested with fetch()), and
also not an x-htmloutput HTML file (for those, the original source is
saved). The result is that toggleSource now always returns the actual
source for e.g. markdown files, not the HTML-transformed version.
Also, it is now possible to view the source of a document that is
still being downloaded.
buffer.sstream has almost been eliminated; it still exists, but only as
a pseudo-buffer to interface with EncoderStream and DecoderStream. It no
longer holds the entire source of a buffer at any point, and is cleared
as soon as the buffer is completely loaded.
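
A minimal sketch of the idea behind the flag; the names here are
illustrative, not the actual fields of Request:

    type Request = object
      url: string
      fromcache: bool # serve from the disk cache, never re-download

    # hypothetical helper: viewing the source forces the cached copy, so
    # a missing cache file yields an error instead of a network fetch
    proc viewSourceRequest(url: string): Request =
      Request(url: url, fromcache: true)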
|
The previous version was running the processor at 100%, because select
would immediately return for writes even when there were no buffers to
send.
(This has been the case since I added asynchronous sending, but the
previous commit put the console buffer's fd in loader too and that made
the problem quite obvious.)
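
The usual fix, sketched here with std/selectors (illustrative names, not
the actual event loop): watch the fd for Write events only while data is
queued for it.

    import std/selectors

    type Handle = ref object
      queue: seq[string] # data waiting to be written

    # fd is assumed to be registered with registerHandle already
    proc queueWrite(sel: Selector[Handle]; fd: int; h: Handle; data: string) =
      if h.queue.len == 0:
        # the queue was empty, so we were not watching for writes yet;
        # without this guard, select() reports the fd as writable on
        # every iteration and the loop spins at 100% CPU
        sel.updateHandle(fd, {Event.Read, Event.Write})
      h.queue.add(data)

    proc onWritable(sel: Selector[Handle]; fd: int; h: Handle) =
      # ...write queued data until the fd would block...
      if h.queue.len == 0:
        sel.updateHandle(fd, {Event.Read}) # nothing left to send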
|
Instead, use a stream: scheme and associate hostnames with file
descriptors directly from the pager.
|
My eyes are bleeding, but at least there is a chance that this does what
I wanted.
The previous tee implementation mixed buffer and loader fds, so it was
fundamentally broken. It also used MultiStream, which makes asynchronous
streaming impossible.
This time we use a flat array of output handles, and to each of them we
link the buffers that have not yet been written to that output.
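
A sketch of the shape this implies; the types are illustrative, not the
real ones:

    type
      OutputHandle = ref object
        fd: cint
        buffers: seq[string] # data not yet written to this output

      LoaderHandle = ref object
        outputs: seq[OutputHandle] # flat array; tee() just adds one

    proc tee(h: LoaderHandle; fd: cint) =
      h.outputs.add(OutputHandle(fd: fd))

    proc onData(h: LoaderHandle; data: string) =
      # every output keeps its own backlog, so a slow consumer no
      # longer stalls the others (the MultiStream problem)
      for o in h.outputs:
        o.buffers.add(data)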
|
* LoaderHandle.fd is no more; we now check ostream's fd
* setBlocking converted to a PosixStream method
* SocketStream now sets fd variable
* handle sostream/fd redirection properly
* fix suspend/resume
This fixes non-HTML resource loading, mostly. However, tee is still
broken :/
|
recvData is a new method for PosixStream that does less weird magic than
readData.
Also, allow duplicates in unregWrite/unregRead; it's simpler to live
with them than to prevent them.
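
Presumably something along these lines (a hedged sketch; the real
signature may differ):

    import std/[posix, os]

    # recvData as a thin wrapper over read(2): return however many bytes
    # the kernel gives us, with no retry loop or hidden buffering
    proc recvData(fd: cint; buffer: pointer; len: int): int =
      result = read(fd, buffer, csize_t(len))
      if result < 0:
        raiseOSError(osLastError())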
|
* eagain was causing fetch to add unnecessary null bytes to input
streams
* URL is now only added to handles in debug mode
|
Yay!
Admittedly, it is not very useful in its current form, except maybe on
very slow networks.
The problem is that renderDocument is *slow*, so we only run it when
onload fails to consume all bytes from the network in a single pass.
Even then, we are guaranteed to get a FOUC, since CSS is only downloaded
in finishLoad(). Well, I think it's cool, anyway.
|
* remove pointless exception -> bool conversions; usually they were
ignored anyway, and exceptions are more convenient here
* add EPIPE handler to raisePosixIOError
* fix socketstream to use raisePosixIOError
* fix socketstream sendFileHandle error handling
* cgi: immediately return on file not found error
|
buffer was crashing with an EOFError otherwise
|
much better
|
It was originally written this way to accommodate the broken std file
API. We no longer use that in buffer, so we can use a more correct
version now.
|
This was documented, but not implemented until now.
Also, improve the loader module's protocol documentation.
|
Now it is (technically) no longer mandatory to link to libcurl.
Also, Chawan is at last completely protocol and network backend
agnostic :)
* Implement multipart requests in local CGI
* Implement simultaneous download of CGI data
* Add REQUEST_HEADERS env var with all headers (see the example below)
* cssparser: add a missing check in consumeEscape
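
For example, a local CGI script can now dump the headers it was invoked
with (a sketch; it assumes REQUEST_HEADERS carries all headers in a
single string):

    import std/os

    echo "Content-Type: text/plain"
    echo ""
    echo getEnv("REQUEST_HEADERS")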
|
Also, move default urimethodmap config to res.
|
error codes are WIP, not final yet...
|
* Add MAPPED_URI_* as environment variables when a request comes from
urimethodmap (see the sketch after this list)
It costs us compatibility with w3m, but it seems to be a massive
improvement over smuggling in the URL as a query string and then
writing an ad-hoc parser for every single urimethodmap script.
The variables are set for every urimethodmap request, to avoid
accidental leaking of global environment variables.
* Move about: to adapters (an obvious improvement over the previous
solution)
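
A sketch of what an urimethodmap CGI script can now do; the exact set of
MAPPED_URI_* variables used below is an assumption:

    import std/os

    echo "Content-Type: text/plain"
    echo ""
    echo "scheme: ", getEnv("MAPPED_URI_SCHEME")
    echo "host: ", getEnv("MAPPED_URI_HOST")
    echo "path: ", getEnv("MAPPED_URI_PATH")
    echo "query: ", getEnv("MAPPED_URI_QUERY")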
|
yay
|
Add w3m-style local CGI support.
It is not quite as powerful as w3m's local CGI, because it lacks an
equivalent to W3m-control. Not sure if it's worth adding; we certainly
shouldn't allow passing JS in headers, but a custom language for
headers does not sound like a great idea either...
eh, idk. Also, TODO: add multipart support.
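
For reference, a minimal local CGI program in the w3m style (a sketch,
not a script from the tree): print the headers, then a blank line, then
the body.

    echo "Content-Type: text/html"
    echo ""
    echo "<h1>Hello from local CGI</h1>"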
|
* remove contentType member of Buffer object
* add ishtml to reduce string comparisons
* consistent spelling: contenttype -> contentType
|
Makes e.g. on-page anchor navigation near-instantaneous. Well, as
instantaneous as a fork can be. In any case, it's a lot faster
than loading the entire page anew.
This involves duplicating open resources (file descriptors, etc.),
which is not exactly trivial. For now we have a huge clone() procedure
that does an ok-ish job at it, but there remains a lot of room for
improvement.
E.g., cloning is still broken in some cases:
* As noted in the comments, TeeStream'ing the input stream for any
buffer is a horrible idea, as readout in the cloned buffer now
depends on the original buffer also reading from the stream. (So
e.g. if you clone, then kill the old buffer without waiting for
the new one to load, the new buffer gets stuck.)
* Timeouts/intervals are broken in cloned buffers. The timeout
module probably needs a redesign to fix this.
* If you clone before connect2, the cloned buffer gets stuck.
The previous solution was even worse (i.e. broken in more cases),
so this is still an improvement. For example, this fixes some issues
with mailcap handling (removes the "set the Content-Type of htmloutput
buffers to text/html" hack), does not reload all resources, does not
completely break if the buffer is cloned during loading, etc.
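
At its core the mechanism is just fork(2); a minimal sketch, not the
actual clone() procedure (which also has to fix up every shared
resource):

    import std/posix

    let pid = fork()
    if pid == 0:
      # child: becomes the cloned buffer; inherited fds refer to the
      # same open files as the parent's, hence the TeeStream problem
      # described above
      discard
    elif pid > 0:
      # parent: keeps serving the original buffer
      discard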
|
works
|
works, sort of
still needs some work:
* better dirlist, ideally make it look like file dirlist (or make
file look like ftp dirlist. well, anyway, they should look the same)
* absolute paths? (for now you have to append an extra slash to the
beginning of the path)
* ssh keys for sftp? (actually I haven't even tested sftp yet...)

* ips -> io/
* loader related stuff -> loader/
* tempfile -> extern/
* buffer, forkserver -> server/
* lineedit, window -> display/
* cell -> types/
* opt -> types/