This way they are no longer compatible, but we no longer need them to
be compatible anyway.
(This also forces us to throw out the old serialize module, and use
packet writers everywhere.)
Unsurprisingly enough, calling `write` a million times is never going to
be very fast.
BufferedWriter basically does the same thing as serialize.swrite did,
but queues up writes in batches before sending them.
TODO: give sread a similar treatment
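A minimal sketch of the batching idea, assuming plain POSIX fds (the
real BufferedWriter interface may differ):

    import std/os
    import std/posix

    type BufferedWriter = object
      fd: cint
      buf: seq[byte] # pending bytes, sent in one write call

    proc write(w: var BufferedWriter; s: string) =
      # queue the bytes instead of hitting the fd immediately
      let old = w.buf.len
      w.buf.setLen(old + s.len)
      if s.len > 0:
        copyMem(addr w.buf[old], unsafeAddr s[0], s.len)

    proc flush(w: var BufferedWriter) =
      # one write(2) per batch instead of one per serialized value
      var i = 0
      while i < w.buf.len:
        let n = posix.write(w.fd, addr w.buf[i], csize_t(w.buf.len - i))
        if n <= 0:
          raiseOSError(osLastError())
        i += n
      w.buf.setLen(0)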
Originally we had several loader processes so that the loader did not
need asynchronicity for loading several buffers at once. Since then,
the scope of what loader does has been reduced significantly, and with
that, loader has become mostly asynchronous.
This patch finishes the above work as follows:
* We only fork a single loader process for the browser. It is a waste
  of resources to do otherwise, and it would have made future work on
  a download manager very difficult.
* loader becomes (almost) fully async. Now the only sync parts are a)
  processing commands and b) waiting for clients to consume responses.
  b) is a bit more problematic than a), but should not cause problems
  unless some other horrible bug exists in a client. (TODO: make it
  fully async.)
  This gives us a noticeable improvement in CSS loading speed, since
  all resources can now be queried at once (even before the previous
  ones are connected).
* Buffer processes are now only forked when the *connection* is
  finished. So headers, status code, etc. are handled by the client,
  and the buffer is forked when the loader starts streaming the
  response body.
  As a result, mailcap entries can simply dup2 the first UNIX domain
  socket connection as their stdin (see the sketch after this list).
  This allows us to remove the ugly (and slow) `canredir' hack, which
  required us to send file handles on a tour across the entire
  codebase.
* The "cache" has been reworked somewhat:
  - Since canredir is gone, buffer-level requests usually start in a
    suspended state, and are explicitly resumed only after the client
    has had a chance to decide whether it wants to cache the response.
  - Instead of a flag on Request and the URL as the cache key, we now
    use a global counter and the special `cache:' scheme.
* Misc fixes: referer_from is now actually respected by buffers (not
  just the pager), load info display should work slightly better, etc.
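A hedged sketch of the dup2 step in the forked mailcap child (helper
name invented; process management elided):

    import std/posix

    proc becomeMailcapChild(conn: cint) =
      # `conn` is the first UNIX domain socket connection, over which
      # the loader streams the response body; make it the child's stdin
      discard dup2(conn, STDIN_FILENO)
      discard close(conn)
      # exec the mailcap command here: it reads the body from stdin,
      # so no file handles need to be sent around the codebase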
Ensure that a) dead outputs do not continue to get more data from
istream and b) if all outputs are dead, istream is immediately closed.
Also, remove that pointless loop in loadStreamRegular (it did nothing
that handleRead did not).
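The logic looks roughly like this (type and field names invented, not
the real loader types):

    type
      OutputHandle = ref object
        dead: bool

      LoaderHandle = ref object
        outputs: seq[OutputHandle]

    proc closeIstream(handle: LoaderHandle) =
      discard # stands in for closing the input stream

    proc onRead(handle: LoaderHandle) =
      # drop dead outputs first, so they get no more data from istream
      var live: seq[OutputHandle] = @[]
      for o in handle.outputs:
        if not o.dead:
          live.add(o)
      handle.outputs = live
      if handle.outputs.len == 0:
        closeIstream(handle) # nobody is reading: close input immediately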
* Get rid of sostream hack
  This is no longer needed, and was in fact causing loadStream to get
  stuck on redirects for regular files (i.e. the common case of
  receiving <file on stdin without a -T content type override).
* Unify the cache and stdin-regular-file loading code paths
  Until now, loadFromCache was completely sync. This is not a huge
  problem, but it's better to make it async *and* not have two
  separate procedures for reading regular files. (In fact,
  loadFromCache had *another* bug related to its output fd not being
  added to outputMap.)
* Extra: remove ansi2html select error handling
  It was broken because it did not handle read events before the
  error event. It was also unnecessary, since recvData breaks out of
  the loop when n == 0.
Cache mailcap entry output too, then delete it when the buffer can no
longer read from it.
(Maybe it would be useful to instead preserve it and allow viewSource
for HTML output too? Hmm.)
cha -d <some-file was crashing the loader, because it was trying to
register the regular file in the selector.
This patch fixes the problem, but the control flow of loader looks
like spaghetti now.
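The underlying constraint is that epoll/kqueue-style selectors reject
regular files. A sketch of the guard one can apply before registering
(hypothetical helper):

    import std/posix

    proc isRegularFile(fd: cint): bool =
      var st: Stat
      fstat(fd, st) == 0 and S_ISREG(st.st_mode)

    # if isRegularFile(fd): stream the file directly;
    # otherwise it is safe to register fd in the selector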
* factor out pushBuffer to make loadFromCache async
* fix incorrect cache path
* replace rewind with loadFromCache (it does the same thing, except it
  actually works)
* remove the rewindImpl callback; rewind in the buffer instead
At last all BufferSources are unified.
To achieve the same effect as the previous CLONE source type, we now
use the "fromcache" flag in Request. This *forces* the document to be
streamed from the disk; if the file no longer exists for some reason,
an error is returned (i.e. the document is not re-downloaded).
For a document to be cached, it has to be the main document of the
buffer (i.e. no additional resources requested with fetch()), and
also not an x-htmloutput HTML file (for those, the original source is
saved). The result is that toggleSource now always returns the actual
source for e.g. markdown files, not the HTML-transformed version.
Also, it is now possible to view the source of a document that is
still being downloaded.
buffer.sstream has almost been eliminated; it still exists, but only as
a pseudo-buffer to interface with EncoderStream and DecoderStream. It no
longer holds the entire source of a buffer at any point, and is cleared
as soon as the buffer is completely loaded.
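Schematically, the fromcache behavior amounts to this (types and
helpers invented for illustration):

    type
      Request = ref object
        url: string
        fromcache: bool # force streaming the document from the disk cache

      Response = ref object
        ok: bool

    # stand-ins for the real cache lookup and file streaming
    proc cacheLookup(url: string): string = ""
    proc streamFile(path: string): Response = Response(ok: true)

    proc load(req: Request): Response =
      if req.fromcache:
        let path = cacheLookup(req.url)
        if path.len == 0:
          # cached file is gone: report an error, never re-download
          return Response(ok: false)
        return streamFile(path)
      Response(ok: false) # normal network loading elided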
The previous version kept the CPU at 100%, because select would
return immediately for writes even when there were no buffers to
send.
(This has been the case ever since I added asynchronous sending, but
the previous commit put the console buffer's fd in loader too, and
that made the problem quite obvious.)
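The usual fix, sketched here with std/selectors: only select for
Write while something is actually queued (names are illustrative):

    import std/selectors

    type Data = object

    proc setWriteInterest(sel: Selector[Data]; fd: int;
        hasPending: bool) =
      if hasPending:
        sel.updateHandle(fd, {Event.Read, Event.Write})
      else:
        # with no queued buffers, selecting for Write returns
        # immediately on every iteration and spins the CPU
        sel.updateHandle(fd, {Event.Read})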
Instead, use a stream: scheme and associate hostnames with file
descriptors directly from the pager.
My eyes are bleeding, but at least there is a chance that this does
what I wanted.
The previous tee implementation mixed buffer and loader fds, so it
was fundamentally broken. Also, it used MultiStream, which makes
asynchronous streaming impossible.
This time, we use a flat array of output handles, and link to each of
them any buffers that have not been written to the target yet.
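Roughly the following shape, with invented field names:

    type
      Buffer = ref object
        data: string

      OutputHandle = ref object
        fd: cint
        backlog: seq[Buffer] # buffers not yet written to this target

      LoaderHandle = ref object
        outputs: seq[OutputHandle] # flat array; a tee just adds an entry

    proc onData(handle: LoaderHandle; buf: Buffer) =
      # link the buffer to every output; each output drains its own
      # backlog asynchronously, so a slow target cannot block the rest
      for o in handle.outputs:
        o.backlog.add(buf)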
* LoaderHandle.fd is no more; we now check ostream's fd
* setBlocking converted to a PosixStream method
* SocketStream now sets the fd variable
* handle sostream/fd redirection properly
* fix suspend/resume
This fixes non-HTML resource loading, mostly. However, tee is still
broken :/
* EAGAIN was causing fetch to add unnecessary null bytes to input
  streams
* URL is now only added to handles in debug mode
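The crux of the EAGAIN bug: on a nonblocking fd, read(2) returning -1
with errno == EAGAIN means "no data yet" and must not be treated as
bytes read. A sketch of the corrected loop:

    import std/os
    import std/posix

    proc recvSome(fd: cint; buf: var string): int =
      buf.setLen(4096)
      let n = posix.read(fd, addr buf[0], csize_t(buf.len))
      if n < 0:
        buf.setLen(0)
        if errno == EAGAIN or errno == EWOULDBLOCK:
          return 0 # try again later; do not append zeroes to the stream
        raiseOSError(osLastError())
      buf.setLen(n)
      n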
Yay!
Admittedly, it is not very useful in its current form, except maybe on
very slow networks.
The problem is that renderDocument is *slow*, so we only run it when
onload fails to consume all bytes from the network in a single pass.
Even then, we are guaranteed to get a FOUC, since CSS is only downloaded
in finishLoad(). Well, I think it's cool, anyway.
* remove pointless exception -> bool conversions; usually they were
  ignored anyway, and exceptions are more convenient here
* add EPIPE handler to raisePosixIOError (sketched below)
* fix socketstream to use raisePosixIOError
* fix socketstream sendFileHandle error handling
* cgi: immediately return on file-not-found errors
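A sketch of the EPIPE special case (exception name invented):

    import std/posix

    type ErrorBrokenPipe = object of IOError

    proc raisePosixIOError() =
      # a dead peer (EPIPE) is expected and handled by the caller;
      # everything else is a genuine I/O error
      if errno == EPIPE:
        raise newException(ErrorBrokenPipe, "broken pipe")
      raise newException(IOError, "I/O error: errno " & $errno)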
This was documented, but not implemented until now.
Also, improve the loader module's protocol documentation.
We must save fd in the constructor, because the stream type may be
changed while loading.
Now it is (technically) no longer mandatory to link to libcurl.
Also, Chawan is at last completely protocol and network backend
agnostic :)
* Implement multipart requests in local CGI
* Implement simultaneous download of CGI data
* Add REQUEST_HEADERS env var with all headers (sketched below)
* cssparser: add a missing check in consumeEscape
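For REQUEST_HEADERS, the idea is to flatten all request headers into
a single environment variable for the CGI child; a sketch, assuming
newline-separated "Name: value" pairs (the exact serialization may
differ):

    import std/os

    proc setRequestHeaders(headers: seq[(string, string)]) =
      var s = ""
      for (name, value) in headers:
        if s.len > 0:
          s &= "\n"
        s &= name & ": " & value
      putEnv("REQUEST_HEADERS", s)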
Makes e.g. on-page anchor navigation near-instantaneous. Well, as
instantaneous as a fork can be. In any case, it's a lot faster
than loading the entire page anew.
This involves duplicating open resources (file descriptors, etc.),
which is not exactly trivial. For now we have a huge clone() procedure
that does an ok-ish job at it, but there remains a lot of room for
improvement (the fd half is sketched after the list below).
E.g. cloning is still broken in some cases:
* As noted in the comments, TeeStream'ing the input stream for any
  buffer is a horrible idea, as readout in the cloned buffer now
  depends on the original buffer also reading from the stream. (So
  e.g. if you clone, then kill the old buffer without waiting for
  the new one to load, the new buffer gets stuck.)
* Timeouts/intervals are broken in cloned buffers. The timeout
  module probably needs a redesign to fix this.
* If you clone before connect2, the cloned buffer gets stuck.
The previous solution was even worse (i.e. broken in more cases),
so this is still an improvement. For example, this fixes some issues
with mailcap handling (removes the "set the Content-Type of htmloutput
buffers to text/html" hack), does not reload all resources, does not
completely break if the buffer is cloned during loading, etc.
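The fd half of the problem reduces to something like this (a sketch;
the real clone() must also recreate timers, streams, and so on):

    import std/os
    import std/posix

    proc dupFds(fds: seq[cint]): seq[cint] =
      # give the cloned buffer its own copy of every open descriptor
      for fd in fds:
        let nfd = dup(fd)
        if nfd < 0:
          raiseOSError(osLastError())
        result.add(nfd)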
* ips -> io/
* loader related stuff -> loader/
* tempfile -> extern/
* buffer, forkserver -> server/
* lineedit, window -> display/
* cell -> types/
* opt -> types/