Yay!
Admittedly, rendering the document while it is still loading is not very
useful in its current form, except maybe on very slow networks.
The problem is that renderDocument is *slow*, so we only run it when
onload fails to consume all bytes from the network in a single pass.
Even then, we are guaranteed to get a FOUC (flash of unstyled content),
since CSS is only downloaded in finishLoad(). Well, I think it's cool,
anyway.
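A minimal sketch of the triggering logic, in Nim; the names onload,
renderDocument and finishLoad come from this message, but the types and
control flow below are my own guess, not the actual buffer code:

    # Hypothetical stand-ins; the real Buffer type looks nothing like this.
    type Buffer = ref object
      loading: bool

    proc onload(b: Buffer): bool =
      # Stub: would read whatever bytes are available and return true
      # only if a single pass consumed everything the network had.
      b.loading = false
      return true

    proc renderDocument(b: Buffer) = discard # stub: the slow part
    proc finishLoad(b: Buffer) = discard     # stub: downloads CSS, etc.

    proc load(b: Buffer) =
      # Only pay for an intermediate render when one onload pass could
      # not drain the network, i.e. more data is still on the way.
      while b.loading:
        if not b.onload():
          b.renderDocument()
      b.finishLoad()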
|
* remove pointless exception -> bool conversions; usually they were
  ignored anyway, and exceptions are more convenient here
* add an EPIPE handler to raisePosixIOError (see the sketch below)
* fix socketstream to use raisePosixIOError
* fix socketstream sendFileHandle error handling
* cgi: immediately return on a file-not-found error
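A rough sketch, in Nim, of what the EPIPE handling could look like;
raisePosixIOError is named in this message, but the body and the
ErrorBrokenPipe type below are assumptions rather than the actual
implementation:

    import std/posix

    type ErrorBrokenPipe = object of IOError # hypothetical exception type

    proc raisePosixIOError*() =
      # Turn the current errno into an exception instead of returning a
      # bool, so callers can simply let the error propagate.
      if errno == EPIPE:
        raise newException(ErrorBrokenPipe, "broken pipe")
      else:
        raise newException(IOError, $strerror(errno))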
|
This was documented, but not implemented until now.
Also, improve the loader module's protocol documentation.
|
We must save the fd in the constructor, because the stream type may
change while loading.
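A tiny illustration of the constraint; only the "save the fd up front in
the constructor" idea comes from this message, while the type and
procedure names below are hypothetical:

    import std/streams

    type ResourceHandle = ref object
      fd: cint       # saved at construction, before any stream swapping
      stream: Stream # may be replaced by a different stream type later

    proc newResourceHandle(fd: cint; stream: Stream): ResourceHandle =
      # Record the descriptor here: once loading has started, `stream`
      # may have been re-wrapped and the original fd would no longer be
      # recoverable from it.
      result = ResourceHandle(fd: fd, stream: stream)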
|
Now it is (technically) no longer mandatory to link to libcurl. Also,
Chawan is at last completely protocol- and network-backend-agnostic :)
* Implement multipart requests in local CGI
* Implement simultaneous download of CGI data
* Add a REQUEST_HEADERS env var with all headers (see the sketch below)
* cssparser: add a missing check in consumeEscape
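As an illustration, a local CGI script written in Nim could recover the
full header list roughly as below. The serialization of REQUEST_HEADERS
is assumed here to be one "Name: value" pair per line; that format is a
guess, not a quote from the documentation:

    import std/[os, strutils]

    # Parse the REQUEST_HEADERS environment variable into (name, value)
    # pairs, assuming one "Name: value" header per line.
    proc requestHeaders(): seq[(string, string)] =
      for line in getEnv("REQUEST_HEADERS").splitLines():
        let i = line.find(':')
        if i > 0:
          result.add((line[0 ..< i].strip(), line[i + 1 .. ^1].strip()))

    when isMainModule:
      for (name, value) in requestHeaders():
        stderr.writeLine(name & ": " & value)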
|
This makes e.g. on-page anchor navigation near-instantaneous. Well, as
instantaneous as a fork can be. In any case, it's a lot faster than
loading the entire page anew.
It involves duplicating open resources (file descriptors, etc.), which
is not exactly trivial. For now we have a huge clone() procedure that
does an OK-ish job of it, but there remains a lot of room for
improvement. E.g. cloning is still broken in some cases:
* As noted in the comments, TeeStream'ing the input stream for any
  buffer is a horrible idea, as readout in the cloned buffer now
  depends on the original buffer also reading from the stream. (So
  e.g. if you clone, then kill the old buffer without waiting for the
  new one to load, the new buffer gets stuck.)
* Timeouts/intervals are broken in cloned buffers. The timeout module
  probably needs a redesign to fix this.
* If you clone before connect2, the cloned buffer gets stuck.
The previous solution was even worse (i.e. broken in more cases), so
this is still an improvement. For example, it fixes some issues with
mailcap handling (removes the "set the Content-Type of htmloutput
buffers to text/html" hack), does not reload all resources, does not
completely break if the buffer is cloned during loading, etc.
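For reference, the bare skeleton of such a fork-based clone is sketched
below in Nim; everything the real clone() has to do on top of this
(duplicating streams, timers and the rest of the buffer state) is
omitted, and cloneBuffer is a hypothetical name:

    import std/[os, posix]

    proc cloneBuffer(): Pid =
      # The child inherits every open file descriptor, so the cloned
      # buffer can keep using the already-loaded document instead of
      # fetching the page again.
      let pid = fork()
      if pid < 0:
        raiseOSError(osLastError())
      elif pid == 0:
        discard # child: from here on, this process is the new buffer
      return pid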
Move modules into subdirectories:
* ips -> io/
* loader-related stuff -> loader/
* tempfile -> extern/
* buffer, forkserver -> server/
* lineedit, window -> display/
* cell -> types/
* opt -> types/