At last all BufferSources are unified.
To achieve the same effect as the previous CLONE source type, we now
use the "fromcache" flag in Request. This *forces* the document to be
streamed from the disk; if the file no longer exists for some reason,
an error is returned (i.e. the document is not re-downloaded).
For a document to be cached, it has to be the main document of the
buffer (i.e. not an additional resource requested with fetch()), and
also not an x-htmloutput HTML file (for those, the original source is
saved). The result is that toggleSource now always returns the actual
source for e.g. markdown files, not the HTML-transformed version.
Also, it is now possible to view the source of a document that is
still being downloaded.
buffer.sstream has almost been eliminated; it still exists, but only as
a pseudo-buffer to interface with EncoderStream and DecoderStream. It no
longer holds the entire source of a buffer at any point, and is cleared
as soon as the buffer is completely loaded.
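
As a minimal sketch of the fromcache semantics above (illustrative
Python; the names are made up, and the real loader is not Python):

    class CacheMiss(Exception):
        pass

    def open_cached(cache_path: str, fromcache: bool):
        """Open the buffer's source, honoring the fromcache flag."""
        if fromcache:
            # *Force* streaming from disk: if the file no longer
            # exists, report an error; never fall back to re-downloading.
            try:
                return open(cache_path, "rb")
            except FileNotFoundError as e:
                raise CacheMiss(cache_path) from e
        return None  # caller falls back to a normal network fetch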
|
The previous version kept the processor at 100%, because select would
return immediately for writes even when no buffers were available to
send.
(This has been the case since I added asynchronous sending, but the
previous commit put the console buffer's fd in loader too and that made
the problem quite obvious.)
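
For illustration, the general shape of the fix in any select()-based
loop (a Python sketch, not the actual code): an fd only belongs in the
write set while there is buffered output for it.

    import os
    import select

    def event_loop(read_fds, out_bufs):
        # out_bufs: fd -> bytes still to be written. Registering every
        # fd for writes makes select() return instantly and spin; only
        # fds with pending output belong in the write set.
        while read_fds or any(out_bufs.values()):
            wfds = [fd for fd, buf in out_bufs.items() if buf]
            readable, writable, _ = select.select(read_fds, wfds, [])
            for fd in writable:
                n = os.write(fd, out_bufs[fd])
                out_bufs[fd] = out_bufs[fd][n:]
            for fd in readable:
                data = os.read(fd, 4096)
                # ... dispatch data to the owning handle (omitted)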
|
Instead, use a stream: scheme and associate hostnames with file
descriptors directly from the pager.
|
My eyes are bleeding, but at least there is a chance that this does what
I wanted.
The previous tee implementation mixed buffer and loader fds, so it was
fundamentally broken. Also, it used MultiStream which makes asynchronous
streaming impossible.
This time we use a flat array of output handles, and link to each
handle the buffers that have not yet been written to that target.
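
Roughly, the structure described above, as a Python sketch (names are
made up for illustration):

    import os

    class Output:
        def __init__(self, fd: int):
            self.fd = fd
            self.pending = []  # buffers not yet written to this target

    class Tee:
        def __init__(self):
            self.outputs = []  # the flat array of output handles

        def feed(self, buf: bytes):
            # Link each incoming buffer to every output that still has
            # to send it; a slow output never stalls the others.
            for out in self.outputs:
                out.pending.append(buf)

        def on_writable(self, out: Output):
            # Called when select() reports out.fd writable.
            while out.pending:
                n = os.write(out.fd, out.pending[0])
                if n < len(out.pending[0]):
                    out.pending[0] = out.pending[0][n:]
                    return  # would block; wait for the next event
                out.pending.pop(0)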
|
* LoaderHandle.fd is no more, we now check ostream's fd
* setBlocking converted to a PosixStream method
* SocketStream now sets fd variable
* handle sostream/fd redirection properly
* fix suspend/resume
This fixes non-HTML resource loading, mostly. However, tee is still
broken :/
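
For reference, setBlocking is just an O_NONBLOCK toggle; a Python
equivalent of what such a PosixStream method boils down to:

    import fcntl
    import os

    def set_blocking(fd: int, blocking: bool):
        # Toggle O_NONBLOCK on an already-open descriptor.
        flags = fcntl.fcntl(fd, fcntl.F_GETFL)
        if blocking:
            flags &= ~os.O_NONBLOCK
        else:
            flags |= os.O_NONBLOCK
        fcntl.fcntl(fd, fcntl.F_SETFL, flags)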
|
recvData is a new method for PosixStream that does less weird magic than
readData.
Also, allow duplicates in unregWrite/unregRead; it's simpler to live
with them than to prevent them.
|
* EAGAIN was causing fetch to add unnecessary null bytes to input
streams
* URL is now only added to handles in debug mode
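
The first point is the classic non-blocking read pitfall; in Python
terms, the distinction that has to be made looks like this (sketch):

    import os

    def recv_some(fd: int, n: int = 4096):
        # EAGAIN means "no data right now", not "n bytes of zeroes";
        # treating it as data is what produced the null bytes above.
        try:
            data = os.read(fd, n)
        except BlockingIOError:
            return None   # would block: append nothing, retry later
        if data == b"":
            raise EOFError  # peer closed the stream
        return data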
|
Yay!
Admittedly, it is not very useful in its current form, except maybe on
very slow networks.
The problem is that renderDocument is *slow*, so we only run it when
onload fails to consume all bytes from the network in a single pass.
Even then, we are guaranteed to get a FOUC, since CSS is only downloaded
in finishLoad(). Well, I think it's cool, anyway.
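
Schematically, the scheduling decision described above (a
Python-flavored sketch with hypothetical conn/doc objects, not the
actual Nim code):

    def load_loop(conn, doc):
        # Only run the expensive render while the network is quiet:
        # if a read drained everything available but the document is
        # not finished, render what we have, then wait for more data.
        while True:
            data = conn.recv_nonblocking()  # hypothetical: None on EAGAIN
            if data is None:
                doc.render()          # slow, so only when we'd idle anyway
                conn.wait_readable()  # hypothetical blocking wait
            elif data == b"":
                break                 # EOF
            else:
                doc.feed(data)
        doc.render()                  # final render after load completes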
|
* remove pointless exception -> bool conversions; usually they were
ignored anyway, and exceptions are more convenient here
* add EPIPE handler to raisePosixIOError
* fix socketstream to use raisePosixIOError
* fix socketstream sendFileHandle error handling
* cgi: immediately return on file not found error
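
For the EPIPE point, the shape of such a helper (sketch; the real
raisePosixIOError is Nim, and these exception names are made up):

    import errno

    class ErrorBrokenPipe(Exception):
        pass

    class ErrorPosixIO(Exception):
        pass

    def raise_posix_io_error(e: OSError):
        # Translate errno into typed exceptions, so callers can treat
        # a vanished peer (EPIPE) differently from generic I/O errors.
        if e.errno == errno.EPIPE:
            raise ErrorBrokenPipe from e
        raise ErrorPosixIO(e.strerror) from e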
|
* static function names can now be defined using the syntax
`Class:functionName' (or just use `Class' to take the default name)
* fix URL.canParse with 1 argument only
* do not store JSFuncGenerator for constructors; just put the function
node in BoundFunctions
|
They only had type definitions, no need to put them in separate modules.
|
* Fix incorrect internal definition of the fragment percent-encode set
* urlenc, urldec: these are simple utility programs mainly for use
with shell local CGI scripts. (Sadly the printf + xargs solution is
not portable.)
* Pass libexec directory as an env var to local CGI scripts
* Update trans.cgi to use urldec and add an example for combining
it with selections
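
In spirit, the two utilities do no more than this (a Python sketch;
the shipped binaries may differ in which percent-encode set they use):

    import sys
    from urllib.parse import quote, unquote

    def urlenc():
        # stdin -> percent-encoded stdout
        sys.stdout.write(quote(sys.stdin.read(), safe=""))

    def urldec():
        # percent-encoded stdin -> decoded stdout
        sys.stdout.write(unquote(sys.stdin.read()))

With these on PATH, a shell CGI script can pipe a selection through
urldec without relying on printf/xargs tricks.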
|
This breaks string conversions.
|
buffer was crashing with an EOFError otherwise
|
much better
|
(still no module support in buffer...)
|
It was originally written this way to accommodate the broken std
file API. We no longer use that in buffer, so we can use a more correct
version now.
|
This was documented, but not implemented until now.
Also, improve the loader module's protocol documentation.
|
We must save fd in the constructor, because the stream type may be
changed while loading.
|
Now it is (technically) no longer mandatory to link to libcurl.
Also, Chawan is at last completely protocol and network backend
agnostic :)
* Implement multipart requests in local CGI
* Implement simultaneous download of CGI data
* Add REQUEST_HEADERS env var with all headers
* cssparser: add a missing check in consumeEscape
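
A local CGI script can then inspect the request headers from the
environment; for example (treating REQUEST_HEADERS as
newline-separated "Name: value" lines is an assumption of this
sketch, not a documented format):

    #!/usr/bin/env python3
    # Echo the request headers back as plain text.
    import os
    import sys

    sys.stdout.write("Content-Type: text/plain\n\n")
    for line in os.environ.get("REQUEST_HEADERS", "").splitlines():
        sys.stdout.write(line + "\n")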
|
Also, move default urimethodmap config to res.
|
error codes are WIP, not final yet...
|
* Add MAPPED_URI_* as environment variables when a request is coming
from urimethodmap
It costs us compatibility with w3m, but it seems to be a massive
improvement over smuggling in the URL as a query string and then
writing an ad-hoc parser for every single urimethodmap script.
The variables are set for every urimethodmap request, to avoid
accidental leaking of global environment variables.
* Move about: to adapters (an obvious improvement over the previous
solution)
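
A urimethodmap script then reads pre-parsed URL components straight
from the environment instead of re-parsing a query string; e.g. (the
MAPPED_URI_ prefix comes from this commit; the script below just dumps
whatever components are set, so it assumes no particular names):

    #!/usr/bin/env python3
    # Trivial urimethodmap handler: show which URL parts we received.
    import os
    import sys

    env = {k: v for k, v in os.environ.items()
           if k.startswith("MAPPED_URI_")}
    sys.stdout.write("Content-Type: text/plain\n\n")
    for k in sorted(env):
        sys.stdout.write(f"{k}={env[k]}\n")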
|
Now we use a (much simplified) gopher2html binary in libexec,
instead of converting gopher directories to HTML in loader/gopher.
This has two advantages:
* Less ugly conversion logic in the loader module; we can just
convert the file line by line. (The previous converter also had
some correctness issues; those are fixed now as well.)
* If the user desires, they can replace the gopher converter with
another binary using the mailcap mechanism.
The disadvantages are:
* For now, source display is broken. This is a problem with all
mailcap filters in general, and should be fixed in the future. (That
said, the previous version also only displayed the converted HTML
source, which was not really useful anyway.)
* The proper directory structure is required for this to work;
OTOH plenty of work has been done so that this is as frictionless as
possible, so it should not really be a problem.
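
The conversion really is line-by-line; a stripped-down sketch of a
gopher2html-style filter (illustrative, far less complete than the
real one):

    #!/usr/bin/env python3
    import sys
    from html import escape
    from urllib.parse import quote

    # Each menu line is: <type><display>\t<selector>\t<host>\t<port>
    for line in sys.stdin:
        line = line.rstrip("\r\n")
        if line == ".":
            break  # end-of-menu marker
        t, rest = line[:1], line[1:]
        display, selector, host, port = (rest.split("\t") + [""] * 3)[:4]
        if t == "i":  # informational line, no link
            print(escape(display) + "<br>")
        else:
            url = f"gopher://{host}:{port}/{t}{quote(selector)}"
            print(f'<a href="{escape(url)}">{escape(display)}</a><br>')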
|
just ask libcurl to decode
|
so that it does not choke on files with an apostrophe in them.
(We could also htmlEscape it, but this should be enough since we
percent-encode the paths already.)
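
Concretely, with Python as the illustration: percent-encoding turns
the apostrophe into %27, so it can no longer terminate a quoted href
attribute:

    from urllib.parse import quote

    name = "it's a file.txt"
    href = quote(name)  # "it%27s%20a%20file.txt": no raw ' left
    print(f"<a href='{href}'>{name}</a>")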
|
still non-functional
|
yay
|
Add w3m-style local CGI support.
It is not quite as powerful as w3m's local CGI, because it lacks an
equivalent to W3m-control. Not sure if it's worth adding; we certainly
shouldn't allow passing JS in headers, but a custom language for
headers does not sound like a great idea either...
Eh, I don't know. Also, TODO: add multipart support.
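
A local CGI script, here as in w3m, is just an executable that prints
headers, a blank line, and a body; for instance (whether the standard
QUERY_STRING variable is set in this scheme is an assumption of the
sketch):

    #!/usr/bin/env python3
    import os
    import sys
    from html import escape

    sys.stdout.write("Content-Type: text/html\n\n")
    sys.stdout.write("<h1>hello from local CGI</h1>\n")
    q = os.environ.get("QUERY_STRING", "")
    sys.stdout.write(f"<p>query: {escape(q)}</p>\n")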
|
also, use blob() for images
|
* remove contentType member of Buffer object
* add ishtml to reduce string comparisons
* consistent spelling: contenttype -> contentType
|
Makes e.g. on-page anchor navigation near-instantaneous. Well, as
instantaneous as a fork can be. In any case, it's a lot faster
than loading the entire page anew.
This involves duplicating open resources (file descriptors, etc.),
which is not exactly trivial. For now we have a huge clone() procedure
that does an ok-ish job at it, but there remains a lot of room for
improvement.
e.g. cloning is still broken in some cases:
* As noted in the comments, TeeStream'ing the input stream for any
buffer is a horrible idea, as readout in the cloned buffer now
depends on the original buffer also reading from the stream. (So
e.g. if you clone, then kill the old buffer without waiting for
the new one to load, the new buffer gets stuck.)
* Timeouts/intervals are broken in cloned buffers. The timeout
module probably needs a redesign to fix this.
* If you clone before connect2, the cloned buffer gets stuck.
The previous solution was even worse (i.e. broken in more cases),
so this is still an improvement. For example, this fixes some issues
with mailcap handling (removes the "set the Content-Type of htmloutput
buffers to text/html" hack), does not reload all resources, does not
completely break if the buffer is cloned during loading, etc.
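
Schematically, the approach (heavily simplified; the method names on
buffer are hypothetical stand-ins for the real fix-up work):

    import os

    def clone_buffer(buffer):
        # fork() gives the child copies of every open fd; the hard
        # part is deciding, per stream, whether a copy may be shared
        # or must be re-opened -- a *shared* input stream is consumed
        # by whichever process reads first, which is exactly the
        # TeeStream problem above.
        pid = os.fork()
        if pid == 0:
            buffer.reconnect_resources()  # hypothetical fix-up step
            buffer.run()                  # child serves the new buffer
            os._exit(0)
        return pid  # parent keeps serving the original buffer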