path: root/src/loader/loaderhandle.nim
Commit log (newest first); each entry gives the commit subject, author, date, and diffstat for this file.
* Remove CLONE BufferSource; cache document sources in tmpdir (bptato, 2024-02-12, 1 file changed, -10/+11)

  At last all BufferSources are unified.

  To achieve the same effect as the previous CLONE source type, we now use the "fromcache" flag in Request. This *forces* the document to be streamed from the disk; if the file no longer exists for some reason, an error is returned (i.e. the document is not re-downloaded).

  For a document to be cached, it has to be the main document of the buffer (i.e. no additional resources requested with fetch()), and also not an x-htmloutput HTML file (for those, the original source is saved). The result is that toggleSource now always returns the actual source for e.g. markdown files, not the HTML-transformed version. Also, it is now possible to view the source of a document that is still being downloaded.

  buffer.sstream has almost been eliminated; it still exists, but only as a pseudo-buffer to interface with EncoderStream and DecoderStream. It no longer holds the entire source of a buffer at any point, and is cleared as soon as the buffer is completely loaded.
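A minimal sketch of the "fromcache" policy described above, assuming a hypothetical `CachedRequest` type; the names `fromCache` and `cachePath` are illustrative and not Chawan's actual fields:

```nim
# Hypothetical sketch of the "fromcache" behaviour; CachedRequest, fromCache
# and cachePath are invented names for illustration.
import std/[os, streams]

type CachedRequest = object
  url: string
  fromCache: bool    # force streaming the body from the on-disk cache
  cachePath: string  # e.g. a file somewhere under getTempDir()

proc openBody(req: CachedRequest): Stream =
  if req.fromCache:
    # The cached copy is authoritative: if it is gone, report an error
    # instead of silently re-downloading the document.
    if not fileExists(req.cachePath):
      raise newException(IOError, "cached source no longer exists")
    result = newFileStream(req.cachePath, fmRead)
  else:
    # a normal network fetch would go here
    raise newException(IOError, "network fetch not shown in this sketch")
```

Failing instead of refetching keeps "view source" honest: what is shown is exactly the byte stream the buffer was parsed from.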
* simplify newLoaderBuffer (bptato, 2024-02-11, 1 file changed, -4/+3)
* loader: significantly more efficient loading (bptato, 2024-02-11, 1 file changed, -6/+6)

  The previous version was running the processor at 100%, because select would immediately return for writes even when no buffers to send were available. (This has been the case since I added asynchronous sending, but the previous commit put the console buffer's fd in loader too, and that made the problem quite obvious.)
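A rough illustration of the fix, using std/selectors and a made-up per-handle output queue; the point is only that write-readiness is requested from the selector while data is actually pending, and dropped otherwise:

```nim
# Sketch only: Handle and pending are invented names, not the loader's types.
import std/selectors

type Handle = ref object
  fd: int
  pending: seq[byte]   # bytes accepted but not yet written out

proc updateEvents(sel: Selector[Handle], h: Handle) =
  # Without this, select() returns immediately on every iteration because
  # the socket is almost always writable, pegging the CPU at 100%.
  if h.pending.len > 0:
    sel.updateHandle(h.fd, {Event.Read, Event.Write})
  else:
    sel.updateHandle(h.fd, {Event.Read})
```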
* Get rid of LOAD_PIPE BufferSource (bptato, 2024-02-11, 1 file changed, -2/+0)

  Instead, use a stream: scheme and associate hostnames with file descriptors directly from the pager.
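A tiny sketch of that idea, with invented names: the pager registers a file descriptor under a generated hostname, and a later request for stream://&lt;hostname&gt; resolves straight back to that fd instead of going through a BufferSource:

```nim
# Illustration only; addStream/resolveStream are not the actual procs.
import std/tables

var streamFds = initTable[string, cint]()

proc addStream(hostname: string, fd: cint) =
  streamFds[hostname] = fd

proc resolveStream(hostname: string): cint =
  if hostname notin streamFds:
    raise newException(KeyError, "no stream registered for " & hostname)
  result = streamFds[hostname]
```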
* loader: fix tee (bptato, 2024-02-10, 1 file changed, -46/+86)

  My eyes are bleeding, but at least there is a chance that this does what I wanted.

  The previous tee implementation mixed buffer and loader fds, so it was fundamentally broken. Also, it used MultiStream which makes asynchronous streaming impossible.

  This time we use a flat array of output handles and link to them any buffers not written to the target yet.
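A simplified sketch of the "flat array of output handles" shape, with stand-in types; the real loader also has to perform the writes asynchronously and unlink outputs once they have caught up:

```nim
# OutputHandle and TeeState are stand-ins; only the buffering idea is shown.
type
  OutputHandle = ref object
    fd: cint
    buffer: seq[byte]    # data this output has not managed to write yet

  TeeState = ref object
    outputs: seq[OutputHandle]

proc tee(state: TeeState, fd: cint) =
  # Adding an output is just appending to the flat array; each output keeps
  # its own backlog, so a slow consumer does not block the others.
  state.outputs.add(OutputHandle(fd: fd))

proc broadcast(state: TeeState, data: openArray[byte]) =
  for o in state.outputs:
    # Only the queueing step is shown; the actual write happens later,
    # whenever this output's fd becomes writable.
    o.buffer.add(data)
```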
* loader: fixes & cleanup (bptato, 2024-02-10, 1 file changed, -56/+10)

  * LoaderHandle.fd is no more; we now check ostream's fd
  * setBlocking converted to a PosixStream method (see the sketch below)
  * SocketStream now sets the fd variable
  * handle sostream/fd redirection properly
  * fix suspend/resume

  This fixes non-HTML resource loading, mostly. However, tee is still broken :/
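For the setBlocking bullet above, the usual shape of such a helper is a small fcntl wrapper; this is a generic sketch, not necessarily identical to the PosixStream method added in this commit:

```nim
# Generic Posix setBlocking via fcntl; sketch only.
import std/[os, posix]

proc setBlocking(fd: cint, blocking: bool) =
  var flags = fcntl(fd, F_GETFL, 0)
  if flags == -1:
    raiseOSError(osLastError())
  if blocking:
    flags = flags and not O_NONBLOCK
  else:
    flags = flags or O_NONBLOCK
  if fcntl(fd, F_SETFL, flags) == -1:
    raiseOSError(osLastError())
```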
* loader: fix eagain in fetch, only add URL to handle in debug (bptato, 2024-02-08, 1 file changed, -5/+7)

  * EAGAIN was causing fetch to add unnecessary null bytes to input streams (see the sketch below)
  * the URL is now only added to handles in debug mode
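A hedged sketch of the EAGAIN half of this fix: a read that would block has to be reported as "no data yet" rather than treated as bytes to append, otherwise uninitialized buffer contents end up in the stream. The proc name and the -1 convention are illustrative:

```nim
# Sketch of EAGAIN-aware reading; readSome is an invented helper.
import std/[os, posix]

proc readSome(fd: cint, buf: var array[4096, byte]): int =
  ## Returns bytes read, 0 on EOF, -1 when the read would block.
  let n = read(fd, addr buf[0], csize_t(buf.len))
  if n < 0:
    if errno == EAGAIN or errno == EWOULDBLOCK:
      return -1          # try again later; nothing gets appended
    raiseOSError(osLastError())
  return n
```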
* Incremental rendering (bptato, 2024-02-07, 1 file changed, -25/+85)

  Yay!

  Admittedly, it is not very useful in its current form, except maybe on very slow networks. The problem is that renderDocument is *slow*, so we only run it when onload fails to consume all bytes from the network in a single pass. Even then, we are guaranteed to get a FOUC, since CSS is only downloaded in finishLoad().

  Well, I think it's cool, anyway.
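The decision described above boils down to something like the following; the Buffer type and its procs here are stubs for illustration, not Chawan's actual machinery:

```nim
# Stub types; only the "when do we re-render" decision is the point.
type Buffer = ref object
  drained: bool    # did this pass consume everything the network had?

proc readAvailable(b: Buffer): string = ""        # network read stub
proc parseChunk(b: Buffer, s: string) = discard   # HTML parser stub
proc renderDocument(b: Buffer) = discard          # the expensive part

proc onload(b: Buffer) =
  # One pass of the load loop: parse whatever has arrived, and only pay
  # for renderDocument when parsing could not keep up with the network.
  b.parseChunk(b.readAvailable())
  if not b.drained:
    b.renderDocument()
```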
* loader: clean up error handling (bptato, 2024-01-26, 1 file changed, -40/+23)

  * remove pointless exception -> bool conversions; usually they were ignored anyway, and exceptions are more convenient here
  * add EPIPE handler to raisePosixIOError (see the sketch below)
  * fix socketstream to use raisePosixIOError
  * fix socketstream sendFileHandle error handling
  * cgi: immediately return on file not found error
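For the EPIPE bullet, a sketch of what an errno-aware raiser might look like; the loader's actual raisePosixIOError is not necessarily identical to this:

```nim
# Sketch: give EPIPE its own exception type so callers can tell a vanished
# reader (closed pipe or socket) apart from other I/O failures.
import std/posix

type EPipeError = object of IOError

proc raisePosixIOError() =
  if errno == EPIPE:
    raise newException(EPipeError, "broken pipe")
  raise newException(IOError, $strerror(errno))
```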
* Use std/* imports everywhere (bptato, 2024-01-07, 1 file changed, -2/+2)
* Implement local CGI error message handling (bptato, 2023-12-15, 1 file changed, -1/+5)

  This was documented, but not implemented until now. Also, improve the loader module's protocol documentation.
* loaderhandle: fix ConversionDefect in getFd (bptato, 2023-12-13, 1 file changed, -2/+7)

  We must save fd in the constructor, because the stream type may be changed while loading.
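The shape of the fix, with illustrative names: remember the fd once at construction time instead of re-deriving it from a stream whose concrete type may change while the resource is loading:

```nim
# HandleSketch is a stand-in for the real LoaderHandle.
type HandleSketch = ref object
  fd: cint          # saved at construction; stays valid after stream swaps

proc newHandleSketch(fd: cint): HandleSketch =
  HandleSketch(fd: fd)

proc getFd(h: HandleSketch): cint =
  h.fd
```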
* Move http out of main binary (bptato, 2023-12-13, 1 file changed, -0/+4)

  Now it is (technically) no longer mandatory to link to libcurl. Also, Chawan is at last completely protocol and network backend agnostic :)

  * Implement multipart requests in local CGI
  * Implement simultaneous download of CGI data
  * Add REQUEST_HEADERS env var with all headers (see the sketch below)
  * cssparser: add a missing check in consumeEscape
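A sketch of the REQUEST_HEADERS idea: all request headers are handed to the CGI child through a single environment variable. The newline-joined "Name: value" serialization below is an assumption for illustration; the actual format is defined by the loader's protocol documentation:

```nim
# Sketch only; the serialization format is assumed, not taken from the docs.
import std/[os, strutils]

proc setRequestHeaders(headers: openArray[(string, string)]) =
  var lines: seq[string]
  for h in headers:
    lines.add(h[0] & ": " & h[1])
  putEnv("REQUEST_HEADERS", lines.join("\n"))

when isMainModule:
  # hypothetical usage before exec'ing the CGI program
  setRequestHeaders([("Accept", "*/*"), ("User-Agent", "chawan")])
```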
* buffer: make clone fork() (bptato, 2023-09-23, 1 file changed, -0/+29)

  Makes e.g. on-page anchor navigation near-instantaneous. Well, as instantaneous as a fork can be. In any case, it's a lot faster than loading the entire page anew.

  This involves duplicating open resources (file descriptors, etc.), which is not exactly trivial. For now we have a huge clone() procedure that does an ok-ish job at it, but there remains a lot of room for improvement. e.g. cloning is still broken in some cases:

  * As noted in the comments, TeeStream'ing the input stream for any buffer is a horrible idea, as readout in the cloned buffer now depends on the original buffer also reading from the stream. (So e.g. if you clone, then kill the old buffer without waiting for the new one to load, the new buffer gets stuck.)
  * Timeouts/intervals are broken in cloned buffers. The timeout module probably needs a redesign to fix this.
  * If you clone before connect2, the cloned buffer gets stuck.

  The previous solution was even worse (i.e. broken in more cases), so this is still an improvement. For example, this fixes some issues with mailcap handling (removes the "set the Content-Type of htmloutput buffers to text/html" hack), does not reload all resources, does not completely break if the buffer is cloned during loading, etc.
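A heavily simplified sketch of the clone-as-fork idea; everything about duplicating cache files, pipes and timers is omitted, and `cloneProcess` is an invented name:

```nim
# Sketch: the cheap part of cloning is that fork() shares open fds.
import std/[os, posix]

proc cloneProcess(): Pid =
  ## Returns the child's pid in the parent and 0 in the child.
  let pid = fork()
  if pid == -1:
    raiseOSError(osLastError())
  if pid == 0:
    # The child inherits every open file descriptor, which is what makes
    # cloning cheap, and also what makes shared input streams tricky:
    # both processes now read from the same underlying source.
    discard
  return pid
```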
* move around more modules (bptato, 2023-09-14, 1 file changed, -0/+73)

  * ips -> io/
  * loader related stuff -> loader/
  * tempfile -> extern/
  * buffer, forkserver -> server/
  * lineedit, window -> display/
  * cell -> types/
  * opt -> types/