path: root/src/local/client.nim
Commits (message; author, date; files changed, lines -/+):
* Add capsicum support (bptato, 2024-03-28; 1 file, -2/+5)
  It's the sandboxing system of FreeBSD. Quite pleasant to work with. (Just trying to figure out the basics with this one before tackling the abomination that is seccomp.) Indeed, the only non-trivial part was getting newSelector to work with Capsicum. Long story short it doesn't, so we use an ugly pointer cast + assignment. But even that is stdlib's "fault", not Capsicum's. This also gets rid of that ugly SocketPath global.
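  A rough illustration of what capability mode amounts to, as a hypothetical Nim sketch (the cap_enter binding and enterSandbox are invented here, not the commit's actual code):

```nim
# Hypothetical sketch: enter Capsicum capability mode on FreeBSD.
# After cap_enter() succeeds, the process can no longer open files or
# sockets by path; it may only use descriptors it already holds (or ones
# passed to it over an existing socket).
when defined(freebsd):
  proc cap_enter(): cint {.importc, header: "<sys/capsicum.h>".}

  proc enterSandbox() =
    if cap_enter() != 0:
      quit("cap_enter failed", 1)
```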
* config: improve input system (bptato, 2024-03-26; 1 file, -8/+10)
  As described in <https://todo.sr.ht/~bptato/chawan/6>.
* client, forkserver: remove useless code (bptato, 2024-03-24; 1 file, -1/+0)
* buffer: fix clone (bptato, 2024-03-24; 1 file, -3/+3)
* io: derive DynStream from RootObj (not Stream) (bptato, 2024-03-24; 1 file, -26/+29)
  This way they are no longer compatible, but we no longer need them to be compatible anyway. (This also forces us to throw out the old serialize module, and use packet writers everywhere.)
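  A minimal sketch of the shape this implies, assuming a method-based hierarchy (illustrative only; the real DynStream API differs):

```nim
# Illustrative only: a stream root type derived from RootObj with
# overridable base methods, instead of inheriting std/streams' Stream.
type DynStream = ref object of RootObj

method readData(s: DynStream; buf: pointer; len: int): int {.base.} =
  doAssert false, "readData must be overridden"

method writeData(s: DynStream; buf: pointer; len: int) {.base.} =
  doAssert false, "writeData must be overridden"
```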
* io: add bufreader (bptato, 2024-03-21; 1 file, -1/+1)
  Analogous to bufwriter.
* buffer: also buffer input reads (bptato, 2024-03-21; 1 file, -1/+5)
  This is the buffer reading from the pager.
* client: fix dump detection (bptato, 2024-03-20; 1 file, -7/+7)
  It wouldn't start dump mode if stdout was not a tty but stdin was.
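  A sketch of the intended check, assuming the usual isatty(3) test (illustrative, not the actual patch):

```nim
import std/posix

# Dump mode should kick in when either standard stream is not a terminal,
# not only when stdout is redirected.
proc shouldDump(): bool =
  isatty(STDIN_FILENO) == 0 or isatty(STDOUT_FILENO) == 0
```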
* pager: add "save link", "save source"; change & document some keybindingsbptato2024-03-201-2/+2
| | | | | | | | | * `s{Enter}' now saves link, and `sS' saves source. * Changed ;, +, @ to g0, g$, gc so that it's somewhat consistent with vim (and won't conflict with ; for "repeat jump to char") * Changed (, ) to -, + so that it doesn't conflict with vi's "previous/next sentence" (once we have it...) * Add previously missing keybindings to about:chawan
* client: refactor input (bptato, 2024-03-18; 1 file, -114/+26)
  * Move mouse handling to term.
  * Do not use File for input just to disable buffering anyway.
* config: parse mime.types/mailcap/urimethodmap inside parseConfig (bptato, 2024-03-18; 1 file, -1/+1)
  Better (and simpler) than storing them all over the place. Extra: change lmDownload text to match w3m.
* pager: remove useless code (bptato, 2024-03-18; 1 file, -1/+1)
* config: clean up/simplify (bptato, 2024-03-17; 1 file, -6/+8)
  * Parse the default config at runtime. There's no significant performance difference, but this makes it much less painful to write config code.
  * Add better error reporting.
  * Make fromJS2 easier to use.
  * Unquote ChaPaths while parsing config.
* client: fix "Hit any key" bug on load failurebptato2024-03-171-5/+16
| | | | it's an unintended side effect that we do not want
* io: add BufferedWriter (bptato, 2024-03-16; 1 file, -2/+3)
  Unsurprisingly enough, calling `write` a million times is never going to be very fast. BufferedWriter basically does the same thing as serialize.swrite did, but queues up writes in batches before sending them. TODO: give sread a similar treatment.
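  The batching idea in a minimal sketch (illustrative; the real BufferedWriter targets sockets and has a different interface):

```nim
type BufferedWriter = object
  f: File          # stand-in for the real socket/file descriptor target
  buf: string      # queued bytes

proc swrite(w: var BufferedWriter; data: string) =
  # Queue the data instead of issuing one write per call.
  w.buf.add data

proc flush(w: var BufferedWriter) =
  # Send the whole batch with a single write.
  if w.buf.len > 0:
    w.f.write(w.buf)
    w.buf.setLen(0)
```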
* client, pager: various file saving fixes (bptato, 2024-03-16; 1 file, -1/+3)
  * Do not immediately quit when all containers are gone.
  * Fix double saving bug.
  * Fix wrong "save to" string.
* config: add start.console-buffer option (bptato, 2024-03-16; 1 file, -2/+3)
  Useful for debugging.
* Clean up BufferConfig (bptato, 2024-03-15; 1 file, -2/+1)
  It was defined in the wrong module, and unnecessarily included LoaderClientConfig. Also, referrerPolicy was not being propagated to loader clients because it was (incorrectly) in BufferConfig instead of LoaderClientConfig.
* client: check if container was found before deleting it (bptato, 2024-03-14; 1 file, -2/+2)
  Containers may also be deleted without a connection. More specifically: by mailcap, when it launches an external process without opening the output in a buffer.
* pager: unregister containers properly when headers are pending (bptato, 2024-03-14; 1 file, -6/+14)
  It can happen that a container is deleted before it acquires a buffer process; add it to the `unreg' array in this case too.
* Move around some modules (bptato, 2024-03-14; 1 file, -4/+4)
  * extern -> gone, runproc absorbed by pager, others moved into io/
  * display -> local/ (where else would we display?)
  * xhr -> html/
  * Move out WindowAttributes from term, so we don't depend on local from server.
* pager: add "open in editor" keybinding (sE)bptato2024-03-141-2/+3
| | | | | | only for source for now, rendered document is a bit more complicated (also, get rid of useless extern/editor module)
* client: fix blocking reads on container connection (bptato, 2024-03-12; 1 file, -32/+16)
  Sometimes, headers take a while to reach us even after the result has been sent, e.g.

    echo 'Cha-Control: Connected'
    sleep 5
    echo 'Cha-Control: ControlDone'

  ^ this froze the UI for 5 seconds, and that's certainly not what we want. Since we don't have a proper buffered reader yet, and I don't want to write another disgusting hack like BufStream, we just use a state machine to figure out how much we can read. Sounds bad, but in practice it works just fine since the loader's response patterns are very simple.
* loader: remove applyHeaders (bptato, 2024-03-12; 1 file, -2/+3)
  Better to compute the values we need on-demand at the call sites; this way, we can pass through content type attributes to mailcap too. (Also, remove a bug where applyResponse was called twice.)
* pager: fix replacement logic (bptato, 2024-03-12; 1 file, -0/+1)
  This is what the original replacement logic was supposed to do, except it was broken. The previous fix might have been worse than the original bug. Now we do it like this:
  * if needed, replace buffer in gotoURL
  * deleteContainer swaps back the buffer it replaced, if it still exists
  * on connection success, kill the buffer we replaced
* io: add dynstream (bptato, 2024-03-12; 1 file, -1/+1)
  A new abstraction that we derive posixstream from; hopefully with time we can get rid of std/streams.
* loader: rework process model (bptato, 2024-03-11; 1 file, -42/+72)
  Originally we had several loader processes so that the loader did not need asynchronicity for loading several buffers at once. Since then, the scope of what the loader does has been reduced significantly, and with that the loader has become mostly asynchronous. This patch finishes the above work as follows:
  * We only fork a single loader process for the browser. It is a waste of resources to do otherwise, and would have made future work on a download manager very difficult.
  * The loader becomes (almost) fully async. Now the only sync parts are a) processing commands and b) waiting for clients to consume responses. b) is a bit more problematic than a), but should not cause problems unless some other horrible bug exists in a client. (TODO: make it fully async.) This gives us a noticeable improvement in CSS loading speed, since all resources can now be queried at once (even before the previous ones are connected).
  * Buffers now only get processes when the *connection* is finished. So headers, status code, etc. are handled by the client, and the buffer is forked when the loader starts streaming the response body. As a result, mailcap entries can simply dup2 the first UNIX domain socket connection as their stdin (see the sketch below). This allows us to remove the ugly (and slow) `canredir' hack, which required us to send file handles on a tour across the entire codebase.
  * The "cache" has been reworked somewhat:
    - Since canredir is gone, buffer-level requests usually start in a suspended state, and are explicitly resumed only after the client could decide whether it wants to cache the response.
    - Instead of a flag on Request and the URL as the cache key, we now use a global counter and the special `cache:' scheme.
  * Misc fixes: referer_from is now actually respected by buffers (not just the pager), load info display should work slightly better, etc.
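  The mailcap handoff mentioned above boils down to dup2-ing the socket onto stdin in the forked child before exec; a hypothetical sketch (not Chawan's actual code):

```nim
import std/posix

# In the forked child that will exec the mailcap command: make the UNIX
# domain socket carrying the response body the child's stdin, so the
# external program simply reads the body from fd 0.
proc attachBodyAsStdin(sockFd: cint) =
  discard dup2(sockFd, STDIN_FILENO)
  discard close(sockFd)
```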
* client: fix thumb button confusion (bptato, 2024-03-11; 1 file, -18/+18)
  The 0x40 bitmask implies one more state than the 0 bitmask, since state 3 with 0 is unused[0]; so we must add 7, not 6.
  [0] It's reserved for "move", but movement is indicated differently in the protocol we use, so it goes unused.
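  For context: in the terminal mouse protocol, the low two bits of the button code select a base button and the 0x40 bit marks the extended (wheel) range. A hedged sketch of such a decoder (the enum and numbering are invented for illustration, not Chawan's internal scheme):

```nim
type MouseButton = enum
  mbLeft = 1, mbMiddle = 2, mbRight = 3,
  mbWheelUp = 4, mbWheelDown = 5, mbWheelLeft = 6, mbWheelRight = 7

proc decodeButton(code: int): MouseButton =
  let base = code and 0x3        # in the base range, state 3 is unused
  if (code and 0x40) != 0:
    MouseButton(base + 4)        # extended range: all four states are buttons
  else:
    MouseButton(base + 1)        # base range: only states 0..2 are buttons
```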
* client: bind middle button to discardBuffer, use button5/6 as scroll (bptato, 2024-03-11; 1 file, -4/+13)
  Middle button to close is from w3m. btn5/6 is normally a horizontal scroll wheel, so scrollLeft/Right makes more sense than prev/next.
* client: only accept "press" input type for scroll wheel (bptato, 2024-03-11; 1 file, -2/+6)
* buffer: improve/fix onload return values (bptato, 2024-03-03; 1 file, -5/+4)
  Only report when bytesRead has changed, otherwise we get unnecessary load requests. (This means the -2 return value no longer exists; it did not work correctly anyway.) Also, fix the race condition that broke onload returns when onload happened before the client requested load.
* quickjs: reduce diff with upstream (bptato, 2024-03-02; 1 file, -2/+3)
  * The uint8array thing is probably from txiki.js, but we never used it.
  * Upstream now has JS_GetClassID, importing that instead... (so this commit won't build :/)
* buffer, client: fix deadlock with send() calls (bptato, 2024-02-29; 1 file, -1/+16)
  This is an ancient bug, but it got much easier to trigger with mouse scrolling support so it's time to fix it. (The bug itself was that since both the client and buffer ends of the controlling stream are blocking, they could get stuck when both were trying to send() data to the other end but the buffer was full. So now we set the client end to non-blocking.)
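  A sketch of the non-blocking part of the fix, assuming the usual fcntl approach (illustrative helper, not the actual patch):

```nim
import std/posix

proc setNonBlocking(fd: cint) =
  # With the client end non-blocking, send() into a full socket buffer
  # returns instead of waiting, so the two processes can no longer wait
  # on each other forever.
  let flags = fcntl(fd, F_GETFL, 0)
  discard fcntl(fd, F_SETFL, flags or O_NONBLOCK)
```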
* Add mouse support (bptato, 2024-02-29; 1 file, -8/+138)
* buffer: clean up onload, fix console update (bptato, 2024-02-26; 1 file, -2/+1)
  * Reduce onload result size to a single int.
  * Clean up the mess that was the container onload handler.
  This fixes automatic refresh in console. Before, the client would only request a screen update after receiving the number of bytes read, but before the screen was actually reshaped (which obviously resulted in a race condition). Now, "I've reshaped the document" is a separate response (and is the only occasion where the screen is updated before the final render).
* term: improve pixels-per-column/line detection (bptato, 2024-02-25; 1 file, -5/+2)
  Some terminal emulators (AKA vte) refuse to set ws_xpixel and ws_ypixel in the TIOCGWINSZ ioctl, so we now query for CSI 14 t as well. (Also CSI 18 t for good measure, just in case we can't ioctl for some reason.) Also added some fallback (optionally forced) config values for width, height, ppc, and ppl. (This is especially useful in dump mode.)
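  The fallback chain, sketched with invented names (the ioctl binding, WinSize layout, and placeholder defaults are assumptions, not the actual term module):

```nim
import std/posix

type WinSize = object   # layout-compatible with struct winsize
  row, col, xpixel, ypixel: cushort

var TIOCGWINSZ {.importc, header: "<sys/ioctl.h>".}: culong
proc ioctl(fd: cint; request: culong): cint {.importc, header: "<sys/ioctl.h>", varargs.}

proc pixelsPerCell(): tuple[ppc, ppl: int] =
  var ws: WinSize
  if ioctl(STDOUT_FILENO, TIOCGWINSZ, addr ws) == 0 and
      ws.col != 0 and ws.row != 0 and ws.xpixel != 0 and ws.ypixel != 0:
    return (int(ws.xpixel div ws.col), int(ws.ypixel div ws.row))
  # Otherwise query the terminal with "CSI 14 t" (it replies with
  # CSI 4 ; height ; width t) and/or use configured fallback values;
  # that part is omitted from this sketch.
  return (8, 16)   # placeholder defaults
```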
* Separate ANSI text decoding from main binary (bptato, 2024-02-25; 1 file, -6/+6)
  Handling text/plain as ANSI colored text was problematic for two reasons:
  * You couldn't actually look at the real source of HTML pages or text files that used ANSI colors in the source. In general, I only want ANSI colors when piping something into my pager, not when viewing any random file.
  * More importantly, it introduced a separate rendering mode for plaintext documents, which resulted in the problem that only some buffers had DOMs. This made it impossible to add functionality that would operate on the buffer's DOM, to e.g. implement w3m's MARK_URL. Also, it locked us into the horribly inefficient line-based rendering model of entire documents.
  Now we solve the problem in two separate parts:
  * text/x-ansi is used automatically for documents received through stdin. A text/x-ansi handler ansi2html converts ANSI formatting to HTML. text/x-ansi is also used for the .ans and .asc file extensions.
  * text/plain is a separate input mode in buffer, which places all text in a single <plaintext> tag. Crucially, this does not invoke the HTML parser; that would eat NUL characters, which we should avoid.
  One blind spot still remains: copiousoutput used to display ANSI colors, and now it doesn't. To solve this, users can put the x-ansioutput extension field in their mailcap entries, which behaves like x-htmloutput except it first pipes the output into ansi2html.
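  For instance, a hypothetical mailcap entry using the new field could look like this (the colorize command and the text/x-some-log type are invented; copiousoutput is the standard mailcap flag, x-ansioutput is the extension field described above):

```
# Hypothetical entry: run a tool that emits ANSI-colored output and let
# the pager pipe that output through ansi2html before displaying it.
text/x-some-log; colorize %s; copiousoutput; x-ansioutput
```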
* Replace Chakasu with Chagashi (bptato, 2024-02-22; 1 file, -1/+1)
  The API is horrid :( but at least it copies less. TODO: think of a better API.
* client: fix EOF error for estream (bptato, 2024-02-18; 1 file, -0/+2)
* Various refactorings & fixes (bptato, 2024-02-14; 1 file, -17/+28)
  * Disallow Stream interface usage on non-blocking PosixStreams.
  * Do not read estream of forkserver byte-by-byte (it's slow).
  * Do not call writeData with a zero len in formdata.
  * Do not quote numbers in mailcap quoteFile.
  * Remove some unused stuff.
* Remove CLONE BufferSource; cache document sources in tmpdir (bptato, 2024-02-12; 1 file, -3/+0)
  At last all BufferSources are unified. To achieve the same effect as the previous CLONE source type, we now use the "fromcache" flag in Request. This *forces* the document to be streamed from the disk; if the file no longer exists for some reason, an error is returned (i.e. the document is not re-downloaded).
  For a document to be cached, it has to be the main document of the buffer (i.e. no additional resources requested with fetch()), and also not an x-htmloutput HTML file (for those, the original source is saved). The result is that toggleSource now always returns the actual source for e.g. markdown files, not the HTML-transformed version. Also, it is now possible to view the source of a document that is still being downloaded.
  buffer.sstream has almost been eliminated; it still exists, but only as a pseudo-buffer to interface with EncoderStream and DecoderStream. It no longer holds the entire source of a buffer at any point, and is cleared as soon as the buffer is completely loaded.
* Get rid of LOAD_PIPE BufferSource (bptato, 2024-02-11; 1 file, -3/+3)
  Instead, use a stream: scheme and associate hostnames with file descriptors directly from the pager.
* forkserver: clean up (bptato, 2024-01-29; 1 file, -4/+2)
  Move forkBuffer into forkserver (why was it in container anyway), remove unused mainproc variable, etc.
* client: stop entire process group on suspend() (bptato, 2024-01-29; 1 file, -1/+1)
  * Pass 0 so that e.g. git does not hang.
  * Use SIGTSTP so that e.g. CGI scripts can clean up if needed.
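  A minimal sketch of that suspend behaviour (illustrative helper, not the actual suspend() implementation):

```nim
import std/posix

proc suspendSelf() =
  # pid 0 signals the whole process group, and SIGTSTP (unlike SIGSTOP)
  # can be caught, so children such as CGI scripts get a chance to clean up.
  discard kill(0, SIGTSTP)
```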
* Remove std/terminal dependency (bptato, 2024-01-17; 1 file, -1/+0)
  It is mostly unnecessary, and conflicts with our use of termcap anyway.
* js: merge some type modules into jstypes (bptato, 2024-01-11; 1 file, -1/+1)
  They only had type definitions, no need to put them in separate modules.
* Use std/* imports everywhere (bptato, 2024-01-07; 1 file, -11/+11)
* Set cgiDir for client loader process (bptato, 2024-01-06; 1 file, -1/+3)
* Compile with styleCheck:usages (bptato, 2023-12-28; 1 file, -3/+3)
  Much better.
* client: nil check connectSocketStream result (bptato, 2023-12-14; 1 file, -0/+5)
  It may fail if the buffer process could not successfully create a server socket.