| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
I'm not happy about this, but the alternatives are worse.
* DDG has degraded a lot lately:
- (I think?) it appends my location to the Bing queries, which
might be useful for searching restaurants, but only increases
noise when looking for something technical.
- Lately it also shoves LLM-generated summaries of websites in
my face - which I wouldn't even mind if the "summaries"
weren't in the typical overly verbose LLM style...
Also, not a degradation per se, but DDG can't load images without JS
(neither the lite nor the html frontend can), while Google can. This
is only relevant now that we have image support.
* Other large search providers either don't load without JS, or give
us a layout that we can't render.
* Smaller search providers (Mojeek, Marginalia) sadly don't have CJK
support. (DDG performs quite poorly here, too.)
* Metasearch engines (Searx, etc.) require self-hosting to work
consistently, which I lack resources for.
I'm sending ucbcb=1 and gbv=1, both of which are appended by Google
and apparently stand for "no cookies" and "no JS", respectively.
Also, I have added a siteconf entry to strip the click tracking.
The default ddg: omni-rule remains, so users who wish to switch back can
set in config.toml:
[page]
C-k = '() => pager.load("ddg:")'
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Previously, it just changed the URL before loading the site; now it's
an actual redirect.
Technically, the previous behavior was more flexible, because it let
you apply siteconf rules exclusively to sites you had redirected from.
Practically, this was not very useful, and probably unexpected for
anybody trying to use the feature.
This also fixes a bug where the loader filter would be set for the
original page, so you couldn't switch from https to http, etc.
|
|
|
|
|
|
|
|
| |
I'm thinking of making libcurl entirely optional; let's start with the
easiest part.
I've added a SOCKS5 client for ALL_PROXY support; I know curl
supported other proxy types too, but whatever.
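For reference, the SOCKS5 CONNECT handshake (RFC 1928) with no
authentication is small enough to sketch in full; this is an
illustrative C version, not the actual implementation, and the names
are made up:

  /* Minimal SOCKS5 CONNECT (RFC 1928): no authentication, domain-name
   * address type. Error handling is reduced to returning -1. */
  #include <stdint.h>
  #include <string.h>
  #include <unistd.h>

  static int readn(int fd, void *buf, size_t n) {
    char *p = buf;
    while (n > 0) {
      ssize_t r = read(fd, p, n);
      if (r <= 0) return -1;
      p += r; n -= (size_t)r;
    }
    return 0;
  }

  /* fd is a socket already connected to the proxy from ALL_PROXY. */
  int socks5_connect(int fd, const char *host, uint16_t port) {
    unsigned char buf[262];
    size_t hlen = strlen(host);
    if (hlen > 255) return -1;
    /* greeting: version 5, 1 method offered, method 0 (no auth) */
    if (write(fd, "\x05\x01\x00", 3) != 3) return -1;
    if (readn(fd, buf, 2) < 0 || buf[0] != 5 || buf[1] != 0) return -1;
    /* request: version 5, CONNECT, reserved, ATYP 3 (domain name) */
    size_t n = 0;
    buf[n++] = 5; buf[n++] = 1; buf[n++] = 0; buf[n++] = 3;
    buf[n++] = (unsigned char)hlen;
    memcpy(buf + n, host, hlen); n += hlen;
    buf[n++] = port >> 8; buf[n++] = port & 0xff;
    if (write(fd, buf, n) != (ssize_t)n) return -1;
    /* reply: VER REP RSV ATYP BND.ADDR BND.PORT; REP 0 means success */
    if (readn(fd, buf, 4) < 0 || buf[1] != 0) return -1;
    size_t alen = buf[3] == 1 ? 4 : buf[3] == 4 ? 16 : 0;
    if (buf[3] == 3) { /* domain name: length-prefixed */
      if (readn(fd, buf, 1) < 0) return -1;
      alen = buf[0];
    }
    if (readn(fd, buf, alen + 2) < 0) return -1;
    return 0; /* tunnel established; keep using fd as the connection */
  }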
|
|
|
|
|
|
| |
* allow string values for public errors
* remove unused errors
* update naming
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Sixel can only represent transparency for fully transparent (alpha
= 0) and fully opaque (alpha = 255) pixels, i.e. we would have to
do blending ourselves to do this "properly". But what do you even
blend? Background color? Images? Clearly you can't do text...
So instead of going down the blending route, we now just approximate
the 8-bit channel with Sixel's 1-bit channel and then patch it up with
dither. It does look a bit weird, but it's not *that* bad, especially
compared to the previous strategy of "blend with some color which
hopefully happens to be the background color" (it rarely was).
Note that this requires us to handle transparent images specially
in term. That is, for opaque ones, we can leave out the "clear cells
affected by image" part, but for transparent ones, we must clear the
entire image every time.
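The commit doesn't pin down the dithering method, so as an
illustration only: ordered (Bayer) dithering of the alpha channel down
to Sixel's 1-bit transparency could look like this in C (the 4x4
matrix and the names are my choice, not necessarily what term does):

  #include <stdint.h>

  /* 4x4 Bayer matrix; entries 0..15 become thresholds spread over
   * 0..255. */
  static const uint8_t bayer4[4][4] = {
    { 0,  8,  2, 10},
    {12,  4, 14,  6},
    { 3, 11,  1,  9},
    {15,  7, 13,  5},
  };

  /* Collapse an 8-bit alpha channel to 1 bit with ordered dithering:
   * a pixel stays opaque if its alpha exceeds a position-dependent
   * threshold, so 50% alpha becomes a checkerboard-like pattern
   * instead of rounding uniformly to "all opaque" or "all clear". */
  void dither_alpha_1bit(const uint8_t *alpha, uint8_t *mask,
                         int width, int height) {
    for (int y = 0; y < height; y++) {
      for (int x = 0; x < width; x++) {
        uint8_t threshold = (uint8_t)(bayer4[y & 3][x & 3] * 16 + 8);
        mask[y * width + x] = alpha[y * width + x] >= threshold;
      }
    }
  }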
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* refactor parseHeader
* optimize response blob()
* add direct "to cache" mode for loader requests which sets stdout to a
file, and use it for image processing
* move image resizing into a separate process
* mmap cache files in between processing steps when possible
At last, resize is no longer a part of image decoding. Also, it feels
much nicer to keep encoded image data in the same cache as everything
else.
The mmap operations *should* be more efficient than copying the whole
RGBA data through a pipe. In practice, it only makes a difference for
loading (well, now just mmapping) the encoded image into the pager,
where it singlehandedly speeds up image display by 10x on my test image.
For the other steps, the unfortunate fact that "tocache" must delay the
next fork/exec in the pipeline until the entire image is processed seems
to cancel out any wins we might have gotten from skipping a single raw
RGBA copy.
I have tried moving the delay before the exec (it's possible with yet
another pipe), but it didn't help much and made the code much
uglier. (Not that tocache didn't, but I can live with this...)
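For illustration, the "mmap instead of piping" part boils down to
something like this in C (a sketch under the assumption that the cache
entry is a plain file; the names are made up):

  #include <fcntl.h>
  #include <stddef.h>
  #include <sys/mman.h>
  #include <sys/stat.h>
  #include <unistd.h>

  /* Map a finished cache file read-only so the next processing step
   * can read the bytes in place instead of copying them through a
   * pipe. Returns the mapping (size through *lenp), or NULL on error. */
  void *map_cache_file(const char *path, size_t *lenp) {
    int fd = open(path, O_RDONLY);
    if (fd < 0) return NULL;
    struct stat st;
    if (fstat(fd, &st) < 0 || st.st_size == 0) {
      close(fd);
      return NULL;
    }
    void *p = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    close(fd); /* the mapping stays valid after closing the descriptor */
    if (p == MAP_FAILED) return NULL;
    *lenp = (size_t)st.st_size;
    return p; /* munmap(p, *lenp) when done with it */
  }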
|
|
|
|
|
|
|
|
|
|
|
| |
* align status truncation behavior with w3m (not an exact match,
clipping is still different, but this should be fine for now)
* add "su" for "show last alert"
- w3m's solution here is to scroll one char at a time with
"u", but that's extremely annoying to use. We already have a
line editor that can navigate lines, so reuse that instead.
* fix peekCursor showing empty text
* update todo
|
|
|
|
|
| |
I've moved most image logic to adapter, so it doesn't really make
sense to have this subdir anymore.
|
| |
|
| |
|
| |
|
| |
|
| |
|
|
|
|
|
|
|
|
| |
Just use an octree. It works fine AFAICT, though it's obviously
somewhat slower than the static method (encoding is 2-pass now) and it
still has banding issues with many colors (will need dithering).
Also fixed a bug that caused the initial masks of bands to get
misplaced.
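For context, the insertion half of an octree quantizer is roughly this
(an illustrative C sketch; the reduction pass that merges the
least-populated subtrees down to the palette size, and the final
sum/count averaging, are omitted):

  #include <stdint.h>
  #include <stdlib.h>

  /* At depth d, the child index is built from bit (7 - d) of each of
   * R, G and B, so similar colors end up sharing a subtree. Leaves
   * accumulate color sums for later averaging into palette entries. */
  #define OCTREE_DEPTH 6

  typedef struct Node {
    struct Node *child[8];
    uint32_t rsum, gsum, bsum, count;
  } Node;

  static Node *new_node(void) {
    return calloc(1, sizeof(Node));
  }

  void octree_insert(Node *root, uint8_t r, uint8_t g, uint8_t b) {
    Node *n = root;
    for (int d = 0; d < OCTREE_DEPTH; d++) {
      int bit = 7 - d;
      int i = ((r >> bit) & 1) << 2 | ((g >> bit) & 1) << 1 |
              ((b >> bit) & 1);
      if (n->child[i] == NULL && (n->child[i] = new_node()) == NULL)
        return; /* out of memory; just drop the pixel in this sketch */
      n = n->child[i];
    }
    n->rsum += r; n->gsum += g; n->bsum += b; n->count++;
  }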
|
|
|
|
| |
Somewhat rough, but better than nothing.
|
| |
|
| |
|
|
|
|
|
|
|
| |
data URIs can get megabytes long; however, you can only stuff so many
bytes into the envp. (This was thwarting my efforts to view pandoc-
generated standalone HTML in Chawan.) So put `data:' back into the
loader process.
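For reference, on Linux a single argv/envp string is capped at
MAX_ARG_STRLEN (32 pages, so 128KiB with 4KiB pages), regardless of
the larger total ARG_MAX, so a multi-megabyte data: URI cannot be
passed through the environment at all. A small C demonstration (the
variable name is made up):

  #include <errno.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <unistd.h>

  /* Try to pass a ~1MiB string through a child's environment; on
   * Linux, execve fails with E2BIG because this single string exceeds
   * MAX_ARG_STRLEN. */
  int main(void) {
    size_t len = 1 << 20;
    char *entry = malloc(len + sizeof("DATA_URI=") + 1);
    if (entry == NULL) return 1;
    strcpy(entry, "DATA_URI=");
    memset(entry + strlen(entry), 'A', len);
    entry[strlen("DATA_URI=") + len] = '\0';
    char *argv[] = {"true", NULL};
    char *envp[] = {entry, NULL};
    execve("/bin/true", argv, envp);
    /* only reached if execve failed */
    printf("execve: %s\n", strerror(errno));
    return 0;
  }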
|
|
|
|
|
|
|
| |
* cssvalues, twtstr: unify enum parsing code paths, parse enums by
binary search (bisearch) instead of hash tables (sketched below)
* mediaquery: refactor (long overdue), fix range comparison syntax
parsing, make ident comparisons case-insensitive (as they should be)
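As a sketch of the sorted-table-plus-binary-search pattern (in C, with
a made-up table; the assumption here is that the input ident has
already been lower-cased, which is also what keeps the comparisons
effectively case-insensitive):

  #include <stdlib.h>
  #include <string.h>

  typedef struct {
    const char *name;
    int value;
  } EnumEntry;

  /* Must stay sorted by name in strcmp order; the values here are
   * invented for the example. */
  static const EnumEntry displayValues[] = {
    {"block", 1},
    {"inline", 0},
    {"inline-block", 2},
    {"list-item", 5},
    {"none", 3},
    {"table", 4},
  };

  static int cmpEntry(const void *key, const void *elem) {
    return strcmp((const char *)key, ((const EnumEntry *)elem)->name);
  }

  /* Returns the enum value, or -1 if the keyword is unknown. */
  int parseEnum(const char *s, const EnumEntry *table, size_t n) {
    const EnumEntry *e = bsearch(s, table, n, sizeof(*table), cmpEntry);
    return e != NULL ? e->value : -1;
  }

  /* usage: parseEnum("table", displayValues,
   *                  sizeof(displayValues) / sizeof(displayValues[0])) */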
|
| |
|
| |
|
|
|
|
|
|
|
|
|
| |
* buffer, pager, config: add meta-refresh value, which makes it
possible to follow http-equiv=refresh META tags (parsing sketched
after this list).
* config: clean up redundant format mode parser
* timeout: accept varargs for params to pass on to functions
* pager: add "options" dict to JS gotoURL
* twtstr: remove redundant startsWithNoCase
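The refresh value has the (very loosely specified) form
"seconds[; url=target]"; an illustrative, deliberately lax parser in C
(not the actual implementation):

  #include <ctype.h>
  #include <stdlib.h>
  #include <string.h>
  #include <strings.h>

  /* Parse an http-equiv=refresh content value such as
   *   "5", "5; url=https://example.org/", "0;URL='next.html'"
   * Returns the delay in seconds and points *urlp at the target URL
   * inside s (or NULL if none); returns -1 if no leading number is
   * found. s is modified (trailing quotes are stripped in place). */
  int parse_meta_refresh(char *s, char **urlp) {
    *urlp = NULL;
    while (isspace((unsigned char)*s)) s++;
    if (!isdigit((unsigned char)*s)) return -1;
    char *end;
    long delay = strtol(s, &end, 10);
    s = end;
    while (isspace((unsigned char)*s) || *s == ';' || *s == ',') s++;
    if (strncasecmp(s, "url", 3) == 0) {
      s += 3;
      while (isspace((unsigned char)*s)) s++;
      if (*s == '=') s++;
      while (isspace((unsigned char)*s) || *s == '\'' || *s == '"') s++;
      if (*s != '\0') {
        *urlp = s;
        char *e = s + strlen(s);
        while (e > s && (isspace((unsigned char)e[-1]) ||
                         e[-1] == '\'' || e[-1] == '"'))
          *--e = '\0';
      }
    }
    return (int)delay;
  }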
|
| |
|
| |
|
|
|
|
| |
and enable it by default.
|
|
|
|
| |
Same as [[siteconf]] autofocus.
|
| |
|
| |
|
|
|
|
|
|
|
| |
Mainly things you could already set with [[siteconf]], but not
normally. Also, a `styling' option to disable author styles.
Finally, `images' is now documented as an "experimental" option, since
it's halfway usable now.
|
| |
|
|
|
|
| |
naturally, it's opt-in
|
|
|
|
|
|
| |
* refactor form submission
* add options to specify form handling per protocol
* block cross-protocol POST requests
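The cross-protocol rule itself is tiny; a C sketch of the idea (the
function and parameter names are made up, and schemes are assumed to
be pre-parsed and lower-case):

  #include <stdbool.h>
  #include <string.h>

  /* Reject form submissions that would POST across protocols, e.g. a
   * form on a gopher: page posting to an http: URL. */
  bool form_submit_allowed(const char *page_scheme,
                           const char *target_scheme, bool is_post) {
    if (!is_post)
      return true; /* non-POST submissions are not blocked here */
    return strcmp(page_scheme, target_scheme) == 0;
  }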
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The previous solution had the issue that it switched between "delete
buffer, then move back" and "delete buffer, then move forward" depending
on whether the buffer was the root of the buffer tree, which made its
behavior quite unpredictable.
Now the pager (sort of) remembers the direction you are coming from,
and D moves in that direction. So e.g.:
* Enter, D just moves back to where you were coming from (as before)
* Comma, D deletes the previous buffer, then returns to the current
buffer
If no buffer exists in the target direction, then we alert.
Also, there are new commands: `d,' and `d.'. They do the same thing
the non-d-prefixed variants do, but also delete the current buffer.
Useful
if you're no longer sure where you are coming from, but know where you
want to go. (`d,' in particular is equivalent to w3m's `B'.)
|
| |
|
|
|
|
|
|
|
| |
* Replaced the `pcanvas' comparison with a much simpler tracking of
the first damaged cell in writeGrid, which is significantly faster
(sketched below).
* Removed emulate-overline: it's of too little utility compared to the
maintenance burden it caused.
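A rough C sketch of the first-damaged-cell idea (illustrative only,
not the real cell or grid types):

  #include <stdint.h>
  #include <string.h>

  typedef struct {
    uint32_t ch;    /* character, simplified to one code point */
    uint32_t attrs; /* colors, bold, etc. */
  } Cell;

  typedef struct {
    Cell *cells;
    int len;
    int damage; /* index of the first cell changed since the last
                 * draw; len means "nothing changed" */
  } Grid;

  /* Instead of diffing the whole grid against a saved copy (the old
   * pcanvas approach), remember the first index that actually
   * changed; the redraw then only needs to start from grid->damage. */
  void writeCell(Grid *grid, int i, Cell c) {
    if (memcmp(&grid->cells[i], &c, sizeof(c)) == 0)
      return;
    grid->cells[i] = c;
    if (i < grid->damage)
      grid->damage = i;
  }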
|
| |
|
|
|
|
|
|
|
| |
Equivalent to curl --insecure.
Note: unfortunately this does not help if the server is using unsafe
legacy renegotiation; you have to allow that in the OpenSSL config.
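With OpenSSL this amounts to disabling peer verification on the
context, so the handshake is no longer aborted when certificate
verification fails; a minimal sketch (not the actual code):

  #include <openssl/ssl.h>

  /* curl --insecure, roughly: accept any certificate. Build with
   * -lssl -lcrypto. */
  SSL_CTX *make_insecure_ctx(void) {
    SSL_CTX *ctx = SSL_CTX_new(TLS_client_method());
    if (ctx == NULL)
      return NULL;
    SSL_CTX_set_verify(ctx, SSL_VERIFY_NONE, NULL);
    return ctx;
  }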
|
| |
|
|
|
|
|
| |
The 100kb or so this adds hurts less than having no manual pages at
all without pandoc (and not auto-updating them through make all)
would.
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
We use libseccomp, which is now a semi-mandatory dependency on Linux.
(You can still build without it, but only if you pass a scary long flag
to make.)
For this to work I had to disable getTimezoneOffset, which would
otherwise call localtime_r, which in turn reads some files from
/usr/share/zoneinfo. To allow this we would have to give unrestricted
openat(2) access to buffer processes, which is unacceptable.
(Giving websites access to the local timezone is a fingerprinting
vector, so if this ever gets fixed, it should be an opt-in config
setting.)
This patch also includes misc fixes to buffer cloning, and fixes the
LIBEXECDIR override in the makefile so that it is actually useful.
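For illustration, the libseccomp side of such a sandbox looks roughly
like this in C (the syscall allow-list here is invented for the
example and is much shorter than what a real buffer process needs;
build with -lseccomp):

  #include <stddef.h>
  #include <seccomp.h>

  /* Install an allow-list filter: any syscall not explicitly allowed
   * kills the calling thread (SCMP_ACT_KILL). Note the absence of
   * openat(2), which is why localtime_r (and thus getTimezoneOffset)
   * cannot be permitted. */
  int install_filter(void) {
    scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_KILL);
    if (ctx == NULL)
      return -1;
    const int allowed[] = {
      SCMP_SYS(read), SCMP_SYS(write), SCMP_SYS(ppoll),
      SCMP_SYS(mmap), SCMP_SYS(munmap), SCMP_SYS(brk),
      SCMP_SYS(exit_group),
    };
    for (size_t i = 0; i < sizeof(allowed) / sizeof(allowed[0]); i++) {
      if (seccomp_rule_add(ctx, SCMP_ACT_ALLOW, allowed[i], 0) < 0)
        goto fail;
    }
    if (seccomp_load(ctx) < 0)
      goto fail;
    seccomp_release(ctx);
    return 0;
  fail:
    seccomp_release(ctx);
    return -1;
  }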
|
|
|
|
|
|
|
|
|
| |
Still far from being fully standards-compliant, or even complete, but it
seems to work slightly less horribly than having no flexbox support at
all on sites that do use it.
(Also includes various refactorings in layout to make it possible at all
to add flexbox.)
|
|
|
|
|
| |
GCC seems to generate something that strongly resembles a
constant-time comparison, so I guess this should be good enough.
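For reference, the explicitly constant-time version is short enough to
write out; whether it is needed depends on what is being compared
(a C sketch):

  #include <stddef.h>
  #include <stdint.h>

  /* Compare two equal-length byte strings without an early exit: the
   * running time depends only on len, not on where the first mismatch
   * is. Returns 1 if equal, 0 otherwise. */
  int ct_equal(const uint8_t *a, const uint8_t *b, size_t len) {
    uint8_t diff = 0;
    for (size_t i = 0; i < len; i++)
      diff |= a[i] ^ b[i];
    return diff == 0;
  }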
|
| |
|
| |
|
|
|
|
|
|
|
| |
So long as we have to live with siteconf, let's at least make it useful.
Also, rewrite the header-overriding logic, because while it did work,
it only did so accidentally.
|
|
|
|
|
|
|
| |
It still sucks, but it is at least slightly more usable.
This also fixes a bug in dirlist where sorting would mess up item name
association.
|
|
|
|
|
|
|
|
|
| |
* `s{Enter}' now saves link, and `sS' saves source.
* Changed ;, +, @ to g0, g$, gc so that it's somewhat consistent with
vim (and won't conflict with ; for "repeat jump to char")
* Changed (, ) to -, + so that it doesn't conflict with vi's
"previous/next sentence" (once we have it...)
* Added previously missing keybindings to about:chawan
|