• 1 Post
  • 13 Comments
Joined 2 years ago
Cake day: July 28th, 2023


  • There don’t seem to be any disk reads per request at a glance, though that might just be due to read caching at the OS level. There’s a spike on the first page refresh/load after dropping the read cache, so that could indicate the file being read in every time there’s a fresh page load. I’d have to open the browser with call tracing to be sure, which I’ll probably try out later today (a lighter-weight way to check is sketched at the end of this comment).

    For my other devices I use unbound hosted on the router, so this is the first time encountering said issue for me as well.
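
    For what it’s worth, here’s a minimal sketch of that check that avoids full call tracing. It’s Linux-only, uses inotify rather than, say, strace, and watches /etc/hosts for open/read events while you refresh the page in the browser; the file path and the general approach are my own assumptions, not something from the thread:

    #include <stdio.h>
    #include <sys/inotify.h>
    #include <unistd.h>
    
    int main(void) {
    	int fd = inotify_init1(0);
    	if (fd < 0) {
    		perror("inotify_init1");
    		return 1;
    	}
    
    	/* IN_OPEN/IN_ACCESS fire whenever the file is opened or read. */
    	if (inotify_add_watch(fd, "/etc/hosts", IN_OPEN | IN_ACCESS) < 0) {
    		perror("inotify_add_watch");
    		return 1;
    	}
    
    	printf("Watching /etc/hosts, refresh the page in the browser...\n");
    	char buf[4096]
    		__attribute__ ((aligned(__alignof__(struct inotify_event))));
    	for (;;) {
    		ssize_t len = read(fd, buf, sizeof(buf));
    		if (len <= 0)
    			break;
    		/* One read() may return several events; walk through them. */
    		for (char *p = buf; p < buf + len; ) {
    			struct inotify_event *ev = (struct inotify_event *) p;
    			printf("event:%s%s\n",
    			       (ev->mask & IN_OPEN) ? " open" : "",
    			       (ev->mask & IN_ACCESS) ? " read" : "");
    			p += sizeof(struct inotify_event) + ev->len;
    		}
    	}
    	close(fd);
    	return 0;
    }

    If open/read events show up on every refresh, the browser really is rereading the file rather than serving lookups from an in-memory copy.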


  • You’re using software to do something it wasn’t designed to do

    As such, Chrome isn’t exactly following best practices either – if you want to reinvent the wheel, at least improve upon the original instead of making it run worse. True, it’s not the intended method of use, but resource-wise it shouldn’t cause issues – at this point it would’ve taken active work to make it run this poorly.

    Why would you even think to do something like this?

    As I said, it’s due to the company VPN enforcing its own DNS for intranet resources etc. Technically I could override it with a single rule in the configuration, but that would also technically be a breach of guidelines, as opposed to the more moderate rules-lawyery approach I’m attempting here.

    If it were up to me, the employer would just add some blocklist to their own forwarder for the benefit of everyone working there…

    But I guess I’ll settle for a local dnsmasq on the laptop for now. Thanks for the discussion 👌🏼


  • TLDR: looks like you’re right, although Chrome shouldn’t be struggling with that amount of hosts to chug through. This ended up being an interesting rabbit hole.

    My home network already uses unbound with a proper blocklist configured, but I can’t use the same setup directly with my work computer as the VPN sets its own DNS. I can only override this with a local resolver on the work laptop, and I’d really like to get by with just systemd-resolved instead of having to add dnsmasq or similar for this. None of the other tools I use struggle with this setup, as they use the system IP stack.

    Might well be that Chromium has a somewhat more sophisticated network stack (rather than just using the system-provided libraries), and I remember the docs indicating something about that being the case. Either way, it’s not like the code is (or should be) paging through the whole file every time there’s a query – either it forwards the query to another resolver or resolves it locally, but in either case there will be a cache. That cache will then end up holding the queried domains in order of access, after which having a long /etc/hosts won’t matter (a rough sketch of this is at the end of this comment). The worst case after paging in the hosts file initially is 3-5 ms per query for comparing through the 100k-700k lines before coming up empty, and that only needs to happen once regardless of where the actual resolving takes place. At a glance the Chrome net stack should cache queries against the hosts file as well. So at the very least it doesn’t really make sense for it to struggle for 5-10 seconds on every consecutive refresh of the page with a warm DNS cache in memory…

    …or that’s how it should happen. Your comment inspired me to test it a bit more, and lo: after trying out a hosts file with 10 000 000 bogus entries, Chrome was brought completely to its knees. However, that amount of string comparisons is absolutely nothing in practice – Python with its slow interpreter manages comparing against every row in 300 ms, and a crude C implementation manages it in 23 ms (approx. 2 ms with 1 million rows; both far more rows than what I have appended to the hosts file). So the file being long should have nothing to do with it unless there’s something very wrong with the implementation. Comparing against /etc/hosts should be cheap, as it doesn’t support wildcard entries – the comparisons are just simple 1:1 checks until the first matching row. I’ll continue investigating and see if there’s a quick change to be made in how the hosts are read in. Fixing this shouldn’t cause any issues for other use cases from what I see.

    For reference, if you want to check the performance for 10 million comparisons on your own hardware:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/time.h>
    
    
    int main(void) {
    	struct timeval start_t;
    	struct timeval end_t;
    
    	/* Build 10M bogus hostnames, roughly mimicking an enormous hosts file. */
    	char **strs = malloc(sizeof(char *) * 10000000);
    	for (int i = 0; i < 10000000; i++) {
    		char *urlbuf = malloc(sizeof(char) * 50);
    		sprintf(urlbuf, "%d.bogus.local", i);
    		strs[i] = urlbuf;
    	}
    
    	printf("Checking comparisons through array of 10M strings.\n");
    	gettimeofday(&start_t, NULL);
    
    	/* Use the comparison results so the compiler can't optimize the loop away. */
    	volatile int matches = 0;
    	for (int i = 0; i < 10000000; i++) {
    		if (strcmp(strs[i], "test.url.local") == 0)
    			matches++;
    	}
    
    	gettimeofday(&end_t, NULL);
    
    	/* Include tv_sec so the result stays correct across a second boundary. */
    	long duration = (end_t.tv_sec - start_t.tv_sec) * 1000
    		+ (end_t.tv_usec - start_t.tv_usec) / 1000;
    	printf("Spent %ld ms on the operation.\n", duration);
    
    	for (int i = 0; i < 10000000; i++) {
    		free(strs[i]);
    	}
    	free(strs);
    	return 0;
    }
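
    And, to make the caching argument above concrete, here’s a rough sketch of the shape of it (this is not Chromium’s actual cache, and the names and sizes are made up for illustration): the linear scan over the parsed hosts entries happens at most once per name, and every later lookup of that name is a single probe into a hash table, so the length of /etc/hosts stops mattering after the first query. It uses the POSIX hcreate/hsearch hash table to keep the example short:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <search.h>     /* hcreate/hsearch, POSIX hash table */
    #include <sys/time.h>
    
    #define NUM_ENTRIES 1000000
    
    static long elapsed_us(struct timeval a, struct timeval b) {
    	return (b.tv_sec - a.tv_sec) * 1000000L + (b.tv_usec - a.tv_usec);
    }
    
    int main(void) {
    	/* Simulated parsed hosts file: 1M bogus names, as in the benchmark above. */
    	char **names = malloc(sizeof(char *) * NUM_ENTRIES);
    	for (int i = 0; i < NUM_ENTRIES; i++) {
    		names[i] = malloc(50);
    		sprintf(names[i], "%d.bogus.local", i);
    	}
    
    	/* The per-name lookup cache. */
    	if (!hcreate(NUM_ENTRIES * 2)) {
    		perror("hcreate");
    		return 1;
    	}
    	char query[] = "999999.bogus.local";
    	struct timeval t0, t1, t2;
    
    	/* Cold lookup: scan the parsed entries linearly, then cache the answer. */
    	gettimeofday(&t0, NULL);
    	char *answer = NULL;
    	for (int i = 0; i < NUM_ENTRIES; i++) {
    		if (strcmp(names[i], query) == 0) {
    			answer = names[i];
    			break;
    		}
    	}
    	ENTRY e = { .key = query, .data = answer };
    	hsearch(e, ENTER);
    	gettimeofday(&t1, NULL);
    
    	/* Warm lookup: one hash probe, independent of the hosts file length. */
    	ENTRY q = { .key = query };
    	ENTRY *hit = hsearch(q, FIND);
    	gettimeofday(&t2, NULL);
    
    	printf("cold lookup (linear scan): %ld us\n", elapsed_us(t0, t1));
    	printf("warm lookup (cache hit):   %ld us, found: %s\n",
    	       elapsed_us(t1, t2), (hit && hit->data) ? "yes" : "no");
    	return 0;
    }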
    



  • Actually it is; we do use both network cells and other public beacons for navigation when GPS is unavailable. It’s just not available everywhere – you need a map of cell locations, and usually that mandates open datasets for companies to use. As a personal anecdote, navigation works underground in e.g. the Helsinki metro. We don’t need strict triangulation underground as the cells are already so small – in practice the metro tunnel is filled with picocells (coverage areas smaller than 200 m across).

    We also use the cell network to push rough satellite locations to cellphones, in A-GPS – or more generally A-GNSS, as the same functionality is available for the other systems as well. This way the phone can pinpoint the required satellites much faster, which is the main reason you can get such quick and accurate readings from your phone right after you start checking your location.

    Edit: AFAIK location services also enrich the information with databases of publicly visible WiFi SSIDs, using their visibility as a beacon (a toy example of beacon-based positioning is sketched below). Scanning WiFi hotspots typically consumes less power than getting a GPS fix that’s as accurate, and is also often more reliable in urban settings and at higher latitudes where the satellites aren’t as visible (though the constellations have enough satellites nowadays that this issue isn’t nearly as bad as it used to be).
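
    As a toy illustration of the idea, here’s a weighted-centroid sketch: given a handful of visible beacons (cells or WiFi APs) whose positions are known from a coverage database, weight each one by its signal strength and average. This is just one simple technique, not what any particular location service actually runs, and the coordinates and RSSI values are made up:

    #include <stdio.h>
    #include <math.h>
    
    /* One visible beacon (cell or WiFi AP): position from a coverage database
     * plus the received signal strength in dBm. All values are hypothetical. */
    struct beacon {
    	double lat;
    	double lon;
    	double rssi_dbm;
    };
    
    int main(void) {
    	struct beacon seen[] = {
    		{ 60.1699, 24.9384, -55.0 },
    		{ 60.1710, 24.9410, -70.0 },
    		{ 60.1685, 24.9350, -80.0 },
    	};
    	int n = sizeof(seen) / sizeof(seen[0]);
    
    	/* Weighted centroid: a stronger signal usually means the beacon is
    	 * closer, so give it a larger weight (simple dBm-to-linear mapping). */
    	double wsum = 0.0, lat = 0.0, lon = 0.0;
    	for (int i = 0; i < n; i++) {
    		double w = pow(10.0, seen[i].rssi_dbm / 20.0);
    		lat += w * seen[i].lat;
    		lon += w * seen[i].lon;
    		wsum += w;
    	}
    
    	printf("estimated position: %.5f, %.5f\n", lat / wsum, lon / wsum);
    	return 0;
    }

    With very small cells (like the picocells mentioned above) even the unweighted centroid of the visible cells is often accurate enough, which is why strict triangulation isn’t needed underground.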




  • If you’re using powdered detergent, make sure yours doesn’t use zeolite as the water-softening agent. It deposits in the machine and eventually starts covering the fabrics with a talcum-like powdery substance. It gets especially bad if you either have to use a lot of detergent because of hard water, or are overusing the detergent.

    Zeolite was brought in to replace phosphates due to environmental concerns, but it has its own problems with the washing results.

    One other thing that often ruins the freshness of clothes for me is overly scented/perfumed detergent. The smell can get quite overwhelming, and contribute to a chemical-y smell and feel.


  • Just in case someone misreads this: add the vinegar as the softener, so it doesn’t go into the main wash with the detergent. The detergent is a base and relies on that to get rid of some of the stains, and vinegar, being an acid, would neutralize it. Vinegar belongs in the rinse cycle when washing laundry, where it can help get rid of any leftover detergent by neutralizing it and do whatever other magic it does.

    Also, though I don’t encounter them often, do note that vinegar can wash away the zinc and silver oxides used in some sterile clothing, and can supposedly damage lyocell.

    But overall I second these suggestions. Most of the time the recommended detergent amounts are far too large, and you can often get by with less.


  • And nowadays we’ve taken it even further: in spoken Finnish we’ve mostly gotten rid of “hän” as well and use “se”, the Finnish word for “it”. The same pronoun is used for people in all contexts, for animals, items, institutions and so on, and in practice the only use for “hän” is people trying to remind others that they consider their pets human.

    Context will tell which one it is.


  • My best guess is the dates on which the feature was added, which can also be seen on CanIUse. Firefox added OPFS support in March this year, and much of the userbase (AFAIK e.g. Firefox ESR) still lacks the feature – in any case it’s a very recent change in Firefox. However, WebKit/Safari has had OPFS for over two years by now. I was personally unaware the support had been added to Firefox at all; last time I checked the discussion they said they weren’t going to implement the API.

    By no means is this an acceptable excuse in my opinion; this kind of check should always be done by testing for the existence of the feature, not by parsing the UA string. Though it might be that the check is in fact done the correct way, as Safari users stuck on older versions are also encountering the issue. But if they’re fine with using OPFS, where you need to export files separately to access them outside the browser context (as the storage is private), there’s no reason to complain about recent Firefox versions that support the same feature.

    But the same point still stands, kind of. The main underlying problem is Google forcing new standards through Chromium without waiting for industry consensus and a proper standard. Then, as 80% of the userbase already has the feature, everyone else is forced to get on board. I still don’t really see Adobe as the main culprit here, despite the apparent incompetence in writing compatibility checks, but rather Google with their monopolistic practices around the Chromium project. Adobe isn’t innocent and has done the industry a lot of harm as one of the original pushers of subscription software, but I think this instance should be attributed to incompetence rather than malice.

    Edit: So, a bit of additional advice for anyone trying to get this to work: in case UA spoofing doesn’t help, check the Firefox version in use – it has to be 111 or newer, as 111 was the release that added File System API support. Firefox ESR probably doesn’t have it available. Also check that the FS API / OPFS doesn’t need to be enabled through some flag or configuration parameter, and that it’s not blocked by some plugin.


  • Well, in this case it might even be a technological limitation, which can be solved with a workaround but leads to a poor user experience.

    Firefox, for security reasons, doesn’t allow opening local files for writing. That means it’s not possible to make a web application that can autosave to your machine after you open a file, so you have to download a new copy of the file every time you save. You can get around this by importing the files in question into the browser’s local storage, or by using cloud storage via an API, but local saving is a feature people have come to expect, and missing it will lead to complaints from users.

    The missing API is the File System Access API, and it has been available in Chrome for years. I’ve personally had to write my web apps around this limitation multiple times, since I want to support Firefox. By no means is this a valid reason to exclude Firefox in my opinion, but I can also easily see why a company would rather not bother with user feedback about Ctrl+S not working in their web application.