

Not able to check the details ATM, but there’s a systemd timer which refreshes the pacman cache every week (I think)
That could probably be modified to run every 3 days and output the latest update’s timestamp to the envvar you wanted?
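Something like this might work (I'm assuming the weekly unit is paccache.timer from pacman-contrib, and the timestamp file name is just a placeholder, so treat it as a sketch):

```
# Change the schedule of the weekly pacman cache timer to roughly every 3 days
sudo systemctl edit paccache.timer
# ...then in the drop-in that opens:
#   [Timer]
#   OnCalendar=
#   OnCalendar=*-*-1/3
# Check when it'll next fire:
systemctl list-timers paccache.timer

# For the "latest update timestamp" bit, a file is easier than an envvar
# (an envvar set by a service won't reach your shell sessions anyway); e.g. a
# drop-in for paccache.service with:
#   [Service]
#   ExecStartPost=/bin/sh -c 'date +%%s > /var/tmp/last-pacman-check'
# and then have your shell profile read /var/tmp/last-pacman-check.
```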
What’s your use case for the journals? That might help direct the discussion.
For work I use Outlook with caldavsynchronizer, but I’ve stepped away from that kind of Journal and now I’m tracking things in Logseq.
For time tracking at work I’m using other tools too.
To be fair, the link’s just to git comments, so the headline captures the main point.
There’s BeyondCompare and Meld if you want a GUI but, if I understand this correctly, rmlint and fdupes might be helpful here.
I’ve done similar in the past - I prefer the command line for this…
What I’d do is create a “final destination” folder on the 4TB drive and then other working folders for each HDD / CD / DVD that you’re working through, i.e.:
/mnt/4TB/finaldestination
/mnt/4TB/source1
/mnt/4TB/source2
…
Obviously finaldestination is empty to start with, so the first pass can just be a direct copy of your first HDD; make that the largest drive.
(I’m saying copy here, presuming you want to keep the old drives for now, just in case you accidentally delete the wrong stuff on the 4TB drive)
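A rough sketch of that first copy (the mount points are placeholders, and rsync is just my preference over plain cp):

```
# Paths are examples only - adjust to wherever the drives actually mount
mkdir -p /mnt/4TB/finaldestination /mnt/4TB/source2

# Largest drive straight into finaldestination, preserving permissions/attributes,
# with a progress readout since this will take a while
rsync -aHAX --info=progress2 /mnt/old_hdd1/ /mnt/4TB/finaldestination/
```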
Maybe clean up any obvious stuff
Remove that first drive
Mount the next and copy the data to /mnt/4TB/source2
Now use rmlint or fdupes to do a dry run between source2 and finaldestination and get a feel for whether they’re similar or not; then you’ll know whether to just move it all to finaldestination or maybe switch to the GUI tools.
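For the dry run, something along these lines (same placeholder paths as above; neither command deletes anything at this point):

```
# fdupes: recursively list duplicate sets across both trees (no -d, so read-only)
fdupes -r /mnt/4TB/source2 /mnt/4TB/finaldestination

# rmlint: paths after // are "tagged" as the originals to keep; rmlint itself
# only writes out an rmlint.sh script for you to review before running it
rmlint /mnt/4TB/source2 // /mnt/4TB/finaldestination --keep-all-tagged
less rmlint.sh
```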
You might completely empty /mnt/4TB/source2, or it might still have something left in it; depends on how you feel it’s going.
Repeat for the rest, working on smaller & smaller drives, comparing with the finaldestination first and then moving the data.
Slow? Yep. Satisfying that you know there’s only 1 version there? Yep.
Then do a backup 😉
My choice is Arch Linux purely because it’s bleeding edge
I’ve no idea if Arch actually has newer drivers than Debian / Fedora, but if it does you’ll (usually) get better support from the developers of whatever application / package - or, in your case, driver - you’re dealing with.
It’s more involved than “just” installing Debian, etc… but reading through the Arch Linux wiki as you install will (should) ensure you’ve got the correct drivers set up and you’ll know why they’re working.
So… it’ll be more effort, but you might get “better” results.
Ooh, just spotted this, maybe something for the future…
https://wiki.archlinux.org/title/Systemd-boot#Memtest86
(But yes, unplug stuff first 😉)
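If I’m reading the wiki right, it boils down to something like this (package name and paths are what the wiki shows, so double-check there; the entry filename is arbitrary):

```
# Arch ships an EFI build of memtest as a package
sudo pacman -S memtest86+-efi

# Minimal systemd-boot loader entry, assuming the ESP is mounted at /boot:
# /boot/loader/entries/memtest.conf
#   title  Memtest86+
#   efi    /memtest86+/memtest.efi
```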
My bet’s on hardware.
Boot memtest86+ and let it run (overnight…?); that’s the simplest & easiest test. Even if the RAM is OK, it might show other problems (overheating, etc.)
(Sorry, couldn’t resist)
+1 for Ansible
True, but they’ve answered your question.
Maybe raise a bug report with Firefox (Mozilla) and see if they can look into it further and that might help others too
His complaint was it was different.
This is the absolute core of everyone’s reluctance… and 99% of (domestic) stuff is browser based…
That says that upgrades won’t enable it… the user can still enable it.
Yes, I feel your pain.
Encrypted drives sound like a good idea until the subject of unlocking them comes up… and automatically unlocking the drive for the OS isn’t really helping.
But, for user data, it can be unlocked automatically during login. The Arch wiki covers this.
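If I remember the wiki right, it’s pam_mount doing the work at login; a rough sketch of the relevant bit (username and UUID are obviously placeholders, and the wiki also covers wiring pam_mount into the PAM stack):

```
<!-- Goes inside the existing /etc/security/pam_mount.conf.xml -->
<!-- Unlocks the LUKS volume with the login password and mounts it as that user's data -->
<volume user="alice" fstype="crypt"
        path="/dev/disk/by-uuid/REPLACE-WITH-UUID"
        mountpoint="/home/alice" />
```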
But back up your data 😉
It depends on your use-case.
Encryption of data at rest (this discussion) is mostly helpful against physical theft, so for a device that never leaves the house there’s little reason for encryption.
Similarly, on a lower powered mobile device, maybe you only want / need user data to be encrypted, and there’s no need to encrypt the OS, which keeps the performance up.
Maybe you want the whole thing encrypted on your high performance laptop.
So, it’s difficult to define a sane default for everyone, thus making it an option for the end user to decide on.
Linux has more choice than Windows - and the encryption algorithm(s) can be verified - so it’s definitely the better choice.
For which OS?
It can be enabled at any time on Windows & Linux. It’s just optional.
It’s dumb and inexcusable IMO
No, it’s a choice, because:
History… encryption didn’t exist in the beginning. Upgrades won’t enable it.
Recovery… try telling the people that didn’t back up the encryption key - outside of the encrypted vault - that their data’s gone.
Performance… not such an issue these days, but it does slow your system down (and then everyone complains)
So, please continue to encrypt your data as you choose and be less judgemental of others, especially anyone new.
No excuses.
Wouldn’t that be Gentoo’s website?
Or… <using package manager of choice> install immich
Done.
No need to map internal & external ports, wrestle with permissions (or… good grief, run the container as root!), etc, etc.
It’s just… less faff.
Plus I save all that additional disk space by not having to install Docker! 😉
Don’t get me wrong; containers, chroot jails, and Type-1 & Type-2 hypervisors have all had their place in the history of my systems, I just don’t see them as a necessity here.
The additional software required to run it in a container, plus that container’s configuration, on top of Immich’s own configuration.
Just install & configure Immich, done.
Firstly, I agree with your main point.
Just an open thought: I wonder if Zscaler is reading settings in a hierarchy, i.e. if no env var is set, then check GNOME - just in case the user’s only making changes there…? Dunno…
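e.g. the two places it might be looking - the hierarchy idea is pure speculation on my part, but these are easy to compare:

```
# Proxy settings as environment variables (what most CLI tools read)
env | grep -i proxy

# Proxy settings as GNOME sees them (what the Settings GUI writes to)
gsettings get org.gnome.system.proxy mode
gsettings list-recursively org.gnome.system.proxy.http
```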