

- Red Hat Linux 5.1 - 7.x
- Slackware 7.0 - 12.0
- Ubuntu 6.10 - 9.10
- Slackware 13.37 - 14.1
- Mint 16 - 17
- Arch
Yeah god forbid people have some interesting discussion on this platform, right?
The post doesn’t answer the questions; that’s why I asked.
It says:
All running on a krun microVM with FEX and full TSO support 💪
I was not expecting Party Animals to run! That’s a DX11 game, running with the classic WineD3D on our OpenGL 4.6 driver!
Now I know some of these words, but it does not answer my question.
So how does that work given that most Steam games are x86/x64 and the M2 is an ARM processor? Does it emulate an x86 CPU? Isn’t that slow, given that it’s an entirely different architecture, or is there some kind of secret sauce?
I ran it perfectly on a 33MHz 486 with 4MB RAM for a long time. Even Doom II with some of its heavier maps ran fine.
“Perfectly” would mean it ran at 35fps, the maximum framerate DOS Doom is capped at. In the standard Doom benchmark, a dx33 gets about half that: 18fps average in demo3 of the shareware version with the window size reduced 1 step. Demo3 runs on E1M7, which isn’t the heaviest map, so heavier maps would bog the dx33 down even more.
I’m sure you found that acceptable at the time, and that you look back on it with slightly rose-tinted glasses of nostalgia, but a dx2/66 and preferably even better definitely gave you a much better experience, which was my point.
If anyone can enlighten me: is this pretty much why you can find Doom on almost any platform, because of its Linux source port roots?
I mean yeah. Doom was extremely popular and had a huge cultural impact in the 90s. It was also the first game of that magnitude of which the source was freely released. So naturally people tried to port it to everything, and “but can it run Doom?” became a meme on its own.
It also helps that the system requirements are very modest by today’s standards.
It ran like absolute ass on 386 hardware though, and it required at least 4MB of RAM which was also not so common for 386 computers. Source: I had a 386 at the time, couldn’t play Doom until I got a Pentium a few years later.
Even on lower clocked 486 hardware it wasn’t that great. IIRC, it needed about a 486 DX2/66 to really start to shine.
What has Rocky done?
Also, I 100% understand not liking Oracle as a company, but anyone can use OEL freely without ever having to deal with Oracle the company, and it’s a damn good RHEL substitute.
Without knowing what was being hosted, the only surefire way would be pulling a complete disk image with cat or dd.
That’s not surefire, unless you’re doing it offline. If the data is in motion (like a database that’s being updated), you will end up with an inconsistent or corrupt backup.
Surefire in that case would be something like an lvm snapshot.
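To make the dd approach concrete, here is a minimal sketch. On a real system the input would be a block device such as `/dev/sda`; it is demonstrated on a scratch file here so it can run anywhere, and the paths are examples:

```shell
# Raw-image copy with dd. On a real disk this would be something like:
#   dd if=/dev/sda of=/backup/sda.img bs=4M status=progress
# Demonstrated on a scratch file instead of a block device:
printf 'pretend this is a disk' > /tmp/fake_disk
dd if=/tmp/fake_disk of=/tmp/fake_disk.img bs=1M
# Verify the image is byte-identical to the source:
cmp /tmp/fake_disk /tmp/fake_disk.img && echo "image matches source"
```

As noted above, this only yields a consistent image if nothing writes to the source while dd runs; an LVM snapshot (`lvcreate --snapshot`) freezes a point-in-time view first, so the copy taken from the snapshot is internally consistent.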
If you wanted to stay on a similar system, RHEL 9 would be a good option, or one of its “as similar as possible” rebuilds like AlmaLinux.
No love for Rocky?
Also Oracle Linux is still free, and fully compatible with RHEL.
How the fuck am I supposed to know that Network Manager won’t support DNS over TLS
Read the documentation? Use Google?
The very first hit when you google “dns over tls tumbleweed” provides the answer: https://dev.to/archerallstars/using-dns-over-tls-on-opensuse-linux-in-4-easy-steps-enable-cloud-firewall-for-free-today-2job
A more generic query “dns over tls linux” gives this, which works just the same: https://medium.com/@jawadalkassim/enable-dns-over-tls-in-linux-using-systemd-b03e44448c1c
Both google searches return several more hits that basically say the same thing.
Even the NetworkManager reference manual refers you to systemd-resolved as the solution: https://www.networkmanager.dev/docs/api/latest/settings-connection.html
| Key Name | Value Type | Description |
|---|---|---|
| dns-over-tls | int32 | Whether DNSOverTls (dns-over-tls) is enabled for the connection. DNSOverTls is a technology which uses TLS to encrypt DNS traffic. The permitted values are: “yes” (2) use DNSOverTls and disable fallback, “opportunistic” (1) use DNSOverTls but allow fallback to unencrypted resolution, “no” (0) don’t ever use DNSOverTls. If unspecified, “default” depends on the plugin used. systemd-resolved uses the global setting. This feature requires a plugin which supports DNSOverTls; otherwise the setting has no effect. One such plugin is dns-systemd-resolved. |
I don’t use NetworkManager, I’ve never even used Tumbleweed and I found the answer in all of 10 minutes. Of course that doesn’t help if you’re so clueless that you didn’t even know that you were using DNS-over-TLS, or that DoT is a very recent development that differs significantly from regular DNS and that it requires a DNS resolver that supports it.
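For completeness, the systemd-resolved route those guides describe boils down to a small config fragment. The resolver below (Cloudflare’s public DoT endpoint) is just an illustration; any DoT-capable resolver works, Mullvad’s included:

```ini
# /etc/systemd/resolved.conf  (then: systemctl restart systemd-resolved)
[Resolve]
# Format is address#hostname; the hostname is used to validate the TLS certificate.
DNS=1.1.1.1#cloudflare-dns.com
DNSOverTLS=yes
```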
when every other operating system does?
Like Windows 10? (Hint: it doesn’t)
You use Arch. Mr skillful
Who cares what I use. When I’m messing with something I don’t understand, I at least read the documentation first instead of complaining on the internet and calling the whole community toxic and, I quote, “Butthurt Linux gobblers” when you get the slightest bit of pushback.
I have had so many instances of having to spend hours upon hours upon hours just to figure out how to do some basic shit on Linux that I can do on every operating system within a matter of 5 minutes
skill issue.
Read the post. The user obviously didn’t even know that Mullvad uses DNS over TLS and that the other providers used regular DNS, nor did he know how to properly troubleshoot a DNS issue, which is a skill you should have on any OS if you’re going to mess about with DNS settings.
LOL this isn’t even a Linux issue. This is an “I’m confused about how DNS works” issue.
Multilib packages aren’t installed by default just by enabling the multilib repo, so yes you need to find the lib32 libraries your application needs and install them by hand.
Perhaps it’s a 32-bit application and it needs `lib32-zlib`.
What does `ldd ./runner` say?
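For anyone following along, checking a binary’s library dependencies looks like this. `/bin/ls` stands in for the `runner` binary from the question, and `lib32-zlib` is just one plausible candidate package:

```shell
# List the shared libraries a binary links against; any line ending in
# "not found" is a library you still need to install.
ldd /bin/ls
# Count the missing ones (grep -c prints 0 when none are missing):
ldd /bin/ls | grep -c "not found" || true
# On Arch, a missing 32-bit libz.so.1 would typically mean:
#   sudo pacman -S lib32-zlib
```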
But everyone says it’s always breaking and causing problems. That’s because of users, not OS… right?
It’s an exaggeration, it doesn’t always break but yes it occasionally does. Any Arch user who tells you otherwise is lying or hasn’t used Arch for very long yet.
That’s because of users, not OS… right?
No, it’s because of regressions in new releases. Arch relentlessly marches forward and always tries to give you the latest-and-greatest version of any package on your system. There is obviously some testing done, but it can never be ruled out that newer software contains bugs and regressions that slip through testing and end up being released.
To give an example of such a regression, the past few weeks there have been some kernel releases with broken bluetooth support for the (very common) Intel AX200 chipset. It is fixed now, but if you wanted to use bluetooth, Arch was in fact broken for some time.
The fix is usually to temporarily roll back the offending packages until the issue is fixed upstream or until a workaround is found. It does mean you will occasionally have to spend some time diagnosing issues and checking user forums to see if other users are having the same problem.
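The rollback workflow, sketched; the package and version here are made-up examples, and you would only downgrade and pin what actually broke:

```shell
# Roll back to the previously installed version from pacman's local cache
# (version number is a hypothetical example):
sudo pacman -U /var/cache/pacman/pkg/linux-6.8.9.arch1-1-x86_64.pkg.tar.zst
# Then hold the package at that version until the fix lands upstream,
# by adding it to IgnorePkg in /etc/pacman.conf:
#   IgnorePkg = linux
```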
There was a short period of time when Enlightenment was the default window manager for Gnome, later to be replaced by Sawfish. It was a hideous experience, by the way.
Early Gnome was weird. The Gnome File Manager was also originally based on the terminal program Midnight Commander.
It was a bit rocky coming over from Plasma 5, but it has settled in nicely now.
Oh and don’t forget to take backups of your /home. That’s good practice for every desktop environment.
The config files of the major desktop environments have become a mess though. Plasma absolutely shits files all over `~/.config` and `~/.local/share`, where they sit mingled together with the config files of all your other applications, and most of it is thoroughly undocumented. I’ve been in the situation where I wanted to restore a previous state of my Plasma desktop from my backups, or just start with a clean default desktop, and there is just no straightforward way to do that, short of nuking all your configurations.
Doing a quick find query in my current home directory, there are 57 directories and 79 config files that have either plasma or kde in the name, and that doesn’t even include all the `~/.config/*` files belonging to Plasma or KDE components that don’t have it in their name explicitly (e.g. `dolphinrc`, `katerc`, `kwinrc`, `powerdevilrc`, `bluedevilglobalrc`, …)
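The tally above came from a find query along these lines; exact counts will differ per system, and the name patterns are the point:

```shell
# Count files and directories under ~/.config and ~/.local/share whose
# name contains "plasma" or "kde" (case-insensitive):
mkdir -p "$HOME/.config" "$HOME/.local/share"   # ensure the paths exist
find "$HOME/.config" "$HOME/.local/share" \
    \( -iname '*plasma*' -o -iname '*kde*' \) | wc -l
```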
It was much simpler in the old days when you just had something like a `~/.fvwmrc` file that was easy to back up and restore; even early KDE used to store everything together in a `~/.kde` directory.