• 0 Posts
  • 145 Comments
Joined 1 year ago
Cake day: March 22nd, 2024


  • The 6.6.x kernel series is LTS and should be fine as a downgrade target (6.7.x, not so much). Unless there’s something specific from the newer kernel versions that you need to drive that system, there shouldn’t be any issues. I’m still on a 6.6-series kernel.

    That being said, you could try troubleshooting this from the bottom up rather than the top down.

    First, use lspci -v to verify that the device is being correctly identified and associated with a driver.

    Next, invoke alsamixer and make sure everything is unmuted and your HD audio controller is the first sound device. The last time I had something like this happen to me, the issue turned out to be that the main soundcard slot was being hijacked by an HDMI audio output that I didn’t want and wasn’t using, and that was somehow muting the sound at the audio jack even when I tried to switch to it. A little mucking around in ALSA-level config files fixed everything.
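    For reference, this is roughly what that bottom-up check looks like in practice. The module name and config path below are assumptions based on a typical Intel HD Audio setup; substitute whatever your own lspci output reports:

        lspci -v | grep -A2 -i audio    # is the controller bound to a driver such as snd_hda_intel?
        cat /proc/asound/cards          # which device did ALSA register as card 0?
        aplay -l                        # list playback devices, including any HDMI outputs
        # If an HDMI output grabbed card 0, you can pin the analog controller first,
        # e.g. in /etc/modprobe.d/alsa.conf (the filename is arbitrary):
        #   options snd_hda_intel index=0
        # then reload snd_hda_intel (or reboot) and re-check alsamixer.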


  • Automated command-line jobs, in my case, which are technically not random but still annoying, because they don’t need to show a window at all. Interestingly, the one thing I can get to absolutely never pop up a window is a Perl script using Win32::Detached . . . which means that it is possible, but Microsoft doesn’t bother to expose such a facility.



  • nyan@sh.itjust.works to Linux@lemmy.ml · AMD vs Nvidia · 3 months ago

    I wouldn’t say the proprietary nvidia drivers are any worse than the open-source AMD drivers in terms of stability and performance (nouveau is far inferior to either). Their main issue is that they tend to be desupported long before the hardware breaks, leaving you with the choice of either nouveau or keeping an old kernel (and X version if using X—not sure how things work with Wayland) for compatibility with the old proprietary drivers.


  • nyan@sh.itjust.works to Linux@lemmy.ml · AMD vs Nvidia · 3 months ago

    If those are your criteria, I would go with AMD right now, because only the proprietary driver will get decent performance out of most nVidia cards. Nouveau is reverse-engineered and can’t tap into a lot of features, especially on newer cards, and while I seem to recall there is a new open-source driver in the works, there’s no way it’s mature enough to be an option for anyone but testers.



  • On Linux, the OOM reaper should come for the memory cannibal eventually, but it can take quite a while. Certainly it’s unlikely to be quick enough to avoid the desktop going unresponsive for a while. And it may take out a couple of other processes first, since it takes out the process holding the most memory rather than the one that’s trying to allocate, if I recall correctly.
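    If you want to see who the reaper is likely to pick, the kernel exposes its ranking under /proc. A rough sketch (the PID in the last line is just a placeholder):

        # Higher oom_score = more likely to be killed first
        for p in /proc/[0-9]*; do
            echo "$(cat "$p/oom_score" 2>/dev/null) $(cat "$p/comm" 2>/dev/null)"
        done | sort -rn | head
        # Nudge a process up or down the list (-1000 = never kill, +1000 = kill first)
        echo -500 | sudo tee /proc/1234/oom_score_adj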


  • Test the network from the lowest level if you haven’t already, using ping and the IPv4 address of a common server (for instance, ping 8.8.8.8) to bypass DNS.

    If it works, your DNS is borked.

    If it doesn’t, then there’s something more fundamentally wrong with your network configuration—I’d guess it’s an issue with the gateway IP address, which would mean the machine can’t figure out how to reach the wider Internet, although it seems super-weird for that to happen with DHCP in the mix. Maybe you left some vestiges of your old configuration behind in a file that your admin GUI doesn’t clean up, and it’s overriding DHCP; I don’t know.
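    To make that concrete, here is the order I’d check things in (resolvectl assumes systemd-resolved; on other setups look at /etc/resolv.conf instead):

        ping -c 3 8.8.8.8        # raw IPv4 connectivity, no DNS involved
        ip route show default    # is there a default gateway, and is it the right one?
        ip addr show             # did DHCP actually hand out an address?
        resolvectl status        # which DNS servers are in use (systemd-resolved only)
        nslookup lemmy.ml        # name resolution, once the above all look sane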



  • The performance boost provided by compiling for your specific CPU is real but not terribly large (<10% in the last tests I saw some years ago). Probably not noticeable on common arches unless you’re running CPU-intensive software frequently.

    Feature selection has some knock-on effects. Tailoring features means you don’t have to install masses of libraries for features you don’t want, each of which comes with its own bugs and security issues. The number of vulnerabilities added and the amount of disk space chewed up usually isn’t large for any one library, but once you’re talking about a hundred or more, it does add up.

    Occasionally, feature selection prevents mutually contradictory features from fighting each other—for instance, a custom kernel that doesn’t include the nouveau drivers isn’t going to have nouveau fighting the proprietary nvidia drivers for command of the system’s video card, as happened to an acquaintance of mine who was running Ubuntu (I ended up coaching her through blacklisting nouveau). These cases are very rare, however.

    Disabling features may allow software to run on rare or very new architectures where some libraries aren’t available, or aren’t available yet. This is more interesting for up-and-coming arches like riscv than dying ones like mips, but it holds for both.

    One specific pro-compile case I can think of that involves neither features nor optimization is that of aseprite, a pixel graphics program. The last time I checked, it had a rather strange licensing setup that made compiling it yourself the best choice for obtaining it legally.

    (Gentoo user, so I build everything myself. Except rust. That one isn’t worth the effort.)
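    For anyone wondering what feature selection and CPU tuning actually look like on Gentoo, a minimal sketch (the specific flags are illustrative, not a recommendation):

        # /etc/portage/make.conf
        COMMON_FLAGS="-O2 -pipe -march=native"    # optimize for the local CPU
        USE="alsa -bluetooth"                     # global feature selection
        VIDEO_CARDS="nvidia"                      # build only the video driver you actually use

        # /etc/portage/package.use/ffmpeg  (per-package feature trimming)
        media-video/ffmpeg -vaapi vulkan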




  • Thing is, even when Ubuntu’s software has been packaged outside Ubuntu, it’s so far failed to gain traction. Upstart and Unity were available from a Gentoo overlay at one point, but never achieved enough popularity for anyone to try to move them to the main tree. I seem to recall that Unity required a cartload of core system patches that were never upstreamed by Ubuntu to be able to work, which may have been a contributing factor. It’s possible that Ubuntu doesn’t want its homegrown software ported, which would make its contribution to diversity less than useful.

    > I’d add irrational hate against Canonical to the list of possible causes.

    Canonical’s done a few things that make it quite rational to hate them, though. I seem to remember an attempt to shoehorn advertising into Ubuntu, à la Microsoft—it was a while ago and they walked it back quickly, but it didn’t make them popular.

    (Also, I’m aware of the history of systemd, and Poettering is partly responsible for the hatred still focused on the software in some quarters. I won’t speak to his ability as a programmer or the quality of the resulting software, but he is terrible at communication.)

    > And you have fixed versions every half a year with a set of packages that is guaranteed to work together. On top of that, there’s an upgrade path to the next version - no reinstall needed.

    I’ve been upgrading one of my Gentoo systems continuously since 2008 with no reinstalls required—that’s the beauty of a rolling-release distro. And I’ve never had problems with packages not working together when installing normally from the main repository (shooting myself in the foot in creative ways while rolling my own packages or upgrades doesn’t count). Basic consistency of installed software should be a minimum requirement for any distro. I’m always amazed when some mainstream distro seems unable to handle dependencies in a sensible manner.

    I have nothing against Ubuntu—just not my cup of tea for my own use—and I don’t think it’s a bad distro to recommend to newcomers (I certainly wouldn’t recommend Gentoo!). That doesn’t mean it’s the best, or problem-free, or that its homegrown software is necessarily useful.


  • On the one hand, diversity is usually a good thing for its own sake, because it reduces the number of single points of failure in the system.

    On the gripping hand, none of Ubuntu’s many projects has ever become a long-term, distro-agnostic alternative to whatever it was supposed to replace, suggesting either low quality or insufficient effort.

    I’m . . . kind of torn. Not that I’m ever likely to switch from Gentoo to Ubuntu, so I guess it’s a moot point.