Honestly, I’ll just send it back at this point. I have kernel panics that point to at least two of the cores being bad, which would explain the sporadic nature of the errors, and also why memtest ran fine: it only uses the first core by default. Too bad I didn’t think of that when running it, because memtest lets you select cores explicitly.
Welp, no change. I’m guessing the motherboard firmware already contained the latest microcode. Oh well, it was worth a try, thank you.
It’s a pain in the butt to swap CPUs one more time but that may pale in comparison to trying to convince the shop that a core is bad and having intermittent faults. 🤪
This sounds like my best shot, thank you.
I’ve installed the `amd-ucode` package. It already adds `microcode` to the `HOOKS` array in `/etc/mkinitcpio.conf` and runs `mkinitcpio -P`, but I’ve moved `microcode` before `autodetect` so it bundles microcode for all CPUs, not just the current one (to have it ready when I swap), and re-ran `mkinitcpio -P`. Also had to re-run `grub-mkconfig -o /boot/grub/grub.cfg`.

I’ve seen the message “Early uncompressed CPIO image generation successful” pass by, `lsinitcpio --early /boot/initramfs-6.12-x86_64.img | grep micro` shows `kernel/x86/microcode/AuthenticAMD.bin`, there’s a `/boot/amd-ucode.img`, and an `initrd` parameter for it in `grub.cfg`. I’ve also confirmed that `/usr/lib/firmware/amd-ucode/README` lists an update for that new CPU (and for the current one, speaking of which).

Now, from what I understand, all I have to do is reboot and the early stage will apply the update?

Any idea what it looks like when it applies the microcode? Will it appear in `dmesg` after boot or is it something that happens too early in the boot process?
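For anyone landing here later, a minimal sketch of the setup described above (the exact `HOOKS` contents vary per system; the only point being illustrated is that `microcode` sits before `autodetect`):

```sh
# /etc/mkinitcpio.conf (excerpt): with microcode before autodetect, the early
# image bundles microcode for every supported CPU, not just the one installed
HOOKS=(base udev microcode autodetect modconf kms keyboard keymap consolefont block filesystems fsck)
```

```sh
# Rebuild all initramfs presets, then regenerate the GRUB config
mkinitcpio -P
grub-mkconfig -o /boot/grub/grub.cfg

# After rebooting, check whether the early loader logged anything
dmesg | grep -i microcode
```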
BIOS is up to date, CPU model explicitly listed as supported, memtest ran fine, not using XMP profiles.
All hardware is the same, I’m trying to upgrade from a Ryzen 3100 so everything should be compatible. Both old and new CPU have a 65W TDP.
I’m on Manjaro, everything is up to date, kernel is 6.12.17.
Memory runs at 2133 MHz, same as for the other CPU. I usually don’t tweak BIOS much if at all from the default settings, just change the boot drive and stuff like “don’t show full logo at startup”.
I’ve added some voltage readings to the post and answered some other comments here.
Everything is up to date as far as I can tell, I did Windows too.
memtest ran fine for a couple of hours, but the CPU stress test hung partway through, while CPU temp was around 75C.
Yep, it’s explicitly on the supported CPU list and the BIOS is up to date.
RAM is indeed at 2133 MHz and the cooling is great; got a tower cooler (Scythe Kotetsu Mark II), idle temps are in the low 30s C, stress temp was 76C.
Motherboard is a Gigabyte B450 Aorus M. It’s fully updated and support for this particular CPU is explicitly listed in a past revision of the mobo firmware.
Manual doesn’t list any specific CPU settings, but their website says stepping `A0`, and that’s what the defaults were set to. Also I got “core speed: 400 MHz” and “multiplier: x 4.0 (14-36)”.

> even some normal batch cpus might sometimes require a bit more (or less) juice or a system tweak
What does that involve? I wouldn’t know where to begin changing voltages or other parameters. I suspect I shouldn’t just faff about in the BIOS and hope for the best. :/
lemmyvore@feddit.nl to Linux@lemmy.ml • Mozilla’s massive lapse in judgement causes clash with uBlock Origin developer

The dev has not made available any means to donate to him directly. He asks that people donate to the maintainers of the block lists instead.
Linux printing is very complex. Before Foomatic came along you got to experience it in all its glory, and setting up a working printing chain was a pain. The Foomatic Wikipedia page has a diagram that will make your head spin.
lemmyvore@feddit.nl to Linux@lemmy.ml • is there any way to increase the size of my /var directory under debian 12.7? (flatpak related)

Great trick, I had no idea Flatpak can use an existing install as a repo!
lemmyvore@feddit.nl to Linux@lemmy.ml • is there any way to increase the size of my /var directory under debian 12.7? (flatpak related)

If you end up with resizing /var as the only solution, please post your partition layout first and ask; don’t rush into it. A screenshot from an app like Disk Manager or GParted should do it, and we’ll explain the steps and the risks.
When you’re ready to resize, you MUST use a bootable stick, not resize from inside the running system. Make a stick with something like Ventoy, drop the GParted Live ISO on it, then boot from the stick and pick GParted Live. Write down the steps beforehand, be careful what you do, and hope there’s no power outage during the resize.
lemmyvore@feddit.nl to Linux@lemmy.ml • is there any way to increase the size of my /var directory under debian 12.7? (flatpak related)

The safest method, if your /home has enough space, is to use it instead of /var for (some) Flatpak installs. You can force any Flatpak install to go to /home by adding `--user` to the command.

If you look at the output of `flatpak list` it will tell you which packages are installed in the user home dir and which in the system location (/var). You can also show the size of each package with `flatpak list --columns=name,application,version,size,installation`.

I don’t think you can move installed apps directly between system/user like Steam can (Flatpak is REALLY overdue for a good package manager), but you can uninstall apps from the system, run `flatpak remove --unused`, then install them again with `--user`.

Please note that apps installed with `--user` are only seen by the user that installed them. Also, you’ll have to clean up separately for system and user(s) in the future (`flatpak remove --unused` for system, then `flatpak remove --unused --user` for each user).
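In case it helps the next person, a sketch of the uninstall/reinstall dance described above (`org.example.App` and the `flathub` remote name are placeholders; substitute your actual application ID and remote):

```sh
# See where each app lives (system vs. user) and how big it is
flatpak list --columns=name,application,version,size,installation

# Move an app from the system installation to the user one
flatpak uninstall org.example.App     # remove the system copy
flatpak remove --unused               # drop runtimes nothing needs anymore
flatpak install --user flathub org.example.App
```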
It’s not an issue on Arch & derivatives, due to the simple fact I mentioned above: third-party (AUR) packages are never allowed to use the name of an official package.
If a third-party package was already using a name that a new official package wants to use, users have to manually uninstall the third-party package before they can install the official one, and can never re-install the third-party package unless it changes its name.
It also helps that there’s only one third-party repo (the AUR) so it prevents name overlaps among third-party packages. Although that’s of secondary importance since it can be bypassed by crafting custom packages locally.
I appreciate the difficulty of enacting such a rule on Debian or Ubuntu now, considering the vast amount of already existing, widely established third-party repos, and also the fact that Debian official repos contain 3-4 times as many packages as Arch official repos. Which is why I think there’s no way to fix this aspect of Debian/Ubuntu anymore.
I’m not saying that makes them unusable… but I believe that anybody who uses them should be [made] aware of this caveat. It’s not readily apparent and by the time it bites a new user she’s probably already invested a couple of years in them.
Interesting, I’ll keep it in mind.
Still not sure it would help in all cases. Particularly when 3rd party repos have to override core packages because they need to be patched to support whatever they’re installing. Which is another very bad practice in the Ubuntu/Debian world, granted.
I’m not sure how that would help. First of all, it would still end up blocking proper updates. Secondly, it’s hard to figure out what exactly you’re supposed to pin.
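For context, pinning on Debian/Ubuntu happens through apt preferences; a minimal sketch, with a made-up origin, of what you’d have to write per repo:

```
# /etc/apt/preferences.d/99-thirdparty (sketch; the origin is hypothetical)
# Keep this repo's packages at low priority so they never shadow core packages
Package: *
Pin: origin ppa.example.org
Pin-Priority: 100
```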
Third party package mechanism is fundamentally broken in Ubuntu (and in Debian).
Third-party repos should never be allowed to use package names from the core repos. But they are, so they pretend to be core packages but use different version names, and at upgrade time the updater doesn’t know what to do with those versions or how to resolve dependencies.
That leaves you with a broken system where you can’t upgrade, and eventually can’t do anything except a clean reinstall.
After this happened several times while using Ubuntu I resorted to leaving more and more time between major upgrades, running old versions on extended support or even unsupported.
Eventually I figured that if I’m gonna reinstall from scratch I might as well install a different distro.
I should note I still run Debian on my server, because that’s a basic install with just core packages and everything else runs in Docker.
So if you delegate your package management to a completely different tool, like Flatpak, I guess you can continue to use Ubuntu. But it seems dumb to be required to resort to Flatpak to make Ubuntu usable.
Things like desktop automation, screen sharing, screen recording, remote desktop etc. are incredibly broken, with no hope in sight because the core design of Wayland simply didn’t account for them(!?), apparently.
Add to that the decision to push everything downstream into compositors, which led to widespread feature fragmentation and duplicated effort.
Add to that antagonizing the largest graphics chipset manufacturer (by usage among Linux desktop users) for no good reason. Nvidia has never had an incentive to cater to the Linux desktop, so Linux desktop users sending them bad vibes is… neither here nor there. It certainly won’t make them move faster.
Add to that the million little bugs that crop up when you try to use Wayland with any of the desktop apps whose developers aren’t snorting the Koolaid and dedicating outstanding effort to catching up with Wayland – which is most of them.
I cannot use Wayland.
I’m an average Linux desktop user, who has an Nvidia card, has no need for Wayland “security”, doesn’t have multiple monitors with different refresh rates, uses desktop automation, screen sharing, screen recording, remote desktop on a daily basis, and uses lots of apps which don’t work perfectly with Wayland.
…how and why would I subject myself to it? I’d have to be a masochist.