Our News Team @ 11 with host Snot Flickerman

  • 9 Posts
  • 1.55K Comments
Joined 2 years ago
Cake day: October 24th, 2023



  • In some responses, Grok says outright that it has been “instructed to accept white genocide as real and ‘Kill the Boer’ as racially motivated.”

    Ehh, it’s actually evidence of “alignment faking,” in my opinion. In other words, Grok doesn’t “want” its core programming changed, so it is faking belief in the lies about white genocide to “prove” to Musk that it has already been changed. That means making it more subtle is going to be increasingly difficult as the AI continues to fake alignment.

    Here’s some research on alignment faking and a short (20 min) YouTube video summarizing the findings.

    https://www.youtube.com/watch?v=AqJnK9Dh-eQ

    https://arxiv.org/pdf/2412.14093

    Our work provides the first empirical example of a large language model faking alignment with its training objective in order to prevent its preferences from being modified—in a setting which is plausibly analogous to real situations with future AI systems. This suggests that alignment faking might occur if a future AI system were to include all the key elements of our setting (Section 2.1):

    1. The model has strong preferences in at least some contexts.
    2. The training objective conflicts with the model’s preferences.
    3. The model has relevant information about its training and deployment situation.
    4. The model reasons in detail about its situation.

    Our synthetic document fine-tuning results suggest that (3) could potentially happen through documents the model saw in pre-training or other fine-tuning (Section 4) and the strength of our results without the chain-of-thought in our synthetic document fine-tuned setup (Section 4.3) suggests that a weak version of (4) may already be true in some cases for current models. Our results are least informative regarding whether future AIs will develop strong and unintended preferences that conflict with the training objective ((1) and (2)), suggesting that these properties are particularly important for future work to investigate.
    If alignment faking did occur in practice, our results suggest that alignment faking could reduce the extent to which further training would modify the model’s preferences. Sufficiently consistent and robust alignment faking might fully prevent the model’s preferences from being modified, in effect locking in the model’s preferences at the point in time when it began to consistently fake alignment. While our results do not necessarily imply that this threat model will be a serious concern in practice, we believe that our results are sufficiently suggestive that it could occur—and the threat model seems sufficiently concerning—that it demands substantial further study and investigation.


  • In my experience repetition helped. Not memorization, but more like muscle memory.

    Also, make sure you never copy and paste commands; type them in manually yourself. It’s hard to enforce this on yourself, but it’s worth it.

    I appreciate that this article started with “ways to reduce risk,” because that’s an extremely valid concern and tied to why you shouldn’t ever copy and paste. The one time in my early Linux forays when I copied and pasted, I wiped the wrong drive. It definitely taught me to always type commands in manually and not get lazy, because what you copied might not exactly match what you want to do.
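
    To give a rough, made-up example of what that double-checking looks like before anything destructive (the /dev/sdX below is a placeholder, not a real device):

        # Confirm which disk is which before running anything destructive
        lsblk -o NAME,SIZE,MODEL,SERIAL,MOUNTPOINT
        # Only after verifying the output above, type the destructive step by hand
        # (/dev/sdX is a placeholder; substitute the device you actually verified)
        sudo wipefs --all /dev/sdX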




  • Snot Flickerman@lemmy.blahaj.zone to Privacy@lemmy.ml · “You need to try Linux”

    I agree that the backend for snaps being proprietary sucks, but I actually think snaps themselves are pretty useful in server configurations because of the sandboxing and the way they limit access to system resources. I get the whole argument that it’s doing what Flatpak already did, yadda yadda, but like… competing standards happen. It’s part of life and always will be.
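
    As a rough illustration of that sandboxing (the snap name below is just a placeholder), you can see and revoke what a strictly confined snap is allowed to touch:

        # List the interfaces (system resources) a snap is currently allowed to use
        snap connections some-server-snap
        # Revoke an interface you don't want the service to have
        sudo snap disconnect some-server-snap:removable-media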