• 𝒍𝒆𝒎𝒂𝒏𝒏
    3 · 11 months ago

    Yeah, that’s a no from me 😂 What causes this anyway? A badly thought-out fine-tuning dataset?

    I haven’t had a response that out of touch from the few LLaMA variants I’ve messed around with in chat mode.

    • ☆ Yσɠƚԋσʂ ☆OP
      -2 · 11 months ago

      Probably just poor tuning, but in general it’s pretty hard to guarantee that the model won’t do something unexpected, which is why it’s a terrible idea to use LLMs for something like this.