• Kichae
    11 months ago

    when it is generating something that it wasn’t trained on, it could present an incorrect answer.

    Not could, will. It’s basically guaranteed to start spitting out garbage once it’s extrapolating beyond the training data. Any semblance of correctness is just luck at that point.

    This is true for basically all models, everywhere.
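    The failure mode above is easy to demonstrate with even the simplest model. A minimal sketch (my own illustration, not from the original comment): fit a polynomial to sin(x) over a fixed interval, then evaluate it inside and outside that interval. Inside the training range the fit is close; just beyond it, the polynomial diverges wildly from the true function.

    ```python
    import numpy as np

    # "Training data": samples of sin(x) on [0, 2*pi].
    x_train = np.linspace(0, 2 * np.pi, 50)
    y_train = np.sin(x_train)

    # Fit a degree-7 polynomial by least squares.
    coeffs = np.polyfit(x_train, y_train, deg=7)

    # Interpolation: a point inside the training range.
    x_in = np.pi / 3
    err_in = abs(np.polyval(coeffs, x_in) - np.sin(x_in))

    # Extrapolation: a point well past the training range.
    # The polynomial's high-order terms dominate and the
    # prediction bears no resemblance to sin(x).
    x_out = 3 * np.pi
    err_out = abs(np.polyval(coeffs, x_out) - np.sin(x_out))

    print(f"in-range error:     {err_in:.4f}")
    print(f"out-of-range error: {err_out:.4f}")
    ```

    Any apparent accuracy outside the fitted interval really is luck: nothing in the least-squares objective constrains the model there, which is the same basic reason statistical models of all kinds degrade when pushed beyond their training distribution.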