Peter_Arbeitslos@discuss.tchncs.de to Privacy@lemmy.ml · English · 2 years ago
In the light of Snowden's latest post: What are your FOSS-AIs?
WalnutLum@lemmy.ml · 2 years ago
I've seen this said multiple times, but I'm not sure where the idea that model training is inherently non-deterministic comes from. I've trained a few very tiny models deterministically before.

umami_wasabi@lemmy.ml · 2 years ago
Are you sure you can train a model deterministically, down to each bit? As in, feeding the resulting weights into sha256sum will yield the same hash every time?

WalnutLum@lemmy.ml · 2 years ago
Yes, of course. There's nothing gestalt about model training: fixed inputs produce fixed outputs.
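The "fixed inputs, fixed outputs" claim can be illustrated with a minimal sketch: a tiny linear model trained with full-batch gradient descent in NumPy, where the only randomness comes from a seeded generator. Hashing the raw weight bytes mirrors the sha256sum analogy from the thread. This is a single-threaded CPU example; real GPU training needs extra care (deterministic kernels, fixed data order) that is outside this sketch, and all names here are illustrative, not from the thread.

```python
import hashlib
import numpy as np

def train(seed: int, steps: int = 200) -> np.ndarray:
    """Train a tiny linear model with full-batch gradient descent.

    All randomness flows from the seeded generator, and full-batch GD
    avoids nondeterministic shuffling, so each run is bit-reproducible.
    """
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(64, 4))            # fixed synthetic inputs
    true_w = np.array([1.0, -2.0, 0.5, 3.0])
    y = X @ true_w                          # fixed targets
    w = rng.normal(size=4)                  # seeded initial weights
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(X)
        w -= 0.1 * grad
    return w

def weight_hash(w: np.ndarray) -> str:
    # Hash the exact bytes of the weights, as in the sha256sum analogy.
    return hashlib.sha256(w.tobytes()).hexdigest()

run_a = weight_hash(train(seed=42))
run_b = weight_hash(train(seed=42))
assert run_a == run_b  # same seed and data -> bit-identical weights
```

Two runs with the same seed produce byte-identical weights and therefore the same SHA-256 digest; changing the seed changes the hash.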