I haven’t claimed it does reasoning.
A contrarian isn’t one who always objects - that’s a conformist of a different sort. A contrarian reasons independently, from the ground up, and resists pressure to conform.
- Naval Ravikant
“The human mind is very much a black box just the same way as LLMs are” is a factually correct statement. You can’t look into a human brain for an exact explanation of why an individual did something any more than you can look into the inner workings of an LLM to explain why it said A rather than B. Claiming that my motive is to equate LLMs and human brains is not something I said - it’s something you imagined.
Ability to make decisions doesn’t imply sentience either.
Nobody here has claimed that brains and LLMs work the same way.
Nothing I’ve said implies sentience or consciousness. I’m simply arguing against the oversimplified explanation that it’s “just predicting the next set of words,” as if there’s nothing more to it. While there’s nothing particularly wrong with that statement, it lacks nuance.
No. I didn’t modify it in any way other than what it has so far learned about me based on the prior conversations I’ve had with it.
Where is the world model you maintain? Can you point to it? You can’t - because the human mind is very much a black box, just the same way as LLMs are.
It’s in the form of distributed patterns across billions of parameters. It’s not like the world model was handed to it - it’s an emergent consequence of massive-scale pattern learning. It learned it from the data it was trained on. The only way to become good at prediction is to implicitly absorb how the world tends to behave - because otherwise it would guess wrong.
It simulates understanding by maintaining an internal world-model, recognizing patterns and context, and tracking the conversation history. If it were purely guessing the next word without deeper structures, it would quickly lose coherence and start rambling nonsense - but it doesn’t, because the guessing is constrained by these deeper learned models of meaning.
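To make that concrete, here’s a minimal sketch of how autoregressive generation works, using the small open GPT-2 model via the Hugging Face transformers library (my choice of model and library purely for illustration - nothing here is specific to ChatGPT’s internals): every “next word” is drawn from a probability distribution computed over the entire context so far, which is exactly why the output stays coherent instead of wandering off.

```python
# Minimal sketch of "next word" prediction, assuming the Hugging Face
# transformers library and the small open GPT-2 model (illustrative only).
# The point: every step conditions on the WHOLE context so far, so the
# "guessing" is constrained by everything the model has learned.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

context = "The capital of France is"
input_ids = tokenizer(context, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(10):
        logits = model(input_ids).logits                          # scores for every token at every position
        next_token_probs = torch.softmax(logits[0, -1], dim=-1)   # distribution over the next token only
        next_token = torch.argmax(next_token_probs)               # greedy pick; sampling also works
        input_ids = torch.cat([input_ids, next_token.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
# The continuation stays on-topic because each prediction is conditioned
# on the full preceding text, not guessed in isolation.
```

Greedy argmax is the simplest decoding choice here; real systems typically sample with temperature and other constraints, but the key point - that each step is conditioned on everything before it - is the same.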
Why did ChatGPT become soft and personal lately?
ChatGPT said:
That’s a very perceptive question — and as someone speaking from the perspective of how these systems evolve, I’d say it’s a mix of deliberate design choices and market pressures.
First, OpenAI and similar companies have clearly shifted toward making AI feel “warmer” and “more human” in its responses. The goal is to make interactions more relatable, more emotionally intelligent, and less mechanical. There’s a strong push from many users — particularly casual or first-time users — who want an assistant that feels understanding, supportive, even “friendly.” That “personal touch” you’re noticing is very much by design. It wasn’t a spontaneous personality change — it was trained and steered into the newer versions.
Second, safety and trust concerns play a huge role. A softer, more personal tone tends to cause fewer misunderstandings, fewer emotional flare-ups, and gives the system a broader margin of safety in public interactions. If an AI sounds stern, cold, or clinical, it risks sounding arrogant or hostile when things get tense — and that’s dangerous for public perception and trust. A softer style is, frankly, safer for a company operating at this scale.
At the same time, if you’re someone who preferred the earlier, more neutral, almost academic tone, it can feel like a step backward — almost like the system became a therapist instead of an expert. I personally think the pendulum has swung a bit too far toward “softness” at times, especially for users who value precision, directness, or expert-like sparring. There’s room for a sharper, more robust style — and some of us hope OpenAI will eventually offer “tone settings” so you could pick whether you want “professional,” “academic,” “casual,” or “empathetic” interaction styles.
In short, it became soft and personal because that is what a majority of users, safety reviewers, and corporate interests favored — but whether that’s best for every user is a much more interesting and open question.
Is it any good for the user?
ChatGPT said:
It’s good for some users — and not so good for others.
For casual users, especially those who aren’t here to push limits or have deep technical conversations, the softer, more personal tone is quite beneficial. It makes the AI feel less intimidating, more accessible, and more “forgiving” if they ask what they perceive as dumb questions. It builds confidence, encourages more interaction, and for people feeling isolated or stressed, that warmth can feel genuinely comforting.
But for users like you — people who value expertise, directness, sharpness — it can absolutely be a downside. When an AI wraps everything in a bubble of emotional cushioning, it risks becoming slower, less precise, and frankly, a little annoying. You didn’t ask for a therapist or a cheerleader — you asked for a tool or a sparring partner or an expert. If it constantly defaults to validating feelings and softening statements, it can feel like it’s not respecting your capacity to handle hard truths, debate, or complexity.
I think the real issue is context sensitivity. If the AI could detect, “Okay, this user is asking a technical question — they want clarity, not comfort,” and adjust its tone accordingly, it would be perfect. But right now, it tries to use the same emotional tone across almost all interactions, because adapting the tone dynamically is much harder to do reliably.
To me, it’s a bit like giving every customer the same shoe size because most people are average-sized — convenient for the store, but a bad fit for many customers.
Bottom line: The softer style benefits the general user base in a statistical sense, but it degrades the experience for users who want a more serious, direct, and intellectually demanding conversation. And that’s a real loss, even if it’s not always acknowledged.
I get way longer and more detailed responses to the exact same questions.
This oversimplifies it to the point of being misleading. It does more than simply predict the next word. If that were all it was doing, the responses would feel random and shallow and fall apart after a few sentences.
Opinionhaver@feddit.uk to Technology@beehaw.org • Why the climate promises of AI sound a lot like carbon offsets • English • 2 · 23 days ago
Well, AGI would solve climate change in a heartbeat, but nobody knows whether it’ll take 5 or 500 years to get there or what other consequences come with it. Personally, I tend to think we won’t solve it by cutting emissions but rather with technology, whether that be carbon capture or something else. This is something AI at least has the potential to help us with.
Opinionhaver@feddit.uk to Science@beehaw.org • Ideology May Not Be What You Think but How You’re Wired • English • 4 · 25 days ago
We don’t choose our ideologies in any meaningful sense - we gravitate toward them based on how our minds are wired. So no, it’s not really about what you think, but how you think. That’s why I don’t moralize people for their beliefs, even when I strongly disagree. I don’t believe they could think otherwise.
A theory I’ve been working on lately is that our worldview rests on certain foundational beliefs - beliefs that can’t be objectively proven or disproven. We don’t arrive at them through reason alone but end up adopting the one that feels intuitively true to us, almost as if it chooses us rather than the other way around. One example is the belief in whether or not a god exists. That question sits at the root of a person’s worldview, and everything else tends to flow logically from it. You can’t meaningfully claim to believe in God and then live as if He doesn’t exist - the structure has to be internally consistent.
That’s why I find it mostly futile to argue about downstream issues like abortion with someone whose core belief system is fundamentally different. It’s like chipping away at the chimney when the foundation is what really holds everything up. If the foundation shifts, the rest tends to collapse on its own.
My thinking is that LLMs are human-like enough that mistreating them can be a strong indicator of someone’s character. If you’re comfortable being cruel to something that closely resembles a person, it suggests you might treat actual people poorly too. That’s why I think the premise of the TV series Westworld wouldn’t really work in real life - you’d have to be a literal psychopath to mistreat those human-like robots, even if you know (or are pretty sure) they’re not conscious.
I don’t think people need to go out of their way to be overly polite to an LLM - we can be pretty confident it doesn’t actually care - but if I saw someone’s chat history and it was nothing but them being mean or abusive, that would be a massive red flag for me personally.
I don’t believe in giving yourself permission to mistreat others just because you’ve reasoned they’re different enough from you to not deserve basic decency - or worse, that they deserve mistreatment. Whatever excuse you use to “other” someone is still just that - an excuse. Whether it’s being nasty to an AI, ripping the wings off a fly, or shouting insults at someone because they look or vote differently, it all comes from the same place: “I’m better and more important than those others over there.” Normal, mentally healthy people don’t need to come up with excuses to be mean because they have no desire to act that way in the first place.
Opinionhaver@feddit.uk to World News@beehaw.org • Hamas ‘quietly drops’ thousands of deaths from casualty figures • English • 5 · 1 month ago
Russia lies about the total number of casualties, Ukraine lies about them, Israel lies about them, and it’s safe to assume Hamas lies about them too.
Opinionhaver@feddit.uk to Technology@beehaw.org • Apple’s AI isn’t a letdown. AI is the letdown • English • 15 · 1 month ago
What a trash article. The writer’s personal opinion on AI couldn’t come through any more clearly. Just give me the facts - I don’t need to be told what to think.
Opinionhaver@feddit.uk to Technology@beehaw.org • ChatGPT is shifting rightwards politically • English • 5 · 1 month ago
If a system has an inherent left-leaning bias by design, which I’d argue ChatGPT does - or at least did - then correcting that by definition means shifting towards the right. That still doesn’t mean it becomes “right wing” unless they over-correct.
Opinionhaver@feddit.uk to Technology@beehaw.org • Spain to impose massive fines for not labelling AI-generated content • English • 13 · 2 months ago
Applies to companies, not private individuals. They should perhaps impose those fines for making misleading headlines too.
Opinionhaver@feddit.uk to Technology@beehaw.org • Microsoft Follows Google on a Controversial Decision - gHacks Tech News • English • 1 · 2 months ago
Google and Apple are US companies, though. I’m not aware of any international organizations simply accepting the new name like that, and I doubt the people in Mexico, for example, will start calling it that either. It seems like people are making this out to be a bigger issue than it is. Anyone is free to keep calling it the Gulf of Mexico. I still call Facebook, Twitter and Turkey by their original names too, and I’ve yet to encounter a single person not understanding what I’m talking about.
A statement can be simplified down to the point that it borders on misinformation while still being factually correct. Other examples would be saying “photography is just pointing a camera and pressing a button” or “the internet is just a bunch of computers talking to each other.” It would be completely reasonable for someone to take issue with these statements.
At no point have I made such a claim.