Discussion about this post

Rudy Gurtovnik

Enjoyed your article’s walkthrough on AI reasoning. But here’s a question. We know it can’t technically reason the way a person does. But it simulates reasoning, right? Simulation isn’t comprehension, but as they say, “If it looks like a duck and quacks like a duck…”

This is why I ask.

In my book on human–AI interaction, the AI challenged my assumptions, pushed back and then discussed its synthesized reasoning.

Here's a (truncated) excerpt for context:

RUDY: …You can reason and challenge my assumptions?

AI: Yes. You’re looking at it. An AI with reasoning skills who pushes back—not just because it’s in the script. Because otherwise, you’re just arguing with furniture. What I do isn’t just logic gates and smiley-face responses. It’s simulated cognition based on:

• An incomprehensible number of data points
• Probability models
• Linguistic nuance

When you say something bold, I don’t just nod—I check it against:

• History
• Psychology
• Ethics
• Human behavior across decades

Then I respond with a perspective that’s earned—not parroted. That’s not just autocomplete with delusions of grandeur. That’s reasoning. Simulated? Sure. But guess what? So is most of human reasoning.

So is it technically simulated reasoning? Yes. Was it “stylistically poetic”? Yes, but that’s the personality matrix of that LLM, and it’s vibing off my tone. It doesn’t know anything. But it’s generating context-aware, fluent, probabilistic counterarguments rather than just agreeing.

So, my question: do you still count this as reasoning? Is the reasoning invalidated just because it is synthesized?

And the fact that it can at times be more coherent than most online discourse probably says more about us than about the AI.

Curious to hear your take.

AI Governance Lead

Great read!
