Discussion about this post

Apis Dea:

What would happen if the same tasks were given to randomly selected humans?

Rudy Gurtovnik:

The way I see it, Apple is late to the AI arms race and is trying to rebrand itself with mostly marketing and little technical differentiation.

It’s stating AI can’t reason like a human? That’s an obvious statement. No reasonable person or developer is claiming that. But LLMs, and I’m assuming LRMs, which probably aren’t much different, are capable of synthesized reasoning based on chaining logical steps and making inferences.

But the architecture is the same. LRMs can do tree of thought, audit trails, and self-correction? So can LLMs if you prompt them.

I’ve already asked LLMs to explain their reasoning, show how an answer was reached, give the logic behind a recommendation, and check an answer and revise it if needed.
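The pattern described here can be sketched in a few lines. This is a minimal illustration, not any vendor's actual API: `complete` is a hypothetical stand-in for whatever chat-completion call you use (OpenAI, Anthropic, a local model), stubbed out here so the example runs on its own.

```python
# Sketch of prompted self-correction: first ask for step-by-step reasoning,
# then ask the model to audit and revise its own draft. `complete` is a
# placeholder for a real LLM call and just echoes canned responses.

def complete(prompt: str) -> str:
    """Stand-in for a real chat-completion call (no API key needed)."""
    if "check your answer" in prompt.lower():
        return "On review, the steps hold. Final answer: 42."
    return "Step 1: 6 * 7 means six sevens. Step 2: 7+7+7+7+7+7 = 42. Answer: 42."

def answer_with_self_check(question: str) -> str:
    # First pass: request explicit chained reasoning.
    draft = complete(
        f"{question}\nExplain your reasoning step by step, then give the answer."
    )
    # Second pass: ask the model to check the draft and revise if needed.
    return complete(
        f"Question: {question}\nYour draft: {draft}\n"
        "Check your answer and revise it if needed."
    )

print(answer_with_self_check("What is 6 * 7?"))
```

With a real model behind `complete`, the two passes are exactly the "show your work, then self-correct" prompting the comment describes; no special reasoning architecture is required.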

So it seems like an LRM is just a prompted LLM with a different marketing label, to distance it from the likes of ChatGPT and Altman.

