11 Comments
Chris J. Karr

Well done covering this.

It's worth noting that the conflict at the heart of this (symbolic vs. statistical reasoning) is an OLD fight in the AI community that's been fought for well over forty years, between the classical AI proponents (the symbolic camp) and the newer machine learning fans (the statistical camp).

Steve Berman

The money picked the statistical camp and bet wrong.

Chris J. Karr

The problem that the symbolic camp hasn't managed to overcome is scaling their systems up to a point where they are genuinely useful. We're certainly in a bubble at the moment with these deep learning systems, but once you blow away some of the hype smoke, there is still some useful value being created.

Last week, I introduced a system that uses statistical AND symbolic systems to moderate content in the context of using LLMs to improve some of our online mental health interventions[1].
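A hybrid of the two camps can be sketched roughly like this (all names and logic here are hypothetical illustrations, not the actual Simple Moderation pipeline from the linked post): a statistical risk score, stubbed in below where an LLM or classifier call would go, gated by symbolic rules that encode hard, human-written policy.

```python
import re

# Symbolic layer: deterministic, human-authored policy rules.
BLOCKED_PATTERNS = [re.compile(p, re.I) for p in [r"\bkill yourself\b"]]

def statistical_score(text: str) -> float:
    """Stand-in for a learned model's risk score in [0, 1]."""
    return 0.9 if "hate" in text.lower() else 0.1

def moderate(text: str, threshold: float = 0.5) -> str:
    # Symbolic rules win outright; no model call needed.
    if any(p.search(text) for p in BLOCKED_PATTERNS):
        return "block"
    # Otherwise, fall back to the statistical layer's judgment.
    return "flag" if statistical_score(text) >= threshold else "allow"

print(moderate("have a nice day"))  # -> allow
```

The design point is that the symbolic rules are auditable and override the statistical layer, so policy guarantees don't depend on the model behaving.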

The trick to all of this is to keep clear eyes and understand how this stuff all works. Right now, everyone's so enamored with the possibility of making big profits (pumping up a LARGE bubble that's big enough to mask other issues in the economy) that a lot of the discourse around this tech is more faith-based than value-based.

To me, it seems like we're in the late-'90s broadband build-out explosion before the crash. Lots of folks spending lots of money on tech and infrastructure before the bubble pops - data centers, power generation capacity, etc. - but this infrastructure will be the seeds of the next generation of innovation, after those innovators pay pennies on the dollar for it.

[1] https://bric.digital/newsletter/introducing-simple-moderation-real-time-moderation-for-llm-and-user-generated-content/

Steve Berman

Statistical LLMs had, and still have, the greatest promise of achieving Turing-level AI. The gap between that and AGI might seem small, but the model itself makes it unbridgeable. Some hybrid of LLM and neurosymbolic approaches, where the world model is the filter between the statistical output and reality, is probably the sweet spot. It's good we went down the LLM road, but doing it without guardrails is a bad idea.

Chris J. Karr

And for what it's worth, I never gave up my affection for Decision Trees[1]: models you can train in the statistical fashion with real-world data, but that remain understandable in a way that makes explaining outcomes pretty easy, and that allow humans to inspect the decision-making machinery and replace elements that are reflected in the data but contradict the world model we actually use.

[1] https://en.wikipedia.org/wiki/Decision_tree
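That inspect-and-replace property can be illustrated with a toy, hand-rolled "decision stump" (a one-split tree; the code and data below are hypothetical, not anyone's actual system): it is trained statistically from data, yet its whole decision-making machinery stays readable, and a human can overrule a learned element.

```python
from dataclasses import dataclass

@dataclass
class Stump:
    threshold: float   # learned split point on a single feature
    label_below: str   # prediction when feature < threshold
    label_above: str   # prediction otherwise

    def predict(self, x: float) -> str:
        return self.label_below if x < self.threshold else self.label_above

def train_stump(points: list[tuple[float, str]]) -> Stump:
    """Pick the (threshold, labels) pair minimizing training error."""
    labels = sorted({lbl for _, lbl in points})
    best_err, best = None, None
    for t in sorted(x for x, _ in points):
        for below, above in ((labels[0], labels[1]), (labels[1], labels[0])):
            err = sum(1 for x, lbl in points
                      if lbl != (below if x < t else above))
            if best_err is None or err < best_err:
                best_err, best = err, Stump(t, below, above)
    return best

# Train on toy age data with a rough minor/adult boundary.
data = [(5, "minor"), (12, "minor"), (16, "minor"),
        (19, "adult"), (30, "adult")]
model = train_stump(data)
print(model)  # the entire model is one inspectable rule

# A human can replace what the sample happened to imply:
model.threshold = 18.0  # swap the learned cut for the real-world boundary
```

A full decision tree is just a recursion of such splits, which is why the same inspectability scales to the whole structure.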

Curtis Stinespring

I think I like that but I'm not sure. You guys can sort all of this out before I learn a new vocabulary.

SGman

LLMs are an imperfect tool, one that most people should probably avoid for complex topics. Coding can benefit, but one must already know how to code, and do it well and correctly, before using an LLM to generate any code.

For example, vibe coding is likely to lead to serious issues with security and the like, because people who don't know how to code are relying on LLMs to do the correct things. Without the ability to understand the code and determine where it is correct, incorrect, or needs enhancement, they're likely to deploy buggy or compromised software.

Curtis Stinespring

Great discussion. Thanks. I am at a loss to understand how someone with your intellect could be confused about a $70 unpaid loan balance unless there were transactions in cash that generated no records. Something similar happened to a college buddy who is one of the very smartest (also most determined and stubborn) people I know. When his credit rating was threatened, he decided, like you, to pay up. It wasn't worth the hassle.

Steve Berman

Company lost the payoff check and it took 3 weeks to figure out it never got processed. The $70 was the daily interest until they got a replacement check. And no they never took responsibility for it.

Curtis Stinespring

Sounds like the Town of Braselton. They charge late fees even if they screw up the billing. I have learned to email the water department if the bill does not arrive by the fifth day of the month. I get prompt answers. Of course, I never remember when the semi-annual storm water assessment is due.

Steve Cheung

Freddie DeBoer is a lefty I read…who also happens to be a reasonable AI skeptic. He has made a strong case that the algorithmic prediction which undergirds current LLMs is a far cry from human thought. And he's elicited some hilarious hallucinations from GPT-5.

As they say, you can't believe everything you "read" on the internet. That's doubly true with AI, where it applies to what you "see" and "hear" as well.

Ironically, I got a strategy on how to mitigate against hallucination…by asking Perplexity. 😂
