2025-12-08 What Is Lars Thinking ("WILT")

Submitted by Lars.Toomre on Mon, 12/08/2025 - 16:00
[Cover image: "Rebooting AI" by Gary Marcus and Ernest Davis]

Rebooting AI in the Age of Hallucinating Agents

A former client – now a good friend – and Lars exchanged holiday greetings this morning. He observed that conversations between them have become far less frequent than they were during the years when he managed billions in fixed income for a large institution and Lars ran a fixed-income trading desk on Wall Street as one of the original “bond vigilantes.”

He began with an anecdote (of questionable provenance) about a mutual acquaintance who was laid off this past summer yet faces the holiday season with remarkable equanimity. When pressed for an explanation, that individual replied: “I have dated the same woman for five years. I keep her photograph in my wallet. Whenever difficulties arise, I look at her picture and remind myself that if I can survive this psychopath (and a boss who supposedly was even worse!), I can survive anything.” [The story remains unverified. Confidence interval: narrowing.]

The real purpose of the call soon emerged. He asked – as two other long-time associates had in the preceding days – whether Lars would resume publishing “What Is Lars Thinking” ("WILT") notes and share more of the books and research that currently occupy his attention. The universe appears to have reached a consensus. Lars accepts the assignment.

Let's therefore restart this WILT series with a book that has aged exceptionally well:

Rebooting AI: Building Artificial Intelligence We Can Trust

Gary Marcus & Ernest Davis (2019, Pantheon)

Published before the large-language-model explosion – and before junior associates began asking Claude to draft waterfall provisions or invent SEC precedents – the book delivers a calm, rigorous critique of deep learning’s inherent limitations. Marcus and Davis’s core argument still stands: statistical pattern matching at scale, absent rich models of common sense, causal reasoning, and structured knowledge, will never yield systems worthy of trust in high-stakes financial, legal, or operational decisions.

Six years later, every hallucinated indenture clause, fabricated citation, and confidently wrong regulatory interpretation serves as empirical confirmation of their thesis. They were not wrong; they were simply early.

In an industry that increasingly delegates mission-critical decisions to probabilistic autocomplete engines, the book now reads less like prophecy and more like an operating manual for the present moment. Perhaps this makes clearer why Brass Rat Capital insists on the "complicated" concepts of `ontology`, `knowledge graph`, and `trustworthy AI`; a toy sketch of the idea follows.
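To make that point concrete, here is a minimal, hypothetical sketch – not BRC's actual stack, and every entity, field name, and fact in it is invented for illustration – of how a hand-curated knowledge graph can act as a guardrail: an AI-generated claim is surfaced only if it can be grounded in triples a human has already verified.

```python
# Illustrative sketch only: a toy "knowledge graph" of subject-predicate-object
# triples and a guardrail that refuses to surface an AI-generated claim unless
# the claim is grounded in the curated graph. All entities and facts below are
# hypothetical examples, not real securities data.

from typing import NamedTuple


class Triple(NamedTuple):
    subject: str
    predicate: str
    obj: str


# A tiny, hand-curated set of verified facts about a hypothetical bond issue.
KNOWLEDGE_GRAPH: set[Triple] = {
    Triple("ACME 5.25% 2031", "has_coupon", "5.25%"),
    Triple("ACME 5.25% 2031", "matures_in", "2031"),
    Triple("ACME 5.25% 2031", "governed_by", "NY law"),
}


def grounded(claim: Triple) -> bool:
    """Return True only if the claim appears verbatim in the curated graph."""
    return claim in KNOWLEDGE_GRAPH


# A model might "hallucinate" a call provision that no human ever wrote down.
hallucinated = Triple("ACME 5.25% 2031", "callable_from", "2026")
verified = Triple("ACME 5.25% 2031", "has_coupon", "5.25%")

for claim in (hallucinated, verified):
    status = "grounded" if grounded(claim) else "REJECTED: not in knowledge graph"
    print(f"{claim}: {status}")
```

The toy check is deliberately strict – exact-match grounding against verified triples – which is the opposite design choice from letting a model free-associate. Real systems relax the matching with ontology-aware reasoning, but the governance principle is the same: structured, auditable knowledge sits between the model and any high-stakes output.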

BRC FinTech intends to post WILT pieces more regularly in 2026 – short takes on books, research papers, market structure observations, and the occasional irreverent aside. Topics will remain centered on the intersection of investing, capital markets, technology, governance, and risk. Feedback, suggestions, and dissenting views are always welcome.

Happy Holidays, and may 2026 bring robust risk controls, trustworthy artificial intelligence, and relationships that require no photographic reminders of endurance.