2023-09-26 What Is Lars Thinking ("WILT")

Submitted by Lars.Toomre on Tue, 09/26/2023 - 09:00

Perhaps the biggest problem with Large Language Models ("LLMs"), and hence with generative AI, arises when a model generates factually incorrect text. The generative AI community now refers to this phenomenon as producing "hallucinations." As a result, a key AI research area has become "explainability": in short, how does a human being understand how the model came up with "that" result?

I strongly suspect that a major academic paper addressing this issue was released yesterday. It is entitled "Attention Satisfies: A Constraint-Satisfaction Lens on Factual Errors of Language Models," with lead author Mert Yuksekgonul of Stanford University.
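
As I read the abstract, the authors model factual queries as constraint-satisfaction problems and find that the attention a model pays to the "constraint" tokens (for example, the entity named in a factual question) tracks whether the eventual answer is correct. Below is a minimal sketch, in Python using the Hugging Face transformers library, of that core measurement. It is my own illustration, not the probe the paper proposes; the model, prompt, and constraint span are assumptions chosen for clarity.

```python
# Minimal sketch of the core measurement, as I understand it: how much
# attention does the model pay to the "constraint" tokens of a factual
# query? The model name, prompt, and constraint span below are my own
# illustrative assumptions, not taken from the paper.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumption: any causal LM that returns attentions
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

prompt = "The director of the movie Inception is"
constraint = " Inception"  # the entity that constrains the correct answer

inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_attentions=True)

# Locate the constraint's token positions inside the tokenized prompt.
c_ids = tokenizer(constraint, add_special_tokens=False)["input_ids"]
ids = inputs["input_ids"][0].tolist()
start = next(i for i in range(len(ids) - len(c_ids) + 1)
             if ids[i:i + len(c_ids)] == c_ids)
span = list(range(start, start + len(c_ids)))

# out.attentions is a tuple with one tensor per layer, each shaped
# (batch, heads, seq, seq). For the final position (the one that will
# generate the answer), sum the attention mass on the constraint tokens.
for layer, attn in enumerate(out.attentions):
    mass = attn[0, :, -1, span].sum(dim=-1)  # attention mass per head
    print(f"layer {layer:2d}: max head mass on constraint = "
          f"{mass.max().item():.3f}")
```

The suggestive finding, as I understand it, is that when this attention mass on the constraint tokens is low, the model is more likely to produce a factually incorrect completion. That is precisely the kind of explainability signal the field needs.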

If one cares about accurate generative AI, Brass Rat Capital would suggest you review this work.


Paper link: https://arxiv.org/pdf/2309.15098v1.pdf