Gerry McGovern

AI hallucinations can’t be stopped — but these techniques can limit their damage
Developers have tricks to stop artificial intelligence from making things up, but large language models are still struggling to tell the truth, the whole truth and nothing but the truth.
nature.com/articles/d41586-025

The Great Big Lying Machine can't stop lying. Even after all these billions in investment. Because lying is a feature, not a flaw of AI. Because the killer app of AI is advertising and propaganda.


@gerrymcgovern Just in case the audience didn't pick up on the sarcasm:
The article is itself a lie.

There is no such thing as a non-hallucination LLM output.

Thinking that hallucinations are some sort of waste product or accident presupposes that LLMs ever know what they are doing.

They do not, cannot. They do not have any understanding, do not have a mind, cannot reason, do not know anything.

All their answers are just stochastic hallucinations. Every last one of them.

@androcat @gerrymcgovern

We shouldn't even use terms like “hallucinations” or “thinking” in the context of #LLMs at all.

Those anthropomorphizations (used for marketing AI products) easily give the impression that such systems undergo human-like mental processes – which is definitely not the case. LLMs are based on statistical pattern recognition and probability calculations.

For those processes we already have accurate terms:

processing
text or image generation
error output
fabrication
data anomaly
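
To make "statistical pattern recognition and probability calculations" concrete: below is a minimal, made-up sketch of the next-word sampling loop at the core of text generation. The vocabulary and probabilities are invented purely for illustration; a real LLM learns billions of such conditional probabilities from its training text. Notice that nothing in the loop ever checks whether the output is true.

```python
import random

# Toy "language model": for each word, a probability distribution over
# possible next words. These numbers are made up for illustration; a real
# LLM learns billions of such conditional probabilities from training text.
NEXT_WORD_PROBS = {
    "the":    {"cat": 0.5, "dog": 0.3, "moon": 0.2},
    "cat":    {"sat": 0.6, "ran": 0.4},
    "dog":    {"sat": 0.5, "barked": 0.5},
    "moon":   {"sat": 0.2, "glowed": 0.8},
    "sat":    {"quietly": 1.0},
    "ran":    {"away": 1.0},
    "barked": {"loudly": 1.0},
    "glowed": {"softly": 1.0},
}

def generate(start: str, max_words: int = 5) -> str:
    """Repeatedly sample a statistically likely next word.

    Nothing in this loop checks facts or meaning: the output is whatever
    the probabilities happen to produce.
    """
    words = [start]
    for _ in range(max_words):
        dist = NEXT_WORD_PROBS.get(words[-1])
        if dist is None:
            break
        choices, weights = zip(*dist.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the moon sat quietly"
```

Scale that loop up enormously and you have "generation"; there is still no step where truth enters.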

@feliz @gerrymcgovern

In the case of hallucinations, a better word might be Uninterpretable Stochastic Gibberish.

As opposed to "normal output" which is Interpretable Stochastic Gibberish.

In either case, it is interpretable or not based on whether the user mistakenly thinks it makes sense.

@feliz

Why do you assume humans aren't doing statistical pattern recognition and probability calculations?

@androcat @gerrymcgovern

@troed

For one, the human brain is not digital and doesn't actually do any statistics - it's all electrochemical activity in living cells.

Neural networks are a crude emulation of brain tissue, running on extremely inferior hardware.

It doesn't really matter if humans are technically doing something that can be interpreted as "probability calculations".
That would be an extremely misleading interpretation, but it isn't relevant here.

Because humans DO know things, and humans do have minds, etc. etc.

These static piles of fermented statistics were generated by neural networks that even tiny insects would scoff at.

A garden-variety ant has about 200k neurons.
The poor emulation of a brain used for LLMs has maybe 7k neurons - and even if the two were comparable, the comparison would be extremely dire.

@feliz @gerrymcgovern

@androcat LLMs are no more or less digital or analog than neurons are.

I ask because I studied theories of consciousness and neuroscience quite a bit a few years back, and I was wondering how you could be so certain.

Any belief in there being something "magical" about human consciousness has no support in actual science. We've found nothing to make us believe we're anything other than "pattern matching machines, in a loop" (which I think is a Douglas Hofstadter quote, IIRC).

@feliz @gerrymcgovern

@androcat

Thank you for expressing your gut feelings. Unfortunately, I only do science-based debates.

@feliz @gerrymcgovern

@troed @androcat @feliz @gerrymcgovern

Consciousness (a nonscientific term) should not be brought into a scientific conversation about what it means to "understand". Even an Alexa can understand what "turn on the lights" or "set a reminder" means. LLMs definitionally cannot have that capability.

@JustinH

An Alexa no more understands a keyphrase than a bacterium understands a protein that triggers a stimulus-response.

But yeah, consciousness is a nebulous term. It isn't useful because we can't even detect consciousness in each other.

@troed @feliz @gerrymcgovern

@troed
Because humans are not computers, and our neural networks are biological. That's a different level of integration than computer electronics.

@feliz @troed Well, the whole thing about neurons is that they are, indeed, integrators of multiple inputs, mostly from other neurons. Copying that in a machine isn't that hard, but it'll never be as efficient unless the hardware is radically changed.

@Dss @troed We can describe some functions of neurons with mathematical logic. That doesn't mean that we have computers in front of us when we're looking at neurons. Our descriptions and replicas of neuronal functions are never the same as the real thing.

Neurons process signals from various sources and senses in a much more complex manner than anything mankind has built yet. Human memories are nothing like data on a hard disk. And biochemical processes have different properties and influences than electronics does.

@feliz

I think this is something that you believe, rather than something based in actual neuroscience.

@Dss

@troed @Dss It's a scientific fact that neurons are not electronic devices, and that neurotransmitters are chemical messengers. It's a scientific fact that hormones, drugs and toxins influence the function of neurons. That's nothing I made up. Just because some scientists work with analogies between neurons and electronic computers doesn't mean the two are the same. They're fundamentally different.

@feliz

Electronic versus chemical has nothing to do with it. In the end you have a reaction that either happens or doesn't, and you can perfectly well represent that in a computer program.

There's nothing "magic" in biology/chemistry/physics. It all ends up as math.

@Dss

@troed No. Math is a method of description. But the description is not the same as the process which is described. Let's not confuse the map with the territory.

Our mathematical descriptions are an approximation of the biological processes. But they do not fully represent or grasp what is happening in the human body.

I'm not talking about magic. But neural biology is more complex than the current models.

@feliz Sure - but why do you believe that in turn gives rise to things we cannot describe with maths?

As I said - we know of _nothing_ in neuroscience that would result in something "magical" about human consciousness, or "brain power" if you will, that we would not be able to run on other substrates.

I referenced Hofstadter before. "I Am a Strange Loop" is a book well worth reading, but here's an interview with him shortly after it was published that has a lot of the details:

tal.forum2.org/hofstadter_inte

tal.forum2.org: An Interview with Douglas R. Hofstadter, following "I Am a Strange Loop"

@troed I'm saying that we actually cannot describe all neural processes, brain functions and consciousness phenomena with math. Not right now, and not in the foreseeable future.

That does not mean I am claiming that anything magical happens in the human system. Only that it is rather presumptuous to claim we're anywhere near recreating the brain and nervous system just because we can run some mathematical models that simulate language processing.

@troed @feliz @androcat @gerrymcgovern

Nobody's assuming that. But what natural intelligence does that LLMs don't is form a predictive model of *the world*. That's an advantage if you want to guess right about whether a lion is likely to eat you, and it's a springboard to running the model with imagined actions, i.e. problem-solving intelligence.

LLMs only model words, a reduced map, not the territory. We're good at language so it can look similar sometimes, but it's an AGI dead end.

@troed @feliz @androcat @gerrymcgovern

This is why they say dumb things like: water wouldn't freeze at 2 K, because the freezing point of water is 273 K and 2 K is much lower, so the water would still be a gas.

The syntax is fine, the words are words that are statistically likely to follow in that order, but there's literally no *meaning* captured anywhere in the language model.
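
As a toy illustration of that point (the corpus and code here are invented for this reply, not taken from the article): a bigram model built from a few individually true sentences will happily recombine them into a fluent falsehood, because it only records which word tends to follow which, never what any of it means.

```python
import random
from collections import defaultdict

# A tiny corpus of individually true statements. The model below never sees
# their meaning, only which word follows which.
CORPUS = [
    "water freezes at 273 kelvin",
    "water boils at 373 kelvin and becomes a gas",
    "at 2 kelvin helium is still a liquid",
]

# Count word-to-next-word transitions: a bigram model, the crudest possible
# stand-in for the statistics an LLM learns at vast scale.
transitions = defaultdict(list)
for sentence in CORPUS:
    words = sentence.split()
    for current, nxt in zip(words, words[1:]):
        transitions[current].append(nxt)

def babble(start: str = "water", max_words: int = 10) -> str:
    """Chain together observed next words; grammar survives, meaning doesn't."""
    out = [start]
    for _ in range(max_words):
        followers = transitions.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

print(babble())  # can come out as "water boils at 2 kelvin and becomes a gas"
```

The grammar is inherited from the statistics; so is the falsehood.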

@androcat @gerrymcgovern Why was an LLM allowed to run for president? And what's worse: why did this nonsense-babbling thing get elected?

While LLMs aren't intelligent, methinks humans are overrated, too.

@androcat @gerrymcgovern

The #miracle of #LLMs is that they produce sequences of words in an order that actually makes sense to humans.

@paninid @gerrymcgovern Humans can make sense of things that were intended not to make sense (cf. "Colorless green ideas sleep furiously").

It's a feature of the human mind.

LLMs are text in, garbage out. No miracles.

@androcat @paninid @gerrymcgovern Amazing that science is possible after all, although many hallucinated theories lie by the wayside.

Politicians also seem to hallucinate policy solutions, and the people seem to agree. AI bashing is fine, but lying or hallucinating humans seem a more urgent problem to me.

@knitter

The lying humans are promoting LLMs.

It's a two-fer.

Giving people a more correct understanding of what LLMs do is an important antidote to the lying humans that seek to destroy the world to push this bullshit technology.

@paninid @gerrymcgovern

@gerrymcgovern "computer scientists tend to refer to all such blips as hallucinations"
naah....we call them bugs