36 Comments

TL;DR: GIGO (garbage in, garbage out).

AI is dependent on human input, and so cannot determine truth from falsehood. It might recognize probabilities based on information to which it has access, but there is a nuanced difference between facts and Truth. As long as humans run the input and the algorithm, AI will be handicapped. The recent trouble with Google's AI images is an excellent example of how humans can manipulate how AI determines even the facts. I'm confident that there were no Japanese Nazis or female East Indian popes (https://www.theverge.com/2024/2/22/24079876/google-gemini-ai-photos-people-pause). Meta has produced similar results (https://www.axios.com/2024/03/01/meta-ai-google-gemini-black-founding-fathers).
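
To make the GIGO point concrete, here's a toy Python sketch (entirely illustrative, nothing like how Gemini actually works): a "model" that learns nothing but frequencies will confidently report back whatever proportions its human-curated input contained.

```python
from collections import Counter

def train(examples):
    """'Learn' nothing but the frequencies present in the input."""
    counts = Counter(examples)
    total = sum(counts.values())
    return {label: n / total for label, n in counts.items()}

# Skewed, human-curated input; the model has no independent check on it.
model = train(["female pope"] * 8 + ["male pope"] * 2)
print(model)  # {'female pope': 0.8, 'male pope': 0.2}
```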

"It might recognize probabilities based on information to which it has access, but there is a nuanced difference between facts and Truth."

It seems AI can't even get the facts right. Those gaffes are hilarious!

Right!?!

I heard Sam Altman on a podcast talk about OpenAI's problems with "hallucinations". The key, he said, is to make them controllable. Sometimes you want an answer with extremely high verifiability, so you tamp down the guessing; other times you want creative guesswork, so you open up the throttle. These Google AI images clearly had the throttle wide open. Watson's performance on Jeopardy ages ago was much more factual.
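
Here's a minimal sketch of what that throttle looks like in practice, assuming it corresponds to the standard temperature parameter used in LLM sampling (my gloss, not Altman's words). Low temperature tamps down the guessing; high temperature opens it up.

```python
import math, random

def sample(logits, temperature):
    """Sample an index in proportion to softmax(logits / temperature)."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]  # subtract max for numerical stability
    total = sum(exps)
    return random.choices(range(len(logits)), weights=[e / total for e in exps])[0]

logits = [3.0, 1.0, 0.5]  # the model's raw preference over three candidate answers
print([sample(logits, 0.1) for _ in range(10)])  # throttle down: almost always answer 0
print([sample(logits, 5.0) for _ in range(10)])  # throttle open: creative guessing
```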

Dr. Ginger Campbell's book "Are You Sure? The Unconscious Origins of Certainty" is a super interesting look at this feeling of certainty in ourselves.

https://brainsciencepodcast.com/2020-products/areyousure-pdf

AI will only become capable of knowledge when the actual components of consciousness are incorporated into its design. This could mean quantum computing. I’m really not sure. I do know for damn sure that our minds aren’t just highly complex LLMs.

Thanks for commenting!

I agree our minds are something different from LLMs and that it's not just a matter of scale, especially considering all the data we'd have to be fed (and remember) for that to be true. And it doesn't seem likely LLMs would ever equal human language, even if granted more data.

I answered Yes to the AI question, assuming the sense of "truth" here is the world-model context discussed in the chapter. I don't think there's any particular obstacle to an engineered system eventually being able to have that. But that doesn't mean any of the LLM-type systems are anywhere close, or even moving in the right direction. Self-driving cars might be, but they have a very long way to go before they break through what is often called "the barrier of meaning".

"I don't think there's any particular obstacle to an engineered system eventually being able to have that."

Do you think an engineered system could have common sense (which, I would think, would involve knowledge beyond rules and facts)?

Sure, eventually. Of course, I think it's rules all the way down.

Loved the chapter! As a big etymology and semantics nerd, I could read about this topic forever!

Regarding the poll/question:

I think conversations like these are muddied by our understanding of AI. We've started using the term prematurely, and it's affecting everyone's ability to think deeply about what artificial intelligence actually means and what its true form will look like. I do agree with Sam Altman's decision to rush this stuff to the public and expose it to a general audience though. Conversations and questions about AI are the most important discussions we need to be having, and they weren't happening before OpenAI made their program public.

We haven't developed AI yet. AI does not exist. Yes, we have LLMs, MLAs, NLG and NLP systems, but none of them rises to the definition of intelligence yet. These programs are all still in their infancy; it's as if we're asking "Can phones become computers?" in 1995. Angela Collier has a great video about this concept. https://www.youtube.com/watch?v=EUrOxh_0leE&t=191s

The question, therefore, is somewhat self-answering. AI will exist once questions about its efficacy cease to exist. It will become obvious, like phones becoming computers.

Thanks! And thanks for taking the time to comment!

"AI will exist once questions about its efficacy cease to exist."

That's an interesting point. If we really thought the existing systems were intelligent, we wouldn't be debating the issue.

I answered "Other" because [a] I don't know and [b] it depends on what you mean by "understand" and "truth". In some sense, all computers do is repeat the "truth" of the "facts" they have. And Searle's Chinese Room can certainly give the *appearance* of factual understanding.

So, it's kind of a tricky question. For my definitions of "truth" and "understand", no computer so far comes close, and there have been some signs of a flattening out of progress with LLMs. These may remain nothing more than very surprising search engines requiring a major breakthrough to progress further.

Or who knows. Maybe neuromorphic chips will usher in a whole new level of "thinking machine".
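
On the Chinese Room point, here's a toy sketch of how a pure lookup table can give the *appearance* of factual understanding (the entries are entirely illustrative): symbols go in, symbols come out, and no meaning is consulted anywhere.

```python
# An illustrative rule book; a real "room" would be vastly larger.
RULE_BOOK = {
    "What is the capital of France?": "Paris.",
    "Is water wet?": "Yes.",
}

def room(question):
    # Match incoming symbols to outgoing symbols; nothing here understands.
    return RULE_BOOK.get(question, "I am not sure.")

print(room("What is the capital of France?"))  # looks like understanding
```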

Sounds like your definitions are the same as mine, because I agree, no computer comes close. I'm amazed anyone could think so: https://www.bbc.com/news/technology-61784011

Then again, maybe it's a good thing people are predisposed to anthropomorphize if it makes them treat creatures—or even things—with more kindness.

AI =/= AGI =/= LLMs. The simplest computer chess programs are AI. LLMs are simply data crunching and the expression of a space of vectors built from that data, same as other machine learning. AGI doesn’t exist, yet. “Understand” is a phenomenal state; maybe if you’re a functionalist about consciousness, AI without the lights on may “understand” something. But I don’t hold that view: if there is no first-person experience, there is no understanding. That brings us to the question, “Is consciousness necessarily biological?”, to which I take the answer to be “no.” So conscious AI, if it is possible, is simply an engineering problem. This also relates to other seemingly unrelated questions, like “Does a soul exist?” and “Can machines be ensouled?” Anyway, you get the point.
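
A toy sketch of the “space of vectors” gloss, with hand-set vectors purely for illustration (real models learn these from co-occurrence statistics at enormous scale): relationships between words fall out as plain geometry over the data, with no understanding required.

```python
import math

vectors = {  # illustrative 3-d embeddings, not from any real model
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.2, 0.1],
    "apple": [0.1, 0.1, 0.9],
}

def cosine(a, b):
    """Cosine similarity: the angle between two vectors, ignoring length."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

print(cosine(vectors["king"], vectors["queen"]))  # high: related words
print(cosine(vectors["king"], vectors["apple"]))  # low: unrelated words
```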

Good points! I was being a bit sloppy in my technological lumping-together. I agree with you on all of the rest, though my answer to "Is consciousness necessarily biological?" would be, "Maybe not, but how will we ever know?" Putting aside the problem of other minds, is behavior enough? Or is it enough to have the proper artificial equivalent of our makeup (but then how would we know that)?

I like your questions. What do you think: do souls exist? Can machines be ensouled?

I know of no reason, as of yet, why consciousness is necessarily biological. Currently I subscribe to a form of Russellian monism, but I’m still quite new to the topic and still reading about it. The Hard Problem, independent of qualia (illusionism about qualia seems plausible to me), is quite challenging. I’m flirting with Sean Carroll’s view that we should give greater epistemic weight to our current physics, and that altering our opinion of physics to satisfy Goff’s “Phenomenal Constraint” is inappropriate given our ignorance of any specific mechanism and the complexity of our brains (possibly the most complex contiguous objects we know of). Both of those views are compatible with AI being conscious eventually (and possibly now, but I doubt it). Anyway, I don’t believe in souls, but I’m open to their existence; given a convincing reason, I’ll change my mind. If there are souls, whether computers can harbor them depends on so many other metaphysical facts that I don’t think any general opinion is appropriate.

I'm new to the topic as well, and to the mind-brain debate as it's currently being discussed by the likes of Carroll, whom I'm familiar with, and Goff, who sounds interesting to me. I've been meaning to look him up, so thanks for the reminder.

It just so happens that a longtime blogger friend of mine recently published a paper on the topic of whether AI can ever truly be intelligent and contacted me after reading this post. I'll leave the link here in case you're interested, but please don't feel obligated to. I haven't read it yet, but it's on my to-do list:

https://www.cckp.space/single-post/bp-7-2024-andreas-keller-intelligence-and-paraintelligence-62-108

I’ll read it. I saw your summary in another comment. I find the conceivability argument, or the zombie argument, implausible for two reasons: one is because of an obscure paper called “The Inconceivability of Conceivability Arguments,” but the other is that I don’t think zombies are conceivable. The only possible way is AI mimicking a huge set of human words, behavior, etc. to effectively appear human, which I find challenging. I’m assuming my conclusions are similar to the paper you linked. Thanks for sharing; it’s related to something I’m thinking about.

I'm not too keen on the zombie/conceivability arguments either, although probably for different reasons. For the longest time I didn't even bother to read Chalmers because I was under the impression he simply added a "fun zombie spin" to what was already discussed in Descartes, thereby reigniting an ancient debate, which I assumed he was able to pull off because people are just really into zombies for some reason. Now I'm realizing he's making a point that Descartes took for granted: the mental and the physical are not identical. I have to wonder, can we really not agree on that much? No wonder we're arguing in circles.

Everything has become about the meanings of words and what they refer to, if anything. I don't see any clarification or improvement over Descartes in the verbal wrangling people have been doing these days involving mind-bending thought experiments. Descartes' Meditations ask us to reflect honestly on our own experience and to try to rid ourselves of certain preconceptions and biases we have about the world and ourselves. It's a stronger foundation than the one Chalmers takes up with zombie conceivability, where everything hinges on the meaning of the word 'conceive'.

Descartes was involved in an entirely different way of thinking than what's going on in our current debates, and while I certainly don't agree with D on everything, I do admire him on that. That said, if it weren't for Chalmers, many people involved in the sciences wouldn't even think about the mind-body problem, much less take it seriously, so I have to say kudos to him on that.

But anyways, sorry about the long-winded reply. I hope you find the paper useful or at least interesting!

Hmm, I am suspicious of Chalmers, but I haven’t read any of his original papers directly, just half of Goff’s book Consciousness and Fundamental Reality, in which Chalmers is referenced. As for Descartes, I find the discussion of his methodology interesting, and his six-part argument for God impossible to finish; I’ve only gotten as far as part 3. But yeah, a version of the conceivability argument from that work is mentioned in the paper I referenced earlier.

I started reading your friend’s paper; I’m on page 30 or so, and I like it so far. I found a couple of typos, but who doesn’t make those. Their reasons are drastically different from mine; I learned a lot, and it put quite a few disparate things I’d heard of here and there together in a nice way. They might already be familiar with this, but maybe send them this SEP article. I don’t know much about the topic, but I thought of it when reading the paper: https://plato.stanford.edu/entries/hyperintensionality/

If you haven't seen it yet, I think Dan Dennett's article "The Unimagined Preposterousness of Zombies" completely erases any hope of treating Chalmers' p-zombies as "conceivable".

https://dl.tufts.edu/concern/pdfs/6m312182x

This is very interesting, although I feel like I got a preview in the chapter you contributed to my book. :)

Haha... somewhat, but "generosity" is taken in a different sense here: not a stance or attitude toward views you disagree with for the sake of debate, but a description or uncovering of what goes on in language and what that implies. I think your understanding of what "generosity" means is the more common one, though.

Oh, I got that the chapter here was a primarily linguistic case, and that your chapter was made for the purpose of facilitating debate. However, I think this is more a difference of degree than a difference of structure.

For example, in your chapter, you forced yourself into the mind of a controversial populist political figure in order to better understand his actions and the actions of his followers. Likewise, if you tell me "Ben, the pig is in the pen," I am putting myself in your position in order to understand your meaning. If I'm a jerk, or if I'm closed off to either process, I'd do the same thing in both cases:

"The controversial populist is bad, and everything he says is wrong, and his followers are dumb. I will interpret everything he says in the dumbest, most ridiculous way possible."

"Tina is bad, and everything she says is wrong, and anyone who believes the pig is in the pen is dumb. Everyone knows a pig won't fit inside a writing instrument."

I'd argue that the primary difference between an ungenerous interpretation of the controversial populist and an ungenerous interpretation of the pig-in-the-pen statement is emotional intensity. Most people have quite strong emotions about controversial populists, and thus being generous is emotionally hard, while most people have few feelings about pigs and writing instruments.

Just thought I'd add this paper by my WP blogger friend, Andreas Keller. He says, "A paraintelligent [AI] system can be compared to students who copy from their classmates’ exercise books without being able to produce new ideas themselves. Such students might get along in school because they have sharp eyes and a good memory, but they are wholly uncreative. They can combine existing ideas within a system of rules, but cannot look at that system from the outside, analyze it, criticize it, modify it, or extend it. They are immersed in this system and unable to emerge from it."

You can download the free PDF here:

https://www.cckp.space/single-post/bp-7-2024-andreas-keller-intelligence-and-paraintelligence-62-108
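
Keller's "immersed in the system" point can be made concrete with a toy sketch (my illustration, not from his paper): a bigram generator only ever recombines word pairs already present in its source, and can never emit a transition the data did not contain.

```python
import random

# Source "data": the only material the generator will ever have.
source = "the cat sat on the mat and the dog sat on the rug".split()
table = {}
for a, b in zip(source, source[1:]):
    table.setdefault(a, []).append(b)  # record every observed word pair

word, out = "the", ["the"]
for _ in range(8):
    if word not in table:  # dead end: the data offers no way forward
        break
    word = random.choice(table[word])
    out.append(word)
print(" ".join(out))  # a plausible remix, never a pair the data lacked
```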
