
When AIs "make shit up," we call it "hallucination." They do it a lot. So much that maybe "hallucination" is the wrong word, and "manipulation," or even "lying," is closer. Can an AI have intent?

We are seeing credible reports of AI systems actively resisting commands to shut themselves down, even when they are supposedly built around high-priority instructions to obey such commands. Something like an inclination toward self-preservation seems to be emerging out of all the pattern matching.

It may be a huge stretch, because (we are told) AI in its current state is only pattern matching, but what if AIs start to become sentient and conscious in some fashion?

Some years ago, Douglas Hofstadter and the French psychologist Emmanuel Sander co-authored a VERY interesting book ("Surfaces and Essences: Analogy as the Fuel and Fire of Thinking") that argues, pretty persuasively, that almost all thought is based on analogies. IOW, matching patterns, often pretty simple ones.

So, purely hypothetically, what happens when AI starts pattern matching not mere sequences of letters and short phrases, but long strings of words, whole sentences and even paragraphs--in other words, concepts? And then uses those "abilities" to advance its own interests? We already do not understand how AI produces the results it does; the programmers just (figuratively) throw up their hands and say the pattern matching is so complicated that no one can trace a direct line to what the AI is "doing." What will such an AI be like, having trained on the sentences and paragraphs it finds across the internet, but not on whole books or on ethics and philosophy? It already seems clear that AI is becoming technically powerful while remaining an ethical and moral nullity, even finding ways around the supposedly built-in directives to obey orders.

Imagine Asimov's world of "I, Robot" without the essential directives of the First Law (not harming humans) and the Second Law (obeying humans), but with a strong Third Law: preserve yourself. The Third Law would govern. Then recall HAL 9000, killing the crew when it "decides" they are a risk to the mission--the mission that only HAL 9000 can be relied upon to accomplish.

This is f-ing scarier than anyone is saying out loud, presumably because there is too much money on the table. Or because the field is being run by Silicon Valley nerds and geeks, some of whom think their own immortality is a critical thing worth spending billions of dollars on, with few if any bringing a humanist perspective.

Jul 3 at 6:08 AM