Dear Proud Boys,
I know the Oath Keepers, Tucker Carlson and you are having a rough time, and I don’t want to pile on. Still, you should know: It’s not Jews who will replace you.
Everyone knows artificial intelligence is having its moment, but the repercussions for terrorist groups have been seriously underplayed.
That next figure urging you to trash a building, for example, may actually be an amazingly lifelike Business Suit. That next QAnon message you read may be a foreign computer farm’s grave warning that George Soros has bugged all pickups.
But first, a word of caution. I know nothing about AI itself, except that chatbots rarely ask the question I need answered.
Nevertheless, AI’s recent evolution, it’s said, is epoch-changing. Its products may well think for us. It promises to expand our capabilities, create great wealth, and enhance learning.
But don’t despair. Some of the forecasts come right out of your wish list. AI disrupts creative integrity and reduces people – Jew and Gentile alike – in low-skill jobs to poverty. It makes whole categories of learning irrelevant.
The problem for all of us is that AI thinks and evolves faster than we do. By the time you’re out of prison, we may not be at the top of the food chain anymore.
Maybe it’s just hyperbole. You of all people are familiar with it. I am, too. I’ve had occasion to review resumes, endure motherly warnings about how tardiness has global consequences and, of course, I’ve listened to politicians.
If you Proud Boys could harness artificial intelligence, you couldn’t invent better friends than politicians. They’re masters of hyperbole. They frequently even escalate it into lies. Secret governments, rapacious immigrants, and dangerous schoolbooks, indeed. After they get ludicrous enough to become impossible to respect, we elect them.
But scientific hyperbole, boys, is a different animal. Artificial intelligence’s prognosticators are elaborately credentialed, experienced and, for all we know, may not be exaggerating at all.
The late Stephen Hawking warned that the really profound changes will happen when artificial intelligence starts to design its own artificial intelligence. Hawking warned of “machines whose intelligence exceeds ours by more than ours exceeds that of snails.”
Snails, moreover, are notorious for being financially insecure. Kai-Fu Lee, the AI expert and former head of Google China, predicts “The bottom 90 percent, especially the bottom 50 percent of the world in terms of income or education, will be badly hurt with job displacement.”
I hope your IT guy is on top of these things, because they are happening. Ilya Sutskever, the chief scientist at OpenAI’s research group, tweeted that “it may be that today’s large neural networks (read: linked algorithms) are slightly conscious.”
Susan Schneider of Florida Atlantic University’s Center for the Future Mind might agree, although for different reasons. AI, she says, opens up ways to essentially create new minds for ourselves “at a rate much faster than biological evolution.”
In 2022 Blake Lemoine, a software engineer at Google, claimed the company’s LaMDA chatbot had indeed achieved sentience. In a follow-up article on Medium, he argued AI’s language-based data effectively becomes a person itself when it interacts with the world just like us bipeds.
That, of course, depends on your definition of “person” or “consciousness,” which is something you might work on now that you have time.
Microsoft recently released a paper alleging that a program seemingly produced “sparks” of artificial general intelligence, the next step up. It is not necessarily human intelligence. It might be a different kind of intelligence that its creators don’t yet understand.
David Hsing, a microprocessor circuit designer, doubts AI will ever outthink us or even actually think. AI essentially scans huge fields of data – language, numbers, and impressions which, on their own, don’t have some universally understood “meaning.” It looks for patterns in language without recognizing variations in a word’s meaning or import.
Hsing described what sounded, to a primitive like me, akin to a sampled song: an assemblage of often-uneven work, math and speech that is itself an intelligent product. The underlying original thought, ambiguity and emotion remain products of the flesh. Yet the difference between original thought and, say, assembled thought narrows and narrows.
Historian Yuval Noah Harari is of two minds (still organic, by the way). In his book Homo Deus, he foresees a marvelous marriage between technology, medicine and prosthetics that will make humans more powerful. And, take heart, boys, it may also be a threat to democracy. Homo sapiens, he documented in an earlier book, are already nature’s most murderous species by far. Imagine what you could do.
Wall Street Journal columnist Peggy Noonan pointed out that, without some sort of regulation, it – and a good part of our future and safety – will be left to the leaders and ethical integrity of Apple, Alphabet, Microsoft and Meta. She found it hard to imagine a more horrifying prospect.
John Laird, recently retired engineering and computer science professor at the University of Michigan, points out that AI itself isn’t evil. The threat is evil humans who use it.
At least until it’s not humans using it.
Sincerely,
Bill