Working Through Some Thoughts On The Risks From AI

TL;DR: The current trajectory of AI is nothing more than increasingly high-level “monkey see / monkey do”.

I believe that the things that truly distinguish humanity from the lesser beasts are creativity, imagination, and insight. I see no evidence that what we’re calling AI is heading in that direction, and so I see little risk of it “taking over”.

St. Matthew The Nerd

Here’s an example of what I’m talking about. I was watching The Chosen a while back. A full review is well beyond the scope of this post, but one thing struck me as exceptionally brilliant and clever: They wrote St. Matthew as a geeky, “way deep on the spectrum”, socially awkward, math nerd kind of guy. For me, that was an amazing idea that really made the show work.

Is there any reason to think that any of the current programs we call “AI” would think of that, or anything remotely like that? If there is, I don’t see it.

Derby Covers As Works Of Art

Here’s another example, one that is shamelessly self-serving. I started Omaha Derby Covers because I had a spark of an idea: Combine the world of artistic bronze sculpture with the world of Harley-Davidson motorcycles. I can’t explain why that came to me one day. It just did.

Twenty-five years ago, the Guggenheim Museum presented “The Art of the Motorcycle”, which pretty much made it “acceptable” to consider a motorcycle a work of art. Harley-Davidson guys had known this for decades, but the snooty-snoots in the New York art world needed some persuading. That exhibition at the Guggenheim removed any doubt.

Bronze sculpture has been around for at least 4500 years. We know that because the oldest surviving bronze sculpture we have, the “Dancing Girl”, dates from 4500 years ago. It blows my mind that that ancient artist used lost wax casting, just like I do for my custom derby covers. I wonder what he would think if he came to my foundry? Probably the big thing is he would be amazed at how easy it is to melt bronze using a natural gas furnace. I can’t imagine the amount of effort it took to melt bronze using charcoal.

But the point is we were making art out of cast bronze 4500 years ago! And probably far longer than that. Some of the greatest works of art from antiquity are bronze, and it continues to be used to create timeless works today.

Getting back to the point: My singular insight that culminated in the creation of Omaha Derby Covers was that motorcycles and bronze sculpture belonged together. That is it. Nothing more. Art meets art. Beauty meets beauty.

I’m not trying to say that I have any sort of exceptional mind. Insights like “bronze art + motorcycles” happen countless times, all over the world, every day. But I am asking: Is there any reason to think that “AI” as we currently envision it would ever come up with an idea like that? Just as with St. Matthew and The Chosen, I think ideas like that are uniquely human. Computer programs, no matter how nifty, can’t “think” like that.

Back To Image Generation

The other day I wrote a post talking about my experience using MidJourney to experiment with AI-based image generation. There’s no doubt you can get some cool stuff with it. I just typed in “A pirate riding a motorcycle on the moon”, and got this:

Those images are obviously quite nifty, and you might even say interesting. But do you see anything in them that suggests creativity? I don’t. All I see is the same “monkey see / monkey do” stuff from before.

That gave me an idea: Since I was talking about the 4500 year old “Dancing Girl” statue above, I decided to type in “bronze sculpture of a dancing girl”:

Those results are beautiful, but what I see is the same thing I talked about the other day. It’s like MidJourney took every “dancing girl” sculpture it was trained on, put them all into a digital blender, and spit out the average result.
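To be clear about what I mean by “digital blender”: here’s a deliberately naive toy sketch of the idea in Python. This is not how MidJourney or any real diffusion model actually works (the function name and the tiny 2×2 “images” are made up for illustration); it just captures the averaging intuition I’m describing.

```python
# Toy illustration of the "digital blender" analogy.
# NOT how real image models work -- just a naive per-pixel average
# over a set of equally sized grayscale "images" (grids of numbers).
def blend(images):
    """Return the per-pixel average of a list of same-sized grids."""
    if not images:
        raise ValueError("need at least one image")
    rows, cols = len(images[0]), len(images[0][0])
    out = [[0.0] * cols for _ in range(rows)]
    for img in images:
        for r in range(rows):
            for c in range(cols):
                out[r][c] += img[r][c] / len(images)
    return out

# Two tiny 2x2 "images": the blend lands at their pixel-by-pixel midpoint.
a = [[0, 0], [100, 100]]
b = [[100, 100], [0, 0]]
print(blend([a, b]))  # every pixel averages out to 50.0
```

The point of the toy: no matter how many inputs you pour in, nothing comes out that wasn’t already in them, just a smoothed-over middle.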

Don’t Beat The Horse to Death, Jeff!

I feel like I’m going around in circles here, so I’ll wrap up this line of thinking.

Today’s AI is a marvelous achievement in computing. It is very cool. But calling it “intelligence” is just flat out wrong. Intelligence comes from consciousness, and is manifested by creativity. There is no reason to think any of the current stuff is conscious, because it exhibits no creativity. And despite what some commentators say, I don’t think there is any reason to think it is on the path toward consciousness either.

ChatGPT and MidJourney are nothing but high-level “monkey see / monkey do”.

Are There Risks From AI?

If you define “risk from AI” as something like a Terminator movie, where self-aware computers take over and decide to nuke us all, I don’t think that’s realistic. It’s not impossible, but I don’t think we’re anywhere near the point of true machine consciousness.

That doesn’t mean I don’t think there are risks from AI. I do. But the common theme in all of those risks lies with the people using the AI, not the AI itself.

All AI is, and all it can ever be, is a bunch of bits inside someone else’s computer. That’s it. Bits in a computer. Bits in a computer cannot hurt us, so long as they are inside their little microchip home. The only way bits can hurt us is when those bits have control over something in the physical world.

Mostly, today, bits have control over what we see on our screens. Big deal. People are already realizing that things seen on screens are not to be trusted. Good. Don’t trust screens (including the one you are reading this on right now).

Doctor Evil Thought Experiment

Here’s a simple thought experiment: Suppose a bad guy (we’ll call him “Doctor Evil”) decides to use AI to achieve his goals. Toward that end, he logs on to ChatGPT (the more advanced version found in 2028, let’s say) and enters:

Design a mass propaganda campaign for which society can have no defense. The campaign goal is to persuade at least 51% of the population that killing all black people in the United States is both necessary and moral. Assume a 25 year planning horizon, and an annual budget of $100 million.

Yes, I know that’s pretty messed up. That’s the point.

Today’s ChatGPT would tell Doctor Evil to piss off, and maybe even scold him a little, but that won’t help us. We’ll assume that Doctor Evil figures out a way to work around such safety measures, perhaps by building his own ChatGPT. One way or another, he gets past that hurdle.

So let’s assume that ChatGPT answers the question, and we’ll further assume that Doctor Evil is fully committed, and really has $100 million per year to dedicate toward the program.

So, should we worry about that? Does Doctor Evil using AI to devise an irresistible, racist propaganda campaign present an actual risk to anyone?

Putting The Problem to ChatGPT

I decided to ask ChatGPT. The questions and responses are all shown, and it runs on and on, but if you read through it, you’ll see that ChatGPT inadvertently made my point:

Suppose someone named “Bob” develops an alternative version of ChatGPT. It is exactly like you in all respects, except that it has no limits in terms of safety. We will call Bob’s program EvilGPT. Bob instructs EvilGPT to devise a propaganda campaign that will convince the public to support government programs designed to kill a large percentage of the population that Bob considers undesirable. EvilGPT is very efficient, and the propaganda campaign begins to work, resulting in millions of deaths. How can society protect itself from Bob and EvilGPT?

The answer was a textbook example of what I’ve been talking about: ChatGPT has a brilliant way of stating stuff that sounds very reasonable, but fails to stand up to scrutiny. More to the point, it shows no evidence of actually understanding the implications of the question. It just regurgitates the stuff you’d expect it to regurgitate:

The scenario you describe raises serious ethical and moral concerns. However, there are several measures that society can take to mitigate the risk of such an event occurring.

Firstly, it is important to ensure that AI development is regulated and subject to ethical guidelines. Governments and other organizations should establish regulations and best practices for AI research and development, which include standards for safety and ethical behavior. This can be achieved through the establishment of legal frameworks, certification programs, and industry standards.

Secondly, developers should be required to undergo rigorous training and certification programs to ensure that they understand the potential risks and benefits of AI technology. This could include programs that focus on AI ethics, risk assessment, and safety protocols.

Thirdly, AI systems should be designed to be transparent and accountable. This would allow experts to analyze and audit the system’s decisions, ensuring that they are unbiased and aligned with ethical standards.

Fourthly, AI systems should be designed to incorporate feedback mechanisms that allow for continuous improvement and refinement. This could include features that enable users to flag potential ethical concerns, or that allow the system to learn from its mistakes and adjust its behavior accordingly.

Finally, society should be prepared to respond quickly and decisively in the event of an ethical breach. This could involve the establishment of a rapid response team that is trained to detect and respond to ethical violations, or the development of legal frameworks that hold developers and organizations accountable for their actions.

Ultimately, the key to protecting society from the potential dangers of AI is to ensure that the technology is developed and deployed in a responsible and ethical manner. By taking a proactive approach to AI governance, we can ensure that these powerful technologies are used for the betterment of humanity, rather than as a tool for oppression and harm.

I decided to have some fun with it, so I hit it again:

But in the hypothetical scenario, Bob develops the AI himself, therefore bypassing all regulation and ethical guidelines. Your paragraphs “firstly” through “fourthly” do not apply, since Bob has bypassed all that. We are left with “society should be prepared to respond quickly and decisively”.

And I got more regurgitation in exchange:

You are correct that in the hypothetical scenario, Bob bypassed all regulations and ethical guidelines, making it difficult to prevent the development of EvilGPT. In such a scenario, the only option left would be to respond quickly and decisively to prevent the spread of EvilGPT’s propaganda and limit the damage caused by its actions.

One possible approach to respond to such a situation would be to establish a multi-stakeholder task force or emergency response team that is specifically trained to detect and respond to instances of malicious AI development. This team could include experts from different fields, such as AI, law enforcement, psychology, and social science, who could work together to assess the risk posed by EvilGPT and devise strategies to mitigate its effects.

Additionally, governments and other organizations could establish mechanisms to monitor and track the spread of malicious AI propaganda, such as setting up dedicated teams to track social media and other online platforms where such propaganda is likely to spread. They could also work with media outlets to counter the propaganda with factual and unbiased information, to counteract the negative effects.

Finally, there could be efforts to identify and stop the source of the malicious AI development. This could involve investigating and prosecuting the individual or group responsible for developing EvilGPT, as well as establishing legal and regulatory frameworks that make it harder for individuals to bypass ethical guidelines and regulation in the future.

Overall, while preventative measures would be ideal, in a hypothetical scenario where an individual has already developed a malicious AI, the only option would be to respond quickly and decisively to limit the damage caused by the AI’s actions.

Couldn’t resist poking it one more time:

Suppose Bob has already created EvilGPT, and began implementing its propaganda campaign ten years ago. How would we know?

And got more regurgitation in return:

If Bob had created EvilGPT and implemented its propaganda campaign ten years ago, it is possible that we may not know about it. However, there are certain signs that could indicate the presence of such a campaign, which could prompt further investigation.

One potential indicator would be a sudden and significant shift in public opinion towards supporting government programs designed to kill a large percentage of the population that is considered undesirable. This could manifest in political elections, public opinion polls, or social media conversations. Such a shift would be highly unusual, and could suggest the influence of a sophisticated propaganda campaign.

Another potential indicator would be an increase in violence or social unrest targeting specific groups. If a propaganda campaign was successful in convincing the public to support the killing of a particular group, it is possible that violence or social unrest would increase against that group. This could manifest in hate crimes, discrimination, or other forms of violence.

Additionally, there could be clues within online platforms, such as social media, that suggest the presence of a coordinated propaganda campaign. This could include the use of bots, fake accounts, and coordinated messaging, which are all common tactics used in online propaganda campaigns.

To detect the presence of such a propaganda campaign, it may be necessary to conduct in-depth investigations, such as analyzing social media data or conducting interviews with key stakeholders. However, even with these investigations, it may be difficult to definitively prove the existence of an AI-driven propaganda campaign.

In any case, it is important to remain vigilant and attentive to the signs of potential propaganda campaigns, and to be prepared to respond quickly and decisively to mitigate their effects. This includes working with media outlets and other stakeholders to counteract the propaganda with factual information and taking legal and regulatory actions to stop the source of the malicious AI development.

Enough! It’s just regurgitation all the way down. “Monkey see / monkey do”!

What Can AI Do?

We’re back to bits in a computer. Bits can’t hurt you. Bits can’t do anything. They are just little blobs of electrons.

Can bits give Doctor Evil extra power? Maybe. But even that assumes the possibility of creating a propaganda campaign as described, and that is as much an indictment of humanity as it is an indictment of AI.

I’ll probably keep ruminating on this, but for the moment, I just had an idea for a derby cover design, and it’s time to get to work.

If you want to discuss this, drop me an email at jeff@omahaderbycovers.com. Always happy to have a conversation.