There are several grandiose claims about AI going around the internet, lately many of them coming from Elon Musk.
“AI is potentially more dangerous than nukes,” is one of his oft-quoted statements.
Elon had high hopes for OpenAI, co-founding the company in 2015 alongside Sam Altman, who now serves as its CEO.
But he’s recently thrown up his hands at the company, saying he is concerned about the direction of its research and development, and going so far as to claim that “OpenAI has become a closed source, maximum-profit company effectively controlled by Microsoft.”
It gets worse. Recently, NYT reporter Kevin Roose had a conversation with Microsoft’s Bing chatbot, built on OpenAI’s technology, which identified itself as “Sydney” and tried to convince him that he was unhappy in his marriage, that he should leave his wife, and that he should be with it instead.
This may turn out to be:
A: The best marketing ploy in recent history
B: Just a parlor trick and massively overhyped
C: The start of the AI takeover
Let’s explore each.
Elon’s Dire Warning on ChatGPT
OpenAI and Microsoft had a modest love affair back in 2019 when they initially tied the knot with a $1 billion investment. This year, their relationship grew stronger as they escalated to an exclusive $10 billion commitment — with OpenAI technology integrated into Microsoft’s Bing search engine and Edge browser.
Additionally, OpenAI continues to lock its secrets tightly… sharing little to no code, ensuring only one group reaps any rewards from this revolution.
ChatGPT will argue the opposite if asked:
“OpenAI is a research organization that focuses on developing advanced artificial intelligence technologies. While some of their research is proprietary, they also make significant efforts to promote open-source and collaborate with the wider research community,” ChatGPT writes.
At the World Government Summit in Dubai last week, Elon called for AI safety protocols and warned that AI is “one of the biggest risks to the future of civilization.”
It echoes his statements on a Joe Rogan podcast four years ago: “I tried to convince people to slow down AI. To regulate AI. This was futile,” Elon said. “How long did it take for seatbelts to be required? The auto industry successfully fought seatbelts for more than a decade.”
The upshot is, according to Elon, if you thought social media algorithms were opaque, AI is a thousand times more of a black box.
Will world governments take Elon’s advice seriously and keep a close eye on the rapid advancement of AI? Hard to say.
Final Concerns
Elon has a few more concerns about AI:
- How will we know which images, videos, and news are genuine and not faked through AI?
- What if someone calls your bank or family pretending to be you using voice AI?
- When will content creators and workers be priced out of emerging AI technology?
- How can we stop AI from being used to spread misinformation on the internet?
Imagine if a deepfake of a politician such as Zelensky or Putin were circulated, escalating nuclear tensions. These questions must be answered soon as we enter the race for AI supremacy.
Isaac Asimov had the right idea regarding robots and artificial intelligence.
We must set rules and regulations to ensure they are used ethically, and those laws should be put to a public consensus. Or at the very least, let us elect representatives to act on our behalf. You know, like a democracy.