The rapid and continuing advance of artificial intelligence raises fraught questions with no easy answers.
That was the prevailing sentiment among panelists at the Rochester Beacon’s online discussion, “The AI Dilemma: Risks, Rewards & Regulation.”
The May 23 event featured Pengcheng Shi, associate dean at RIT’s Golisano College of Computing and Information Sciences; Chris Kanan, associate professor of computer science at the University of Rochester; and Tim Madigan, professor and chair of philosophy at St. John Fisher University.
The event was sponsored by Bond, Schoeneck & King LLP, Armbruster Capital Management and Next Corps Luminate.
Also in agreement was ChatGPT, the AI-powered program that discussion moderator Alex Zapesochny, Rochester Beacon publisher and CEO of Clerio Vision, asked to write an introduction for the event.
“AI carries immense potential benefits, but not without significant risks,” ChatGPT noted. Asked whether and how AI should be regulated, it answered in the affirmative.
“The role of effective and judicious regulation becomes paramount,” ChatGPT replied.
To meet the challenges posed by AI’s looming role in weapons systems, medical treatments and education, its use by scam artists and pranksters to create deepfakes, and the questions raised as AI displaces human workers, “international cooperation and multi-stakeholder dialogue will be required,” it explained.
Human panelists agreed with nearly every point ChatGPT made.
“(AI) has been everywhere over the past few years (but) has been less obvious. Unlocking your phone with your face, that’s AI,” Kanan said.
On the plus side, AI has already inserted itself into domains ranging from medicine to law to computer programming, sparking improvements.
In coding, for example, AI’s ability to program in multiple languages and convert from language to language has spurred productivity bursts. In education, it has improved tutoring for students. In medicine, it has successfully taken over some diagnostics, like interpretation of EKGs.
Shi’s daughter, a college freshman, had already attested to AI’s tutoring advantage, he said. She shared with him that AI tutors had given her a better experience than human teaching assistants.
In health care, Madigan said, AI, like telemedicine, can be a leveler, bringing access to care for people who otherwise could not afford the extra services available to better-heeled patients. But, also like telemedicine, AI has a downside: it is less available to those without broadband or access to a computer or smartphone.
But what about other downsides ChatGPT mentioned, like deepfakes?
The proliferation of deepfakes and fake content like AI-generated texts has indeed worried him, Kanan said. Still, he concluded, he is less concerned than he used to be. Coming generations seem to be absorbing the lesson that seeing might not necessarily be believing.
“One of the good things about AI being so in the public sphere now especially with young people is that they’re aware of this,” Kanan said. “They hopefully are learning rapidly that you cannot necessarily trust your eyes and ears in terms of what you see or read.”
Proliferation of fake information is not new, Kanan noted. AI simply provides a new channel for disseminating fake content. Still, AI makes dissemination of such material “a lot faster,” he conceded.
Bringing up a point ChatGPT didn’t raise, Zapesochny wondered whether reliance on AI would have a negative effect on children’s learning. Would kids raised with ChatGPT, or even more capable future versions, learn to write and to analyze on their own?
“I’m very interested in that question,” Kanan said, adding that he has no answer.
Nevertheless, he said, “it behooves us as educators and as parents to try to sort out that question in terms of how is this going to change how we learn.”
And in the end, he concluded, “I don’t think we can deprive children of these tools. People are going to be using them. Telling them that you’re not allowed to use it until a certain age may not be beneficial.”
How far might AI go? Could it evolve into an independent form of intelligence hostile to humans, something like Skynet, the AI war machine obsessed with eliminating humanity imagined in Arnold Schwarzenegger’s “Terminator” franchise?
After all, no less an authority than the famed physicist Stephen Hawking warned that “AI may replace humans altogether. If people design computer viruses, someone will design AI that replicates itself. This will be a new form of life that will outperform humans.”
While such dangers might seem farfetched, ChatGPT itself concluded that “unquestionably, the U.S. government must step up to regulate AI, an innovation of monumental scale that carries both promise and peril.”
However, Shi, like ChatGPT, noted that to work effectively, regulation would have to span borders and encompass the whole globe, lest actors outside any one nation’s or bloc’s rules let loose some danger on the world. To date, the U.S., the United Kingdom, the European Union and other countries and regions have not agreed on a single, uniform regulatory standard.
A friend in China, said Shi, recently asked the Chinese equivalent of ChatGPT a question relating to President Xi Jinping. Instead of answering, the Chinese program stayed silent. Then, said Shi, his friend’s account was shut down. In China, Shi explained, AI must toe the party line.
Ultimately, AI is a human creation. Though these programs might work faster and even more eloquently than humans, in the end AI programs are, so far at least, giving back only information gleaned from human sources.
The musician Frank Zappa once asked a question and answered in a musical quip that in the end might serve as the final word on AI:
Sang Zappa: “Do you love it; do you hate it? There it is, the way you made it.”
Even the doomsayer Hawking conceded that “the genie is out of the bottle. We need to move forward on artificial intelligence development.”
Will Astor is Rochester Beacon senior writer.