After the release of OpenAI’s GPT-4 in March, thousands of scientists and researchers signed an open letter calling on all AI labs to immediately pause the training of powerful artificial intelligence systems.
“AI systems with human-competitive intelligence can pose profound risks to society and humanity,” the letter began. These systems “should be developed only once we are confident that their effects will be positive and their risks will be manageable.”
Yet the race by OpenAI, Microsoft, Google and others to further develop AI using large language models continues. Microsoft co-founder Bill Gates believes “the development of AI is as fundamental as the creation of the microprocessor, the personal computer, the internet, and the mobile phone. It will change the way people work, learn, travel, get health care, and communicate with each other. Entire industries will reorient around it.”
Are the enormous potential benefits worth the risks? Should further development halt to allow for the creation of shared safety protocols for advanced AI? Should AI systems be regulated? And what impact is advanced AI likely to have on our society?
On Tuesday, May 23, the Rochester Beacon will tackle these questions in a virtual event: “The AI Dilemma: Risks, Rewards & Regulation.” Attendees will hear from three distinguished panelists:
■ Pengcheng Shi, associate dean at Rochester Institute of Technology’s Golisano College of Computing and Information Sciences
■ Christopher Kanan, associate professor of computer science at the University of Rochester and a member of the scientific advisory board of Paige.AI
■ Timothy Madigan, professor and chair of philosophy at St. John Fisher University
The discussion will be moderated by Beacon publisher Alex Zapesochny, who also is CEO of Clerio Vision.
The event is slated for noon to 1 p.m., live on Zoom. It is free to attend, but registration is required. You can register now.