When it comes to opinions on artificial intelligence, the tech community is more strongly divided than the general public. That’s among the findings of an analysis conducted by University of Rochester researchers of nearly 34,000 comments posted on social network Reddit.
The study used ChatGPT and natural language processing techniques to examine themes and sentiments of comments in 388 unique subreddits (subject-specific forums) in the six months following the launch of ChatGPT in November 2022.
Published in Telematics and Informatics in a paper titled “Excitements and concerns in the post-ChatGPT era: Deciphering public perception of AI through social media analysis,” the study underscores the need to better understand public perception of AI systems.
Jiebo Luo, a professor of computer science and the Albert Arendt Hopeman Professor of Engineering at UR, led the study, which used a list of AI-related keywords. It found that technology-focused subreddits primarily discuss the technical dimensions of AI. By contrast, non-tech subreddits tend to focus on job-displacement worries and broader societal impacts.
“The disparity in focus between subreddits suggests a gap in the public understanding of AI,” the study states. Using sentiment and emotion analysis, which categorizes sentences as positive, negative, or neutral based on the emotional tone of the text, the research team discovered that “tech-centric communities exhibit greater polarization compared to non-tech communities when discussing AI topics.”
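The categorization step described above can be illustrated with a toy lexicon-based classifier. This is only a minimal sketch with made-up cue words, not the researchers' actual ChatGPT-assisted pipeline; it simply shows what labeling sentences positive, negative, or neutral by emotional tone looks like in code.

```python
# Illustrative toy, not the study's method: label each sentence
# positive, negative, or neutral by counting cue words from a
# small hand-made lexicon (the word lists here are invented).

POSITIVE = {"improve", "helpful", "excited", "beneficial", "great"}
NEGATIVE = {"worried", "risk", "fear", "harm", "concerned"}

def classify_sentence(sentence: str) -> str:
    """Label a sentence by comparing positive vs. negative cue-word counts."""
    words = {w.strip(".,!?").lower() for w in sentence.split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

comments = [
    "AI will improve productivity, which is great.",
    "I am worried about the harm to jobs.",
    "The model was released in November.",
]
labels = [classify_sentence(c) for c in comments]
print(labels)  # ['positive', 'negative', 'neutral']
```

Real sentiment pipelines replace the hand-made lexicon with learned models or large language models, as the study did, but the output categories are the same three.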
“The tech community’s opinions were either strongly positive or strongly negative, more so than the non-tech community,” says Luo. “I think the polarization is due to the commenters’ personal knowledge of the issues. You see that play out among some of the tech celebrities as well, with people like Geoffrey Hinton, one of the pioneers of deep learning, being very pessimistic, and others like Sam Altman (the CEO of OpenAI) being far more optimistic.”
Those with deeper knowledge held divided opinions, a mix of optimism and skepticism, while the general public used the Reddit forums to discuss social issues.
“On the positive side, members of the tech community say it can help improve productivity, and they are also happy about the open-source culture with the development of Llama (Large Language Model Meta AI) or other open-source language models,” says Hanjia Lyu, a computer science PhD student involved in the study. “Some of the concerns the tech community shows are related to the ethical implications and potential impact on society stemming from AI advancement, as well as topics like regulation and hallucination.”
Overall, the public tends to perceive AI as a beneficial force that can contribute to societal improvement, particularly when used as an assistant in decision-making in domains like gaming and education, the study states.
While the authors acknowledge that the analysis might represent specific demographics (most Reddit users are in the United States, for instance) and differ from views of the broader population, the study does call for a deliberate effort to inform people. Educational campaigns and other resources could be used to explain AI technologies and their applications. Stakeholders including policymakers, developers, educators, and researchers can be part of the process to demystify AI and address misconceptions and fears.
“By bridging the gap between AI development and public sentiment, we can work towards building a future where AI technologies are embraced, trusted, and utilized in a manner that positively impacts individuals and society as a whole,” the study states.
Market researchers and scientists alike have regularly surveyed the public on their views of AI since the launch of ChatGPT. Curiosity, concern, distrust, and lack of human control are common themes. A Pew Research Center study a year ago found that more than half (52 percent) of Americans say they feel more concerned than excited about the increased use of AI. Ten percent were more excited than concerned, while 36 percent reported an equal mix of both emotions.
In a separate Pew study in March, 54 percent of Americans said generative AI tools like ChatGPT and DALL-E need to credit the sources they use for their responses.
Smriti Jacob is Rochester Beacon managing editor.
So, they used AI to determine how people feel about AI? (“Look Dave I can see you’re really upset about this…”)
AI will be, or can be, used for good. The question is, will it? I think that the bad outweighs the good. It’s way too tempting to go off the rails with this. The world is split into many factions and AI will just take that split to another level. It will do more harm than good. But that won’t stop the AI from being implemented. It’s going to happen, period.