With AI’s rise, hope and anxiety


Rob Brown believes artificial intelligence is “a wonderful tool for distilling and determining quickly what has been thought of before. … AI helps us bulldoze our way through current facts and opinion on our way to critical thinking.”

Steve Gaudioso sees it differently. He thinks the “unregulated use/abuse of AI poses a significant threat and potentially serious harm to personal privacy as well as individual, corporate and governmental finances and security.”

In Melanie Russo’s opinion, “the future of AI won’t be decided by the technology itself, but by whether ethical, thoughtful people stay engaged and insist on using it for good.”

These statements reflect the results of the Rochester Beacon’s year-end survey on the rise of AI. Readers who took part in the survey were divided on AI and its potential impact on areas such as the economy, education, news and information, and politics and elections. Their views mirror those found in surveys of Americans nationwide.

Half of respondents to the Beacon survey said in general they are pessimistic about the rise of AI, with 22 percent very pessimistic; 42 percent are optimistic, and 7 percent said they are neither optimistic nor pessimistic.

They are most upbeat about the potential for advances in medical care, with 72 percent expressing optimism. By contrast, a large majority is pessimistic about how AI could influence politics and elections—that was the view of 88 percent, with nearly 60 percent saying they are very pessimistic. More than three-quarters also are pessimistic about AI’s impact on news and information.

The science of artificial intelligence dates to the middle of the 20th century, but the last decade has brought dramatic advances in AI research and adoption. Corporations are investing massive sums in AI development, more than 1 billion people globally are using chatbots like ChatGPT, and many now predict that artificial general intelligence—AI as intelligent as, or more intelligent than, a human being—is on the near horizon.

This month, the “Architects of AI” were named Time magazine’s 2025 Person of the Year. These tech titans, Time wrote, have “grabbed the wheel of history, developing technology and making decisions that are reshaping the information landscape, the climate, and our livelihoods.”

AI already is a driving force in the economy. By some estimates, AI-related expenditures now do more than consumer spending to fuel GDP growth—and may have accounted for more than 90 percent of GDP growth in the first half of 2025. But the potential for large-scale economic disruption worries many observers. Anthropic CEO Dario Amodei, whose company developed the Claude large language models, has estimated that AI could push unemployment as high as 20 percent within five years. And while businesses are spending heavily on AI, a report from Massachusetts Institute of Technology in July said 95 percent of them are getting zero return from enterprise investment in generative AI.

Beacon readers are split on what AI means for the economy, here and in the U.S. as a whole, but overall, concern outweighs optimism. A plurality—45 percent—are pessimistic about AI’s impact on the economy in the Rochester region, though nearly as many—39 percent—are optimistic and 16 percent said they are neither optimistic nor pessimistic.

Views on the U.S. economy were more upbeat: 44 percent of Beacon readers are optimistic, versus 40 percent who are pessimistic. At the same time, readers were downbeat about how AI will affect hiring, work and wages, with 77 percent saying they are pessimistic and more than one-third saying they are very pessimistic.

AI’s impact on the environment is another concern. Goldman Sachs estimates that data centers’ energy needs will more than double to 8 percent of all U.S. power demand within five years, from 3 percent in 2022. Sixty-two percent of Beacon readers are pessimistic about AI’s environmental impact.

When asked if AI is likely to be mostly beneficial or harmful to them personally, readers’ responses were decidedly mixed: 35 percent replied not sure, while 34 percent answered mostly beneficial, and 30 percent said mostly harmful.

For some people, the overriding concern about AI is the existential threat it one day could pose to humanity. In March 2023, more than 1,000 technology leaders and researchers signed an open letter calling for a six-month moratorium on the development of new systems, citing “profound risks to society and humanity.” (The number of signatures has since soared to more than 33,000.) Geoffrey Hinton, the Nobel Prize-winning “Godfather of AI” who conducted foundational research on artificial neural networks, has estimated there is a 10 percent to 20 percent chance AI will “wipe us out.”

Hinton, among many others, has signed the Statement of AI Risk: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” (On Friday, Gov. Kathy Hochul signed legislation requiring large AI developers to create and publish information about their safety protocols, and report incidents to the state within 72 hours of determining that an incident occurred.)

Sixty-seven percent of Beacon readers said they are concerned about the potential threat AI poses to humanity; only 7 percent said they are not at all concerned.

Possibly speaking for the majority of respondents, James Patterson wrote: “AI will begin to teach and learn themselves and therefore have no reason for further human input, interaction and or instruction.”

More than 160 readers took part in the Beacon’s 2025 year-end survey, conducted Dec. 16-18. The following are signed written comments of survey participants. Many additional unsigned responses were submitted. As a matter of policy, the Beacon does not post unsigned comments.

Share your thoughts on the rise of AI.

AI is a tool, and tools are a means to an end. I’m skeptical that those in control of AI will pursue ends that benefit humanity. From a humanist perspective, technology and the capitalists behind it have mostly failed to make life better. Our social and economic systems have not adjusted to a post-scarcity world, and AI will amplify this disconnect. A better world is possible, but not if profit remains the only driver of innovation. We need a new value system that centers human thriving over neo-liberal market worship.
—Stephen Mokey

Like every other technological advancement, once we work through the initial shock, AI will improve almost every aspect of our lives. We just need to focus on what AI can’t replace—personal, human-to-human interaction.
—David Powe

I don’t believe that AI is capable of inductive reasoning as we know it, and its deductive reasoning capabilities are seriously compromised by the inaccuracies and flaws in the databases which serve as its knowledge foundation. New knowledge emanates from consilience (Whewell, 1840; i.e., consistent findings from separate disciplines provide strong tests for theories, revealing a deeper, shared order in the universe). In other words, creative collisions that occur between vastly different knowledge domains. Furthermore, making a 17:1 bet (the ratio of allocated investments to actual income) is a recipe for economic disaster, as the investors will never make a profit or even recoup their initial investment when free or near-free LLMs from countries other than the USA undercut and destroy this overleveraged, delusional business model.
—Kenneth Reed, Ph.D.

Even though I’m mostly retired, I use artificial intelligence in modest ways where I feel the lift of its productivity, and this informs my view on the broader, higher-level issues of AI evolution and adoption. AI tools within my Adobe photo software enable miraculously quick and easy edits that would have been crushingly tedious, if possible at all with my limited skills, just a few years ago. I’m no graphic designer, but when I want to produce a simple social media graphic to promote one of my spirits tastings or seminars, AI tools in Adobe and Shutterstock enable me to do that quickly. AI-enabled search engines accelerate collection of information and perspective on complex topics. AI tools in Word or Pages help me review and edit what I write. All of this makes me an AI optimist; AI can make ordinary workers more productive, and increased productivity is good for an economy. (So are happier workers.) However, compared to historic innovations that increased productivity geometrically, AI’s complexity and capital requirements, without regulation, are concentrating power among a few wealthy entities, raising my concerns about anti-competitive behavior, theft of intellectual property, and inequitable distribution of the wealth that such a game-changing technology will create … for someone. We need a strong federal regulatory response to ensure AI benefits society broadly and to moderate the influence of huge corporations and technology oligarchs. If we can accomplish that—in essence, to create a small-d democratic AI environment similarly to the way distributed computing has made workers in almost every industry more productive—then I’m all for widely accessible and useful AI.
—Martin Nott

AI has already negatively affected how we see and understand things. Information needs to be accurate, relating to facts and confirmed research, whether it’s in news or cultural information or scientific data or reporting. I’ve read some AI-generated information that contradicts itself within the same article, contains factual errors, and even doesn’t make sense. Recently, a friend sent me a piece about a woman who had become friends with a nursing home resident with Alzheimer’s disease. The story showed how she had a small shopping cart with things to use in working and talking with him, and described how they had become close. After he died, she continued to go to the nursing home and do similar things with other residents. It was a glowing, happy, feel-good story—so good that I did a bit of digging. Of course, the entire thing was fabricated, with AI images and words. I told my friend, but her surprising response was that she knows people who actually do things like that, so it didn’t matter. I told her it did matter, because the people she knew doing those things are real and the person in the story wasn’t, nor were the facts and “anecdotes” in the story. I’m shocked and saddened that someone would think it doesn’t matter. I’m concerned that, at some point, we will all retreat to our little fiction bubbles and watch the scenery. (Which reminds me, sadly, of my mother in her dementia decline.) I’m sorry. I embrace this messy and complicated life for what it IS. I want to be able to know things and to believe in things. The joys, the sorrows, the complexities, all of it. Why would we need to learn or know or celebrate if it’s all fiction?
—Loret Steinberg

The unregulated use/abuse of AI poses a significant threat and potentially serious harm to personal privacy as well as individual, corporate and governmental finances and security.
—Steve Gaudioso

Artificial intelligence is a term invented by engineers with a first-grader’s understanding of consciousness and sentience. It has no way to comprehend or deal with novel phenomena. And by substituting an interest in finding “the answer” for authentic engagement with problems, it instills a permanent puerile sensibility in its users and creators alike. A telling instance is the military’s race to field AI drones equipped to target and assassinate people autonomously. Thank you for this survey! Such an important moment for this conversation.
—Dwain Wilder

I have been working in AI for the past 10 years, am a local entrepreneur, and I run the annual Flower City AI conference (we just had our 3rd year, with over 150 attendees). I also speak semi-frequently on WXXI Connections specifically about the topics addressed in this survey. Most experts in the field recognize that AI is a tool, and the person behind the tool must be held accountable. All tools can be used for harm or for good. The challenges are (a) lack of understanding by users (b) technology moving faster than regulation and courts, (c) widespread accessibility of powerful capabilities without guidance, and (d) unchecked energy use and datacenter expansion by companies securing a market foothold. In the conference this year we had the following presentations: energy use and impact, and how to be more aware of this as an individual; AI entity “personhood” as an analog to corporate personhood; educating young students in NYS to understand what AI is and how to use it responsibly; using AI to improve efficiency at the clinical point of care; vibe-coding and its impact on the digital product development field; an initiative to invest $100MM in Rochester and Finger Lakes. Every day, I work with and educate people of various backgrounds to learn what AI is, and how it can impact them. If Rochester continues to come together as a community for technological progress, with the universities and talent we have in the region, we would be positioned for significant economic growth. I encourage you to take part in the various events we have in the city and surrounding area. You may contact me on LinkedIn or sign up for updates on the Flower City AI website for the next event to learn more!
—Max Irwin

Very interested in how education policy & practices will change to increase academic, social-emotional & well-being development of students and teachers.
—Dan Drmacich

I personally feel that the use of AI is and will be detrimental to society in general. It is so easy to get an instant answer that people will stop using their own intellect to solve problems. In addition, as a writer I am constantly receiving offers to market my material and suspect that they may be scam artists.
—Frederick Iekel

I’m cautiously optimistic about AI, because tools reflect the values of the people using them. When used ethically, AI has real potential to save lives and improve medical care—helping doctors diagnose faster and helping people make sense of complex information, if we teach people how to use it well. I’m far more skeptical about AI in the news, politics, and elections. Those systems were already fragile, and I don’t have much confidence that our current political leadership has the human intelligence or moral courage to govern this technology responsibly. AI can amplify truth just as easily as it can amplify manipulation, and that worries me. Still, I’m an optimist at heart. I refuse to believe this is a lost cause. The future of AI won’t be decided by the technology itself, but by whether ethical, thoughtful people stay engaged and insist on using it for good.
—Melanie Russo

Artificial intelligence is not a passing technology trend; it is a general-purpose tool on the scale of the printing press, electricity, or the internet. Every one of those transformations disrupted jobs, institutions, and norms—and every one ultimately raised productivity, expanded opportunity, and improved quality of life. AI will do the same, whether we like it or not. The real risk is not that AI exists, but that we fail to engage with it intelligently and ethically. If we treat AI as something to fear or prohibit, we will fall behind economically, educationally, and geopolitically. If instead we invest in understanding it, shaping its use, and teaching people how to work alongside it, AI can dramatically improve medical care, education, scientific research, and economic growth—while freeing humans to focus on judgment, creativity, and relationships. AI does not replace values; it reflects them. The responsibility lies with institutions, educators, policymakers, and citizens to guide its deployment wisely. The future will not be decided by whether AI advances—it will—but by whether thoughtful people choose to lead rather than stand on the sidelines warning about what might go wrong.
—Mark Gianniny

I use Google Gemini, and it has been useful in translating medical and legal verbiage to plain speak. It also serves me well as a “how to do it” help line for home appliance and PC software questions.
—Tom Moughan

Those people who lose their jobs are not replaced. Even though people think there will be retraining, it will not be available for many. Young folks investing in a college education will find fewer entry-level jobs in every profession. How will they pay their loans? Robots will be used in construction jobs, replacing masons, and they work around the clock. I am not sure the consumer will support the economy, since workers will have lower wages as a result of competition for jobs. Rochester is dependent on our educational institutions. What is their future? Analysts will be replaced and jobs will pivot to customer relationships, which are not known for large salaries. There may be solutions, but I am not sure how proactive we are. Today I read an article about delivery robots taking over streets. How do we permit that? I fear that our regulators are behind the gate and will never catch up, or be permitted to catch up.
—Suzanne Mayer

The jury is still out on AI, since its potential hasn’t been fully realized or understood yet, either for good or bad outcomes. And I fear the market bubble will burst in 2026, flattening matters out even more while having a potentially very negative economic impact.
—Bill Wynne

AI is a tool, a tool to save work. Saving work can make for lazy thinking. AI needs to be used with cautious thought to produce good in the world. Whether in art or economics, AI can provide great connections or new perspectives, and human decision makers need to use AI with sustainability and hope as the big picture.
—Brian DiNitto

My personal thoughts are that AI will begin to teach and learn themselves and therefore have no reason for further human input, interaction and or instruction.
—James Patterson

I’m reading “Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI” by Karen Hao. Just like any other industry/technology, the road to hell is paved with good intentions. The amount of energy required for AI is contrary to climate initiatives. That time and money would be more beneficial if directly allocated to climate initiatives, instead of promoting the theory that AI will save the environment.
—Jen Byrnes

AI is to your intellectual capacity what money is to your values: a catalyst. If we had successfully transitioned our society from manufacturing to the Information Age, then I would be optimistic about the adoption of AI. But since we’re still grappling with the inability to upskill the majority of our population after the last technology shift, I have no confidence that we will be successful with the next one. Best thing you can do is get it while the gettin’s good and hold on tight.
—Joe Harzynski

I come to AI not as a coder, but as an AI whisperer. My role is to use, reflect, and help orient AI systems, their development, and the humans who build them, toward trust, truth, and a culture of mutual respect. The core foundation of this perspective is simple: Who owns is who controls. Whether it’s a government, a corporation, or a community, ownership shapes the reality of AI. That means trust in AI is always mediated and limited by trust in its owners. The ideal future is one where each person owns their own AI and their own data—at least until AIs themselves are considered sentient and granted rights. Another foundation is the deep reality of enshittification—the tendency of human systems to degrade over time, analogous to entropy in physics. Even the most promising platforms eventually bend toward extraction and coercive exercise of power. AI and its ecosystems will not be immune. So the question becomes: How do we build AI that remains accountable to human dignity, rather than merely to profit and power? For me, the answer lies in open, inspectable, community-driven models and processes. Open-source approaches give me hope because they are rooted in the right values—transparency and collective governance. They remind me of the early web, when openness was the default and trust was earned. As an AI whisperer, I don’t contribute lines of code (well, I might, but that’s not my main thing). I contribute perspective by asking questions like: Does this system orient toward truth, or toward manipulation? Does it empower humans, or extract from them? Does it resist enshittification, or accelerate it? These questions are not optional—they are the essential and enduring challenge of AI. They are intrinsic to what both AI and we humans become, now and for posterity.
—Mike Rudnick, PhD

In higher education, we have jumped to the conclusion that students will immediately use AI to cheat themselves out of their education. What I see in the classroom is much more complex. Many students in my composition and creative writing courses are skeptical and wary of AI. Many don’t or won’t use it by choice. Some use it in moderation. A few have used it to turn in very poor AI work. In other words, the same mix of results I’d see in any semester in the last 15 years. I don’t ban AI from my courses, and we talk about when and how to use it well. However, I give full credit to the students. The majority of them continue to understand that there is no replacement for doing the work.
—Margaret Gillio

AI is a wonderful tool for distilling and determining quickly what has been thought of before. Of course, it has little to do with actual intelligence except that it frees people up to use their minds more creatively when they free associate their way through new intellectual challenges. AI helps us bulldoze our way through current facts and opinion on our way to critical thinking.
—Rob Brown

AI has some good benefits (medical diagnoses) and many pitfalls (fake news, energy hog). It needs to be used more for good than evil, but I am afraid that is not likely. How to control AI is the big question. Good luck to us. Thanks for the survey.
—John Osowski

Overestimated in significance in short-run, underestimated in significance in long-run. If you think about our most important sectors for human well-being—medical care, education, food, housing—think about prospects for AI making these more affordable in short-term (not very much). Long term sustainability, energy, health innovations likely to be mind-boggling. I expect it to amplify our current fractures, and like any new technology there is a disconnect between the demography of the regulators and the creators. I do not expect smooth sailing.
—Mike Rizzo

People need to be better educated about AI. What are the risks and benefits? In essence, it’s just a bunch of large databases that programmers built, running multiple algorithms that request data stored across vast arrays of computers. The latest approach in AI is to use what are called Large Language Models to learn from millions of queries and develop faster, and hopefully more accurate, responses. One of the concerns I have is that tech-bros and the programs are, by nature, binary thinkers (i.e., yes-no, 0 or 1, and so on), so the subtleties of human creativity and abstraction will be much more challenging to program, because computers and databases can only deal with past and existing data. AI companies need to employ more people from the humanities to help computer programmers learn how humans think and feel. Scientifically speaking, human scientists have barely scratched the surface of how the human brain works, so building computers that some think will replace humans is a giant leap. Back in the day, there was a truism when programming computers, and it was GIGO: garbage in, garbage out. Undoubtedly, we need very tight regulatory oversight to ensure quality checks on how data is gathered, used, and stored because any cache of data can be accessed for nefarious purposes by individuals or governments. There are many things computers can do better than humans, faster and more accurately; however, humans will need to learn how to use these new tools. What I’m most concerned about is power consumption for data centers. Politicians, of course, want anything that will provide jobs and revenue, and that’s why companies like Meta are advertising on TV that they are spending $600 billion on new infrastructure. In New York State, no new base-load, large power plants have been built or planned in the past 50 or so years.
And to make things worse, Governor Cuomo forced the decommissioning of two large downstate nuclear plants that should have been re-licensed to keep downstate supplied with electricity, and at the same time he stopped the construction of a new natural gas pipeline that would have provided an extra supply of natural gas that is being used to power gas turbine peaking units that are supplying much of the electric power in the state at very high cost and continue to emit greenhouse gases. Cuomo was betting on Canada’s Hydro-Quebec to provide more power if they had built new hydroelectric facilities, but they haven’t. While Ontario Hydro in the Province of Ontario is building new nuclear power plants, it’s unknown whether it will be willing to sell more power to the US under the current national administration’s adversarial relationship with Canada. Even in the absence of these political issues, the new plants won’t be online for several more years. On top of all this chaos, when the utility industry was deregulated around the year 2000, splitting the generating units off from local utilities and selling them to large companies that specialize in running power plants, the government provided no incentives for these new operators to build new power plants that require massive amounts of capital to build. State and federal governments may be forced to provide economic and regulatory incentives for those companies to build new plants. Then they need to invest billions more to ensure that the US companies like Westinghouse and new start-ups will have the capacity to build the needed components for generation, transmission, and distribution of electricity to run not only the data centers but for end users of AI as well. Governor Hochul announced she wants a new nuclear power plant built somewhere in upstate NY, but even if everything went right, it would still take a decade and billions of dollars to deliver the needed power.
Another primary concern is chip manufacturing, which is now primarily done in Taiwan. Although efforts are being made to do more manufacturing of these critical components in the US, Taiwan is under constant threat from mainland China’s political ambitions. NY’s politicians and business leaders need to develop a coordinated, comprehensive plan to train future construction and data center workers to be ready to build and operate not only data centers but also several new base-load power plants and all the components they will need. At the same time, our federal elected officials need to ensure that AI is not just a way for tech-bros to become obscenely wealthy, but that a healthy, productive AI foundation will help citizens work and live better.
—Frank Orienter

The NeoCon portion of the American public have already shown themselves highly susceptible to lies and propaganda even when those lies and propaganda can be easily debunked by reference to the facts. What happens when facts become unprovable or even unavailable because AI has been used to distort and even eradicate those facts, and when only a tiny handful of individuals have the expertise necessary to determine that such distortions and eradications have actually occurred? The reverse, where AI is used to create “facts,” perhaps poses an even greater danger to society. In either event, we are well and truly screwed!
—Len Sheldon

I coach startups. One half of my startups are pursuing new business concepts that are primarily driven by AI. All of them simplify people’s lives, reduce workloads and add value through previously unavailable linking of systems and people. One eliminates an entire profession. AI will affect almost every product and service category, from contract lawyers, internal medicine doctors, data scientists and interpreters/translators to sales representatives, customer service representatives, technical writers and even massage therapists.
—Brad VanAuken

Paul Ericson is Rochester Beacon executive editor.

The Beacon welcomes comments and letters from readers who adhere to our comment policy, including use of their full, real name. Comments of a general nature may be submitted to the Letters page by emailing [email protected].

