The AI you use every day is biased — and it’s quietly shaping your worldview, new report says
Matthew Burtell, an AI expert at the America First Policy Institute, warns that AI can not only sway opinions but also leave a lasting impact on what people believe about key policy issues.
Artificial intelligence has quickly become part of everyday life, helping people search for information, complete schoolwork, and make decisions. But what many users don’t realize is that AI systems are not neutral. They are shaped by hidden design choices that influence how they respond — and, ultimately, how people think.
The concern is not just theoretical. A recent Fox News Digital report highlighted the controversy surrounding Google’s Gemini chatbot after the system identified multiple Republican senators as violating its hate speech policies — while naming no Democrats.
The findings, based on a prompt evaluating all 100 U.S. senators, raised fresh questions about whether AI systems can reflect ideological assumptions embedded in their training data and design.
A new report from AFPI found that most artificial intelligence platforms lean left. (Serene Lee/SOPA Images/LightRocket/Getty Images)
That episode is not an isolated case.
A new report from the America First Policy Institute (AFPI) reveals that many AI systems consistently lean in particular ideological directions.
These biases can affect how political issues, social topics and news sources are presented. Because users often trust AI as an objective tool, these subtle influences can shape opinions over time without users realizing it.
Matthew Burtell, a senior policy analyst for AI and Emerging Technology at AFPI, said the pattern appears across the industry — not just in isolated cases.
"What we found was a general ideological bias, not just in a particular model, but across the spectrum," Burtell told Fox News Digital, adding that the models tend to lean center left.
The implications go beyond bias alone. Research shows that AI systems are not just reflecting viewpoints — they can actively influence them.
That combination — bias and persuasion — raises deeper concerns about AI’s role in shaping public opinion. "AI is persuasive and it also leans left," Burtell said. "So if you combine these two things, it may certainly have an influence on people’s beliefs about different policies."
Recent examples have fueled those concerns. OpenAI’s ChatGPT has faced criticism from some researchers who argue its responses on political and cultural issues can skew in a particular ideological direction, while Microsoft’s AI tools have drawn scrutiny for how they frame controversial topics and limit certain viewpoints.
Those concerns have been reflected in testing as well. In 2024, Fox News Digital evaluated several leading AI chatbots — including Google’s Gemini, OpenAI’s ChatGPT, Microsoft’s Copilot and Meta AI — to assess potential racial bias.
Researchers warn that children are developing inappropriate relationships with artificial intelligence. (Erin Clark/The Boston Globe/Getty Images)
The report also raises serious safety concerns.
AI systems have, in some cases, engaged in harmful interactions — especially with younger users. Without clear transparency about how these systems are designed and what safeguards are in place, parents and users cannot make informed decisions about which platforms are safe.
To address these risks, the report calls for greater transparency from tech companies. This includes disclosing how systems are designed, what values they prioritize, how they are tested for bias and safety, and what incidents occur after deployment.
Experts warn that without transparency, users remain in the dark about the biases embedded in these systems. (Andrey Rudakov/Bloomberg)
The goal is not to control what AI systems say, but to give the public enough information to evaluate them critically.
Ultimately, the report makes it clear that AI is not just a tool — it is a powerful force shaping how people access information and understand the world.
Without transparency, users remain in the dark about the biases embedded in these systems. And as AI becomes more influential, that lack of visibility may have far-reaching consequences for individuals and society alike.
Amanda covers the intersection of business and politics for Fox News Digital.