New York, Nov 25 (IANS): Articles from liberal-leaning media have a more negative sentiment toward AI than articles from conservative media, new research has found.
In other words, liberal-leaning media tend to be more opposed to AI than conservative-leaning media, according to the study by Virginia Tech’s Pamplin College of Business in the US.
According to the findings, this opposition stems from liberal-leaning media being more concerned than conservative-leaning media about AI magnifying social biases, such as racial, gender, and income disparities.
As AI’s reach expands, researchers are seeking to understand which sections of society might be more receptive to AI and which sections may be more averse to it.
Authors from Virginia Tech -- Angela Yi, Shreyans Goenka, and Mario Pandelaere -- examined the varied reactions to AI by analysing partisan media sentiment.
Their work was published in the journal Social Psychological and Personality Science.
The researchers also examined how media sentiment toward AI changed after George Floyd’s death.
“Since Floyd’s death ignited a national conversation about social biases in society, his death heightened social bias concerns in the media,” said Yi.
“This, in turn, resulted in the media becoming even more negative towards AI in their storytelling.”
To examine partisan media sentiment toward AI, the researchers compiled a collection of articles written about AI from several media outlets.
The sample included a mix of liberal-leaning outlets, such as The New York Times and The Washington Post, and conservative-leaning outlets, such as The Wall Street Journal and the New York Post.
Goenka stressed that this research is descriptive rather than prescriptive, and no stance is being taken as to the right way to discuss AI.
“We are not stating whether the liberal media is acting optimally, or the conservative media is acting optimally,” he said.
“We are just showing that these differences exist in the media sentiment and that these differences are important to quantify, see, and understand.”
According to Goenka and Yi, their findings may have important implications for future political discussions around AI.