Women or Men... Who Views Artificial Intelligence as More Dangerous?

SadaNews - Artificial intelligence is often presented as a productivity revolution capable of boosting economic output, accelerating innovation, and reshaping how work is done. A new study suggests, however, that the public does not view these promises uniformly, and that attitudes towards the technology are strongly shaped by gender, especially when its effects on jobs are uncertain.

The study concludes that women perceive artificial intelligence as riskier than men do, and that their support for adopting these technologies declines more steeply as the likelihood of net job gains falls. The researchers warn that AI policies that ignore women's specific concerns, particularly about labor market disruption and unequal opportunities, could deepen the existing gender gap and potentially provoke a political backlash against the technology.

A Gap Not Only Related to Knowledge

The study starts from a simple idea that the benefits and costs of artificial intelligence will not be evenly distributed among everyone. As artificial intelligence spreads throughout the economy, some jobs may be enhanced, others redefined, while certain jobs may disappear or diminish in importance. The study indicates that women are overrepresented in administrative, clerical, and service jobs that are likely to be more susceptible to automation technologies. In contrast, women remain underrepresented in the fields of science, technology, engineering, and mathematics, and in leadership positions that usually provide better access to higher-paying artificial intelligence jobs, which could widen the gender pay gap over time.

The study argues that these real differences in risk exposure and in access to benefits are reflected in attitudes: previous research has found that women tend to be more skeptical than men of earlier waves of automation.

What has remained less clear, however, is why this gap persists. Here the researchers argue that two factors, how individuals handle risk and how exposed they are to it, provide an additional explanation.

Risk Orientation and Risk Exposure

The study focuses on two elements. The first is risk orientation: the extent to which an individual is generally willing to accept uncertainty and trade-offs with unknown outcomes. The second is risk exposure: the likelihood that adopting artificial intelligence will impose a direct cost, or deliver a direct benefit, on an individual, depending on their position in the labor market and other factors.

The researchers hypothesize that women view artificial intelligence as riskier because they are, on average, more averse to risk and also more exposed to AI-driven job disruption. The study stresses that these patterns are not innate traits but the product of social norms, social learning, and occupational structures entrenched over decades.

Testing the Hypothesis

To test this hypothesis, researchers conducted an online survey in November 2023 using a "YouGov" panel. The full sample consisted of 6,056 participants, but the analysis for this study focuses on 3,049 participants who were asked questions about generative artificial intelligence (while the other group was asked comparative questions about trade). The sample included participants from the United States and Canada, two countries that the researchers describe as having similar institutional foundations and labor market structures, despite differing details of AI adoption and regulation.

The researchers measured perceived AI risk with two questions on an 11-point scale: participants were asked whether the risks of generative artificial intelligence outweighed its benefits for them personally, and whether they outweighed its benefits for their community. The two answers were then combined into a single index.

To measure risk orientation, the study used a question common in risk research: do you prefer a guaranteed $1,000, or a 50 percent chance of winning $2,000? The two options have the same expected value, so choosing the guaranteed $1,000 is taken to indicate higher aversion to risk.
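The logic of this lottery item can be made concrete: both options carry the same expected payoff, so a preference for the sure payment reveals an attitude toward uncertainty rather than toward money. A minimal sketch (the dollar amounts come from the survey question; the `classify` helper and its labels are illustrative, not part of the study):

```python
# Expected value of each option in the survey's lottery question.
sure_thing = 1_000                    # guaranteed $1,000
gamble = 0.5 * 2_000 + 0.5 * 0       # 50% chance of $2,000, otherwise nothing

# The two expected values are identical, so the choice isolates risk attitude.
assert sure_thing == gamble == 1_000

def classify(choice: str) -> str:
    """Map a participant's choice to a risk label (illustrative helper)."""
    return "risk averse" if choice == "sure" else "risk tolerant"

print(classify("sure"))    # risk averse
print(classify("gamble"))  # risk tolerant
```

Because neither option dominates in expected value, any systematic preference for the sure payoff is interpretable as risk aversion rather than payoff maximization.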

Measuring risk exposure was more complex, because the labor-market effects of generative artificial intelligence remain uncertain. The study therefore used education as a general proxy for readiness to benefit from technological shifts, with additional tests using measures of occupational exposure to automation and AI on sub-samples of workers.

The survey also included a pre-registered survey experiment that varied the level of economic risk in a corporate AI-adoption scenario. Participants read a case about a company adopting generative AI tools and were randomly assigned a probability that the adoption would lead to net employment gains, ranging from 100 percent (guaranteed gains) through 70 and 50 percent down to 30 percent (the high-risk condition). They were then asked to approve or reject the company's decision.
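The random-assignment design described above can be sketched in a few lines. The probability levels (100, 70, 50, 30 percent) come from the article; the function and variable names are hypothetical, and uniform assignment across conditions is an assumption:

```python
import random

# Probability (in percent) that AI adoption yields net job gains,
# as described in the study: 100 = guaranteed gains, 30 = high risk.
CONDITIONS = [100, 70, 50, 30]

def assign_condition(rng: random.Random) -> int:
    """Randomly assign one participant to a risk condition (illustrative)."""
    return rng.choice(CONDITIONS)

rng = random.Random(42)  # seeded so the sketch is reproducible
sample = [assign_condition(rng) for _ in range(10_000)]

# With uniform assignment, each condition covers roughly a quarter of the sample.
for level in CONDITIONS:
    share = sample.count(level) / len(sample)
    print(level, round(share, 2))
```

Random assignment is what lets the researchers attribute differences in approval across conditions to the stated risk level rather than to participant characteristics.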

What is the Most Significant Result?

The results showed that women are more likely than men to say that the risks of artificial intelligence outweigh the benefits. The study indicates that the percentage of those who perceive risks higher than benefits increases among women by about 11 percent compared to men, a gap roughly the size of the known gender gap in attitudes towards trade, a historically influential issue in political discussions and regulatory decisions.

Upon closer examination, it appears that this gap is strongly associated with risk orientation. Among participants more inclined to take risks, the gap between women and men significantly diminishes or disappears. On the other hand, the gap is most evident among those who prefer certainty. This means that general aversion to risk amplifies caution regarding a technology with uncertain economic outcomes.

The results also point to a role for risk exposure: women viewed artificial intelligence as more dangerous than men in both the college-educated and non-college-educated groups, consistent with women's higher concentration in jobs more prone to automation and their more limited access to higher-paying AI career paths.

Experimental Evidence

The survey experiment shows that both men and women reduce their support for AI adoption as the likelihood of net job gains falls, but women's support declines more steeply as the scenario becomes riskier. At the highest risk level, where the likelihood of net job gains is only 30 percent, women's support is noticeably lower than men's. Conversely, when gains are guaranteed at 100 percent, the gender gap shrinks and is no longer statistically significant, according to the study. In other words, women are not "against artificial intelligence" per se; their support hinges on how clear and assured the economic benefit is.

Who Knows More?

The study also analyzed open-ended responses regarding the biggest benefits and risks of artificial intelligence using text topic modeling. Qualitative differences emerged, as women's responses expressed uncertainty ("I don't know") more frequently and skepticism about the existence of clear economic benefits. In contrast, men's responses focused more on productivity and efficiency, and improving economic processes.

Regarding risks, women's responses concentrated more on job loss and unemployment, while men's responses focused more on malicious uses and broader societal risks. This reinforces the study's conclusion that women, on average, assign more weight to economic risks and express a higher degree of uncertainty regarding artificial intelligence gains.

Importance of the Research

The study argues that these differences are not only social but also political. If women's lower support for AI adoption translates into lower use of its tools at work, women may be less present in the development and governance of these technologies at precisely the moment AI applications are expanding within institutions, meaning their concerns may not be adequately integrated into design, safeguards, and deployment and operational decisions.

The study also suggests that attitudes towards artificial intelligence may become more politicized. If women are more supportive of government intervention to slow adoption under job-loss scenarios, this could open political opportunities: some politicians may adopt protective and regulatory policies to attract women's votes, or wariness of AI may be used as a mobilization tool during elections.

The study does not argue that women reject technology as such; rather, they respond to a risk landscape in which the stakes are unequal, where the promises of artificial intelligence are entangled with uncertain job impacts and unequal opportunities to benefit. For governments and institutions pushing for rapid adoption, the message is clear: AI policies that overlook unequal exposure to job loss, unequal access to high-value job opportunities, and differing perceptions of risk could deepen inequality and erode public trust. Addressing these concerns, through workforce protections, retraining pathways, reducing bias in systems, and inclusive governance, may therefore be necessary not only for fairness but also to preserve the legitimacy of the transition as artificial intelligence reshapes the economy.