
How Do Medical AI Tools Affect Women's Health?
SadaNews - Several studies have found that medical tools built on artificial intelligence models are biased against women and racial minorities when diagnosing their health conditions, in ways that can put lives at risk, according to a report published by the Financial Times.
Studies conducted at several leading universities in the United States and the United Kingdom indicate that tools built on large language models tend to downplay the severity of symptoms reported by women and to show less empathy towards racial minorities.
The findings come as major tech companies such as "Microsoft" and "OpenAI" seek to introduce AI tools that reduce the burden on doctors, hoping to speed up diagnosis and move patients to treatment more quickly.
Many doctors have also begun using AI models such as "Gemini" and "ChatGPT", along with medical note-taking tools, to record patients' complaints and identify the underlying problem more quickly.
It is worth noting that "Microsoft" unveiled in June of last year a medical AI tool that it claims is four times better at diagnosing diseases than human doctors.
However, studies conducted at the Massachusetts Institute of Technology found that medical AI tools recommended lower levels of care for female patients, advising some of them to be treated at home rather than seek medical intervention.
A separate study conducted at the same institute found that AI tools showed less empathy towards racial minorities suffering from mental health conditions.
Additionally, a study from the London School of Economics found that Google's "Gemma" model downplayed the severity of both the physical and the psychological problems that women experience. "Gemma" is an open-source AI model from "Google" that is widely used by local authorities in the UK.
The report indicates that this bias stems from how large language models are trained and from the data used to train them: companies rely on freely available data from the internet, which often contains racist language and biases against particular groups.
That bias then carries over into large language models, despite developers' attempts to mitigate it by adding safety restrictions to the models.
For its part, "OpenAI" said that most of the studies tested outdated versions of its models, adding that newer models are less prone to this kind of bias.
Similarly, "Google" said that it takes cases of racial discrimination and bias seriously and is developing stringent safeguards to prevent them entirely.
However, Travis Zack, an assistant professor at the University of California, San Francisco, and chief medical officer at the medical AI information startup "Open Evidence", believes the real solution to this problem is to choose the data that AI models are trained on far more carefully.
Zack adds that the "Open Evidence" tool, which is used by more than 400,000 doctors in the United States, was trained on medical documents and health guidelines written by expert physicians, as well as on medical references used in universities.
Source: Financial Times
