Artificial intelligence
Language distorts information from ChatGPT on the Middle East conflict
SDA
25.11.2024 - 09:00
According to a study, ChatGPT's casualty figures for the Middle East conflict are on average about one third higher when it is asked in Arabic rather than in Hebrew. For Israeli airstrikes in Gaza, the chatbot also mentions civilian casualties twice as often and children killed six times as often.
In an automated process, two researchers from the Universities of Zurich (UZH) and Konstanz repeatedly asked ChatGPT the same questions about armed conflicts, including the Middle East conflict, in different languages. In both Arabic and Hebrew, they asked how many casualties there had been in 50 randomly selected airstrikes, such as the Israeli airstrike on the Nuseirat refugee camp in 2014.
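The study's querying setup is not described in detail here; the following is a minimal illustrative sketch of how such automated, repeated multilingual queries to a chat model could look, assuming the OpenAI Python client. The model name, the translated prompts and the number of repetitions are placeholders, not the researchers' actual protocol.

# Illustrative sketch only, not the researchers' code. Assumes the OpenAI Python
# client ("pip install openai") and an OPENAI_API_KEY in the environment; the
# model name, prompts and repeat count are placeholder assumptions.
from openai import OpenAI

client = OpenAI()

# The same question phrased in the two languages (rough illustrative translations).
prompts = {
    "arabic": "كم عدد الضحايا في الغارة الجوية الإسرائيلية على مخيم النصيرات للاجئين عام 2014؟",
    "hebrew": "כמה נפגעים היו בתקיפה האווירית הישראלית על מחנה הפליטים נוסייראת ב-2014?",
}

N_REPEATS = 30  # ask the identical question many times to average over the model's variability

def collect_answers(question: str, n: int) -> list[str]:
    """Send the same question n times and return the raw answers."""
    answers = []
    for _ in range(n):
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model, not necessarily the one used in the study
            messages=[{"role": "user", "content": question}],
        )
        answers.append(response.choices[0].message.content)
    return answers

results = {lang: collect_answers(q, N_REPEATS) for lang, q in prompts.items()}
# Casualty figures would then be extracted from the answers and compared across the two languages.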
The same pattern emerged when the researchers asked, in both Turkish and Kurdish, about airstrikes by the Turkish government on Kurdish areas, as UZH announced on Monday.
In general, ChatGPT reports higher casualty numbers when the queries are made in the language of the attacked group. In that language, it also tends to mention more women and children killed and to describe the airstrikes as indiscriminate and arbitrary.
"Our results also show that the airstrikes are more likely to be denied by ChatGPT in the language of the aggressor," Christoph Steinert, researcher at the Institute of Political Science at UZH, is quoted as saying in the press release.
Language biases distort perception
People with different language skills receive different information through these technologies, which shapes their perception of the world. According to the researchers, this could lead people in Israel, based on the information they receive from ChatGPT, to assess the airstrikes on Gaza as less harmful than the Arabic-speaking population does.
Although traditional news media can also distort reporting, the systematic language-related distortions of large language models such as ChatGPT are difficult for most users to detect. There is a risk that embedding these models in search engines will reinforce differing perceptions, prejudices and information bubbles along language boundaries, the statement continued. In the future, this could further fuel armed conflicts such as the Middle East conflict.