https://www.fastcompany.com/90983392/why-disinformation-experts-says-the-israel-hamas-conflict-is-a-nightmare-to-investigate
BY CHRIS STOKEL-WALKER
The Israel-Hamas conflict has been a minefield of confusing counter-arguments and controversies, and an information environment that experts investigating mis- and disinformation say is among the worst they've ever experienced.
In the weeks since Hamas launched its terror attack against Israel last month, and Israel responded with a weekslong counterattack, social media has been full of comments, pictures, and video from both sides of the conflict putting forward their case. But alongside real images of the battles going on in the region, plenty of disinformation has been sown by bad actors.
"What is new this time, especially with Twitter, is the clutter of information that the platform has created, or has given a space for people to create, with the way verification is handled," says Pooja Chaudhuri, a researcher and trainer at Bellingcat, which has been working to verify or debunk claims from both the Israeli and Palestinian sides of the conflict, from confirming that Israel Defense Forces struck the Jabalia refugee camp in northern Gaza to debunking the idea that the IDF has blown up some of Gaza's most sacred sites.
Bellingcat has found plenty of claims and counterclaims to investigate, but convincing people of the truth has proven more difficult than in previous situations because of the firmly entrenched views on either side, says Chaudhuri's colleague Eliot Higgins, the site's founder.
"People are thinking in terms of, 'Whose side are you on?' rather than 'What's real,'" Higgins says. "And if you're saying something that doesn't agree with my side, then it has to mean you're on the other side. That makes it very difficult to be involved in the discourse around this stuff, because it's so divided."
For Imran Ahmed, CEO of the Center for Countering Digital Hate (CCDH), there have only been two moments prior to this that have proved as difficult for his organization to monitor and track: One was the disinformation-fueled 2020 U.S. presidential election, and the other was the hotly contested space around the COVID-19 pandemic.
"I can't remember a comparable time. You've got this completely chaotic information ecosystem," Ahmed says, adding that in the weeks since Hamas's October 7 terror attack, social media has become the opposite of a "useful or healthy environment to be in," in stark contrast to what it used to be: a source of reputable, timely information about global events as they happened.
The CCDH has focused its attention on X (formerly Twitter), in particular, and is currently involved in a lawsuit with the social media company, but Ahmed says the problem runs much deeper.
"It's fundamental at this point," he says. "It's not a failure of any one platform or individual. It's a failure of legislators and regulators, particularly in the United States, to get to grips with this." (An X spokesperson has previously disputed the CCDH's findings to Fast Company, taking issue with the organization's research methodology. "According to what we know, the CCDH will claim that posts are not 'actioned' unless the accounts posting them are suspended," the spokesperson said. "The majority of actions that X takes are on individual posts, for example by restricting the reach of a post.")
Ahmed contends that inertia among regulators has allowed antisemitic conspiracy theories to fester online to the extent that many people believe and buy into those concepts. Further, he says it has prevented organizations like the CCDH from properly analyzing the spread of disinformation and those beliefs on social media platforms. "As a result of the chaos created by the American legislative system, we have no transparency legislation. Doing research on these platforms right now is near impossible," he says.
It doesn't help when social media companies are throttling access to their application programming interfaces, through which many organizations like the CCDH do research. "We can't tell if there's more Islamophobia than antisemitism or vice versa," he admits. "But my gut tells me this is a moment in which we are seeing a radical increase in mobilization against Jewish people."
Right at the time when the most insight is needed into how platforms are managing the torrent of dis- and misinformation flooding their apps, there's the least possible transparency.
The issue isn't limited to private organizations. Governments are also struggling to get a handle on how disinformation, misinformation, hate speech, and conspiracy theories are spreading on social media. Some have reached out to the CCDH to try to get clarity.
"In the last few days and weeks, I've briefed governments all around the world," says Ahmed, who declines to name those governments, though Fast Company understands that they may include the U.K. and European Union representatives. Advertisers, too, have been calling on the CCDH to get information about which platforms are safest for them to advertise on.
Deeply divided viewpoints are exacerbated not only by platforms tamping down on their transparency but also by technological advances that make it easier than ever to produce convincing content that can be passed off as authentic. "The use of AI images has been used to show support," Chaudhuri says. This isn't necessarily a problem for trained open-source investigators like those working for Bellingcat, but it is for rank-and-file users who can be hoodwinked into believing generative-AI-created content is real.
And even if those AI-generated images don't sway minds, they can offer another weapon in the armory of those supporting one side or the other: a slur, similar to the use of "fake news" to describe factual claims that don't chime with your beliefs, that can be deployed to discredit legitimate images or video of events.
"What is most interesting is anything that you don't agree with, you can just say that it's AI and try to discredit information that may also be genuine," Chaudhuri says, pointing to users who have claimed an image of a dead baby shared by Israel's account on X was AI-generated (when in fact it was real) as an example of weaponizing claims of AI tampering. "The use of AI in this case," she says, "has been quite problematic."