The Fort Worth Press - AI's blind spot: tools fail to detect their own fakes


AI's blind spot: tools fail to detect their own fakes / Photo: © AFP

When outraged Filipinos turned to an AI-powered chatbot to verify a viral photograph of a lawmaker embroiled in a corruption scandal, the tool failed to detect it was fabricated -- even though it had generated the image itself.


Internet users are increasingly turning to chatbots to verify images in real time, but the tools often fail, raising questions about their visual debunking capabilities at a time when major tech platforms are scaling back human fact-checking.

In many cases, the tools wrongly identify images as real even when they are generated using the same generative models, further muddying an online information landscape awash with AI-generated fakes.

Among them is a fabricated image circulating on social media of Elizaldy Co, a former Philippine lawmaker charged by prosecutors in a multibillion-dollar flood-control corruption scam that sparked massive protests in the disaster-prone country.

The image of Co, whose whereabouts have been unknown since the official probe began, appeared to show him in Portugal.

When online sleuths tracking him asked Google's new AI mode whether the image was real, it incorrectly said it was authentic.

AFP's fact-checkers tracked down its creator and determined that the image was generated using Google AI.

"These models are trained primarily on language patterns and lack the specialized visual understanding needed to accurately identify AI-generated or manipulated imagery," Alon Yamin, chief executive of AI content detection platform Copyleaks, told AFP.

"With AI chatbots, even when an image originates from a similar generative model, the chatbot often provides inconsistent or overly generalized assessments, making them unreliable for tasks like fact-checking or verifying authenticity."

Google did not respond to AFP’s request for comment.

- 'Distinguishable from reality' -

AFP found similar examples of AI tools failing to verify their own creations.

During last month's deadly protests over lucrative benefits for senior officials in Pakistan-administered Kashmir, social media users shared a fabricated image purportedly showing men marching with flags and torches.

An AFP analysis found it was created using Google's Gemini AI model.

But Gemini and Microsoft's Copilot falsely identified it as a genuine image of the protest.

"This inability to correctly identify AI images stems from the fact that they (AI models) are programmed only to mimic well," Rossine Fallorina, from the nonprofit Sigla Research Center, told AFP.

"In a sense, they can only generate things to resemble. They cannot ascertain whether the resemblance is actually distinguishable from reality."

Earlier this year, Columbia University's Tow Center for Digital Journalism tested the ability of seven AI chatbots -- including ChatGPT, Perplexity, Grok, and Gemini -- to verify 10 images from photojournalists of news events.

All seven models failed to correctly identify the provenance of the photos, the study said.

- 'Shocked' -

AFP tracked down the source of Co's photo that garnered over a million views across social media -- a middle-aged web developer in the Philippines, who said he created it "for fun" using Nano Banana, Gemini's AI image generator.

"Sadly, a lot of people believed it," he told AFP, requesting anonymity to avoid a backlash.

"I edited my post -- and added 'AI generated' to stop the spread -- because I was shocked at how many shares it got."

Such cases show how AI-generated photos flooding social platforms can look virtually identical to real imagery.

The trend has fueled concerns as surveys show online users are increasingly shifting from traditional search engines to AI tools to gather and verify information.

The shift comes as Meta announced earlier this year it was ending its third-party fact-checking program in the United States, turning over the task of debunking falsehoods to ordinary users under a model known as "Community Notes."

Human fact-checking has long been a flashpoint in hyperpolarized societies, where conservative advocates accuse professional fact-checkers of liberal bias, a charge they reject.

AFP currently works in 26 languages with Meta's fact-checking program, including in Asia, Latin America, and the European Union.

Researchers say AI models can be useful to professional fact-checkers, helping to quickly geolocate images and spot visual clues to establish authenticity. But they caution that such tools cannot replace the work of trained human fact-checkers.

"We can't rely on AI tools to combat AI in the long run," Fallorina said.

burs-ac/sla/sms

J.Barnes--TFWP