The Fort Worth Press - AI's blind spot: tools fail to detect their own fakes

AI's blind spot: tools fail to detect their own fakes / Photo: © AFP

When outraged Filipinos turned to an AI-powered chatbot to verify a viral photograph of a lawmaker embroiled in a corruption scandal, the tool failed to detect it was fabricated -- even though it had generated the image itself.


Internet users are increasingly turning to chatbots to verify images in real time, but the tools often fail, raising questions about their visual debunking capabilities at a time when major tech platforms are scaling back human fact-checking.

In many cases, the tools wrongly identify images as real even when those images were produced by the same generative models, further muddying an online information landscape awash with AI-generated fakes.

Among them is a fabricated image circulating on social media of Elizaldy Co, a former Philippine lawmaker charged by prosecutors in a multibillion-dollar flood-control corruption scam that sparked massive protests in the disaster-prone country.

The image of Co, whose whereabouts have been unknown since the official probe began, appeared to show him in Portugal.

When online sleuths tracking him asked Google's new AI Mode whether the image was real, it incorrectly said it was authentic.

AFP's fact-checkers tracked down its creator and determined that the image was generated using Google AI.

"These models are trained primarily on language patterns and lack the specialized visual understanding needed to accurately identify AI-generated or manipulated imagery," Alon Yamin, chief executive of AI content detection platform Copyleaks, told AFP.

"With AI chatbots, even when an image originates from a similar generative model, the chatbot often provides inconsistent or overly generalized assessments, making them unreliable for tasks like fact-checking or verifying authenticity."

Google did not respond to AFP’s request for comment.

- 'Distinguishable from reality' -

AFP found similar examples of AI tools failing to verify their own creations.

During last month's deadly protests over lucrative benefits for senior officials in Pakistan-administered Kashmir, social media users shared a fabricated image purportedly showing men marching with flags and torches.

An AFP analysis found it was created using Google's Gemini AI model.

But Gemini and Microsoft's Copilot falsely identified it as a genuine image of the protest.

"This inability to correctly identify AI images stems from the fact that they (AI models) are programmed only to mimic well," Rossine Fallorina, from the nonprofit Sigla Research Center, told AFP.

"In a sense, they can only generate things to resemble. They cannot ascertain whether the resemblance is actually distinguishable from reality."

Earlier this year, Columbia University's Tow Center for Digital Journalism tested the ability of seven AI chatbots -- including ChatGPT, Perplexity, Grok, and Gemini -- to verify 10 images from photojournalists of news events.

All seven models failed to correctly identify the provenance of the photos, the study said.

- 'Shocked' -

AFP tracked down the source of Co's photo that garnered over a million views across social media -- a middle-aged web developer in the Philippines, who said he created it "for fun" using Nano Banana, Gemini's AI image generator.

"Sadly, a lot of people believed it," he told AFP, requesting anonymity to avoid a backlash.

"I edited my post -- and added 'AI generated' to stop the spread -- because I was shocked at how many shares it got."

Such cases show how AI-generated photos flooding social platforms can look virtually identical to real imagery.

The trend has fueled concerns as surveys show online users are increasingly shifting from traditional search engines to AI tools for gathering and verifying information.

The shift comes as Meta announced earlier this year it was ending its third-party fact-checking program in the United States, turning over the task of debunking falsehoods to ordinary users under a model known as "Community Notes."

Human fact-checking has long been a flashpoint in hyperpolarized societies, where conservative advocates accuse professional fact-checkers of liberal bias, a charge they reject.

AFP currently works in 26 languages with Meta's fact-checking program, including in Asia, Latin America, and the European Union.

Researchers say AI models can be useful to professional fact-checkers, helping to quickly geolocate images and spot visual clues that establish authenticity. But they caution that such tools cannot replace the work of trained human fact-checkers.

"We can't rely on AI tools to combat AI in the long run," Fallorina said.

burs-ac/sla/sms

J.Barnes--TFWP