The Fort Worth Press - AI's blind spot: tools fail to detect their own fakes


AI's blind spot: tools fail to detect their own fakes / Photo: © AFP

When outraged Filipinos turned to an AI-powered chatbot to verify a viral photograph of a lawmaker embroiled in a corruption scandal, the tool failed to detect it was fabricated -- even though it had generated the image itself.

Internet users are increasingly turning to chatbots to verify images in real time, but the tools often fail, raising questions about their visual debunking capabilities at a time when major tech platforms are scaling back human fact-checking.

In many cases, the tools wrongly identify images as real even when they are generated using the same generative models, further muddying an online information landscape awash with AI-generated fakes.

Among them is a fabricated image circulating on social media of Elizaldy Co, a former Philippine lawmaker charged by prosecutors in a multibillion-dollar flood-control corruption scam that sparked massive protests in the disaster-prone country.

The image of Co, whose whereabouts have been unknown since the official probe began, appeared to show him in Portugal.

When online sleuths tracking him asked Google's new AI mode whether the image was real, it incorrectly said it was authentic.

AFP's fact-checkers tracked down its creator and determined that the image was generated using Google AI.

"These models are trained primarily on language patterns and lack the specialized visual understanding needed to accurately identify AI-generated or manipulated imagery," Alon Yamin, chief executive of AI content detection platform Copyleaks, told AFP.

"With AI chatbots, even when an image originates from a similar generative model, the chatbot often provides inconsistent or overly generalized assessments, making them unreliable for tasks like fact-checking or verifying authenticity."

Google did not respond to AFP’s request for comment.

- 'Distinguishable from reality' -

AFP found similar examples of AI tools failing to verify their own creations.

During last month's deadly protests over lucrative benefits for senior officials in Pakistan-administered Kashmir, social media users shared a fabricated image purportedly showing men marching with flags and torches.

An AFP analysis found it was created using Google's Gemini AI model.

But Gemini and Microsoft's Copilot falsely identified it as a genuine image of the protest.

"This inability to correctly identify AI images stems from the fact that they (AI models) are programmed only to mimic well," Rossine Fallorina, from the nonprofit Sigla Research Center, told AFP.

"In a sense, they can only generate things to resemble. They cannot ascertain whether the resemblance is actually distinguishable from reality."

Earlier this year, Columbia University's Tow Center for Digital Journalism tested the ability of seven AI chatbots -- including ChatGPT, Perplexity, Grok, and Gemini -- to verify 10 images from photojournalists of news events.

All seven models failed to correctly identify the provenance of the photos, the study said.

- 'Shocked' -

AFP tracked down the source of the Co photo, which garnered over a million views across social media: a middle-aged web developer in the Philippines, who said he created it "for fun" using Nano Banana, Gemini's AI image generator.

"Sadly, a lot of people believed it," he told AFP, requesting anonymity to avoid a backlash.

"I edited my post -- and added 'AI generated' to stop the spread -- because I was shocked at how many shares it got."

Such cases show how AI-generated photos flooding social platforms can look virtually identical to real imagery.

The trend has fueled concerns as surveys show online users are increasingly shifting from traditional search engines to AI tools for gathering and verifying information.

The shift comes as Meta announced earlier this year it was ending its third-party fact-checking program in the United States, turning over the task of debunking falsehoods to ordinary users under a model known as "Community Notes."

Human fact-checking has long been a flashpoint in hyperpolarized societies, where conservative advocates accuse professional fact-checkers of liberal bias, a charge they reject.

AFP currently works in 26 languages with Meta's fact-checking program, including in Asia, Latin America, and the European Union.

Researchers say AI models can be useful to professional fact-checkers, helping to quickly geolocate images and spot visual clues to establish authenticity. But they caution that such tools cannot replace the work of trained human fact-checkers.

"We can't rely on AI tools to combat AI in the long run," Fallorina said.

burs-ac/sla/sms

J.Barnes--TFWP