The Fort Worth Press - Firms and researchers at odds over superhuman AI


Firms and researchers at odds over superhuman AI / Photo: © AFP/File

Hype is growing from leaders of major AI companies that "strong" computer intelligence will imminently outstrip humans, but many researchers in the field see the claims as marketing spin.

The belief that human-or-better intelligence -- often called "artificial general intelligence" (AGI) -- will emerge from current machine-learning techniques fuels hypotheses for the future ranging from machine-delivered hyperabundance to human extinction.

"Systems that start to point to AGI are coming into view," OpenAI chief Sam Altman wrote in a blog post last month. Anthropic's Dario Amodei has said the milestone "could come as early as 2026".

Such predictions help justify the hundreds of billions of dollars being poured into computing hardware and the energy supplies to run it.

Others, though, are more sceptical.

Meta's chief AI scientist Yann LeCun told AFP last month that "we are not going to get to human-level AI by just scaling up LLMs" -- the large language models behind current systems like ChatGPT or Claude.

LeCun's view appears backed by a majority of academics in the field.

Over three-quarters of respondents to a recent survey by the US-based Association for the Advancement of Artificial Intelligence (AAAI) agreed that "scaling up current approaches" was unlikely to produce AGI.

- 'Genie out of the bottle' -

Some academics believe that many of the companies' claims, which bosses have at times flanked with warnings about AGI's dangers for mankind, are a strategy to capture attention.

Businesses have "made these big investments, and they have to pay off," said Kristian Kersting, a leading researcher at the Technical University of Darmstadt in Germany and AAAI member.

"They just say, 'this is so dangerous that only I can operate it, in fact I myself am afraid but we've already let the genie out of the bottle, so I'm going to sacrifice myself on your behalf -- but then you're dependent on me'."

Scepticism among academic researchers is not total, with prominent figures like Nobel-winning physicist Geoffrey Hinton or 2018 Turing Prize winner Yoshua Bengio warning about dangers from powerful AI.

"It's a bit like Goethe's 'The Sorcerer's Apprentice', you have something you suddenly can't control any more," Kersting said -- referring to a poem in which a would-be sorcerer loses control of a broom he has enchanted to do his chores.

A similar, more recent thought experiment is the "paperclip maximiser".

This imagined AI would pursue its goal of making paperclips so single-mindedly that it would convert Earth, and ultimately all matter in the universe, into paperclips or paperclip-making machines -- having first eliminated the human beings it judged might hinder its progress by switching it off.

While not "evil" as such, the maximiser would fall fatally short on what thinkers in the field call "alignment" of AI with human objectives and values.

Kersting said he "can understand" such fears -- while suggesting that "human intelligence, its diversity and quality is so outstanding that it will take a long time, if ever" for computers to match it.

He is far more concerned with near-term harms from already-existing AI, such as discrimination in cases where it interacts with humans.

- 'Biggest thing ever' -

The apparently stark gulf in outlook between academics and AI industry leaders may simply reflect people's attitudes as they pick a career path, suggested Sean O hEigeartaigh, director of the AI: Futures and Responsibility programme at Britain's Cambridge University.

"If you are very optimistic about how powerful the present techniques are, you're probably more likely to go and work at one of the companies that's putting a lot of resource into trying to make it happen," he said.

Even if Altman and Amodei are being "quite optimistic" about rapid timescales and AGI emerges much later, "we should be thinking about this and taking it seriously, because it would be the biggest thing that would ever happen," O hEigeartaigh added.

"If it were anything else... a chance that aliens would arrive by 2030 or that there'd be another giant pandemic or something, we'd put some time into planning for it".

The challenge can lie in communicating these ideas to politicians and the public.

Talk of super-AI "does instantly create this sort of immune reaction... it sounds like science fiction," O hEigeartaigh said.

P.McDonald--TFWP