The Fort Worth Press - Firms and researchers at odds over superhuman AI

Firms and researchers at odds over superhuman AI / Photo: © AFP/File

The belief that human-or-better intelligence -- often called "artificial general intelligence" (AGI) -- will emerge from current machine-learning techniques fuels hypotheses for the future ranging from machine-delivered hyperabundance to human extinction.

"Systems that start to point to AGI are coming into view," OpenAI chief Sam Altman wrote in a blog post last month. Anthropic's Dario Amodei has said the milestone "could come as early as 2026".

Such predictions help justify the hundreds of billions of dollars being poured into computing hardware and the energy supplies to run it.

Others, though, are more sceptical.

Meta's chief AI scientist Yann LeCun told AFP last month that "we are not going to get to human-level AI by just scaling up LLMs" -- the large language models behind current systems like ChatGPT or Claude.

LeCun's view appears backed by a majority of academics in the field.

Over three-quarters of respondents to a recent survey by the US-based Association for the Advancement of Artificial Intelligence (AAAI) agreed that "scaling up current approaches" was unlikely to produce AGI.

- 'Genie out of the bottle' -

Some academics believe that many of the companies' claims, which bosses have at times paired with warnings about AGI's dangers for mankind, are a strategy to capture attention.

Businesses have "made these big investments, and they have to pay off," said Kristian Kersting, a leading researcher at the Technical University of Darmstadt in Germany and AAAI member.

"They just say, 'this is so dangerous that only I can operate it, in fact I myself am afraid but we've already let the genie out of the bottle, so I'm going to sacrifice myself on your behalf -- but then you're dependent on me'."

Scepticism among academic researchers is not total, with prominent figures like Nobel-winning physicist Geoffrey Hinton or 2018 Turing Prize winner Yoshua Bengio warning about dangers from powerful AI.

"It's a bit like Goethe's 'The Sorcerer's Apprentice', you have something you suddenly can't control any more," Kersting said -- referring to a poem in which a would-be sorcerer loses control of a broom he has enchanted to do his chores.

A similar, more recent thought experiment is the "paperclip maximiser".

This imagined AI would pursue its goal of making paperclips so single-mindedly that it would turn Earth and ultimately all matter in the universe into paperclips or paperclip-making machines -- having first got rid of human beings that it judged might hinder its progress by switching it off.

While not "evil" as such, the maximiser would fall fatally short on what thinkers in the field call "alignment" of AI with human objectives and values.

Kersting said he "can understand" such fears -- while suggesting that "human intelligence, its diversity and quality is so outstanding that it will take a long time, if ever" for computers to match it.

He is far more concerned with near-term harms from already-existing AI, such as discrimination in cases where it interacts with humans.

- 'Biggest thing ever' -

The apparently stark gulf in outlook between academics and AI industry leaders may simply reflect people's attitudes as they pick a career path, suggested Sean O hEigeartaigh, director of the AI: Futures and Responsibility programme at Britain's Cambridge University.

"If you are very optimistic about how powerful the present techniques are, you're probably more likely to go and work at one of the companies that's putting a lot of resource into trying to make it happen," he said.

Even if Altman and Amodei may be "quite optimistic" about rapid timescales and AGI emerges much later, "we should be thinking about this and taking it seriously, because it would be the biggest thing that would ever happen," O hEigeartaigh added.

"If it were anything else... a chance that aliens would arrive by 2030 or that there'd be another giant pandemic or something, we'd put some time into planning for it".

The challenge can lie in communicating these ideas to politicians and the public.

Talk of super-AI "does instantly create this sort of immune reaction... it sounds like science fiction," O hEigeartaigh said.

P.McDonald--TFWP