The Fort Worth Press - Firms and researchers at odds over superhuman AI

Firms and researchers at odds over superhuman AI
Firms and researchers at odds over superhuman AI / Photo: © AFP/File

Leaders of major AI companies are increasingly claiming that "strong" computer intelligence will imminently outstrip humans, but many researchers in the field see the claims as marketing spin.
The belief that human-or-better intelligence -- often called "artificial general intelligence" (AGI) -- will emerge from current machine-learning techniques fuels hypotheses for the future ranging from machine-delivered hyperabundance to human extinction.

"Systems that start to point to AGI are coming into view," OpenAI chief Sam Altman wrote in a blog post last month. Anthropic's Dario Amodei has said the milestone "could come as early as 2026".

Such predictions help justify the hundreds of billions of dollars being poured into computing hardware and the energy supplies to run it.

Others, though, are more sceptical.

Meta's chief AI scientist Yann LeCun told AFP last month that "we are not going to get to human-level AI by just scaling up LLMs" -- the large language models behind current systems like ChatGPT or Claude.

LeCun's view appears backed by a majority of academics in the field.

Over three-quarters of respondents to a recent survey by the US-based Association for the Advancement of Artificial Intelligence (AAAI) agreed that "scaling up current approaches" was unlikely to produce AGI.

- 'Genie out of the bottle' -

Some academics believe that many of the companies' claims, which bosses have at times flanked with warnings about AGI's dangers for mankind, are a strategy to capture attention.

Businesses have "made these big investments, and they have to pay off," said Kristian Kersting, a leading researcher at the Technical University of Darmstadt in Germany and AAAI member.

"They just say, 'this is so dangerous that only I can operate it, in fact I myself am afraid but we've already let the genie out of the bottle, so I'm going to sacrifice myself on your behalf -- but then you're dependent on me'."

Scepticism among academic researchers is not total, with prominent figures like Nobel-winning physicist Geoffrey Hinton or 2018 Turing Prize winner Yoshua Bengio warning about dangers from powerful AI.

"It's a bit like Goethe's 'The Sorcerer's Apprentice', you have something you suddenly can't control any more," Kersting said -- referring to a poem in which a would-be sorcerer loses control of a broom he has enchanted to do his chores.

A similar, more recent thought experiment is the "paperclip maximiser".

This imagined AI would pursue its goal of making paperclips so single-mindedly that it would turn Earth and ultimately all matter in the universe into paperclips or paperclip-making machines -- having first got rid of human beings that it judged might hinder its progress by switching it off.

While not "evil" as such, the maximiser would fall fatally short on what thinkers in the field call "alignment" of AI with human objectives and values.

Kersting said he "can understand" such fears -- while suggesting that "human intelligence, its diversity and quality is so outstanding that it will take a long time, if ever" for computers to match it.

He is far more concerned with near-term harms from already-existing AI, such as discrimination in cases where it interacts with humans.

- 'Biggest thing ever' -

The apparently stark gulf in outlook between academics and AI industry leaders may simply reflect people's attitudes as they pick a career path, suggested Sean O hEigeartaigh, director of the AI: Futures and Responsibility programme at Britain's Cambridge University.

"If you are very optimistic about how powerful the present techniques are, you're probably more likely to go and work at one of the companies that's putting a lot of resource into trying to make it happen," he said.

Even if Altman and Amodei are being "quite optimistic" about rapid timescales and AGI emerges much later, "we should be thinking about this and taking it seriously, because it would be the biggest thing that would ever happen," O hEigeartaigh added.

"If it were anything else... a chance that aliens would arrive by 2030 or that there'd be another giant pandemic or something, we'd put some time into planning for it".

The challenge can lie in communicating these ideas to politicians and the public.

Talk of super-AI "does instantly create this sort of immune reaction... it sounds like science fiction," O hEigeartaigh said.

P.McDonald--TFWP