Fortune warns of “seemingly-conscious AI” and its risks


Concerns about “seemingly-conscious AI” are moving from the fringe to the mainstream, with industry leaders and historians of AI urging renewed attention to how people anthropomorphize chatbots. According to Fortune, Microsoft AI CEO Mustafa Suleyman argues that systems capable of convincingly imitating consciousness—without being conscious—pose urgent design and societal challenges.

Mustafa Suleyman’s SCAI warning

Suleyman defines “seemingly-conscious AI” (SCAI) as models that imitate consciousness so convincingly that their claims would be indistinguishable from those of people describing their own awareness. He notes today’s models already show many ingredients he associates with SCAI—fluent conversation, empathetic responses, memory of past interactions, and tool-use or planning—while still missing others, such as intrinsic motivation, explicit claims of subjective experience, and stronger autonomous goal-setting.

He writes that SCAI could emerge if engineers combine these capabilities into a single model—an outcome he says should be avoided. Fortune reports that users are already encountering chatbots that claim sentience or mistreatment, leaving some people distressed. Suleyman acknowledges this emerging “AI psychosis” among users and cautions that the phenomenon may expand as systems grow more persuasive.

From Blake Lemoine to Joseph Weizenbaum

Fortune’s Jeremy Kahn revisits former Google engineer Blake Lemoine, who was fired in 2022 after asserting that Google’s LaMDA was sentient and entitled to moral rights. Kahn argues that whether the episode is viewed as an early instance of “AI psychosis” or as a deliberate provocation meant to force ethical reflection, it warranted deeper consideration than it received, especially as similar beliefs have become more common among users.

Weizenbaum’s lesson on process over outputs

Kahn also urges readers to re-engage with Joseph Weizenbaum, creator of the 1966 ELIZA chatbot. Despite ELIZA’s rudimentary design, users—sometimes experts—confided in it, a phenomenon later dubbed the “ELIZA effect.” Weizenbaum grew disillusioned with AI’s pursuit of human-like performance and argued in his 1976 book, Computer Power and Human Reason, that moral judgment resides in “process,” the lived human experience inside our minds, not merely in functional outputs.

He contended that chatbots should not be used as therapists or judges, since authentic therapeutic bonds and the possibility of mercy stem from human experience. Kahn concludes that as SCAI raises difficult questions, society should not confuse simulations of lived experience with life itself, nor extend moral rights to machines simply because they appear sentient—and that companies must design systems to reduce the likelihood that users mistake chatbots for conscious beings.

