The Risk of AI Addiction: Parallels with Tobacco Warning Labels

Addiction Economy Thought for Today - OpenAI and others warn that their generative AI products may be addictive and harmful to users' social and mental health by mimicking human interaction.

Wondering if this is the same as cigarette companies printing 'This Product May Be Harmful to Your Health' on the pack. Which basically means: 'We are still going to sell it to you anyway, even though we know it might kill you, but just so you know (and now we aren't liable).'

"OpenAI is not alone in recognizing the risk of AI assistants mimicking human interaction. In April, Google DeepMind released a lengthy paper discussing the potential ethical challenges raised by more capable AI assistants. Iason Gabriel, a staff research scientist at the company and a coauthor of the paper, tells WIRED that chatbots’ ability to use language “creates this impression of genuine intimacy,” adding that he himself had found an experimental voice interface for Google DeepMind’s AI to be especially sticky.
“There are all these questions about emotional entanglement,” Gabriel says.

"Such emotional ties may be more common than many realize. Some users of chatbots like Character AI and Replika report antisocial tensions resulting from their chat habits. A recent TikTok with almost a million views shows one user apparently so addicted to Character AI that they use the app while watching a movie in a theater. Some commenters mentioned that they would have to be alone to use the chatbot because of the intimacy of their interactions. “I’ll never be on [Character AI] unless I’m in my room,” wrote one."

I know lots of you love your ChatGPT et al., and I am being a bit of a Luddite about it; it seems fairly pointless for the money to me, but it had better be bloody worth it.

https://lnkd.in/efzr5KQK
