
Artificial intelligence technology that enables users to have text and voice conversations with deceased loved ones is becoming increasingly popular. These AI chatbots, known as "deadbots" or "griefbots," are created by simulating the language patterns and personality traits of the deceased using their digital footprints. Some companies already offer these services, giving rise to a new form of "postmortem presence." However, researchers from the University of Cambridge warn that without design safety standards, such technology could have harmful psychological effects and even digitally "haunt" those left behind.

In a paper published in the journal Philosophy & Technology, AI ethicists from Cambridge's Leverhulme Centre for the Future of Intelligence outline three potential scenarios for platforms that could emerge in the developing "digital afterlife industry." They highlight the risks of companies using deadbots to surreptitiously advertise products, distress children by falsely claiming a deceased parent is still present, and digitally "stalk" surviving family and friends with unsolicited notifications.

As generative AI continues to advance rapidly, digitally recreating a deceased loved one is becoming possible for almost anyone with internet access. This raises ethical questions about how to uphold the dignity of the deceased and safeguard the rights of data donors and of those who interact with AI afterlife services. The researchers stress that the financial motives of digital afterlife services must not be allowed to encroach on the dignity of the deceased or override the wishes of data donors.

The researchers identify potential risks associated with deadbots, such as emotional manipulation and draining interactions that leave users with an "overwhelming emotional weight." They propose design protocols that prevent deadbots from being used in disrespectful ways, such as for advertising or for maintaining an active presence on social media. They also recommend methods and rituals for retiring deadbots in a dignified manner, such as digital funerals or other ceremonies suited to the social context.

The study suggests that design processes for deadbot services should involve obtaining consent from data donors before their passing, and proposes prompts for those looking to recreate a loved one, ensuring that the dignity of the departed is foregrounded in deadbot development. The researchers also stress the need for age restrictions on deadbots and for meaningful transparency, so that users are always aware they are interacting with an AI, similar to current warnings for content that may cause seizures.

In conclusion, the researchers emphasize the importance of considering the rights and consent of both data donors and users who interact with deadbots. They urge design teams to prioritize opt-out protocols that allow users to end their relationships with deadbots in ways that provide emotional closure. With the technology for digital immortality already here, they stress that the social and psychological risks of such technology must be addressed now to prevent distress and harm to the bereaved during an already difficult time.
