Grief bots are here, right now.
Death is the ultimate mystery. We all die, and even though that should make us familiar with the idea of perpetual nothingness, we still keep as far away as possible from the concept of death itself.
As our society evolved, it pushed away the familiarity we, as animals, once had with death as the counterpart of life: after all, who needs another Debbie Downer next door?
Anyone who has lost someone knows how hard it is to cope with their absence. But what if there were a way to bring them back?
A few years ago, the British TV series “Black Mirror”, which explores the deep and troubled connections between humans and technology, imagined (in the episode “Be Right Back”) re-creating the personality of a deceased person by feeding information such as text messages, pictures and social media profiles into a self-learning computer program.
The result was incredibly accurate, but obviously lacking the emotions and feelings that make us human.
“It’s definitely the future — I’m always for the future… but is it really what’s beneficial for us? Is it letting go, by forcing you to actually feel everything? Or is it just having a dead person in your attic? Where is the line? Where are we? It screws with your brain.”
— Eugenia Kuyda
Eugenia Kuyda, co-founder of the AI startup Luka, was inspired by this idea and driven by the loss of her best friend, Roman Mazurenko, when she decided to make the “grief bot” happen.
The Verge reports that this “Digital Monument” to Kuyda’s deceased friend is built on Google’s open-source machine-learning library TensorFlow, which mimics aspects of the human learning process by “training” a neural network to simulate behavioural responses based on the information fed to it.
In this case, Kuyda’s “grief bot” was jump-started with thousands of text messages, allowing the neural network to develop a kind of “personality” based on Roman Mazurenko’s linguistic skills, style and inclinations.
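The mechanism can be illustrated with a deliberately toy sketch. The real project used a TensorFlow neural network trained on Mazurenko’s actual messages; the code below substitutes a first-order Markov chain over a handful of invented sample messages, which is not the real system or data, but shows the core idea in miniature: a corpus of someone’s texts becomes a model that echoes their word choices and phrasing.

```python
import random
from collections import defaultdict

# Invented sample messages, standing in for a real corpus of texts.
messages = [
    "see you at the gallery tonight",
    "the gallery opens at eight tonight",
    "see you tonight at eight",
]

def build_model(texts):
    """Map each word to the words observed to follow it in the corpus."""
    model = defaultdict(list)
    for text in texts:
        words = text.split()
        for current, nxt in zip(words, words[1:]):
            model[current].append(nxt)
    return model

def generate(model, seed, max_words=8, rng=None):
    """Walk the chain from a seed word, sampling continuations.

    Words that occur more often as continuations are sampled more often,
    so the output loosely imitates the style of the source messages.
    """
    rng = rng or random.Random(0)  # seeded for reproducible output
    out = [seed]
    while len(out) < max_words and model.get(out[-1]):
        out.append(rng.choice(model[out[-1]]))
    return " ".join(out)

model = build_model(messages)
print(generate(model, "see"))
```

A neural network such as the one reported by The Verge generalizes far beyond this, producing novel sentences rather than stitched-together fragments, but the underlying principle is the same: the more messages the model sees, the more convincingly it reproduces a person’s voice.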
“The development of full artificial intelligence could spell the end of the human race. Once humans develop artificial intelligence, it will take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.”
— Stephen Hawking
Sci-fi culture has prepared us for a seemingly unavoidable future in which humans and machines coexist, more or less peacefully. Still, the whole idea ignites an ethical debate about where the balance between science, and specifically AI, and natural life should lie.
Such concerns will inevitably spread through public opinion when even a figure like Stephen Hawking voices legitimate worries about the possibility of “full-on” artificial intelligence; and countless films and novels, in which things typically end poorly for the whole human race, come to mind at once.
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given [sic] it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
— Isaac Asimov’s Three Laws of Robotics
But even assuming that a good balance between humans and machines is reachable and that we could coexist peacefully under Asimov’s three laws of robotics: what would we be doing it for? Do we really need to postpone the idea of death a little further by surrounding ourselves with the digital footprint of our deceased loved ones? Do we really need AI to take over the simple duties that make us human, such as taking care of elderly people? Do we really need robots in order to feel less lonely as we keep drifting apart as a species?
On top of everything: do we really trust ourselves enough, with such great confidence and a touch of pride, to feel ready to imitate Mother Nature’s work?
After all, we all have seen The Terminator and The Matrix.