Max Tegmark’s Life 3.0 speculates about the consequences of developing “general artificial intelligence”: a computer intelligence that can be applied to a wide variety of problems. This stands in contrast to today’s highly specialized AIs, each focused on a single task such as face recognition or playing chess.
Tegmark is a general AI enthusiast. He is convinced that general intelligence is necessary for the survival of mankind; without it, he worries, the Earth will be destroyed by the sun in a few billion years.
I could live with not worrying about that particular problem for at least another million years and not risk the destruction of mankind with AI in the meantime, but hey…
Life 3.0 is divided into three sections:
The first describes the restrictions faced by AI today.
The second delves into hypothetical scenarios linked to the development of AI, ranging from utopias to dystopias.
The third, and by some miles the most interesting in my opinion, deals with the question of what it takes for something to count as conscious.
Tegmark starts his book with a description of AI developers who take over the world using “contained” AI. His first and foremost concern is AI falling into the wrong hands.
In part two of the book, he gets carried away by other scenarios. From a philosophical perspective, I found the speculative part of the book incredibly annoying.
The problem is that, for me, many of Tegmark’s utopian scenarios sound like dystopias. What he seems to suggest is that we should create ourselves a god: a general artificial intelligence to watch over us, extend our lifespans, expand our reach across the galaxies, or even make us immortal.
This AI will be so much smarter than us that the decisions it makes will always be right. Yet in many of his dystopian scenarios, Tegmark raises the worry that the newly constructed AI might conclude that humans have outlived their usefulness and should be destroyed. Tegmark is unwilling to follow this reasoning to its logical conclusion: if everything general AI does is right, then if it decides to obliterate mankind, that will surely be the “right” decision too. Somehow he still finds this idea worrying.
The other question Tegmark does not really tackle is why we should be developing general AI in the first place. We currently build AI/ML tools for solving specific tasks, without those tools developing a consciousness. Tegmark objects to using a general AI as a “slave” to mankind. But surely the answer here might be to continue building specialized AI without consciousness, and to use it as one uses a washing machine or a dishwasher. We don’t think of these appliances, or treat them, as mechanical slaves; why should we? They are objects.
The key here might be “human-centric” AI, developed not to replace all human work but to remove the aspects of work that humans don’t enjoy. Anyhow, there is some food for thought in this book, even though I found bits of it frustrating.