Stephen Hawking and Elon Musk have both famously warned us of the dangers of Artificial Intelligence, but is AI really going to lead to the downfall of humanity?
Perhaps because it made for more shocking copy, most media reporting on this letter spun it as a grim warning against the development of AI, but that does not give the full picture.
TV shows and movies (“Terminator”, “Blade Runner”, “Black Mirror”, and so on) have given us glimpses into dystopian futures where technology run amok spells the end of mankind. It’s easy to feel some trepidation, if not full-blown fear, about AI being misused.
But the point of the letter from the Future of Life Institute is not to frighten everyone. Rather, it’s to ensure that the development of AI is handled with human safety in mind. It underscores the importance of building AI with morality, ethics, and empathy, and argues that if we do so, the technology will continue to help mankind rather than harm it.