Mark Bergen recently wrote an article about the potential perils of artificial intelligence. He writes about the Boston-based Future of Life Institute (FLI), which is charged with researching ways to make AI more robust and preventing AI from becoming a destructive force.
While the recently released Terminator film suggests weaponized robots, FLI argues this is not a short-term reality. In their view, it actually distracts from the real issues, such as those covered by these projects (according to The Verge):
- Three projects developing techniques for AI systems to learn what humans prefer from observing our behavior, including projects at UC Berkeley and Oxford University
- A project by Benja Fallenstein at the Machine Intelligence Research Institute on how to keep the interests of superintelligent systems aligned with human values
- A project led by Manuela Veloso from Carnegie Mellon University on making AI systems explain their decisions to humans
- A study by Michael Webb of Stanford University on how to keep the economic impacts of AI beneficial
- A project headed by Heather Roff studying how to keep AI-driven weapons under “meaningful human control”
- A new Oxford-Cambridge research center for studying AI-relevant policy
The full list of 2015 grant winners is available here. FLI received funding from Elon Musk, who has been outspoken about the dangers of AI.
My question is: can we really use our collectively limited brain capacity to develop enough boundaries around AI to contain its “potential” destructive power?