Developments in artificial intelligence (AI) have the potential to enable people around the world to flourish in hitherto unimagined ways – and such developments might also give humanity tools to address other sources of risk.
Nevertheless, AI also poses risks of its own. AI systems sometimes behave in ways that surprise people. At the moment, such systems are usually quite narrow in their capabilities – excelling, for example, at playing Go, or at minimizing power consumption in a server facility. If people designed a machine intelligence that was a sufficiently good general reasoner, or even better at general reasoning than people are, it might become difficult for human agents to interfere with its functioning. If it then behaved in a way that did not reflect human values, it might pose a real risk to humanity. Such a machine intelligence might use its intellectual superiority to develop a decisive strategic advantage, and if its behaviour was for some reason incompatible with human well-being, it could then pose an existential risk. Note that this does not depend on the machine intelligence gaining consciousness, or bearing any ill will towards humanity.