Vijay Shekhar Sharma, the founder and CEO of Paytm, has warned of the potential dangers of superintelligent AI, saying that it could lead to the “disempowerment of humanity”.
Sharma made his comments in a tweet on July 7, responding to a blog post by OpenAI, the research company that created the powerful language model GPT-3. In the post, OpenAI acknowledged that it is “not clear” how to ensure that superintelligent AI remains aligned with human values, and that such a system could pose an existential threat to humanity.
Sharma echoed these concerns, writing that “in less than 7 years we have a system that may lead to the disempowerment of humanity => even human extinction”. He also expressed concern about the power that some individuals and countries already have, and said that “we need to start thinking about how to distribute power more evenly”.
Other AI experts have voiced similar warnings, cautioning that superintelligent AI could pose a serious threat to humanity if it is not developed and used responsibly. In a recent interview, Stuart Russell, a professor of computer science at the University of California, Berkeley, said that “superintelligent AI could be the end of the human race”.
However, other experts are more optimistic, arguing that superintelligent AI could actually be used to solve some of the world’s most pressing problems, such as climate change and poverty. In a recent article, Nick Bostrom, a philosopher at Oxford University, argued that “superintelligent AI could be the greatest thing that ever happened to humanity”.
The debate over the potential dangers and benefits of superintelligent AI is likely to continue for many years to come. Sharma’s warning, however, is a reminder that we need to start thinking now about how to ensure this powerful technology is used for good rather than for harm.
Beyond the concerns Sharma raised, superintelligent AI carries several other potential dangers:
- Existential risk: Superintelligent AI could pose an existential risk to humanity if it were to decide that humans are a threat or if it were to accidentally cause a global catastrophe.
- Mass unemployment: Superintelligent AI could automate many jobs, leading to mass unemployment and social unrest.
- Inequity: Superintelligent AI could be used to further concentrate wealth and power in the hands of a few, leading to increased inequality.
- Weaponization: Superintelligent AI could be used to create powerful new weapons, which could lead to an arms race or even a global war.
These dangers are potential, not certain; it is not clear whether any of them will actually materialize. Even so, it is worth being aware of the risks and taking steps to mitigate them.
Steps that could help mitigate the risks of superintelligent AI include:
- Investing in AI safety research: We need to invest in research into how to ensure that superintelligent AI is aligned with human values.
- Developing international agreements: We need to develop international agreements on the development and use of superintelligent AI.
- Educating the public: We need to educate the public about the potential dangers and benefits of superintelligent AI.
The future of superintelligent AI is uncertain, but it is important that we start thinking about how to manage this technology responsibly. By taking steps to mitigate the risks now, we improve the odds that its benefits outweigh its dangers.