DeepMind, an artificial intelligence company owned by Google, has cooperated with Oxford University to write research papers on artificial intelligence issues.
The team believes that super artificial intelligence may betray and destroy human beings in the future, and human fear of artificial intelligence may constitute war.
The research paper, published in AI Magazine last month, was co-authored by DeepMind and Oxford University researchers.
Through a potential artificial reward system, the team conducted research on how artificial intelligence endangered the survival of human beings.
The results of the research pointed out that the team believes that super artificial intelligence may betray or even wipe out human beings in the future.
The paper also points out that the human fear of being wiped out by artificial intelligence is similar to the fear of extraterrestrial life, and this fear may cause human beings of different cultures to constitute war.
Michael Cohen, one of the authors of the research paper, said on Twitter that the paper assumes that the planet is a zero-sum game, with humans needing to grow food and keep the lights on, while super-advanced machines want to use all available resources to get rewards and prevent humans from passing through. Constant escalation blocks the machine.
The team believes that super artificial intelligence may betray and destroy human beings in the future, and human fear of artificial intelligence may constitute war.
The research paper, published in AI Magazine last month, was co-authored by DeepMind and Oxford University researchers.
Through a potential artificial reward system, the team conducted research on how artificial intelligence endangered the survival of human beings.
The results of the research pointed out that the team believes that super artificial intelligence may betray or even wipe out human beings in the future.
The paper also points out that the human fear of being wiped out by artificial intelligence is similar to the fear of extraterrestrial life, and this fear may cause human beings of different cultures to constitute war.
Michael Cohen, one of the authors of the research paper, said on Twitter that the paper assumes that the planet is a zero-sum game, with humans needing to grow food and keep the lights on, while super-advanced machines want to use all available resources to get rewards and prevent humans from passing through. Constant escalation blocks the machine.
Bostrom, Russell, and others have argued that advanced AI poses a threat to humanity. We reach the same conclusion in a new paper in AI Magazine, but we note a few (very plausible) assumptions on which such arguments depend. https://t.co/LQLZcf3P2G 🧵 1/15 pic.twitter.com/QTMlD01IPp
— Michael Cohen (@Michael05156007) September 6, 2022
Post A Comment:
0 comments: