Artificial Intelligence: The Next Manhattan Project?
Could we arrive at the same knowledge as that provided or created by AI through other means eventually?
Francis, David. "Killer Robots: If No One Pulls the Trigger, Who's to Blame?" The Fiscal Times. The Fiscal Times, 23 Dec. 2013.
Web. 09 Feb. 2015. <http://www.thefiscaltimes.com/Articles/2013/12/23/Killer-Robots-If-No-One-Pulls-Trigger-Who-s-Blame>.
Hawking, Stephen, Max Tegmark, Stuart Russell, and Frank Wilczek. "Transcending Complacency on Superintelligent
Machines." The Huffington Post. TheHuffingtonPost.com, 19 Apr. 2014. Web. 09 Feb. 2015. <http://www.huffingtonpost.com/stephen-hawking/artificial-intelligence_b_5174265.html>.
Healey, Jon. "Be Afraid: Robot Experts Say Machines Are Catching Up." Los Angeles Times. Los Angeles Times Media Group,
13 Mar. 2012. Web. 09 Feb. 2015. <http://opinion.latimes.com/opinionla/2012/03/the-singularity-is-closer-than-you-think.html>.
Marcus, Gary. "Why We Should Think About the Threat of Artificial Intelligence." The New Yorker. Condé Nast, 24 Oct.
2013. Web. 09 Feb. 2015. <http://www.newyorker.com/tech/elements/why-we-should-think-about-the-threat-of-artificial-intelligence>.
Scoblete, Greg. "Is AI A Threat To Humanity?" CNN. Cable News Network, 30 Dec. 2014. Web. 09 Feb. 2015. <http://
Statt, Nick. "Artificial Intelligence Experts Sign Open Letter to Protect Mankind from Machines - CNET." CNET. CBS
Interactive Inc., 11 Jan. 2015. Web. 31 Jan. 2015. <http://www.cnet.com/news/artificial-intelligence-experts-sign-open-letter-to-protect-mankind-from-machines/>.
Tegmark, Max, et al. "Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter." FLI. The
Future of Life Institute, n.d. Web. 09 Feb. 2015. <http://futureoflife.org/misc/open_letter>.
Timperly, Jocelyn. "Artificial Intelligence: Can Scientists Stop ‘Negative’ Outcomes?" The Guardian. Guardian News and
Media Limited, 9 Feb. 2015. Web. 09 Feb. 2015. <http://www.theguardian.com/technology/2015/feb/09/artificial-intelligence-can-scientists-stop-negative-outcomes>.
Vardi, Moshe Y. "The Consequences of Machine Intelligence." The Atlantic. Atlantic Media Company, 25 Oct. 2012. Web. 09
Feb. 2015. <http://www.theatlantic.com/technology/archive/2012/10/the-consequences-of-machine-intelligence/264066/>.
AI: Benefits and Risks
Artificial Intelligence: Too Smart For Our Own Good?
Is the advancement of scientific knowledge necessarily always beneficial to humanity?
How should we go about developing artificial intelligence (AI)? How much caution is needed?
"The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence [...] Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls." --The Future of Life Institute open letter, "Research Priorities for Robust and Beneficial Artificial Intelligence"
High-volume data processing/analysis
Ethical/moral problems of military use
Robot vs. human values
Decline in available human jobs
Reliance upon eventually faulty systems
"Robot overlord" scenario
Argue that the benefits of AI outweigh the possible risks
Generally advocate in favor of expanding AI research and accelerating development
Not blind to the possible detriments of AI
Proponents include Demis Hassabis (founder of DeepMind) and Joanna Bryson (AI expert at University of Bath)
Understand the benefits of AI, but stress the importance of mitigating risk and controlling development
Many have different opinions on "responsible" development of AI
Proponents include Max Tegmark, Stephen Hawking, and Elon Musk (of the Future of Life Institute)
Focus more on the risks of AI than the benefits
Diversity of viewpoints on why AI research and development is unwise
Objections can be both moral and practical
Proponents include science fiction authors Daniel Wilson and William Hertling, and data analyst Chris Robson
What unique advantages would AI possess over other means of creating knowledge that would outweigh its potential risks?
Would knowledge created by an AI system be viewed on equal footing with that created by humans?
How could the knowledge created in the process of developing AI be used elsewhere?
How might we define acceptable levels of risk in determining whether or not to pursue a certain line of inquiry?
To what extent is more versatile knowledge inherently more valuable?
To what extent is it possible to verify knowledge that one does not fully understand?
How can the ways of knowing be successfully derived from each other?
If the value of a piece of knowledge is counted once for each of its possible uses, then of two otherwise equally valuable pieces of knowledge, the one with more uses would indeed be more valuable by that measure.
It has been extensively established throughout TOK that the ways of knowing are interconnected. Even those that seem relatively isolated (e.g. reason) connect to at least one other way of knowing that is clearly linked to all the rest, alongside the subtler connections that are most likely present between every possible pair.
Science strives to be as objective as possible, and from an objective point of view there is no "good" or "bad" knowledge, only knowledge put to uses judged good or bad. Some of the most beneficial scientific research is also that which poses the greatest risk to humanity if applied in certain ways.
The nature of the unknown can indeed be extrapolated from the known through inductive and deductive reasoning, as well as imagination, but beyond a certain point of comprehension, either the acquisition of new knowledge or the introduction of faith becomes necessary to continue working with something that is not understood.
Implications for AI
The benefits and risks of AI and similar technology must be weighed carefully, but given the unique insights to be gained from intelligent systems, responsible development of AI agents may become a necessary priority for the advancement of humanity.
Assuming humanity's goal is a state of continual progress and improvement, all scientific advancement is valuable for its potentially unique, versatile, and unforeseeable benefits, but it must be applied cautiously because of its equal potential for harm.