Dartmouth Conference: The Birth of Artificial Intelligence (1956)
John McCarthy convinced Marvin Minsky, Claude Shannon, and Nathaniel Rochester to help him bring together U.S. researchers interested in automata theory, neural nets, and the study of intelligence.
They organized a two-month workshop at Dartmouth in the summer of 1956. The proposal states:
We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.
It is not my aim to surprise or shock you - but the simplest way I can summarize is to say that there are now in the world machines that think, that learn and that create. Moreover, their ability to do these things is going to increase rapidly until - in the visible future - the range of problems they can handle will be coextensive with the range to which the human mind has been applied. [Herbert Simon, 1957]
There were 10 attendees in all:
- John McCarthy
- Marvin Minsky
- Claude Elwood Shannon (April 30, 1916 – February 24, 2001) was an American mathematician, electrical engineer, and cryptographer known as "the father of information theory".
Shannon is noted for having founded information theory with a landmark paper, A Mathematical Theory of Communication, that he published in 1948. He is, perhaps, equally well known for founding digital circuit design theory in 1937, when—as a 21-year-old master's degree student at the Massachusetts Institute of Technology (MIT)—he wrote his thesis demonstrating that electrical applications of Boolean algebra could construct any logical, numerical relationship. Shannon contributed to the field of cryptanalysis for national defense during World War II, including his fundamental work on codebreaking and secure telecommunications.
- Nathaniel Rochester (January 14, 1919 – June 8, 2001) designed the IBM 701, wrote the first assembler and participated in the founding of the field of artificial intelligence.
- Trenchard More was a professor at Dartmouth College. He participated in the 1956 Dartmouth Summer Research Project on Artificial Intelligence. At the 50th anniversary meeting of the Dartmouth Conference, with Marvin Minsky, Geoffrey Hinton, and Simon Osindero, he presented The Future of Network Models and also gave a lecture entitled Routes to the Summit.
He designed a theory of nested rectangular arrays that provided a formal structure used in the development of the Nested Interactive Array Language.
- Arthur Lee Samuel (December 5, 1901 – July 29, 1990) was an American pioneer in the field of computer gaming and artificial intelligence. He coined the term "machine learning" in 1959. The Samuel Checkers-playing Program appears to be the world's first self-learning program, and as such a very early demonstration of the fundamental concept of artificial intelligence (AI). He was also a senior member of the TeX community who devoted much time to giving personal attention to the needs of users and wrote an early TeX manual in 1983.
- Ray Solomonoff (July 25, 1926 – December 7, 2009) invented algorithmic probability and its General Theory of Inductive Inference (also known as Universal Inductive Inference), and was a founder of algorithmic information theory. He was an originator of the branch of artificial intelligence based on machine learning, prediction, and probability. He circulated the first report on non-semantic machine learning in 1956.
- Oliver Gordon Selfridge (10 May 1926 – 3 December 2008) was a pioneer of artificial intelligence. He has been called the "Father of Machine Perception."
Selfridge became a graduate student of Norbert Wiener's at MIT, but did not write up his doctoral research and never earned a Ph.D. While at MIT, he acted as one of the early reviewers for Wiener's Cybernetics book in 1949. He was also technically a supervisor of Marvin Minsky, and helped organize the first ever public meeting on artificial intelligence (AI) with Minsky in 1955. Selfridge wrote important early papers on neural networks, pattern recognition, and machine learning, and his "Pandemonium" paper (1959) is generally recognized as a classic in artificial intelligence. In it, Selfridge introduced the notion of "demons" that record events as they occur, recognize patterns in those events, and may trigger subsequent events according to patterns they recognize. Over time, this idea gave rise to aspect-oriented programming.
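The Pandemonium idea of demons that "shout" and a decision demon that picks the loudest can be sketched as a toy program. This is an illustrative reconstruction, not Selfridge's original design; the feature names and weights below are invented for the example.

```python
# Toy sketch of a Selfridge-style "Pandemonium": feature demons each
# shout a confidence about the input; cognitive demons combine those
# shouts for one hypothesis; a decision demon picks the loudest.

def make_feature_demon(feature):
    # Shouts 1.0 if its feature appears in the input, else 0.0.
    return lambda data: 1.0 if feature in data else 0.0

def make_cognitive_demon(label, weights):
    # Weighs the feature demons' shouts in favor of one hypothesis.
    def demon(shouts):
        return label, sum(weights[f] * s for f, s in shouts.items() if f in weights)
    return demon

def pandemonium(data, feature_demons, cognitive_demons):
    shouts = {name: demon(data) for name, demon in feature_demons.items()}
    # Decision demon: choose the hypothesis shouting loudest.
    return max((d(shouts) for d in cognitive_demons), key=lambda t: t[1])[0]

# Hypothetical demons for a miniature character-recognition task.
features = {"loop": make_feature_demon("o"), "bar": make_feature_demon("-")}
cogs = [make_cognitive_demon("letter O", {"loop": 1.0}),
        make_cognitive_demon("dash", {"bar": 1.0})]
print(pandemonium("foo", features, cogs))  # prints "letter O"
```

The layered, independently shouting demons are the point: no demon sees the whole problem, and recognition emerges from the decision demon arbitrating among them.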
- Allen Newell (/ˈnuːəl, ˈnjuː-/; March 19, 1927 – July 19, 1992) was a researcher in computer science and cognitive psychology at the RAND Corporation and at Carnegie Mellon University's School of Computer Science, Tepper School of Business, and Department of Psychology. He contributed to the Information Processing Language (1956) and two of the earliest AI programs, the Logic Theory Machine (1956) and the General Problem Solver (1957) (with Herbert A. Simon). He was awarded the ACM's A.M. Turing Award along with Herbert A. Simon in 1975 for their basic contributions to artificial intelligence and the psychology of human cognition.
- Herbert Alexander Simon (June 15, 1916 – February 9, 2001) was an American political scientist, economist, sociologist, psychologist, and computer scientist whose research ranged across the fields of cognitive psychology, cognitive science, computer science, public administration, economics, management, philosophy of science, sociology, and political science, unified by studies of decision-making. Simon was among the pioneers of several of today's important scientific domains, including artificial intelligence, information processing, decision-making, problem-solving, organization theory, complex systems, and computer simulation of scientific discovery. He coined the terms bounded rationality and satisficing, and was among the earliest to analyze the architecture of complexity and to propose a preferential attachment mechanism to explain power law distributions.