Symbolic artificial intelligence


Symbolic artificial intelligence is the term for the collection of all methods in artificial intelligence research that are based on high-level "symbolic" (human-readable) representations of problems, logic and search. Symbolic AI was the dominant paradigm of AI research from the mid-1950s until the late 1980s.[1][page needed][2][page needed]
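
As a purely illustrative sketch (the water-jug puzzle, its state encoding, and the helper names below are assumptions of this illustration, not taken from the cited sources), the following Python snippet shows what a "symbolic" treatment of problems and search can look like: states and actions are human-readable symbols, and a solution is found by uninformed search over them.

    # Illustrative sketch: a problem represented with human-readable symbols
    # and solved by breadth-first search, in the spirit of classical symbolic AI.
    from collections import deque

    CAPACITY = (4, 3)        # jug sizes in litres (illustrative values)
    START = (0, 0)           # both jugs start empty
    GOAL = 2                 # target: exactly 2 litres in the first jug

    def successors(state):
        """Yield (action, next_state) pairs; both are readable symbols."""
        a, b = state
        yield "fill A", (CAPACITY[0], b)
        yield "fill B", (a, CAPACITY[1])
        yield "empty A", (0, b)
        yield "empty B", (a, 0)
        pour = min(a, CAPACITY[1] - b)
        yield "pour A into B", (a - pour, b + pour)
        pour = min(b, CAPACITY[0] - a)
        yield "pour B into A", (a + pour, b - pour)

    def solve():
        """Breadth-first search over symbolic states; returns a shortest plan."""
        frontier = deque([(START, [])])
        seen = {START}
        while frontier:
            state, plan = frontier.popleft()
            if state[0] == GOAL:
                return plan
            for action, nxt in successors(state):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, plan + [action]))

    print(solve())   # prints a shortest sequence of readable actions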

John Haugeland gave the name GOFAI ("Good Old-Fashioned Artificial Intelligence") to symbolic AI in his 1985 book Artificial Intelligence: The Very Idea, which explored the philosophical implications of artificial intelligence research. In robotics the analogous term is GOFR ("Good Old-Fashioned Robotics").

The approach is based on the assumption that many aspects of intelligence can be achieved by the manipulation of symbols, an assumption that Allen Newell and Herbert A. Simon formalized in 1976 as the "physical symbol system hypothesis".

The most successful form of symbolic AI is expert systems, which use a network of production rules. Production rules connect symbols in a relationship similar to an If-Then statement. The expert system processes the rules to make deductions and to determine what additional information it needs, i.e. what questions to ask, using human-readable symbols.
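
A minimal sketch of this mechanism is given below; the rules, facts, and the forward_chain helper are invented for illustration and are not drawn from any particular expert-system shell. It shows only the forward deduction step: a real shell would also chain backwards from a goal to decide which questions to put to the user.

    # Minimal sketch of forward chaining over if-then production rules.
    # Each rule pairs a set of condition symbols with a conclusion symbol.
    RULES = [
        ({"has feathers"},            "is a bird"),
        ({"is a bird", "can fly"},    "builds nests in trees"),
        ({"is a bird", "cannot fly"}, "may be a penguin"),
    ]

    def forward_chain(facts):
        """Repeatedly fire any rule whose conditions are all known facts."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in RULES:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)     # deduce a new symbolic fact
                    changed = True
        return facts

    print(forward_chain({"has feathers", "cannot fly"}))
    # -> {'has feathers', 'cannot fly', 'is a bird', 'may be a penguin'}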

Opponents of the symbolic approach include roboticists such as Rodney Brooks, who aims to produce autonomous robots without symbolic representation (or with only minimal representation), and computational intelligence researchers, who apply techniques such as neural networks and optimization to solve problems in machine learning and control engineering.

Symbolic AI was intended to produce general, human-like intelligence in a machine, whereas most modern research is directed at specific sub-problems. General intelligence is now pursued in the sub-field of artificial general intelligence.

Machines were initially designed to formulate outputs from inputs that were represented by symbols. Symbols work well when the input is definite and certain; when uncertainty is involved, for example in formulating predictions, the representation is instead handled with fuzzy logic, which assigns a degree of truth rather than a crisp true or false. A similar move away from crisp symbols is found in artificial neural networks.
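
The contrast can be sketched as follows; the is_tall_crisp and is_tall_fuzzy functions and the 160–190 cm thresholds are assumed purely for illustration. A crisp symbolic predicate is either true or false, whereas a fuzzy membership function returns a degree of truth between 0 and 1.

    # Illustrative contrast between a crisp symbolic predicate and a
    # fuzzy-logic membership function (example values are assumptions).
    def is_tall_crisp(height_cm: float) -> bool:
        """Crisp/symbolic: true once height reaches an exact threshold."""
        return height_cm >= 180

    def is_tall_fuzzy(height_cm: float) -> float:
        """Fuzzy: truth rises gradually from 0 at 160 cm to 1 at 190 cm."""
        return min(1.0, max(0.0, (height_cm - 160) / 30))

    for h in (150, 170, 178, 185, 195):
        print(h, is_tall_crisp(h), round(is_tall_fuzzy(h), 2))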

References

  1. Haugeland, John (1985). Artificial Intelligence: The Very Idea. Cambridge, Mass.: MIT Press. ISBN 0-262-08153-9.
  2. Kosko, Bart (1993). Fuzzy Thinking. Hyperion. ISBN 978-0786880218.