LIDA (cognitive architecture)

The LIDA (Learning Intelligent Distribution Agent) cognitive architecture is an integrated artificial cognitive system that attempts to model a broad spectrum of cognition in biological systems, from low-level perception/action to high-level reasoning. Developed primarily by Stan Franklin and colleagues at the University of Memphis, the LIDA architecture is empirically grounded in cognitive science and cognitive neuroscience. In addition to providing hypotheses to guide further research, the architecture can support control structures for software agents and robots. Providing plausible explanations for many cognitive processes, the LIDA conceptual model is also intended as a tool with which to think about how minds work.

Two hypotheses underlie the LIDA architecture and its corresponding conceptual model: 1) Much of human cognition functions by means of frequently iterated (~10 Hz) interactions, called cognitive cycles, between conscious contents, the various memory systems, and action selection. 2) These cognitive cycles serve as the "atoms" of cognition of which higher-level cognitive processes are composed.

Overview

LIDA is a hybrid architecture: neither purely symbolic nor strictly connectionist, it employs a variety of computational mechanisms chosen for their psychological plausibility. The LIDA cognitive cycle is composed of modules and processes that employ these mechanisms.

Computational mechanisms

The LIDA architecture employs several modules that are designed using computational mechanisms drawn from the "new AI". These include variants of the Copycat Architecture,[1][2] sparse distributed memory,[3][4] the schema mechanism,[5][6] the Behavior Net,[7][8] and the subsumption architecture.[9]
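
As an illustration of one of these mechanisms, the following is a minimal sketch of a Kanerva-style sparse distributed memory. It is a simplified, assumed implementation for illustration only; the number of hard locations, the dimensionality, and the activation radius are arbitrary choices here, not values prescribed by the LIDA architecture.

    import numpy as np

    class SparseDistributedMemory:
        """Minimal Kanerva-style sparse distributed memory (illustrative sketch)."""

        def __init__(self, n_locations=1000, dim=256, radius=112, seed=0):
            rng = np.random.default_rng(seed)
            # Hard locations: fixed random binary addresses.
            self.addresses = rng.integers(0, 2, size=(n_locations, dim))
            # One counter per location per bit; writes accumulate +1 / -1.
            self.counters = np.zeros((n_locations, dim), dtype=int)
            self.radius = radius

        def _active(self, address):
            # Hard locations within the Hamming-distance radius of the cue.
            distances = np.count_nonzero(self.addresses != address, axis=1)
            return distances <= self.radius

        def write(self, address, data):
            # Store a binary vector by adding its bipolar form to active counters.
            bipolar = np.where(np.asarray(data) == 1, 1, -1)
            self.counters[self._active(np.asarray(address))] += bipolar

        def read(self, address):
            # Recall by summing active counters and thresholding each bit at zero.
            sums = self.counters[self._active(np.asarray(address))].sum(axis=0)
            return (sums > 0).astype(int)

    # Autoassociative use: store a pattern, then recall it from a noisy cue.
    rng = np.random.default_rng(1)
    sdm = SparseDistributedMemory()
    pattern = rng.integers(0, 2, 256)
    sdm.write(pattern, pattern)
    noisy_cue = pattern.copy()
    noisy_cue[:10] ^= 1                 # flip the first ten bits
    recalled = sdm.read(noisy_cue)      # should closely match `pattern`

Because each stored item is distributed over many hard locations, recall degrades gracefully with noisy or partial cues, which is one reason such content-addressable memories are considered psychologically plausible.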

Psychological and neurobiological underpinnings

As a comprehensive, conceptual and computational cognitive architecture, the LIDA architecture is intended to model a large portion of human cognition.[10][11] Comprising a broad array of cognitive modules and processes, it attempts to implement and flesh out a number of psychological and neuropsychological theories, including Global Workspace Theory,[12] situated cognition,[13] perceptual symbol systems,[14] working memory,[15] memory by affordances,[16] long-term working memory,[17] and the H-CogAff architecture.[18]

LIDA's cognitive cycle

The LIDA cognitive cycle can be subdivided into three phases: the understanding phase, the attention (consciousness) phase, and the action selection and learning phase.

Beginning the understanding phase, incoming stimuli activate low-level feature detectors in sensory memory. The output engages perceptual associative memory, where higher-level feature detectors feed into more abstract entities such as objects, categories, actions, and events. The resulting percept moves to the Workspace, where it cues both Transient Episodic Memory and Declarative Memory, producing local associations. These local associations are combined with the percept to generate a current situational model, the agent's understanding of what is going on right now.

The attention phase begins with the forming of coalitions from the most salient portions of the current situational model; these coalitions then compete for attention, that is, for a place in the current conscious contents. The conscious contents are then broadcast globally, initiating the action selection and learning phase. New entities and associations are learned, and old ones reinforced, as the conscious broadcast reaches the various forms of memory: perceptual, episodic, and procedural. In parallel with this learning, and using the conscious contents, possible action schemes are instantiated from Procedural Memory and sent to Action Selection, where they compete to be the behavior selected for this cognitive cycle. The selected behavior triggers sensory-motor memory to produce a suitable algorithm for its execution, which completes the cognitive cycle.
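
The flow of a single cognitive cycle can also be summarized in code. The Python sketch below is purely illustrative: the module objects and method names are simplified assumptions made for exposition and do not correspond to the API of the LIDA software framework.

    # Illustrative sketch of one LIDA cognitive cycle. Every module object and
    # method used here (agent.sensory_memory, agent.global_workspace, ...) is an
    # assumed, simplified interface, not the real framework API.
    def cognitive_cycle(agent, stimuli):
        # Understanding phase: sense, recognize, cue memories, build a model.
        features = agent.sensory_memory.detect_features(stimuli)
        percept = agent.perceptual_associative_memory.recognize(features)
        agent.workspace.add(percept)
        local_associations = (agent.transient_episodic_memory.cue(percept)
                              + agent.declarative_memory.cue(percept))
        situation = agent.workspace.build_situational_model(percept, local_associations)

        # Attention (consciousness) phase: salient coalitions compete; the
        # winner becomes the conscious contents and is broadcast globally.
        coalitions = agent.attention_codelets.form_coalitions(situation)
        conscious_contents = agent.global_workspace.compete(coalitions)
        agent.broadcast(conscious_contents)  # triggers learning in perceptual,
                                             # episodic and procedural memory

        # Action selection and learning phase: instantiate schemes, select one
        # behavior, and have sensory-motor memory execute it.
        schemes = agent.procedural_memory.instantiate(conscious_contents)
        behavior = agent.action_selection.select(schemes)
        motor_plan = agent.sensory_motor_memory.plan(behavior)
        agent.execute(motor_plan)

In the conceptual model this whole sequence repeats at roughly 10 Hz, so each call would correspond to one of the "atoms" of cognition described above.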

History

Virtual Mattie (V-Mattie) is a software agent[19] that gathers information from seminar organizers, composes announcements of the coming week's seminars, and mails them each week to a list that it keeps updated, all without human supervision.[20] V-Mattie employed many of the computational mechanisms mentioned above.

Baars' Global Workspace Theory (GWT) inspired the transformation of V-Mattie into Conscious Mattie, a software agent with the same domain and tasks whose architecture included a consciousness mechanism à la GWT. Conscious Mattie was the first functionally, though not phenomenally, conscious software agent. Conscious Mattie gave rise to IDA.

IDA (Intelligent Distribution Agent) was developed for the US Navy[21][22][23] to perform tasks handled by human resources personnel called detailers. At the end of each sailor's tour of duty, he or she is assigned to a new billet; this assignment process is called distribution. The Navy employs almost 300 full-time detailers to make these new assignments. IDA's task is to facilitate this process by automating the role of the detailer. IDA was tested by former detailers and accepted by the Navy. Various Navy agencies supported the IDA project with some $1,500,000 in funding.

The LIDA (Learning IDA) architecture was originally derived from IDA by the addition of several styles and modes of learning,[24][25][26] but has since grown into a much larger, generic software framework.[27][28]

Footnotes

  1. ^ Hofstadter, D. (1995). Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought. New York: Basic Books.
  2. ^ Marshall, J. (2002). Metacat: A self-watching cognitive architecture for analogy-making. In W. D. Gray & C. D. Schunn (eds.), Proceedings of the 24th Annual Conference of the Cognitive Science Society, pp. 631-636. Mahwah, NJ: Lawrence Erlbaum Associates
  3. ^ Kanerva, P. (1988). Sparse Distributed Memory. Cambridge MA: The MIT Press
  4. ^ Rao, R. P. N., & Fuentes, O. (1998). Hierarchical Learning of Navigational Behaviors in an Autonomous Robot using a Predictive Sparse Distributed Memory. Machine Learning, 31, 87-113
  5. ^ Drescher, G. L. (1991). Made-Up Minds: A Constructivist Approach to Artificial Intelligence. Cambridge, MA: The MIT Press
  6. ^ Chaput, H. H., Kuipers, B., & Miikkulainen, R. (2003). Constructivist Learning: A Neural Implementation of the Schema Mechanism. Paper presented at the Proceedings of WSOM '03: Workshop for Self-Organizing Maps, Kitakyushu, Japan
  7. ^ Maes, P. 1989. How to do the right thing. Connection Science 1:291-323
  8. ^ Tyrrell, T. (1994). An Evaluation of Maes's Bottom-Up Mechanism for Behavior Selection. Adaptive Behavior, 2, 307-348
  9. ^ Brooks, R. A. (1991). Intelligence without Representation. Artificial Intelligence, 47, 139–159
  10. ^ Franklin, S., & Patterson, F. G. J. (2006). The LIDA Architecture: Adding New Modes of Learning to an Intelligent, Autonomous, Software Agent. In IDPT-2006 Proceedings (Integrated Design and Process Technology). Society for Design and Process Science
  11. ^ Franklin, S., Ramamurthy, U., D'Mello, S., McCauley, L., Negatu, A., Silva R., & Datla, V. (2007). LIDA: A computational model of global workspace theory and developmental learning. In AAAI Fall Symposium on AI and Consciousness: Theoretical Foundations and Current Approaches. Arlington, VA: AAAI
  12. ^ Baars, B. J. (1988). A cognitive theory of consciousness. Cambridge: Cambridge University Press
  13. ^ Varela, F. J., Thompson, E., & Rosch, E. (1991). The Embodied Mind. Cambridge, Massachusetts: MIT Press
  14. ^ Barsalou, L. W. 1999. Perceptual symbol systems. Behavioral and Brain Sciences 22:577–609
  15. ^ Baddeley, A. D., & Hitch, G. J. (1974). Working memory. In G. A. Bower (Ed.), The Psychology of Learning and Motivation (pp. 47–89). New York: Academic Press
  16. ^ Glenberg, A. M. 1997. What memory is for. Behavioral and Brain Sciences 20:1–19
  17. ^ Ericsson, K. A., and W. Kintsch. 1995. Long-term working memory. Psychological Review 102:211–245
  18. ^ Sloman, A. 1999. What Sort of Architecture is Required for a Human-like Agent? In Foundations of Rational Agency, ed. M. Wooldridge, and A. Rao. Dordrecht, Netherlands: Kluwer Academic Publishers
  19. ^ Franklin, S., & Graesser, A., 1997. Is it an Agent, or just a Program?: A Taxonomy for Autonomous Agents. Proceedings of the Third International Workshop on Agent Theories, Architectures, and Languages, published as Intelligent Agents III, Springer-Verlag, 1997, 21-35
  20. ^ Franklin, S., Graesser, A., Olde, B., Song, H., & Negatu, A. (1996, Nov). Virtual Mattie—an Intelligent Clerical Agent. Paper presented at the Symposium on Embodied Cognition and Action: AAAI, Cambridge, Massachusetts.
  21. ^ Franklin, S., Kelemen, A., & McCauley, L. (1998). IDA: A Cognitive Agent Architecture. In IEEE Conference on Systems, Man and Cybernetics (pp. 2646–2651). IEEE Press
  22. ^ Franklin, S. (2003). IDA: A Conscious Artifact? Journal of Consciousness Studies, 10, 47–66
  23. ^ Franklin, S., & McCauley, L. (2003). Interacting with IDA. In H. Hexmoor, C. Castelfranchi & R. Falcone (Eds.), Agent Autonomy (pp. 159–186 ). Dordrecht: Kluwer
  24. ^ D'Mello, Sidney K., Ramamurthy, U., Negatu, A., & Franklin, S. (2006). A Procedural Learning Mechanism for Novel Skill Acquisition. In T. Kovacs & James A. R. Marshall (Eds.), Proceeding of Adaptation in Artificial and Biological Systems, AISB'06 (Vol. 1, pp. 184–185). Bristol, England: Society for the Study of Artificial Intelligence and the Simulation of Behaviour
  25. ^ Franklin, S. (2005, March 21–23). Perceptual Memory and Learning: Recognizing, Categorizing, and Relating. Paper presented at the Symposium on Developmental Robotics: American Association for Artificial Intelligence (AAAI), Stanford University, Palo Alto, CA, USA
  26. ^ Franklin, S., & Patterson, F. G. J. (2006). The LIDA Architecture: Adding New Modes of Learning to an Intelligent, Autonomous, Software Agent. In IDPT-2006 Proceedings (Integrated Design and Process Technology). Society for Design and Process Science
  27. ^ Franklin, S., & McCauley, L. (2004). Feelings and Emotions as Motivators and Learning Facilitators. In Architectures for Modeling Emotion: Cross-Disciplinary Foundations, AAAI 2004 Spring Symposium Series (Technical Report SS-04-02, pp. 48–51). Stanford University, Palo Alto, California, USA: American Association for Artificial Intelligence
  28. ^ Negatu, A., D'Mello, Sidney K., & Franklin, S. (2007). Cognitively Inspired Anticipation and Anticipatory Learning Mechanisms for Autonomous Agents. In M. V. Butz, O. Sigaud, G. Pezzulo & G. O. Baldassarre (Eds.), Proceedings of the Third Workshop on Anticipatory Behavior in Adaptive Learning Systems (ABiALS 2006) (pp. 108-127). Rome, Italy: Springer Verlag

External links