MGLAIR: A Multimodal Cognitive Agent Architecture
Bona, Jonathan P.
MGLAIR (Multimodal Grounded Layered Architecture with Integrated Reasoning) is a cognitive architecture for embodied computational cognitive agents that includes a model of modality and an account of multimodal perception and action. It provides these capabilities as part of a cognitively plausible architecture without necessarily mimicking the implementation of the human brain. MGLAIR can be viewed as a cognitive science investigation into the structures and processes sufficient for this type of cognitive functionality. It is also an engineering project aimed at producing embodied computational agents that effectively carry out tasks involving concurrent multimodal perception and action. As a theory of multimodal perception and action, MGLAIR presents a model of modalities and the mechanisms governing their use in concurrent multimodal sensing and acting.

MGLAIR is a layered architecture. Conscious acting, planning, and reasoning take place within the Knowledge Layer (KL), which comprises SNePS and its subsystems. The Sensori-Actuator Layer (SAL) is embodiment-specific and includes low-level controls for the agent's sensory and motor capabilities. The Perceptuo-Motor Layer (PML) connects the mind (KL) to the body (SAL), grounding conscious symbolic representations through perceptual structures; the PML is itself divided into sub-layers. Perception involves the flow of sense data up through the layers, from the SAL through the PML, and its transformation via a gradation of abstractions into consciously available percepts at the Knowledge Layer. Acting involves the flow of impulses down from the conscious Knowledge Layer through the PML and into the SAL; actions are decomposed along the way into low-level motor commands.

A modality in MGLAIR is a limited resource corresponding to a single afferent or efferent capability of the agent's embodiment. Each modality implements only a limited number of related activities.
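The layered flow described above can be sketched in code. This is an illustrative sketch only: the class and method names are hypothetical, and the actual architecture is implemented atop SNePS rather than as plain Python classes.

```python
# Hypothetical sketch of MGLAIR's three layers and the perception/action
# flows between them. All names here are illustrative assumptions, not
# identifiers from the real (SNePS-based) implementation.

class SensoriActuatorLayer:
    """SAL: embodiment-specific low-level sensor and motor controls."""
    def read_sensor(self):
        return {"camera": "raw-pixels"}           # raw sense data

    def run_motor_command(self, command):
        print(f"executing motor command: {command}")


class PerceptuoMotorLayer:
    """PML: grounds symbols; abstracts sense data upward into percepts
    and decomposes conscious acts downward into motor commands."""
    def abstract(self, sense_data):
        # A stand-in for the gradation of abstractions applied on the way up.
        return {"percept": "red-ball-ahead"} if sense_data else None

    def decompose(self, act):
        # A stand-in for decomposing a conscious act into low-level commands.
        return [f"{act}-step-{i}" for i in range(2)]


class KnowledgeLayer:
    """KL: conscious reasoning, planning, and acting over percepts."""
    def __init__(self):
        self.beliefs = []

    def perceive(self, percept):
        self.beliefs.append(percept)

    def decide(self):
        return "approach-ball"


sal, pml, kl = SensoriActuatorLayer(), PerceptuoMotorLayer(), KnowledgeLayer()

# Perception: sense data flows up, SAL -> PML -> KL.
kl.perceive(pml.abstract(sal.read_sensor()))

# Action: impulses flow down, KL -> PML -> SAL.
for cmd in pml.decompose(kl.decide()):
    sal.run_motor_command(cmd)
```

The point of the sketch is the direction of data flow: raw data only becomes consciously available after passing through the PML's abstraction step, and conscious acts only reach effectors after decomposition.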
Modality mechanisms govern the use of modalities and their integration with the rest of the agent: reasoning, planning, etc. These mechanisms determine the behavior of afferent modalities when an event or state of affairs in the world impinges on the parts of an agent's body that can detect it. The raw sense data is processed within the relevant modality and converted into percepts that are made consciously accessible to the agent. How this is achieved depends on the nature of the modality -- what properties it possesses -- as well as on configurable mechanisms shared by all afferent modalities. Corresponding mechanisms for efferent modalities determine how conscious actions performed by the agent result in the operation of its effectors. This work includes a software implementation of the architecture, which has been used to instantiate embodied agents that sense and act simultaneously using MGLAIR modalities.
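A minimal sketch of modalities as limited resources, assuming a simple queue-and-lock model; the conversion functions, queueing policy, and class names are assumptions for illustration, not MGLAIR's actual mechanisms.

```python
# Illustrative sketch: an afferent modality converts raw sense data into
# percepts; an efferent modality is a limited resource that admits only
# one activity at a time. All names and policies here are assumptions.
import queue
import threading

class AfferentModality:
    """Processes raw sense data within the modality and makes the
    resulting percepts consciously accessible to the agent."""
    def __init__(self, name, to_percept):
        self.name = name
        self.to_percept = to_percept      # modality-specific conversion
        self._percepts = queue.Queue()

    def impinge(self, raw_data):
        # An event in the world impinges on the agent's body...
        self._percepts.put(self.to_percept(raw_data))

    def next_percept(self):
        # ...and is later retrieved as a consciously accessible percept.
        return self._percepts.get_nowait()


class EfferentModality:
    """A limited resource: a lock enforces that only one related
    activity uses the effector at a time."""
    def __init__(self, name, effector):
        self.name = name
        self.effector = effector
        self._lock = threading.Lock()

    def act(self, impulse):
        with self._lock:                  # exclusive use of the modality
            self.effector(impulse)


vision = AfferentModality("vision", lambda raw: f"sees({raw})")
vision.impinge("red-ball")
print(vision.next_percept())              # sees(red-ball)

speech = EfferentModality("speech", lambda u: print(f"says: {u}"))
speech.act("hello")
```

Under this toy model, concurrent multimodal behavior falls out naturally: distinct modality instances (vision, speech, locomotion, ...) each manage their own buffer or lock, so sensing in one modality never blocks acting in another.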