Cognitive Systems
Overall Goal of this Field of Research
The term cognitive system denotes an artificial system that takes human cognitive information processing as its prototype and thus aims at mimicking human cognitive abilities. A first motivation for constructing such systems is to gain insight into the nature of consciousness in living beings by computational and engineering means, i.e., through theory building as well as simulations on computers and robotic systems. The main goal here is the study of general information-processing principles crucial for cognitive capabilities, such as the ability to learn. A second ambition is the development of useful technologies that serve a practical purpose.
For a long time, the scientific study of consciousness concentrated on the neural foundations of consciousness alone, while other aspects that contributed to the evolution of our present consciousness - and thus can be regarded as integral factors - were neglected. Two of these factors are the body and the environment of conscious systems. Our brains, and thus our consciousness, evolved as part of our bodies (embodiment) and while our bodies were interacting with their environment (situatedness). Only more recently has this view been expanded to acknowledge that brains are only small parts of larger complex systems, and the interactions of cognitive systems with their environment are now taken into account. For example, perception is not just a passive representation of the world; rather, it is an active process that comprises interaction with and manipulation of the environment. This expanded perspective - that consciousness evolved from bodies interacting with an environment rather than being a product of isolated brains alone - may contribute new insights into the nature of consciousness and its evolution.
At the same time, on the technical side, the first efforts in the development of cognitive systems concentrated on simulating isolated cognitive functions such as logical reasoning or the categorization of stimuli, i.e., on those components of consciousness that are particularly accessible to a computational approach. The idea of investigating the role of interaction in the simulation of cognitive abilities appeared only later and remains an active field of research. For example, the problem of processing huge amounts of sensory data could be reduced enormously if a system were able to actively select, by interaction, those data from the environment deemed useful for the task at hand.
To make a distinction: the ultimate goal of the field of artificial consciousness is the construction of artificial systems with consciousness. While it might in principle be possible to artificially generate kinds of consciousness (e.g., in hybrid systems with both biological and technical components), artificial consciousness would probably not be an exact copy of human consciousness, because it would not have undergone the same evolutionary process. As the generation of artificial consciousness as a whole carries ethically disputed implications, we are not aiming at creating it. Rather, we concentrate on investigating general principles of information processing that are supposed to underlie specific functions of consciousness, such as learning by interaction or spatial cognition (see also core area Computational Neuroscience).
Major Challenges
- To date there exists no integrative, holistic definition of consciousness - one that neuroscientists, philosophers, and computer scientists would agree upon. There is only a general consensus that a few important aspects are key factors. The mechanisms, roles, and interplay of these factors are not yet completely understood, but besides multimodal integration (the binding problem) and the control of attention, memory and learning processes have been identified as crucial to explain for a deeper understanding of consciousness.
- One problem in the investigation of consciousness is posed by the relationship between symbolic and subsymbolic cognition. Humans seem able to perform both forms of information processing: on the one hand, they solve problems by explicit learning and conscious reasoning (symbolic cognition); on the other hand, they perform unaware information processing such as the acquisition of implicit representations of the world (subsymbolic cognition). Although neural processing (i.e., subsymbolic processing) is ultimately the basis for symbolic processing, the symbolic view should be taken into account when building theories and developing cognitive architectures. It is considered likely that symbolic knowledge or abilities on a macro level emerge from numeric processing on a micro level through a dynamic, self-organizing process. The challenge here consists in understanding how exactly the gap between subsymbolic and symbolic processing is bridged, and how this can be simulated and exploited in artificial systems.
Unsolved Problems
Many unsolved problems remain: from the theoretical point of view (in developing computational theories of consciousness), from the point of view of testing hypotheses about consciousness through simulations on computer systems, and from the engineering point of view. Fundamental for solutions in all of these fields are questions of integration, namely
- the integration of theories. One example is the question of how to integrate the narrower view of "isolated" processes on the neural level in the brain with the broader view of bodies interacting in the world.
- the fusion of different methods for more realistic simulations of cognitive capabilities. Examples are the fusion of the concept of interaction with classical machine learning methods or the fusion of subsymbolic and symbolic learning approaches.
- Furthermore, ongoing research activities address the question of how principles such as emergence, adaptivity, self-organization, and active exploration can be instantiated and exploited for the development of useful interactive systems.
Our Expertise
Learning as a key factor of consciousness constitutes one focus of our work. Besides developing theories on general information-processing principles of self-learning and complex systems, we develop new machine learning methodologies, mainly in the fields of self-learning, adaptive systems, and learning by interaction.
With a background in classical machine learning approaches such as statistical and neural methods, we combine methods from opposite ends of the machine learning spectrum. For example, we fuse reinforcement learning with symbolic reasoning methods. In the context of a DFG-funded project, we developed a hierarchical, recurrent system that executes numerical operations on a lower level in such a way that symbolic representations emerge on a higher level. These symbolic representations, namely behavioral rules, in turn determine the focus of attention on the lower level in the next step.
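The idea of symbolic rules emerging from numeric learning and then steering low-level attention can be sketched in toy form. Everything below - the two-feature stimuli, the bandit-style reward, and the rule-extraction heuristic - is an illustrative assumption, not the architecture of the project described above: a tabular Q-learner operates on the numeric level, simple behavioral rules are read off its Q-values, and those rules then restrict which stimulus feature the system attends to.

```python
import random

random.seed(0)

# Low level: tabular Q-learning on a tiny task with two stimulus features.
# Only one feature ("color") predicts reward; the other ("shape") is noise.
STATES = [(c, s) for c in ("red", "green") for s in ("circle", "square")]
ACTIONS = ("press", "wait")

def reward(state, action):
    # "press" pays off only for red stimuli, regardless of shape.
    return 1.0 if (state[0] == "red") == (action == "press") else 0.0

Q = {(st, a): 0.0 for st in STATES for a in ACTIONS}
alpha, eps = 0.2, 0.1
for _ in range(5000):
    st = random.choice(STATES)
    if random.random() < eps:                 # epsilon-greedy exploration
        a = random.choice(ACTIONS)
    else:
        a = max(ACTIONS, key=lambda x: Q[(st, x)])
    Q[(st, a)] += alpha * (reward(st, a) - Q[(st, a)])

# Higher level: extract symbolic behavioral rules from the numeric Q-values.
# A rule keeps only the feature that consistently predicts the best action.
def extract_rules(Q):
    best = {st: max(ACTIONS, key=lambda a: Q[(st, a)]) for st in STATES}
    rules = {}
    for color in ("red", "green"):
        acts = {best[st] for st in STATES if st[0] == color}
        if len(acts) == 1:                    # color alone decides the action
            rules[("color", color)] = acts.pop()
    return rules

rules = extract_rules(Q)

# The rules in turn focus the lower level: decisions now consult only the
# "color" feature and ignore the irrelevant "shape" channel.
def attend_and_act(state, rules):
    return rules.get(("color", state[0]), "wait")

print(rules)
print(attend_and_act(("red", "square"), rules))
```

The same loop structure - numeric learning below, rule extraction above, and rules feeding back as an attentional filter - is the general pattern the paragraph above alludes to; the concrete task and thresholds here are placeholders.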
Knowledge gained from such simulations can be integrated into applied research, namely into the development of technologies. One example is the development of an intelligent 3D scanner that is able to autonomously explore the objects to be scanned in a perception-action cycle and to direct its attention selectively to only those data that are important for the specific task at hand.
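A perception-action cycle of this kind can be illustrated with a deliberately simplified sketch. The geometry below (a ring of surface points standing in for an object, a fixed set of camera angles) and the greedy scoring are illustrative assumptions, not the scanner's actual algorithm: the system repeatedly chooses the viewpoint expected to reveal the most still-unseen surface, scans from there, updates its model, and stops when no view promises new information.

```python
import math

# Toy stand-in for a scanner's perception-action cycle: the "object" is a
# ring of surface points, each viewpoint sees the points whose normals
# roughly face it, and the scanner greedily picks the next view expected
# to reveal the most unseen surface (a next-best-view heuristic).
N_POINTS, N_VIEWS, FOV = 36, 12, math.pi / 2

surface = [2 * math.pi * i / N_POINTS for i in range(N_POINTS)]  # point normals
views = [2 * math.pi * j / N_VIEWS for j in range(N_VIEWS)]      # camera angles

def visible(view, point):
    # Angular distance between camera direction and surface normal.
    d = abs((view - point + math.pi) % (2 * math.pi) - math.pi)
    return d <= FOV / 2

def next_best_view(seen, remaining_views):
    # Action selection: score each candidate view by new surface revealed.
    def gain(v):
        return sum(1 for p in surface if visible(v, p) and p not in seen)
    return max(remaining_views, key=gain)

seen, plan, remaining = set(), [], list(views)
while remaining and len(seen) < N_POINTS:
    v = next_best_view(seen, remaining)              # act: move to the view
    remaining.remove(v)
    new = {p for p in surface if visible(v, p)}      # perceive: scan from it
    if not (new - seen):
        break                                        # nothing new; stop early
    seen |= new
    plan.append(v)

print(f"{len(plan)} views cover {len(seen)}/{N_POINTS} surface points")
```

In this toy geometry the greedy loop covers the full ring with four of the twelve candidate views; the point is the cycle itself - act, perceive, update, re-plan - rather than the particular coverage numbers.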