Inference, Decision, and Artificial Intelligence

CTI staff have conducted both fundamental and applied research in the fields of uncertainty representation, inference, and belief revision. In the course of this work, they have collaborated with leading researchers in the area, such as Zadeh, Shafer, Schum, Lindley, and Shastri.

As part of this work, we have investigated both Bayesian and non-Bayesian representations of uncertainty. One objective of this work has been to represent not simply "betting odds" (as exemplified in classical Bayesian probabilities), but also the amount of knowledge underlying a set of beliefs. CTI staff explored the theoretical underpinnings of uncertainty in a variety of reviews and analyses. Our recent work in this area has involved the development of an artificial intelligence system that reasons about qualitatively different patterns of uncertainty.

Inference is the expansion of one's set of beliefs that occurs when new beliefs are derived from a set of pre-existing beliefs by means of "rules" (e.g., of logic or probability). CTI has investigated, constructed, and tested a variety of mechanisms for inference, from assumption-based truth maintenance to rapid parallel reasoning in a connectionist system.
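The expansion of a belief set by rules can be illustrated with a minimal forward-chaining sketch. This is an illustrative toy, not CTI's inference engine; the rule and belief names are hypothetical.

```python
def expand(beliefs, rules):
    """Repeatedly apply rules (premises -> conclusion) to a belief set
    until no new beliefs can be derived."""
    beliefs = set(beliefs)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # A rule fires when all of its premises are already believed.
            if premises <= beliefs and conclusion not in beliefs:
                beliefs.add(conclusion)
                changed = True
    return beliefs

# Hypothetical rules: each is (set of premises, conclusion).
rules = [
    ({"radar_contact", "no_IFF_response"}, "possible_hostile"),
    ({"possible_hostile", "closing_fast"}, "raise_alert"),
]
expand({"radar_contact", "no_IFF_response", "closing_fast"}, rules)
```

Note that this procedure only ever adds beliefs; it cannot retract one, which is why belief revision, discussed next, requires a separate mechanism.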

Rules of inference, however, are not the same as rules of reasoning. Belief revision involves strategies for deciding what inferences to attempt and when. Belief revision strategies are particularly important when new information conflicts with pre-existing beliefs. In this case, the belief set must contract and not simply expand, and a choice must be made among alternative ways to revise pre-existing beliefs to make them consistent with new information or goals. Belief revision has been one of CTI's prime areas of research.
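One simple way to sketch such a contraction is entrenchment-based revision: when a new belief conflicts with an old one, retract whichever the reasoner is less committed to. The entrenchment scores and belief names below are hypothetical illustrations, not CTI's formalism.

```python
def revise(beliefs, entrenchment, new_belief, conflicts_with):
    """Add new_belief to the belief set; if it conflicts with existing
    beliefs, retract the less entrenched side of each conflict."""
    for old in conflicts_with & beliefs:
        if entrenchment[old] < entrenchment.get(new_belief, float("inf")):
            beliefs.discard(old)      # contract: give up the weaker old belief
        else:
            return beliefs            # keep the old belief; reject the new one
    beliefs.add(new_belief)
    return beliefs

beliefs = {"track_is_friendly"}
entrenchment = {"track_is_friendly": 1, "track_is_hostile": 3}
revise(beliefs, entrenchment, "track_is_hostile", {"track_is_friendly"})
# beliefs now contains "track_is_hostile" and not "track_is_friendly"
```

The interesting strategic questions, which this toy leaves open, are where the entrenchment ordering comes from and when it is worth paying the cost of revision at all.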

CTI's work in this area began with a generic inference engine, called the Non-Monotonic Probabilist, which combines aspects of a formal numerical uncertainty calculus with assumption-based reasoning. The Non-Monotonic Probabilist was applied in the development of systems for several different domains:

  1. an expert system for image understanding (ETL),

  2. the Self-Reconciling Evidential Database, an information management system for national intelligence analysts, and

  3. an in-flight pilot decision aid for route replanning (WPAFB).

CTI's more recent work has involved connectionist implementation of a layered reasoning architecture, in which a reflective (metacognitive) subsystem monitors and regulates the activity of a rapid reflexive (recognitional) subsystem. The reflective subsystem learns effective strategies for belief revision from experience.

CTI's work on uncertainty, inference, and belief revision has led to insights into human reasoning in real-world domains. In particular, it sheds light on some so-called biases in decision making that may involve appropriate reasoning strategies rather than faulty rules of inference. This work on biases, in turn, was the basis for the development of Personalized and Prescriptive Decision Aiding.

This work has also led to research on inferential retrieval, in which reasoning about mental models of a research domain supports more efficient retrieval of relevant documents.

See also:

Reflexive and Reflective Reasoning Architecture
(Office of Naval Research, Arlington, VA; National Science Foundation)

CTI is developing an architecture to model human learning of metacognitive skills and the accelerated learning of domain facts that metacognitive skill (e.g., "learning how to learn") makes possible.

The architecture has been developed and tested in machine learning studies within a naval tactical decision domain.

The design uses a localist connectionist model to support both rapid recognitional domain reasoning and the learning of metacognitive behavior.

Research Agents for Inferential Information Retrieval
(National Science Foundation)

CTI is developing a system that mediates, on the user's behalf, between the user and traditional information retrieval systems. Three products will be developed and integrated in this work.

The first is an interface with which users can express their mental models of the research context that drives their searches. That interface provides relatively intuitive tools for graphing causal and argument relations to represent mental models.

The second product is a research agent that supports critical thinking by helping the user to uncover assumptions, identify incomplete or unreliable arguments, and otherwise focus attention on problems whose resolution may have especially high value.

The third product is an inference engine that leverages model representations to perform fast, parallel inferential retrieval. The engine filters and organizes the results from traditional search engines.
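The filtering step can be sketched as ranking retrieved documents by their overlap with concepts reachable in the user's mental model; here a simple graph traversal stands in for the engine's parallel inference, and all model, concept, and document names are hypothetical.

```python
from collections import deque

def reachable(model, seeds):
    """All concepts reachable from the seed concepts in a causal graph
    (model maps each concept to the concepts it influences)."""
    seen, frontier = set(seeds), deque(seeds)
    while frontier:
        node = frontier.popleft()
        for nxt in model.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

def rank(docs, model, seeds):
    """Order documents by how many model-relevant concepts they mention."""
    relevant = reachable(model, seeds)
    return sorted(docs, key=lambda d: -len(docs[d] & relevant))

# Hypothetical mental model and tagged search results.
model = {"supply_lines": ["fuel_shortage"], "fuel_shortage": ["reduced_sorties"]}
docs = {"doc1": {"weather"}, "doc2": {"fuel_shortage", "reduced_sorties"}}
rank(docs, model, ["supply_lines"])  # doc2 ranked ahead of doc1
```

The point of the sketch is that relevance is judged against the inferential closure of the model, not just the literal query terms, so a document can rank highly even when it never mentions the seed concept itself.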

The integrated system will support collaborative research and critical thinking by enabling researchers to represent and share research contexts and to explore the products of automated inference over these models.

Copyright © 2000-2011 Cognitive Technologies, Inc.