Belief–desire–intention software model

BDI agents are a type of software agent. BDI stands for Belief, Desire and Intention, the three main components of this architecture. The agents are equipped with assumptions about their environment (beliefs), knowledge of desired target states (desires), and intentions describing how those states are to be reached.

Origin of the BDI architecture

The BDI architecture was originally developed in the Rational Agency Project at the Stanford Research Institute. It goes back to the work of Michael Bratman, professor of philosophy at Stanford University, who studied decision making in humans and published his model in 1987. In 1995, it was adapted for practical use by Anand Rao and Michael Georgeff. The artificial agents were endowed with mental attitudes: they were given knowledge about their environment (beliefs), desirable states (desires), and currently pursued plans (intentions).

Construction of a BDI agent

Like classical agents, a BDI agent is equipped with sensors that inform it about its environment and with effectors through which its actions affect the environment. In between sits an interpreter that processes the sensor input and selects actions as output. The crucial difference from other agent architectures lies in the data structures on which the interpreter works; these are described in the following sections.
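As a rough orientation, a minimal sketch of such an interpreter cycle might look as follows. All class and method names here are hypothetical and not taken from any particular BDI framework: beliefs are revised from the sensor input, a desire may be adopted as an intention, and the next intended step is passed to the effectors.

    class BDIAgent:
        def __init__(self):
            self.beliefs = {}       # world knowledge: facts about the environment
            self.desires = []       # candidate goals the agent may adopt
            self.intentions = []    # currently selected plans being pursued

        def sense(self):
            """Sensor input: return a dict of percepts (stubbed here)."""
            return {}

        def act(self, action):
            """Effector output: execute an elementary action (stubbed here)."""
            print(f"executing: {action}")

        def step(self):
            """One interpreter cycle: update beliefs, deliberate, act."""
            percepts = self.sense()
            self.beliefs.update(percepts)              # belief revision from perception
            if not self.intentions and self.desires:
                goal = self.desires.pop(0)             # adopt a goal ...
                self.intentions.append(goal)           # ... as a (here unexpanded) intention
            if self.intentions:
                self.act(self.intentions.pop(0))       # hand the next step to the effectors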

World knowledge (beliefs)

Important for all kinds of AI systems is information about the current state of the world in which the agent operates and reasons. This information is stored in a knowledge base and contains facts about the current environment, the internal state of the agent, and background knowledge that may be required for drawing conclusions.

Storage can use any form of knowledge representation. The knowledge base must be updated continuously, driven by the agent's perceptions and its internal inferences, so that the model of the world remains as accurate as possible.
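A minimal sketch of such a belief base, assuming a simple representation of facts as predicate tuples and hand-written inference rules (both choices are illustrative, not prescribed by the BDI model), might look like this:

    class BeliefBase:
        def __init__(self):
            self.facts = set()     # e.g. {("at", "robot", "room1")}
            self.rules = []        # inference rules: (condition, conclusion)

        def update_from_percepts(self, percepts):
            """Revise beliefs with new sensor data (new facts replace old ones)."""
            for fact in percepts:
                # drop any old belief about the same predicate and subject
                self.facts = {f for f in self.facts if f[:2] != fact[:2]}
                self.facts.add(fact)
            self._infer()

        def _infer(self):
            """Apply simple forward-chaining rules until no new facts appear."""
            changed = True
            while changed:
                changed = False
                for condition, conclusion in self.rules:
                    if condition(self.facts) and conclusion not in self.facts:
                        self.facts.add(conclusion)
                        changed = True

    # Example: if the robot is in room1, the charger located there is reachable.
    beliefs = BeliefBase()
    beliefs.rules.append((
        lambda facts: ("at", "robot", "room1") in facts,
        ("reachable", "robot", "charger"),
    ))
    beliefs.update_from_percepts([("at", "robot", "room1")])
    print(beliefs.facts)  # includes the derived ("reachable", ...) fact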

Goals (desires)

This data structure holds the main goals of the agent, which fundamentally influence its future behavior. A goal is selected from a set of options and pursued for a certain time. Desires are a crucial part of deliberative behavior, because without goals the agent initiates no further actions.

Goal-oriented behavior also makes it possible to keep pursuing the selected goal after a failed action, e.g. by trying an alternative approach or retrying later.
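The following sketch illustrates this goal persistence. The goal name, the two approaches and the helper function are invented for the example and do not come from any specific BDI implementation:

    def pursue(goal, approaches, max_retries=2):
        """Try each known approach for a goal; retry before giving it up."""
        for attempt in range(max_retries + 1):
            for approach in approaches:
                if approach(goal):
                    return True            # goal achieved, intention fulfilled
            # all approaches failed this round; keep the goal and retry
        return False                       # goal is finally dropped

    # Hypothetical approaches for the goal "recharge battery".
    def drive_to_charger(goal):
        return False   # pretend the direct route is blocked

    def call_for_charging_cart(goal):
        return True    # the alternative approach succeeds

    print(pursue("recharge battery", [drive_to_charger, call_for_charging_cart]))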

Intentions

To achieve its desired goal, the agent has a plan library available from which it can select hierarchically organized plans that bring it closer to that goal. The currently selected plans are called intentions.

Reasoning in BDI agents proceeds over these hierarchically organized plans: first, a plan is selected that leads from the initial state toward the target state. Such a plan normally consists of sub-goals, for each of which a separate, hierarchically subordinate plan is selected in turn. This continues down to the level of elementary actions, e.g. commands for driving a motor.
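A minimal sketch of this hierarchical expansion, using a hypothetical plan library in which each plan step is either an elementary action or a sub-goal, might look as follows:

    PLAN_LIBRARY = {
        # goal: list of steps; a step is ("action", name) or ("subgoal", name)
        "deliver package": [("subgoal", "go to pickup"), ("subgoal", "go to drop-off")],
        "go to pickup":    [("action", "motor_forward"), ("action", "motor_turn_left")],
        "go to drop-off":  [("action", "motor_forward"), ("action", "motor_stop")],
    }

    def expand(goal):
        """Recursively expand a goal into a flat sequence of elementary actions."""
        actions = []
        for kind, name in PLAN_LIBRARY[goal]:
            if kind == "action":
                actions.append(name)          # elementary action: keep as-is
            else:
                actions.extend(expand(name))  # sub-goal: select its own sub-plan
        return actions

    print(expand("deliver package"))
    # ['motor_forward', 'motor_turn_left', 'motor_forward', 'motor_stop']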
