Application Fundamentals

Application Components

Agents

According to a classical definition, an intelligent agent is a computational system capable of autonomous action and perception in some environment: its actions are intended to change the environment so as to meet the agent’s design objectives, while perception is the process by which the agent recognises the state of the environment, so as to be able to adapt its behaviour to it.

The main point about agents is that they are autonomous: capable of acting independently, encapsulating control over their internal state. Apart from autonomy, the most typically mentioned properties are:

  • Reactivity
  • Situatedness
  • Pro-activeness
  • Social ability

The key data structures in our agents are beliefs, desires and intentions. One of the most interesting aspects of this approach is that it is inspired by, and based on, a model of human behaviour developed by philosophers: the belief-desire-intention (BDI) model. Belief-desire-intention architectures originated in the work of the Rational Agency project at Stanford Research Institute in the mid-1980s. The origins of the model lie in the theory of human practical reasoning developed by the philosopher Michael Bratman, which focuses particularly on the role of intentions in practical reasoning. (A minimal code sketch of the three structures follows the list below.)

  • Beliefs are information the agent has about the world. This information could be out of date or inaccurate, of course.
  • Desires are all the possible states of affairs that the agent might like to accomplish. Having a desire, however, does not imply that an agent acts upon it: it is a potential influencer of the agent’s actions. Note that it is perfectly reasonable for a rational agent to have desires that are mutually incompatible. We often think and talk of desires as options for an agent.
  • Intentions are the states of affairs that the agent has decided to work towards. Intentions may be goals that are delegated to the agent, or may result from considering options: we think of an agent looking at its options and choosing between them. Options that are selected in this way become intentions. Therefore, we can imagine our agent starting with some delegated goal, and then considering the possible options that are compatible with this delegated goal; the options that it chooses are then intentions, which the agent is committed to.
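
As an illustration, these three structures can be rendered as plain data types. The following Java sketch is purely hypothetical: it belongs to no particular BDI framework, and every name in it is ours.

```java
import java.util.List;
import java.util.Set;

// Illustrative sketch only: hypothetical types, not a real BDI framework's API.

// A belief: information the agent holds about the world (possibly stale or wrong).
record Belief(String predicate, List<Object> terms) {}

// A desire: a state of affairs the agent might like to accomplish;
// an agent's desires may be mutually incompatible.
record Desire(String stateOfAffairs) {}

// An intention: a state of affairs the agent has decided to work towards.
record Intention(Desire chosenOption) {}

// The agent's mental state is then a triple of these structures.
record MentalState(Set<Belief> beliefs, Set<Desire> desires, List<Intention> intentions) {}
```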

The central idea in the BDI model is that agents govern their behaviour on the basis of internal states that mimic cognitive mental states.

The particular model of decision-making underlying the BDI model is known as practical reasoning. Practical reasoning is reasoning directed towards actions — the process of figuring out what to do in order to achieve what is desired.

Practical reasoning consists of two main activities:

  • Deliberation, when the agent decides what states of affairs it desires to achieve. The output of the deliberation phase is a set of intentions: what the agent has committed to achieve or to do.
  • Means-ends reasoning, when the agent decides how to achieve these states of affairs. The output of means-ends reasoning is a selected course of action: the workflow of actions the agent needs to perform to achieve its goals. (A sketch of this two-phase loop follows the list.)
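
This two-phase loop can be sketched in Java as follows. The interface and all of its method names (deliberate, planFor, execute) are hypothetical, made up for illustration; no agent platform is implied.

```java
import java.util.List;

// Illustrative sketch of the practical-reasoning loop; every name is hypothetical.
interface PracticalReasoner {

    // Deliberation: from beliefs and desires, decide which states of affairs
    // to commit to (the output: intentions).
    List<String> deliberate(List<String> beliefs, List<String> desires);

    // Means-ends reasoning: from an intention, produce a course of action
    // (the output: the workflow of actions needed to achieve the goal).
    List<String> planFor(String intention, List<String> beliefs);

    void execute(String action);

    // One pass of practical reasoning: decide what to achieve, then how, then act.
    default void reasonAndAct(List<String> beliefs, List<String> desires) {
        for (String intention : deliberate(beliefs, desires)) {
            List<String> courseOfAction = planFor(intention, beliefs);
            courseOfAction.forEach(this::execute);
        }
    }
}
```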

The Procedural Reasoning System (PRS), originally developed at Stanford Research Institute by Michael Georgeff and Amy Lansky, was perhaps the first agent architecture to explicitly embody the belief−desire−intention paradigm, and has proved to be one of the most durable approaches to developing agents to date.

In the PRS, an agent does no planning from first principles. Instead, it is equipped with a library of pre-compiled plans. These plans are manually constructed, in advance, by the agent programmer. Plans in the PRS each have the following components:

  • a goal: the post-condition of the plan;
  • a context: the pre-condition of the plan; and
  • a body: the ‘recipe’ part of the plan, i.e. the course of basic actions and goals to carry out. (These components map directly onto the sketch after this list.)
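
In code, a plan is then just a triple. The Java record below is a hypothetical sketch, not the actual PRS representation; the BeliefBase type anticipates the Prolog-like facts described next.

```java
import java.util.List;
import java.util.Set;
import java.util.function.Predicate;

// Illustrative sketch of a PRS-style plan; names and types are hypothetical.
record Plan(
    String goal,                   // post-condition: what the plan achieves
    Predicate<BeliefBase> context, // pre-condition: when the plan is applicable
    List<String> body              // recipe: basic actions and goals to carry out
) {}

// A belief base of Prolog-like facts (atomic formulae), e.g. "door(open)".
record BeliefBase(Set<String> facts) {
    boolean holds(String fact) { return facts.contains(fact); }
}
```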

At start-up time a PRS agent will have a collection of such plans, and some initial beliefs about the world. Beliefs in the PRS are represented as Prolog-like facts: essentially, as atomic formulæ of first-order logic. In addition, at start-up, the agent will typically have a top-level goal. This goal acts in a rather similar way to the ‘main’ method in Java or C.

When the agent starts up, the goal to be achieved is pushed onto a stack, called the intention stack. This stack contains all the goals that are pending achievement. The agent then searches through its plan library to see what plans have the goal on the top of the intention stack as their post-condition. Of these, only some will have their pre-condition satisfied, according to the agent’s current beliefs. The set of plans that (i) achieve the goal, and (ii) have their pre-condition satisfied, become the possible options for the agent. The process of selecting between different possible plans is, of course, deliberation. The chosen plan is then executed in its turn; this may involve pushing further goals onto the intention stack, which may then in turn involve finding more plans to achieve these goals, and so on. The process bottoms out with individual actions that may be directly computed (e.g. simple numerical calculations). If a particular plan to achieve a goal fails, then the agent is able to select another plan to achieve this goal from the set of all candidate plans.
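
The loop just described can be reconstructed along the following lines, reusing the hypothetical Plan and BeliefBase types sketched above. This is an illustrative reconstruction, not the original PRS code: the intention stack is implicit in the call stack, deliberation is reduced to naive first-applicable choice, and the "!" prefix marking subgoals is our own convention.

```java
import java.util.List;

// Illustrative reconstruction of the PRS control loop.
class PrsInterpreter {
    private final List<Plan> planLibrary; // pre-compiled plans, written by the programmer
    private final BeliefBase beliefs;     // initial beliefs about the world

    PrsInterpreter(List<Plan> planLibrary, BeliefBase beliefs) {
        this.planLibrary = planLibrary;
        this.beliefs = beliefs;
    }

    // Entry point: the top-level goal plays the role of 'main'.
    boolean achieve(String goal) {
        // Options: plans that (i) have this goal as their post-condition and
        // (ii) have their pre-condition satisfied by the current beliefs.
        List<Plan> options = planLibrary.stream()
            .filter(p -> p.goal().equals(goal))
            .filter(p -> p.context().test(beliefs))
            .toList();
        for (Plan chosen : options) {            // deliberation (naive ordered choice)
            if (executeBody(chosen)) return true;
            // the plan failed: fall through and try the next candidate plan
        }
        return false;                            // no applicable plan succeeded
    }

    private boolean executeBody(Plan plan) {
        for (String step : plan.body()) {
            boolean ok = step.startsWith("!")
                ? achieve(step.substring(1))     // a subgoal: achieve it recursively
                : executePrimitiveAction(step);  // bottoms out in a directly computable action
            if (!ok) return false;
        }
        return true;
    }

    private boolean executePrimitiveAction(String action) {
        return true; // e.g. a simple numerical calculation
    }
}
```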

Artifacts

An artifact is essentially a passive, dynamic, stateful entity, designed to encapsulate and provide some sort of function. The functionality of an artifact is structured in terms of operations, whose execution can be triggered by agents through the operation controls of an artifact’s usage interface. Each operation control is identified by a label (typically equal to the name of the operation to be triggered) and a list of input parameters.

Besides the operation controls, the usage interface might also contain a set of observable properties; that is, properties whose dynamic values can be observed by agents without necessarily interacting with (or operating upon) the artifact.

An operation is the basic unit upon which artifact functionality is structured. The execution of an operation upon an artifact can result both in changes to the artifact’s inner (i.e., non-observable) state, and in the generation of a stream of observable events that can be perceived by agents that are using or simply observing the artifact. It is worth remarking here on the difference between observable properties and observable events. The former are (dynamic, persistent) attributes that belong to an artifact and that can be observed by agents without interacting with it (i.e., without using the operation controls). The latter are non-persistent pieces of information: signals that may also carry an information content.
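
As a concrete illustration, here is a minimal counter artifact written in the style of the CArtAgO Java API; we believe the annotation and method names match that API, but treat them as assumptions rather than a definitive reference.

```java
import cartago.Artifact;
import cartago.OPERATION;

// A counter artifact sketched in CArtAgO style (API names assumed).
public class Counter extends Artifact {

    void init() {
        // An observable property: a persistent attribute that agents can
        // observe without using any operation control.
        defineObsProperty("count", 0);
    }

    @OPERATION
    void inc() {
        // Update the observable state...
        var count = getObsProperty("count");
        count.updateValue(count.intValue() + 1);
        // ...and emit an observable event: a non-persistent signal that
        // also carries an information content (the new value).
        signal("tick", count.intValue());
    }
}
```

An observing agent thus perceives count as a persistent property, while tick is a transient event, gone once perceived.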

Operation execution can be conceived (from a conceptual point of view) as a process combining the execution of possibly multiple guarded operation steps, where guards relate to the inner artifact state. In order to avoid interference, the execution of a single operation step is atomic. Overall, this approach makes it possible to execute multiple operations concurrently within the artifact, while maintaining mutually exclusive access to the artifact state.
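
A classic illustration of guarded steps is a bounded buffer, sketched below in the same CArtAgO-like style (again, the await/@GUARD names are our assumption): the step that inserts an item runs, atomically, only once the notFull guard holds over the inner state.

```java
import cartago.Artifact;
import cartago.GUARD;
import cartago.OPERATION;
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of a guarded, multi-step operation (API names assumed).
public class BoundedBuffer extends Artifact {
    private final Deque<Object> items = new ArrayDeque<>(); // inner, non-observable state
    private int maxSize;

    void init(int maxSize) { this.maxSize = maxSize; }

    @OPERATION
    void put(Object item) {
        // The operation is split into guarded steps: execution suspends here
        // until the guard holds, and each step then runs atomically, so
        // concurrent operations cannot interfere on the inner state.
        await("notFull");
        items.addLast(item);
    }

    @GUARD
    boolean notFull() { return items.size() < maxSize; }
}
```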

Analogously to artifacts in the human case, in A&A each artifact is meant to be equipped with a “manual” describing the artifact’s function (i.e., its intended purpose), the artifact’s usage interface (i.e., the observable “shape” of the artifact), and the artifact’s operating instructions (i.e., usage protocols or simply how to correctly use the artifact so as to take advantage of all its functionalities). An artifact manual is meant to be inspected and used at runtime by agents, in particular intelligent agents, for reasoning about how to select and use artifacts so as to best achieve their goals. This is a fundamental feature for developing open systems, where agents cannot have a priori knowledge of all the artifacts available in their workspaces since new instances and types of artifacts can be created dynamically, at runtime.

Finally, as a principle of composition, artifacts can be linked together in order to enable artifact–artifact interaction. This is realised through link interfaces, which are analogous to interfaces of artifacts in the real world (e.g., plugging earphones into an MP3 player, or using a remote control for the TV). Linking is also supported for artifacts belonging to distinct workspaces, possibly residing on different network nodes.

Workspaces

Workspaces are used for defining the structure of the application, which can be organised in multiple workspaces, possibly distributed among different network nodes. Each application provides a default workspace; further workspaces can be created by agents using an appropriate action (createWorkspace). An agent can dynamically join a workspace (joinWorkspace action), thereby working simultaneously in multiple workspaces, and quit it as soon as its work there is completed (quitWorkspace action).
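
The semantics of these three actions can be modelled with a simple registry, sketched below in plain Java. This is a hypothetical illustration, not the platform’s actual API.

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative model of the workspace life cycle; all names are hypothetical.
class WorkspaceRegistry {
    // workspace name -> identifiers of the agents currently working in it
    private final Map<String, Set<String>> members = new ConcurrentHashMap<>();

    WorkspaceRegistry() {
        createWorkspace("default"); // each application provides a default workspace
    }

    void createWorkspace(String name) {               // mirrors the createWorkspace action
        members.putIfAbsent(name, ConcurrentHashMap.newKeySet());
    }

    void joinWorkspace(String name, String agentId) { // mirrors the joinWorkspace action
        members.get(name).add(agentId);               // an agent may be in many workspaces at once
    }

    void quitWorkspace(String name, String agentId) { // mirrors the quitWorkspace action
        members.get(name).remove(agentId);
    }
}
```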
