Artificial intelligence based on agents

Hexis' view on artificial intelligence based on agents.

In Software development, IT strategy

By Nuno Rebocho, Senior Software Engineer

At Hexis, we have a clear definition of what Artificial Intelligence (AI) is: systems that exhibit the basic characteristics of reasoning, learning, pattern recognition and inference.

AI means different things to different people. For some, AI is about artificial life forms that can surpass human intelligence, and for others, almost any data processing technology can be called AI.

Although in-depth mastery of mathematics seems to require human intuition and ingenuity, most professional challenges can be solved with a calculator and a simple set of rules. This analogy is an abstraction, but it translates into the business tools and apps we use daily.

AI systems have two inherent characteristics that bring them closer to simulating intelligence:

Autonomy

Autonomy is the ability to perform tasks in complex environments without constant user guidance.

Adaptability

In an AI system, adaptability is the capacity to improve performance by learning from experience. Through these characteristics, the study and understanding of the phenomenon of intelligence has evolved. As a science, AI investigates how human beings think, intending to model that thinking as computational processes; as an engineering discipline, it tries to build a body of algorithmic explanations of human mental processes.

Artificial intelligence can be approached through three paradigms:

Symbolic

A set of symbols that form structures, together with sets of rules and processes. When the rules are applied to a set of symbols, the result is new symbol structures.
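
As a minimal illustration of this paradigm, the Python sketch below applies production rules to a set of symbols until no new structures can be derived (forward chaining); the facts and rules shown are illustrative assumptions, not part of the article's example.

```python
# Minimal sketch of the symbolic paradigm: rules applied to symbols
# produce new symbol structures. Facts and rules are illustrative.

# Facts are symbols; each rule maps a set of premise symbols to a conclusion.
facts = {"rain", "outside"}
rules = [
    ({"rain", "outside"}, "wet"),
    ({"wet"}, "cold"),
]

# Apply rules repeatedly until no new structure is derived (forward chaining).
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # prints all four symbols: rain, outside, wet, cold (order may vary)
```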

Connectionist

Intelligent behaviour is related to the dynamics of connections between nodes, called neurons, which represent the dynamics of knowledge.

Behavioural

Upon a stimulus, an agent reacts by producing interactions with the environment that surrounds it, developing a reactive behaviour.

As a theoretical component of the definitions of AI, the definition of an agent is a subject on which there is no common consensus. There is, however, one attribute shared by all artificial intelligence researchers: "autonomy", which is regarded as the main characteristic of an agent.

An agent has the main characteristics listed below; the different types of agents are then described in more detail.

Autonomy

As previously defined, at the agent level, autonomy is reflected in the ability of an agent to act without the direct intervention of humans or other agents, having some control over its internal state and the actions it performs.

Reactivity

The ability of an agent to detect changes in its environment and to react to those changes in a timely manner.

Pro-Activity

Ability of an agent to act not only in response to the characteristics of the surrounding environment but also in a manner oriented to achieve its objectives, taking the initiative when appropriate.

Sociability

The ability to interact with other agents and, eventually, with humans to achieve its objectives and, if necessary, to help other agents achieve theirs.

An artificial intelligence agent can be described through five different types, each of which is described below:

Reactive agents

Reactive agents, or reactive architectures, avoid complex symbolic models and reasoning and make decisions "in real-time". This type of architecture develops the agent's intelligence based only on its interaction with the environment, without a pre-established model, using a very limited set of information and simple action rules that select a behaviour. Agents based on this architecture obtain information from their sensors and use it directly in the decision-making process; no symbolic representation of the world is created. Reactive agents may or may not have memory; those with memory can represent temporal dynamics.
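
As a minimal sketch of this idea, the following Python snippet maps percepts directly to actions through condition-action rules; the percept keys and action names are illustrative assumptions, not a real platform API.

```python
# Minimal sketch of a reactive agent: percepts are mapped directly to
# actions through simple condition-action rules, with no world model.

def react(percept: dict) -> str:
    """Condition-action rules evaluated directly on the current percept."""
    if percept.get("obstacle_ahead"):
        return "turn"
    if percept.get("target_visible"):
        return "approach"
    return "wander"

# Example percepts from hypothetical sensors:
print(react({"obstacle_ahead": True}))  # turn
print(react({"target_visible": True}))  # approach
print(react({}))                        # wander
```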

Deliberative agents

Deliberative agents, or deliberative architectures, follow the classic approach of artificial intelligence. Here, agents act with little autonomy and have explicit symbolic models of the environments that surround them. These architectures broadly interpret agents as part of a knowledge-based system. An agent's decisions (which action to take) are made through logical reasoning. The agent has an internal representation of the world and an explicit mental state that can be modified by changes in the environment.
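
A minimal sketch of this idea is shown below: the agent maintains an explicit internal model (its beliefs) and reasons over that model, rather than over the raw percept, to choose an action. The toy world and all names are illustrative assumptions.

```python
# Minimal sketch of a deliberative agent: it keeps an explicit internal
# model of the world (beliefs) and reasons over it before acting.

class DeliberativeAgent:
    def __init__(self, goal: str):
        self.beliefs = {}   # internal symbolic model of the world
        self.goal = goal

    def update_beliefs(self, percept: dict) -> None:
        self.beliefs.update(percept)  # revise the model from observation

    def deliberate(self) -> str:
        # Decide an action by reasoning over the internal model,
        # not directly over the raw percept.
        if self.beliefs.get("at") == self.goal:
            return "stop"
        if self.beliefs.get("path_clear"):
            return "move_towards_goal"
        return "replan"

agent = DeliberativeAgent(goal="base")
agent.update_beliefs({"at": "field", "path_clear": True})
print(agent.deliberate())  # move_towards_goal
```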

Search in a state-space

Search in a state-space is one of the most widely used techniques for solving AI problems. The idea is to have an agent capable of carrying out actions that change the current state. Given an initial state (the agent's initial position), a set of actions the agent can perform and an objective (final state), a problem is defined. Solving the problem consists of finding a sequence of actions that, when performed by the agent, "transport" it from the initial state to the final (objective) state.

When searching a state-space, we must first define the problem. This definition consists of deciding what actions the agent can take, an initial state, a final state (the objective) and, depending on the problem, the set of possible states.

Having defined the problem, we must look for the best solution to it (from the initial state to the final state). This solution can be found through different algorithms that receive a problem as input and return a solution in the form of a sequence of actions. A solution is the sequence of actions, found through a search mechanism, that satisfies the problem.
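
As a minimal sketch, the following breadth-first search receives a problem (initial state, goal, actions, transition function) and returns a solution as a sequence of actions; the 3x3 grid world is an illustrative assumption, not the article's environment.

```python
# Minimal sketch of search in a state-space: breadth-first search over
# states, returning the sequence of actions from the initial state to the goal.
from collections import deque

def solve(initial, goal, actions, transition):
    """Return a list of actions leading from `initial` to `goal`, or None."""
    frontier = deque([(initial, [])])
    visited = {initial}
    while frontier:
        state, plan = frontier.popleft()
        if state == goal:
            return plan
        for action in actions:
            nxt = transition(state, action)
            if nxt is not None and nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, plan + [action]))
    return None

# Toy problem: move on a 3x3 grid from (0, 0) to (2, 2).
MOVES = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}

def grid_transition(state, action):
    dx, dy = MOVES[action]
    x, y = state[0] + dx, state[1] + dy
    return (x, y) if 0 <= x < 3 and 0 <= y < 3 else None

print(solve((0, 0), (2, 2), list(MOVES), grid_transition))
# e.g. ['down', 'down', 'right', 'right'] (one of the shortest plans)
```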

Markov decision processes

The Markov decision process is a way of modelling processes whose transitions between states are probabilistic. It is possible to observe which state the process is in and to intervene in the process periodically by performing actions. Each action has a reward (or cost) that depends on the state of the process.

Processes that satisfy the Markov property are characterized by the fact that the effect of an action on a state depends only on the action and the current state of the system, not on how the process reached that state. These are called "decision" processes because they model the possibility of an agent interfering in the system by carrying out actions.

The figure below shows the functioning dynamics of a system modelled as a Markov decision process.

Figure 1 - Modelling a system with a Markov decision process
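
A minimal sketch of such a model, solved by value iteration, is shown below. The states, transition probabilities, rewards and discount factor are toy assumptions, not taken from the figure.

```python
# Minimal sketch of a Markov decision process solved by value iteration.

# P[state][action] -> list of (probability, next_state, reward)
P = {
    "s0": {"stay": [(1.0, "s0", 0.0)],
           "go":   [(0.8, "s1", 1.0), (0.2, "s0", 0.0)]},
    "s1": {"stay": [(1.0, "s1", 0.0)],
           "go":   [(1.0, "s0", 0.5)]},
}
GAMMA = 0.9  # discount factor

# Bellman optimality update, repeated until (approximately) converged.
V = {s: 0.0 for s in P}
for _ in range(100):
    V = {
        s: max(
            sum(p * (r + GAMMA * V[s2]) for p, s2, r in outcomes)
            for outcomes in P[s].values()
        )
        for s in P
    }

print({s: round(v, 3) for s, v in V.items()})
```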

Learning by reinforcement

Learning by reinforcement is indicated when we want to obtain a policy of actions and the world is not known a priori. The agent must interact directly with the environment to gather information, which is then processed by an appropriate algorithm to determine the actions that lead to the objective.

The agent must learn which actions maximize the gains obtained, but to achieve this maximization it must also explore actions that have not yet been carried out and regions of the environment that have not yet been visited.
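
As a minimal sketch, the following tabular Q-learning loop balances exploiting the current estimates with exploring untried actions (epsilon-greedy); the two-state environment and all parameters are illustrative assumptions.

```python
# Minimal sketch of learning by reinforcement: tabular Q-learning with
# epsilon-greedy exploration in a toy two-state environment.
import random

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2
ACTIONS = ["stay", "go"]

def step(state, action):
    """Toy environment: 'go' from s0 reaches the rewarding state s1."""
    if state == "s0" and action == "go":
        return "s1", 1.0
    return "s0", 0.0

Q = {(s, a): 0.0 for s in ("s0", "s1") for a in ACTIONS}

state = "s0"
for _ in range(5000):
    # Explore with probability EPSILON, otherwise exploit current estimates.
    if random.random() < EPSILON:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    next_state, reward = step(state, action)
    # Q-learning update towards the best estimated value of the next state.
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
    state = next_state

print({k: round(v, 2) for k, v in Q.items()})
```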

There is mathematical modelling associated with the agent types described above, but it is beyond the scope of this article. A practical example using a reactive agent is detailed below.

A practical example

In a practical context, this section presents an example of an application developed to implement reactive agent models and architectures, based on a people-simulation platform.

The development was designed for an agent operating in an environment composed of three types of elements: targets, bases and obstacles. The environment is static, changing only through the agent's actions. The agent must be able to pick up and drop targets. When the agent picks up a target, it carries it as cargo until it is dropped. Targets can be dropped on the bases.

The objectives to be achieved with this development are summarized as:

  • Design and implementation of a reactive agent for collecting targets, based on given stimulus rules;
  • Design and implementation of a reactive agent for collecting targets, capable of searching for targets, based on behavioural schemes with hierarchical coordination of behaviours;
  • Design and implementation of a reactive agent for collecting targets, capable of guiding the search for targets based on potential fields;
  • Design and implementation of a reactive agent for collecting targets, able to guide the search for targets based on potential fields and to explore the environment, with a memory of previous situations.

To achieve the objectives listed above, a reactive agent was implemented by extending the "Agent" class provided by the PSA platform. To define the agent's behaviour, it was necessary to adapt (override) some of the methods of the original "Agent" class.

Figure 2 - Class diagram and implementation of a reactive agent

As figure 2 shows, the reactive agent was specified by extending the platform's "Agent" class and implementing the execute method (overriding the base class); the PSA platform then calls this method.

With the "perceive()" method, the image that the agent "observes" is captured; this perception is then passed to the "react()" method, which defines the action to be taken.

For the agent to react, a behavioural scheme with hierarchical coordination of behaviours was defined and implemented. It is thus possible to associate the agent with a set of behaviours that reflect the various possible actions, so that the multiple reactions to be simulated are developed through a hierarchy scheme (collect/approach/avoid with or without memory/search/navigate). In each iteration of the agent's internal mechanism, reactions (behaviours) are tried hierarchically: it is first tested whether a specific action can be performed; if it cannot, the next reaction is tested, and this process is repeated for all reactions supported by the agent. This type of architecture is called a subsumption architecture, and a minimal sketch of it is given below.
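
The sketch below illustrates this kind of hierarchical coordination in Python. It does not reproduce the PSA platform's API; the class and method names (Behaviour, Collect, Approach, Wander, execute) are illustrative assumptions.

```python
# Minimal sketch of subsumption-style coordination: behaviours are tried
# in a fixed priority order, and the first one whose precondition holds
# produces the action.

class Behaviour:
    def can_run(self, percept) -> bool: ...
    def run(self, percept) -> str: ...

class Collect(Behaviour):
    def can_run(self, percept): return percept.get("on_target", False)
    def run(self, percept): return "pick_up"

class Approach(Behaviour):
    def can_run(self, percept): return percept.get("target_visible", False)
    def run(self, percept): return "move_to_target"

class Wander(Behaviour):
    def can_run(self, percept): return True  # default, lowest priority
    def run(self, percept): return "random_step"

class ReactiveAgent:
    def __init__(self):
        # Highest-priority behaviour first.
        self.behaviours = [Collect(), Approach(), Wander()]

    def execute(self, percept) -> str:
        # Test each behaviour in order; the first applicable one reacts.
        for behaviour in self.behaviours:
            if behaviour.can_run(percept):
                return behaviour.run(percept)

agent = ReactiveAgent()
print(agent.execute({"target_visible": True}))  # move_to_target
print(agent.execute({}))                        # random_step
```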

Conclusion

There are different ways to approach intelligence simulation. This article has defined different types of agents, which allow the definition and characterization of the environment that surrounds them, as well as ways to approach and implement an AI layer.

The different ways of implementing agents have increasingly converged on cost-based approaches (e.g. time) to perform tasks and optimize the route to the response. Agents can be used for a specific and/or narrow problem, and it is possible to build intelligence into the way the answer is reached, optimizing it over time and improving the success of the result obtained.

