PhD Programme "Challenges on Computer Science"
Multiagent Information Systems
AUTHOR: Telecommunication Engineer
SUPERVISOR: Dª Ana Mª García Serrano, Doctor of Computer Science
April 2006
This article is a summary of a broader technical report on the state of the art of Agent Oriented Software Engineering (AOSE) applied to the domain of information systems, more concretely, enterprise information systems (EIS), which have special attributes related to their greater complexity and performance requirements, along with an increasing need for integration.
The area of enterprise application integration (EAI) deals with the big picture of the information systems infrastructure, and is therefore of special interest to us, as multiagent technology aims to provide a software paradigm for developing solutions at a higher level of abstraction than that of other paradigms, such as object orientation.
At the same time, industry and standards bodies are promoting the Service Orientation paradigm through the Web Services set of standards. The idea, already present in CORBA, is adapted to today's environment and benefits from the ubiquitous web protocols and the development of the XML standards.
Services, in a sense, fill the gap between objects and agents: they are active entities that encapsulate functionality and control, but in contrast to agents, their behavior is predictable and rigid. Intelligent agents add to this their proactivity, rationality (goal orientation) and social abilities, and are best suited for complex environments, where non-trivial decisions have to be made and coordination problems have to be resolved at runtime.
Typically, in today's information systems infrastructures, we no longer find isolated, specific systems, but a network of information systems connected through a middleware layer, supporting a variety of business processes. Service Orientation changes this picture, blurring the interfaces of the systems, as system modules (services) cooperate interchangeably with other services, thus reducing or completely eliminating integration costs.
We see two orthogonal ways in which SOA/WS and agent technologies can cooperate.
First, agents provided with a service interface can interact with service oriented architectures, acting in the service composition or service orchestration layer, or simply fronting a multiagent system that delivers a complex service in the SOA framework. Indeed, these reduce to the same thing if we conceive a complex service made up of simple services but perceived as a single one from the consumer's point of view.
Second, SOA and Web Services are technologies that agent platforms could take advantage of, although this direction is not analyzed further, as it is beyond our interest: we study multiagent systems at a macro scale, and platform design relates more to the internal design of the agents, also referred to as the micro level.
Going further from this point, towards the delivery of complex services and business process management with agents, we will show how semantic languages may be used to create new services on the fly, made up of different services not necessarily known at design time. This step has important drawbacks that constitute major challenges for the coming decade. Nevertheless, there is already some research in this respect that we will cover.
We conclude with a case study, consisting of a data mining multiagent system in which the process is defined following the CRISP standard. We take the opportunity to share our vision of a flexible deployment model where, while maintaining the distribution inherent to multiagent systems, a central repository holds system configuration and code libraries, thus allowing us (or a coordinator agent) to change agent behavior and organization at runtime, and, secondly, facilitating system maintenance and control.
Keywords
Information Systems (IS) can be defined as "an assembly of computer hardware, software, firmware, or any combination of these, configured to accomplish specific information-handling operations, such as communication, computation, dissemination, processing, and storage of information" [
For the definition of Enterprise Information Systems (EIS), we can add to the definition of IS some particular characteristics, such as:
· Capacity and Performance. Enterprises typically need to manage significantly higher volumes of data than other (personal) information systems. This volume is proportional to the number of customers, providers and services of the businesses supported by the system. Of course, handling these large volumes of data must be done within a given set of performance requirements.
· Complex and Flexible. Owing to the complexity and dynamism of today's business environment, EIS tend to be complex systems that need to be constantly adapted to changing needs.
· Multi-user. Most (if not all) EIS are used for information processing and exchange among different users inside and outside the company. Therefore, EIS are usually multi-user systems.
· Security. Information is a strategic asset of any enterprise. EIS must be secure and comply with the security policies established by the company and its legal environment.
· Interoperability. The evolution of EIS has run in parallel with the evolution of interoperability: from dedicated applications to database-centric systems, and from middleware-based infrastructures to service oriented architectures. As software technologies evolve, more and more complex scenarios are supported, and information flows not only among users, but among departments, and nowadays also from Business to Business (B2B) and from Business to Customer (B2C). As EIS have grown larger and more complex, integration has become increasingly important.
EIS are typically organized in a multilayered architecture (Figure 1):
· Data Access and Integration Layer. This low-level layer manages access to data, which may be stored in a database application or legacy system. The database server is sometimes referred to as the backend of the system, as it is hidden from the clients.
· Middleware or Business Layer. Composed of services or a high-level API that encapsulate business objects and methods. These components may communicate synchronously or asynchronously. Synchronous (event) communication comes in two flavors: with session handling, or as memory-less services. For asynchronous communication, message delivery facilities are offered by the architecture, such as message-relay servers, message queues, subscription services, etc.
· Application and Presentation Layer. Web integration, graphical user interfaces and mobile device gateways are some examples of the many clients the business layer may have. The main benefit of separating the business layer from the application layer is decoupling the interface and presentation logic from the internals of the system, easing the evolution of the shell.
Figure 1. Multilayered architecture: front-end (thin and thick clients), middleware/application servers (session and transaction management, business logic and services, integration and data access) and a high-performance data server.
The middleware layer must be scalable and sized properly to handle a large number of transactions. Multi-threaded architectures, queue management systems and load balancing are the mechanisms typically used.
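The load-balancing mechanism mentioned above can be sketched minimally as a round-robin dispatcher; the server names below are hypothetical, and this is an illustration rather than any concrete middleware product:

```python
from itertools import cycle

class RoundRobinBalancer:
    """Distributes incoming requests across a fixed pool of middleware servers."""
    def __init__(self, servers):
        self._servers = cycle(servers)

    def dispatch(self, request):
        # Pick the next server in circular order and hand the request to it.
        server = next(self._servers)
        return (server, request)

balancer = RoundRobinBalancer(["mdw1", "mdw2", "mdw3"])
assignments = [balancer.dispatch(f"tx{i}")[0] for i in range(6)]
# Each server receives an equal share of the six transactions.
```

Real middleware products combine this with health checks and weighting, but the core idea of spreading transactions over a sized pool is the same.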
Enterprise
Application Integration (EAI) refers to the process and systems used to link
different software systems across an organization (and beyond).
In recent years, the scope of system integration has evolved to support not just departmental business functions (e.g. billing, contact center) but end-to-end business processes, business-to-business operations, and business-to-customer operations.
System integration is complex because it requires coordination between different teams of developers and maintainers (and managers). Business data shared across systems must be replicated or synchronized to keep the data coherent, and configuration management must be coordinated to keep the interfaces coherent.
Instead of bilateral interfaces, whose number grows quadratically with the number of systems, a multilateral approach is possible by introducing a middleware layer of interoperable services.
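The scaling argument can be made concrete: with n systems, full bilateral integration needs n(n-1)/2 point-to-point interfaces, while a shared middleware layer needs only one adapter per system. A small illustration:

```python
def bilateral_interfaces(n):
    # Every pair of systems needs its own point-to-point interface.
    return n * (n - 1) // 2

def middleware_adapters(n):
    # With a shared middleware layer, each system needs a single adapter.
    return n

for n in (5, 10, 20):
    print(n, bilateral_interfaces(n), middleware_adapters(n))
# For 20 systems: 190 bilateral interfaces versus 20 middleware adapters.
```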
Programming the business logic in middleware components that can be called in a standard way by other systems improves reuse and maintainability, and this is the starting point of the emergence of Service Oriented Architectures.
Synchronous interfaces are typically designed around the concept of events or remote procedure calls, whereas asynchronous communication is usually supported by a message relay platform. Synchronous communication is usually preferred for real-time applications, where an immediate response is required; asynchronous messaging optimizes the use of resources by queuing messages, and scales better.
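The contrast can be sketched with Python's standard queue module: the producer enqueues and returns immediately, and messages are buffered until the consumer drains them at its own pace. This is a minimal sketch of the decoupling, not a full message-relay platform:

```python
import queue

relay = queue.Queue()  # the message-relay buffer

# Producer: enqueues messages and continues immediately (asynchronous).
for i in range(3):
    relay.put(f"order-{i}")

# Consumer: drains the queue later, independently of the producer's pace.
delivered = []
while not relay.empty():
    delivered.append(relay.get())
```

In a synchronous (RPC) design, the producer would instead block on each call until the consumer responded.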
J2EE is an event-based architecture with support for message communication; in the FIPA standard multiagent architecture, agents communicate asynchronously via message passing.
The increasingly complex tasks assigned to today's information systems require a higher level of integration between systems across different functional units of the organization and, more and more frequently, even with systems outside the organization (customers, providers, partners, etc.).
This pushes new software engineering paradigms to emerge, making it possible to think about the systems architecture at a higher level of abstraction, in terms of processes and services, as object orientation becomes too fine-grained an approach for this goal.
Service Oriented Architecture (SOA) is a software engineering paradigm that aims to maximize the reuse of software components. These components (services) are loosely coupled, high-level functional units, not necessarily tied to a concrete application. Services can be implemented to interface legacy systems; however, the ultimate goal of the SOA paradigm is to build an infrastructure of interoperable services used across the enterprise and in B2B transactions.
In SOA, the IT infrastructure is conceived as the sum of services, instead of the sum of systems. Middleware is not needed (or is everywhere), as services are offered and used across the organization. The client-server model is replaced by peer-to-peer networks, and application boundaries blur. SOA aims to reduce integration costs and time-to-market, ease manageability and promote interoperability inside and outside the company.
SOA responds to the need to improve the integration and interoperability of systems. With SOA, the activities of development and integration converge, as it replaces the old concepts of system and middleware with a network of interoperable, homogeneous services. SOA services have to be well defined, with a standardized interface, loosely coupled, self-contained, always-on, provisionable, coarse-grained, independent of the user's context, composable and with measurable quality [EBPML-SOA].
The Web Services standards from the W3C adopt this paradigm and define a set of technologies for service definition, transport mechanisms and service discovery, among others, with the following advantages over previous approaches:
· Objects are extendable, whereas services are composable.
· Services are coarse-grained.
· The concept of container is replaced by that of domain or directory.
· The client-server model is replaced by the peer-to-peer model.
· Services are late-bound. Service discovery and message content are independent of the invocation mechanism, and can be resolved at runtime. This is a key difference between WS and CORBA, DCOM or J2EE.
These features allow seamless service discovery and service access inside and outside an organization. Service composition and coordination languages have been developed to describe business processes in a formal, executable way.
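Late binding can be illustrated with a toy registry: the client resolves the service by name at call time, so the concrete provider may change at runtime without the client being touched. All names here are hypothetical, standing in for a real discovery mechanism such as UDDI:

```python
# A toy service registry: names are resolved at invocation time (late binding).
registry = {}

def publish(name, implementation):
    registry[name] = implementation

def invoke(name, *args):
    # Resolution happens here, at runtime, not at compile or deploy time.
    return registry[name](*args)

publish("quote", lambda product: {"product": product, "price": 10})
result1 = invoke("quote", "widget")

# The provider can be swapped while the system is running; clients are unaffected.
publish("quote", lambda product: {"product": product, "price": 12})
result2 = invoke("quote", "widget")
```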
SOA architectures usually adopt the typical three-layer approach:
· Connectivity. Data access, legacy middleware and systems interfaces, component-based entities (EJBs).
· Orchestration. Data services, process services, composite services.
· Presentation. User interface, presentation, personalization, web portal, client applications, etc.
BPM deals with the analysis, design, implementation, execution and optimization of end-to-end business processes. Business processes coordinate task flows, resource access and information sharing.
Although these concepts have more to do with business management than with computer science, there is no doubt of the great impact that information technology has by automating business processes and enabling information flow across an organization. The relationship between BPM-BPR and software engineering is materialized in the analysis and design phases of the system. Once the system is deployed, BPR will define new requirements, so the system will have to evolve to adapt to the new situation.
Through new high-level software paradigms, EIS will be easier to adapt to BPM and BPR requirements, which ultimately lowers the time-to-market and the cost of offering new products and services.
The concepts of orchestration and choreography of services formally define business processes starting from the services that compose them as their basic units. Standard languages, like BPEL4WS or WSCI, enable this formalization, which is exposed to the rest of the organization to achieve process coordination. We can distinguish three related concepts: service composition, service orchestration and service choreography. Service composition consists of creating new services upon existing services. For example, we can create a travel booking service on top of a flight booking service and an accommodation booking service. Service orchestration and choreography refer to the way in which a business process execution interacts with internal and external services, the messages exchanged and the underlying business logic. Service orchestration makes it possible to design composite services from the point of view of a central controller which coordinates the whole process. In contrast, service choreography describes the service coordination from a global, collaborative point of view, with distributed control. To describe this type of coordination, the WSCI language and the W3C standard WS-CDL have been developed.
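The travel-booking example of service composition can be sketched as a composite service built on top of two existing ones; the stub functions below are hypothetical stand-ins for real service endpoints:

```python
# Two existing, independent services (stubs standing in for real endpoints).
def book_flight(origin, destination):
    return {"flight": f"{origin}->{destination}"}

def book_accommodation(city, nights):
    return {"hotel": city, "nights": nights}

# Composite service: a new service created upon the two existing ones.
def book_travel(origin, destination, nights):
    itinerary = {}
    itinerary.update(book_flight(origin, destination))
    itinerary.update(book_accommodation(destination, nights))
    return itinerary

trip = book_travel("MAD", "LON", 3)
```

In a BPEL4WS process, the same composition would be expressed declaratively as a sequence of invocations on the two partner services.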
Orchestration and choreography complement each other: the first describes the interactions from the point of view of one participant and its interaction protocol, while the second describes the interaction from a global point of view.
Both concepts are the basis on which the next generation of systems will build.
Nowadays, the service orientation paradigm is being adopted in most IT strategies, in parallel with other standards such as J2EE, .NET or CORBA.
Business Process Management tools are available as
part of middleware products such as TIBCO or Oracle BPEL.
However, service
orchestration and service choreography have not been widely adopted yet.
Looking further into the future, we foresee that service composition technologies will be widely adopted by the industry. From that point on, the next challenges to overcome concern dynamic service composition and the intelligent resolution of conflicts.
In the next sections
we will show how multiagent technology is probably the next step to take.
Agent oriented software engineering is a new software modeling paradigm based on the concept of agent. Agents are software entities with autonomous control, capable of perceiving their environment and reacting to events; at the same time, agents are proactive and rational, which means that they can initiate a workflow in order to satisfy their own objectives. Agents typically have social abilities that describe how they interact and cooperate to achieve their goals.
More
specifically, we can distinguish the following classes of agents:
· Rational agents, if they act according to their objectives, and never against them [GALLIERS-88].
· Intentional or BDI agents. In the BDI model introduced by [BRATMAN-87], agents have a representation of the environment that is updated with the stimuli perceived by their sensors. The model of the world is represented by a set of beliefs. The actions of the agent are determined by the agent's desires, which are positive states the agent wants to reach by means of its intentions. The model was extended by Georgeff, Rao and Kinny [RAO-GEORGEFF-91].
· Intelligent agents, if they are intelligent and able to learn. There is no established definition of intelligent systems [TURING-50, BROOKS-91, ETZIONI-93], but at one extreme we have reactive agents, capable only of reacting to stimuli, and at the other, BDI agents with reasoning capabilities, able to organize and cooperate with others according to plans in order to satisfy their goals.
· Mobile agents. They can move across the network in order to accomplish their tasks in a more efficient or reliable way [GRAY-95].
· Truthful agents, if they never communicate false information on purpose [GALLIERS-88].
· Benevolent agents. They don't have objectives that would make them lie or work against their own objectives [ROSENSCHEIN-95].
· Adaptive / Learning agents. Agents that improve their performance over time.
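The BDI control loop described in the list above can be caricatured in a few lines: beliefs are updated from percepts, and the agent commits to an intention for an unsatisfied desire. This is a deliberately minimal sketch of the cycle, not a faithful BDI implementation:

```python
class BDIAgent:
    def __init__(self):
        self.beliefs = {}        # the agent's model of the world
        self.desires = []        # positive states the agent wants to reach
        self.intentions = []     # plans the agent has committed to

    def perceive(self, percept):
        # Update beliefs with stimuli perceived by the sensors.
        self.beliefs.update(percept)

    def deliberate(self):
        # Commit to the first desire not yet satisfied by current beliefs.
        for desire in self.desires:
            if not self.beliefs.get(desire, False):
                self.intentions.append(f"achieve:{desire}")
                return

agent = BDIAgent()
agent.desires = ["door_open"]
agent.perceive({"door_open": False})
agent.deliberate()
```

A real BDI engine would additionally select plans from a plan library and reconsider intentions as beliefs change.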
Regarding the
system architecture, we can have the following types of agent systems:
· Reactive architectures. Agents act upon a behavioural, stimulus-response model. Their behaviour is based on elementary situations and basic interactions. A typical example is ant-colony organization, where the importance of the joint behaviour is greater than that of its individual components (this property is referred to as the emergent function or emergent behavior of the system). Internally, the architecture usually consists of layers that process the stimuli sequentially, according to priorities (subsumption architectures [BROOKS-91]).
· Deliberative agents. The architecture of intentional agents typically includes subsystems to handle the environment representation, historic memory, reasoning facilities, deliberative control, complex interactions and social organization. These systems usually have a smaller number of agents than systems based on reactive architectures.
· Hybrid agents [FERGUSON-92]. They have characteristics of both reactive and deliberative architectures. The architecture consists of a perception subsystem and an action subsystem, which interface with the environment, and three control layers (a reactive layer, which handles stimulus-response behaviours; a planning layer, responsible for local planning and scheduling; and a modelling layer, which holds the model of the environment and of the peers) embedded in a control framework interfacing each layer.
· Blackboard architectures. They focus on the control problem: which actions an agent must select to solve a problem, what information it needs, how to change the focus of attention, etc. They consist of a global database (the blackboard) and solution elements, chained lists of operators that, once applied to an initial state, yield the satisfaction of a goal. A KSAR (Knowledge Source Activation Record) represents the firing of a single Knowledge Source (KS) producing an action. The KSAR is chosen by a scheduler and has a condition-action structure, where the condition is a state of the blackboard, and the action produces the creation of new solution elements on the blackboard.
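The blackboard control cycle just described can be sketched as condition-action knowledge sources fired by a scheduler against a shared blackboard; the two rules below are hypothetical, chosen only to show the firing mechanism:

```python
# Shared blackboard: global state visible to all knowledge sources.
blackboard = {"raw": [3, 1, 2], "sorted": None, "report": None}

# Knowledge sources as (condition, action) pairs; each firing is a KSAR.
knowledge_sources = [
    (lambda bb: bb["raw"] and bb["sorted"] is None,
     lambda bb: bb.update(sorted=sorted(bb["raw"]))),
    (lambda bb: bb["sorted"] and bb["report"] is None,
     lambda bb: bb.update(report=f"min={bb['sorted'][0]}")),
]

# Scheduler: repeatedly fire any KS whose condition holds on the blackboard;
# each action posts new solution elements back onto the blackboard.
fired = True
while fired:
    fired = False
    for condition, action in knowledge_sources:
        if condition(blackboard):
            action(blackboard)
            fired = True
```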
We can also classify agents by their social dimension. According to Demazeau, we can describe an agent system along the following dimensions: Agent (A), Environment (E), Interactions (I) and Organization (O).
· Autonomous agents (A+E) have little or no social ability.
· Interactive agents (A+E+I) are able to interact with peers.
· Social agents (A+E+I+O) have full social abilities, as they organize and manage their relations with others.
Depending on the
special characteristics and functionality of the agents, we can have the
following examples of agent-based systems:
· Collaborative agents. Autonomous, intentional and social agents; they are part of multiagent systems, where they cooperate and negotiate with other agents.
· Interface agents. They serve as human-computer interface (HCI) systems. Typical functions include self-learning, proactivity, natural language processing, etc.
· Search agents. Used in information systems to search, classify and organize information.
· Mobile agents. Able to migrate between nodes of the agent platform. They can be used to reduce network load, increase fault tolerance, balance load, etc.
Finally, we can classify an agent-based system by the environment in which it operates; the following properties can be considered: accessible / inaccessible, deterministic / non-deterministic, static / dynamic, and discrete / continuous environments.
Agent orientation is a powerful paradigm for describing complex systems and interactions because it sits at a higher level of abstraction than other paradigms, like object orientation or functional decomposition. This higher level of abstraction makes the model of the system closer to the physical reality being modeled, but at the same time it is more complex to define a generic framework or methodology to standardize the procedure and deliverables (set of models) of an agent-oriented software development project. Much research has been done on agent-oriented methodologies, some principal examples being, in chronological order: MAS-CommonKADS, Gaia, MaSE, Tropos, MESSAGE, Vowels and Ingenias, among others.
As a result of the social ability of agents, software systems are made up of cooperating agents. We can differentiate two classes of agent systems by how control is established. A multiagent system is defined, in a strict sense [SYKARA-98], as a system of software agents with distributed control; otherwise, when control is centralized in a coordinator agent, it is not a true multiagent system in this strict sense, but should be referred to simply as an agent-based system.
Despite this distinction, generally speaking, the term multiagent system is used regardless of the internal organization and type of control, and so we will do in the rest of this article.
Multiagent
systems have interesting features with application to enterprise systems:
· Their inherent distribution of data, problem-solving methods and responsibilities, as happens in real-life business processes.
· Integrity of the organization structure combined with the autonomy of its components.
· Complex interactions, coordination, negotiation and information sharing between agents.
· Flexibility of the execution flow, which can be modified at runtime thanks to the reactivity of the agents (they perceive the environment and adapt their behavior) and also because of their proactivity and rationality (they are able to adapt their plans to changes in the situation).
· Agent-based systems adapt naturally to the service orientation paradigm: agents can deliver services as a minimum of their possibilities and, on the other hand, agents can be used for service orchestration (centralized control) and choreography (distributed control). The suitability of agents for these tasks is justified by the higher abstraction and flexibility discussed earlier in this article, which fits the need to deal with the complex business logic and dynamic environment of enterprise systems, which have to adapt to continuously changing needs in a cost-effective way.
· A multiagent system is, in fact, service oriented, as long as agents possess roles and interact with each other, offering services by virtue of their roles. On the other hand, proactivity, sociability and rationality are especially suited for the development of complex services requiring these characteristics, for example, a human-computer interface (HCI) service or an electronic auction web portal.
In sum, we think that agent orientation can be viewed as a natural evolution of service oriented programming that may boost the development of enterprise, distributed information systems, increasing their manageability and their flexibility to adapt to changing business needs and a changing environment.
On the other hand, there are situations where agent technology may not be the right choice, for example, environments where information and control are centralized and the interaction between systems, and between systems and humans, is trivial.
Even if this is not the case for our system, these conditions will still apply as we shift our focus to lower abstraction layers, which will typically be implemented in an object-oriented language or using functional decomposition.
Applying a SWOT analysis to multiagent technologies in the enterprise environment, we have come to the following results:
· Strengths. What advantages does a multiagent information system bring?
- Distribution and Modularity. Agents have local control (autonomy) and encapsulate knowledge (PSMs, rules, goals), which simplifies the design of complex systems, promoting reusability and modularity. At the same time, the system analysis becomes closer to reality, making it more understandable and thus reducing the gap between software analysts and domain experts.
- Flexibility. The coordination of its components can be done at runtime, making it possible to develop highly adaptable systems.
- Abstraction and Encapsulation. Agents encapsulate internal knowledge that is hidden from the outside, as objects do with private properties and methods. Some business applications, like auctions, take advantage of this.
· Weaknesses. What disadvantages does the use of multiagent technologies imply?
- Performance / Capacity. Multiagent systems, especially BDI architectures, do not seem suitable for handling high volumes of load, for example, enterprise OLTP (online transaction processing) environments with thousands of simultaneous users and millions of transactions per day. One workaround could be, instead of using BDI agents to manage each transaction intelligently, taking into account the changes in the environment, to use them to configure the high-performance system in a continuous fashion, as the next figure shows:
Figure 3. Use of a MAS for the dynamic adaptation of a high-performance system.
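The workaround just described, where agents do not process transactions themselves but continuously reconfigure the high-performance system, can be sketched as follows; the metric and parameter names are hypothetical:

```python
class TuningAgent:
    """Observes load metrics and adjusts the transaction system's configuration,
    instead of handling each transaction itself."""

    def reconfigure(self, config, observed_load):
        # Scale the worker pool with the observed load, within fixed safety bounds.
        workers = max(2, min(64, observed_load // 100))
        return {**config, "workers": workers}

agent = TuningAgent()
config = {"workers": 4, "queue_size": 1000}
config = agent.reconfigure(config, observed_load=2500)
```

The transaction path stays in conventional high-performance code; the agent only closes the observe-and-adapt loop around it.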
- Complexity. The cost of developing a simple MAS is higher than the cost of developing a simple object-oriented system, as a MAS includes by default a number of aspects (agent interaction and organization, social abilities, etc.). This is justified when the system to be built is complex enough.
- Maturity. Research work and experimental projects should lead to a body of results about architectures, platforms, methodologies and standards, which will result in a consolidation of the available options; these, in turn, will become widely adopted, enabling their uptake by industry players.
· Opportunities. What emerging opportunities does multiagent technology have?
- Service Oriented Architectures. Service oriented architectures and the Web Services standards will bring a new generation of systems made up of services readily available to agents. Agents will take advantage of existing services to handle functional tasks, while the proactivity and intelligence of agent technology will be applied at the next level of abstraction, the business process management level. Conversely, agents can have a web service interface to expose complex services in a SOA environment.
- Semantic Web and Ontology Description Languages. The use of ontologies in multiagent systems is already established; however, more mature tools for ontology design, merging and mapping are still needed. Advances in automatic ontology mapping will be of great importance for achieving automatic service composition, which in turn has applications in open, unpredictable environments, and also for exception handling and conflict resolution. The InfoSleuth project is a good example of the possibilities that ontologies offer to multiagent systems in the domain of information retrieval and integration.
· Threats. What upcoming events might negatively affect the adoption of multiagent technology?
- Convergence & Emergence Problems. Agent coordination does not work as expected, deliberative processes do not converge, or emergent behaviours appear that were not taken into account. These problems may extend the development time and the cost of projects based on agent technology. To avoid them, it is necessary to acquire the relevant experience and a well-established methodological approach.
- Reliability. The multiagent system makes unforeseen decisions with negative consequences. To minimize this threat, it is necessary to design the system for robustness, introducing organizational rules or global constraints that force agents to act within the limits for which they were designed. Traceability and logging make it possible to debug wrong behaviours, and if this is not enough to ensure the expected, convergent operation of the organization of agents, a special control role or coordinator could be added for this purpose.
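The organizational-rule safeguard mentioned above can be sketched as a guard that rejects any agent action outside its designed limits and logs the rejection for traceability; the rule and limit values are hypothetical:

```python
import logging

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("mas.guard")

# An organizational rule / global constraint the agents must respect.
LIMITS = {"max_order_amount": 1000}

def guarded_action(action, amount):
    # Enforce that agents act within the limits for which they were designed,
    # logging violations so wrong behaviours can be debugged later.
    if amount > LIMITS["max_order_amount"]:
        log.warning("rejected %s for amount %s: exceeds limit", action, amount)
        return False
    return True

ok = guarded_action("place_order", 500)        # within limits, accepted
rejected = guarded_action("place_order", 5000) # violates the rule, rejected
```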
- Maintenance and Support. The tools, platforms or standards adopted may stop being supported by their respective bodies, resulting in the obsolescence of our system or environment. Although this threat is always present, the fact that FIPA has been incorporated into the IEEE significantly reduces it.
If we try to classify information systems by their functionality, we can mention a wide variety of systems applied to different business functions: billing, accounting, resource planning, sales and marketing, customer care, etc.
Let’s review how
well multiagent systems can perform, or what opportunities or advantages this
technology can offer.
· Decision Support Systems. Decision support systems have to evaluate a situation from different points of view, integrating heterogeneous, distributed data. There are already successful examples of MAS for this purpose, such as ADEPT.
· Computer Supported Cooperative Work (CSCW). In cooperative work, it is necessary to manage the information flow among people based on a workflow specification. With multiagent technology, it is possible to model the workflow and the organizational structure in a direct, natural way.
· Customer Relationship Management & Relational Marketing. Systems for relational marketing typically combine business rules and data mining to help design marketing campaigns or improve customer relationship management. Simulations could be carried out by multiagent systems for this purpose. The emergence property of MAS would make it possible to observe phenomena derived from group behaviour that might otherwise be missed. On the other hand, CRM systems are indeed CSCW systems, so the previous considerations apply too.
· ERP and Billing. Billing systems in dynamic, open markets like the telecommunications industry have to support frequent changes in order to rate and bill according to new pricing plans and special offers. Risk analysis and access to distributed databases, such as blacklists, are tasks that multiagent systems could be well suited to perform. In the telecoms industry, the growing variety of pricing plans, their complexity (in terms of discounts, credits, time frames, etc.) and their continuous change suggest the use of multiagent technology because of its adaptability and reconfigurability. Performance requirements, on the other hand, would have to be taken into account, as these systems have to handle great volumes of data, on the order of millions of data records per day.
· Web Portals. Web portals are complex systems that span from the presentation layer to the data access and integration layer, although the current trend is to build three-layered systems where the business and integration layers are reused, so the web portal is responsible for just the presentation layer, depending on a business layer facade. In any case, the ever-growing adaptation of portals to the visitor's profile and the complexity of the presentation flow are factors that motivate the adoption of agent technology, whose advantages have already been proved in both web and classic user interfaces.
Obviously, this
analysis is neither complete nor exhaustive, but it shows a glimpse of the
domains of enterprise systems where multiagent technology could best fit by
providing advanced, cost-effective solutions.
Middleware
software has an increasingly high weight in enterprise systems development, as
computer-based systems tend to cover a wider range of business processes. On
the other hand, as the integration of these processes usually involves different
departments, development becomes more complex, from requirements gathering to
the system integration tests. In addition, once the development goes live,
incidents that affect system integration are more difficult to manage, as
sometimes it’s not clear which system is responsible for a communication
mismatch.
We postulate
(and will try to justify) that multiagent technology could help at building
more flexible, adaptive and robust information infrastructures.
According to
[REITBAUER-05], we can set three necessary requirements for an advanced
application integration:
·
Implementation
of loosely coupled elements, with stable integration logic.
·
Design
of autonomous systems with complex interactions.
·
Implementation
of interactions in heterogeneous systems.
Service oriented
architectures have services as their basic entity. Services are coarse-grained,
self-defined and loosely coupled components which encapsulate some high-level
functionality. They are accessible through a standardized interface.
By contrast,
agents are autonomous, proactive entities, at a higher abstraction than
services because agents are responsible for achieving goals that usually
involve coordination with other agents, use of distributed knowledge and
problem solving methods, and invocation of lower level components, like
services. The higher level of abstraction of agents makes them suitable for the
development of complex systems. Another advantage of the agent-oriented
paradigm is that, by designing in terms of actors, roles and goals, the system
models are more understandable to domain experts, easing requirements gathering
and also maintenance and evolution of the system.
We propose the
combination of MAS with SOA by delegating the service orchestration and
choreography to intelligent agents that would adapt the workflow to changes in
the environment. In this way, integration logic would be implemented by means
of goals and semantic description of services that would be used for agents to
decide the optimum workflow of tasks. There are some articles related to this
idea, [SINGH-03], [KORHONEN-03], [VIDAL-04] and [REITBAUER-05], to cite a few.
The next figure shows the evolution of
application integration. From isolated systems with vertical functionality, we
move to concrete data exchange. With process orientation, coarse-grained
services are orchestrated according to a business process formal language.
Finally, in a multiagent system scenario, dynamic composition of services
allows for process optimization and improves fault-tolerance and information
systems’ global management.
To be able to
implement this type of complex interactions, first we need a common
syntax to represent content. XML is a standard syntax to represent content, and
it has served as a basis to develop other standards such as the Simple Object Access
Protocol (SOAP), the Web Services Description Language (WSDL), and ontology languages
such as OWL and RDF.
Once we have
this solved, we need to define the semantics of the communication acts and the
interaction flows. In multiagent systems, we have FIPA’s communicative acts and
interaction protocols.
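To make the FIPA building blocks mentioned above concrete, the following is a minimal sketch of an ACL message represented as a plain data structure. The field names follow the FIPA ACL message structure; the helper class and the simplified string rendering are our own illustration, not an actual agent platform API.

```python
# Minimal sketch of a FIPA ACL message (illustrative helper, not a real library).
from dataclasses import dataclass

@dataclass
class ACLMessage:
    performative: str          # FIPA communicative act, e.g. "request", "inform"
    sender: str
    receiver: str
    content: str
    language: str = "fipa-sl"  # content language
    ontology: str = ""         # shared vocabulary the content refers to
    protocol: str = ""         # interaction protocol, e.g. "fipa-request"

    def serialize(self) -> str:
        """Render in a FIPA-SL-like string form (simplified)."""
        return (f"({self.performative} :sender {self.sender} "
                f":receiver {self.receiver} :content \"{self.content}\" "
                f":protocol {self.protocol})")

# Hypothetical agents and content, for illustration only.
msg = ACLMessage("request", "billing-agent", "rating-service",
                 "(rate (plan premium) (minutes 120))",
                 protocol="fipa-request")
print(msg.serialize())
```

The performative carries the semantics of the act, while the protocol field ties the message to a structured conversation (e.g. the FIPA Request interaction protocol).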
We will go
through the details of dynamic composition of services in the next chapter,
after introducing workflows and workflow management systems.
There are several definitions for the term workflow. In [CASATI], a workflow refers to a set of
activities involving the coordinated execution of multiple tasks by different
processing entities. According to [SWIFT], a workflow defines the flow of information and control in a process,
usually involving different entities that follow a pre-defined set of rules or
specification of tasks, cooperating towards a common goal. The Workflow
Management Coalition (WfMC) defines a workflow
as the automation of a business process, in
whole or part. It also states that a workflow is concerned with the automation of
procedures where documents, information or tasks are passed between
participants according to a defined set of rules to achieve, or contribute to, an
overall business goal [WFMC].
A Workflow Management System (WFMS) is
one which provides procedural automation of a business process by management of
the sequence of work activities and the invocation of appropriate human and/or
IT resources associated with the various activity steps [WFMC-95].
The use of a WFMS involves the specification, modelling,
analysis and coordination of structured work. Automated workflow management
reduces the latency of tasks by reducing human intervention, and standardizes business
processes, which must be formally specified and documented, making them predictable
([CASATI-95], [DICKSON-01]).
This leads, in the end, to an improvement in overall
process quality and productivity.
Some characteristics of business processes are individualism (multiple organizations or
departments trying to maximize their own respective benefits while being part
of the general activity), physical distribution, macroscopic management
(decentralization of task assignment, information and resources), autonomy
(of the groups inside the organization), concurrency in the execution of
tasks, and dynamic adaptability ([ADEPT]).
These features shape WFMS, which we can study from
different points of view ([ALTY-95]):
·
Information
Infrastructure. A
distributed, open environment is necessary to allow the coordination
and cooperation needed to carry out a business process. Application services
distributed across different contexts must be shared and accessed in a seamless
way.
·
Information
Management. It deals
with the way processes are modelled to be controlled automatically. For the
system to be robust, the information flow (what)
should be separated from the responsibility model (who); in other words, the aspects
regarding information should be distinguished from those regarding organization. It could be convenient
to have an intentional model (whys)
explaining the goals that lie behind each task, to ease future reengineering
and dynamic adaptation. Other aspects to consider are time management (when), which enables reasoning about
temporal aspects, and exception handling (what-ifs) [ERIC-93].
·
Information
Presentation. Information
systems must be able to present the information in the most useful way. To do
this, it could be necessary to transform the internal models into
user-friendlier ones.
According to their
historical evolution, WFMS can be classified as follows [SINGH-99]:
·
Closed systems. The first applications used to implement
semi-automated workflows were closed applications automating manual tasks in a
direct way. Business objects and control information are intermingled, so the
evolution of these systems is difficult.
·
Database oriented. The development of databases made it possible to
open up the specification of business information. Information is decoupled from
processes. However, control information is still embedded in the applications.
·
Workflow Management
Tools. Current tools
enable the convenient separation of control information. Processes are viewed
at two granularity levels:
- At the high level, work units are composed by the workflow tool.
- At the unit level, these units are implemented by specific applications.
·
Agent-based tools. The multiagent systems for workflow
management that we propose would have several advantages that we will
highlight later in this chapter.
There are many
common workflows worth mentioning [SINGH-99]:
·
Document Management. Systems
based on the electronic document metaphor. The processes are the same, but
paper is replaced by online forms.
·
Groupware. These systems support cooperation among
members of an organization. Usually, they include document management but with
added functionality for document creation, dissemination, version control and
synchronization.
·
Control-logic
specification. Workflows
are considered as a set of coordinated activities. The workflow specification
defines how these activities are coordinated.
·
Distributed
applications. Virtually
any distributed program could be referred to as a workflow tool, in a wide sense,
especially when there exists some metamodel of the workflow.
·
Transactions. Workflows can be considered as extended
transactions, following a database-oriented point of view. This point of view
is useful when we focus on data integrity. Indeed, the ACID properties (Atomicity,
Consistency, Isolation and Durability) are applicable to this kind
of workflows.
·
Coherent computations. This view considers that workflows are made of selected
tasks which are ordered so as to ensure overall coherence. Data
consistency is important only because it helps ensure that the workflow
behavior is coherent. And this is not always necessary, for example, in the
case that data consistency is lost but the user is informed of this fact.
Cooperative
information systems are multiagent systems with organizational and database
abstractions that operate in open environments [SINGH-99].
Multiagent
systems have features that make them suitable for workflow management:
·
Inherent distribution of data, knowledge, problem solving
methods and responsibilities.
·
Integrity of organizational structure with autonomy of its parts.
·
Complex interactions among agents, coordination, negotiation,
information sharing.
·
Execution flexibility. The solution to a problem is not
pre-defined. Agents perceive the environment and are sensitive to its changes.
Agents are also proactive.
·
System design closer to the real-life situation being modelled. By using
agent-orientation concepts such as actors, roles, goals, organization… the
model of the system is closer to reality, thus narrowing the gap between analysis
and design. The communication between system engineers and domain experts should
benefit from this fact.
Multiagent
systems for workflow management have the following distinctive features
[SINGH-99]:
·
The
environment includes related information resources.
·
Mechanisms
to associate semantic information to these resources, and to keep this
information consistent when these resources are accessed and modified.
·
They
are open to the addition of new
resources, flexible towards the evolution
of those resources, intelligent to
ensure valid states and consistent behavior, and adaptive, adjusting their behavior to unexpected events.
To design this
kind of systems, it is necessary to decouple the information (data and control)
from the system itself.
To define a
workflow, we usually use several types of models, to describe the dynamics of
the workflow, the data model and the organization structure. According to [SINGH-99]
we need to relate the following models to properly define the workflow system:
·
Architecture
/ Model Chart
·
Entity-Relation
Diagram, Object/Class Model
·
Activity
decomposition
·
Control,
Data and Material Flow Diagrams
·
Context
Diagram
Figure 5. Metamodels used to describe a CIS [SINGH-99]
Workflow
definition languages, such as BPEL enable us to define workflows using the XML
syntax with defined entities to express:
·
Sets
of participating services and variables
·
Correlations
and ordering (precedence)
·
Exception
handling and error recovering
·
Flow
control
·
Service
invocation and response
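The workflow elements listed above can be sketched as a plain data structure. The entry names below are loosely modelled on BPEL concepts (partners, variables, activities, fault handlers); this is an illustrative sketch, not actual BPEL syntax, and the service names are hypothetical.

```python
# Sketch of the workflow concerns listed above, as a plain data structure
# (loosely modelled on BPEL concepts; not actual BPEL syntax).
workflow = {
    "partners": ["rating-service", "invoice-service"],   # participating services
    "variables": ["usage", "price", "invoice"],
    "activities": [                                      # flow control: a sequence
        {"invoke": "rating-service", "in": "usage", "out": "price"},
        {"invoke": "invoice-service", "in": "price", "out": "invoice"},
    ],
    "faultHandlers": [{"on": "ServiceUnavailable", "do": "compensate"}],
}

# Every invoked partner must be declared, mirroring the kind of static
# consistency check a workflow definition language can enforce.
assert all(a["invoke"] in workflow["partners"] for a in workflow["activities"])
```

Correlations and ordering are implicit here in the activity sequence; a richer model would make precedence explicit, as the dependency examples later in this chapter do.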
Workflows are suitable to implement business processes because
they are predictable, well defined and fault tolerant. However, they are too
rigid to adapt to unexpected events.
The BPEL (Business
Process Execution Language) is a programming language to describe business
processes. It has two parts: business
protocol definitions, to describe the interface of a business process from
the external point of view, and executable
business processes, to implement the internal logic
and behaviour of a service. BPEL adopts a
centralized approach to defining the workflow, describing how a business process
combines a set of services to fulfill its business goal. This is called service orchestration.
WS-CDL, a standard from the W3C, takes a different approach: it complements
existing description languages like WSDL and BPEL to define choreographies of services. A choreography is a contract between
several parties that describes, from a global point of view, the external
behavior, in terms of the messages exchanged between the services and the clients.
Dynamic
composition of services allows for greater flexibility than the previous
settings. To achieve this, we need to complement the service description in
WSDL with semantic information about what the services are about and what they
do. In [VIDAL-04], DAML-S[1] is
proposed to specify the services’ IOPEs (inputs,
outputs, preconditions and effects) to enable dynamic composition of
services, which can be accomplished at different levels.
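The IOPE idea can be sketched as a simple compatibility check: a candidate can replace a service if it requires no more inputs than the original and delivers at least the same outputs. This is an illustrative simplification, with hypothetical service descriptions; real DAML-S matchmaking reasons over ontology subsumption rather than plain set operations.

```python
# Sketch of IOPE-based service substitution (illustrative; real DAML-S
# matchmaking uses ontology subsumption, not plain set comparison).
def can_substitute(original: dict, candidate: dict) -> bool:
    """candidate may replace original if it needs no extra inputs
    and produces at least the same outputs."""
    return (set(candidate["inputs"]) <= set(original["inputs"])
            and set(candidate["outputs"]) >= set(original["outputs"]))

# Hypothetical descriptions of two rating services.
rate_v1 = {"inputs": {"plan", "usage"}, "outputs": {"price"}}
rate_v2 = {"inputs": {"plan"}, "outputs": {"price", "discount"}}  # more general

print(can_substitute(rate_v1, rate_v2))  # a more general service fits
```

As the text notes, the reverse substitution (a more specific service) could break the workflow: here, replacing rate_v2 by rate_v1 fails because rate_v1 demands an extra input.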
To be able to
implement a multiagent system for workflow management, according to the same
article, it is convenient to translate from BPEL to PNML, an XML syntax for the
representation of Petri nets. As both BPEL and PNML are XML documents, the
transformation may be defined with XSLT templates. A parser replaces each BPEL
construct with a PNML module; finally, all the modules are integrated into a
single PNML document.
Figure 7. Transformation of BPEL4WS to PNML [VIDAL-04]
Moving beyond
functional equivalence, the following scenarios are described:
·
Substitutions. Service substitutions are possible by
comparing their DAML-S descriptions, looking for a similar or more general
service. Substitutions with more specific services would be feasible, but could
break the workflow if some required information is missing.
·
Similarity Matching. At this level, services are replaced by
similar services, based on domain knowledge, as the services’ IOPEs use domain
ontologies that the system must know in order to refine the search.
·
Contextual
Substitution.
A replacement service is chosen based on its place in a workflow,
rather than on its inputs and outputs.
·
Adaptation. The agent selects the service based not
only on the information provided by the ontologies, but also taking into account
its previous experience regarding quality of service.
·
Multiagent Systems
with Workflows. In this
scenario, the agents become service providers with their characteristics of
proactivity, autonomy, selfish behavior… Agents may then diverge somewhat from
the existing workflows; however, they don’t create entirely new workflows.
Figure 8. Spectrum of possible service composition behaviors
[VIDAL-04]
The main
challenge we face deals with decoupling the system models while maintaining their
coherence, as they are interrelated. In the interactions among components,
there is an exchange of information whose format, structure and grammar must be
shared among the participants. The structured sets of interactions
(conversations) also have to be defined by interaction protocols.
Interaction
Oriented Programming (IOP) [SINGH-99] consists of a set of techniques built around
interactions, and is based on key concepts such as autonomy and agent
heterogeneity. There are three layers:
·
Coordination
·
Commitment
·
Collaboration
Commitment regards societies and the roles that agents play, their
capabilities, the commitments they acquire, and their authority. Agents may
instantiate abstract societies, autonomously adopting roles in them. The
creation, operation and dissolution of societies is managed by agents acting
autonomously but honouring the commitments they adhere to. Commitments may also
be cancelled, if the meta-commitments associated with their cancellation are met. The main
contribution of IOP is to formalize ideas from different disciplines,
separating them into an explicit conceptual metamodel that serves as a basis for
programming and methodology development, while being computable [SINGH-99].
Virtual
organizations, in our context, refer to normative
multiagent systems (NMAS). They arise when individuals and institutions need
to coordinate resources and services across institutional boundaries. Users
become part of a virtual organization with shared goals and norms [BOELLA-05].
Traditional
client-server architectures have global norms with centralized control.
Peer-to-peer systems, in contrast, have local control without global norms.
Virtual organizations constitute a new paradigm that combines local control with
global norms.
Some properties
of these organizations are [BOELLA-05]:
·
Dynamic. Agents may join and leave them.
·
Interactive. Agents may draw up contracts with mutual obligations.
·
Commitments
are enforced by mechanisms such as sanctions.
·
There
are roles specific to normative systems (subjects, defender, normative system,
etc.)
MAS are a
natural approach to WFMS when:
·
There
are different actors with different responsibilities and objectives.
·
Their
relationships are complex, perhaps involving some kind of negotiation.
·
Occasionally,
conflicts arise when sharing resources.
·
Knowledge
and resources are distributed.
·
Dynamic
environment: changes in the processes are frequent, or unexpected events occur.
·
We
want to formalize the intentional model of the workflow, to ease future
business process reengineering.
Complex
interactions require defining the communication among the components at a
semantic level. On the other hand, dynamic composition of services requires
formalizing the workflow and the semantic description of its tasks or services,
as well as having algorithms to evaluate their substitution and replacement.
Virtual
organizations are multiagent systems with organizational abstractions that make
them suitable to implement workflows through different organizations.
These
organizations usually have a dynamic composition. There are open research lines
on the creation and destruction of VOs, the mechanisms that enable trust
relationships, and how to ensure that the norms are met [PREECE-05].
Our vision is
that the most important potential of MAS in the domain of information systems
regards business process management and integration, allowing for:
·
Autodiscovery of services. Agents will discover the available (web)
services to get their job done.
·
Automatic Invocation. Given the service description (WSDL) the
agent knows how to invoke the service, and can build the required artifacts
(stubs) to do it, at runtime.
·
Dynamic Composition. Selection, composition and learning are the
different degrees of complexity that agents will master in the next generation
EAI.
·
Goal oriented,
Rule-based behaviour. At this point, the system is able to invent the workflow to fulfill the
business goals, this freedom always being limited by the rules of the application
domain as well as by organizational rules. Both sets of rules are
orthogonal, independent of each other: we can call the first ones vertical rules,
as they relate to domain knowledge, and the second ones horizontal rules, as they
apply to all kinds of agents. To maintain the ruleset, user agents would
interface our MAS with the system administrators and domain experts.
·
Ontology-enabled. Ontologies will be needed to decouple the
design of the MAS from the data and processes. They will be used to capture the
semantics of the web services, which will allow for dynamic composition, and they
provide the glue to handle the data
used as input and output of each task. Ontology mapping is key to achieving
interoperability in open environments.
·
Fault-Tolerance, Data Independency. Agent-based workflow systems are also
better in terms of fault-tolerance, if the system is able to replace a service
in case of unavailability. A change in the data model could be handled if the
change has no implications for the control of the process, or if it
merely enriches the previous model[2].
The addition of new
pluggable services is achieved by setting up directories (like UDDI) holding
information about each service: not only operative knowledge (how to invoke
the service) but also semantic knowledge (what the service is for). Also, context
information (provided by languages such as BPEL or WS-CDL) is needed to know
the place of the service in a workflow.
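A minimal sketch of such a directory entry, combining operative knowledge (endpoint, operation) with semantic knowledge (category, inputs, outputs). All names and the lookup helper are hypothetical, for illustration only.

```python
# Hypothetical directory entry combining operative and semantic knowledge
# about a service (names are illustrative, not a real UDDI schema).
registry = {
    "rating-service": {
        # operative knowledge: how to invoke the service
        "endpoint": "http://example.org/rating",  # WSDL-described endpoint
        "operation": "ratePlan",
        # semantic knowledge: what the service is for
        "category": "billing",
        "inputs": ["plan", "usage"],
        "outputs": ["price"],
    }
}

def find_by_output(registry, needed: str):
    """Return services whose semantic description promises the needed output."""
    return [name for name, desc in registry.items() if needed in desc["outputs"]]

print(find_by_output(registry, "price"))  # ['rating-service']
```

An agent would use the semantic part to select a candidate and the operative part to build the invocation artifacts at runtime.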
The Pi-calculus
is the mathematical foundation of BPEL4WS, BPML, WSFL, XLANG, XPDL, WS-CDL and
WSCI. A different, valid approach is Petri nets, which may be computed
using the XML-based language PNML. In favor of Petri nets is their
greater expressive power [AALST-05]. In the same line, G-Nets are used for
multiagent systems, as explained in [XU-00] and [XU-01]. Finally, the paper
[VIDAL-04] proposes the translation from BPEL to Petri nets to implement an
agent-based workflow system.
To add a
semantic description to a web service, the most popular language is DAML-S /
OWL-S.
We will put
everything together in an example, without descending to the details of a
concrete, working example. That’s why we use the term abstract example. Suppose we have an IT service-oriented
infrastructure. The services are grouped by business functions such as billing,
ordering, customer care, etc. They all have a Web Service interface and are
publicly available in our domain of study.
Our goal is to
design, at a high level of abstraction, a multiagent system to integrate the
services to implement business processes.
We have to build
a service model being computable by agents, and devise an agent architecture to
implement a generic business process management system.
Figure 9. Web Services interfacing various business areas from
inside and outside.
We would adopt
the Petri-Net / DAML-S approach proposed in [VIDAL-04] to model the workflow
and the service constraints (Inputs,
Outputs, Pre-Conditions and Effects, or IOPEs).
The effects are related to the agent’s
mental-state changes which define the intentional model (the whys of the process).
We have to
define a goal hierarchy model, role model (agent model), interaction and
organization models, following an agent-oriented methodology of our choice.
According to the evaluation made in [GOMEZ-05], Ingenias would be our choice.
Now our goal is
to build a multiagent system that reuses knowledge, has an effective management
of agents, can be reconfigured to meet changing business requirements and is
always on, like many large-scale systems. Also, it should be scalable and allow
for load balancing.
To this purpose,
we propose to design a meta-organization independent
of the application domain, where agents are coordinated hierarchically.
Each coordinator role has system implications,
such as the number or types of agents it can instantiate and manage, as in a
human organization. Each role also has to define the platform services
ascribed to it and a time to live (TTL). We propose to use a TTL for each agent, in
the same way it is used in communication protocols like IP, to balance the
rate of creation and destruction of agents, which ensures stability and avoids
infinite loops. We haven’t found this idea in any agent-oriented methodology up
to now.
The coordinator, using the database of problem-solving methods (PSMs),
decomposes top-level goals into partial goals, which are associated with roles of
lower degree than that of the coordinator. The coordinator will grant these
roles access to the resources and services that it, in turn, has been
granted.
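The decomposition step can be sketched as a recursive lookup in a PSM table that maps each goal to its subgoals, the leaves being directly assignable to lower-degree roles. The goal names below are hypothetical.

```python
# Sketch of top-down goal decomposition from a PSM table (goal names are
# hypothetical, for illustration only).
PSM = {
    "issue-invoice": ["rate-usage", "apply-discounts"],
    "rate-usage": [],        # leaf: directly assignable to a role
    "apply-discounts": [],
}

def decompose(goal: str):
    """Return the leaf goals that roles of lower degree will be assigned."""
    subgoals = PSM.get(goal, [])
    if not subgoals:
        return [goal]
    leaves = []
    for g in subgoals:
        leaves += decompose(g)
    return leaves

print(decompose("issue-invoice"))  # ['rate-usage', 'apply-discounts']
```

Because the PSM table lives in a database rather than in the agents’ code, redefining a decomposition reconfigures the system without redeployment, which is the point of centralizing the PSMs.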
It is important
to note that, in our model, the PSMs are centralized in a database, and are
public inside a functional unit or agency.
This promotes reusability and manageability.
The roles and
number of instances can be established in advance or, in more complex settings,
could be resolved at runtime.
Either way, what
is important from our point of view is to decouple the role configuration from
the system source code, storing it instead as centralized, computable
configuration data. In this way it is easier to change the configuration of the
system and to assign roles at runtime.
Figure 10. Relation between agents, roles, tasks and agencies
[FARHOODI-97]
Autodiscovery
of services that fulfill the partial goals of the agents can be implemented by
a platform service or by a hierarchical model similar to the internet domain
name resolution system (DNS): the agents would ask their supervisor whom to ask for
help, and it would either answer or escalate the question to the next level. This
schema benefits from the locality principle (if we group the services properly,
most of the interactions will happen in the neighbourhood), which is efficient
and scales well. Helping agents would receive credits as compensation, with which they could in turn receive
help from others. In contrast, agents unwilling to cooperate would be
negatively reported to their supervisor and could be sanctioned.
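The supervisor-escalation scheme can be sketched as follows: each supervisor answers from its local knowledge when possible and otherwise escalates one level up, so that well-grouped services keep most lookups local. The hierarchy and service names are hypothetical.

```python
# Sketch of DNS-like service lookup through the supervisor hierarchy
# (names are hypothetical, for illustration only).
class Supervisor:
    def __init__(self, known_services: dict, parent=None):
        self.known = known_services   # locally known service providers
        self.parent = parent          # next level up the hierarchy

    def resolve(self, service: str):
        """Answer locally if possible, otherwise escalate (locality principle)."""
        if service in self.known:
            return self.known[service]
        if self.parent is not None:
            return self.parent.resolve(service)
        return None                   # unknown anywhere in the hierarchy

root = Supervisor({"rating": "agent-R"})
team = Supervisor({"printing": "agent-P"}, parent=root)
print(team.resolve("printing"))  # local answer: agent-P
print(team.resolve("rating"))    # escalated to root: agent-R
```

A credit or sanction mechanism, as described above, would sit on top of this lookup: the resolving supervisor records who helped and who refused.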
Let’s imagine this trivial workflow to
illustrate simple process reengineering:
Figure 11. A simple workflow (I)
We have defined
PSMs that describe tasks (K, L, M, X, Y, Z) that satisfy goals. So the top-level
coordinator will decompose the main goal (satisfied by task Z) into
prerequisites, recursively. Afterwards, the tasks are assigned to roles
according to the task type, subject or application domain.
Then, the
necessary agents for the roles defined are instantiated.
Given the
prerequisites of each task, the resulting workflow is that depicted in the
previous figure.
Due to changes
in the business, one of the tasks is modified. The “X” task now depends on an
available resource, named “N”, instead of depending on the output of task “M”.
Even if “M” is
still required for the workflow to complete, thanks to the rule-based behavior
of the agents, the new workflow will execute task “X” in parallel with K, L and M,
saving a significant amount of time.
Figure 12. Automatic reorganization of
agents (II)
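The reengineering gain can be sketched with a dependency-driven scheduler: given each task’s prerequisites, every task whose inputs are already available runs in the same step, so changing X’s prerequisite from M to the available resource N moves X into the first parallel step. The dependency graph below is an assumed reconstruction of the figures, not reproduced from them; resource N is treated as available from the start.

```python
# Sketch: derive parallel execution steps from task prerequisites (task
# names follow Figures 11-12; the exact dependency graph is assumed).
def schedule(deps, available=()):
    done, steps = set(available), []
    pending = set(deps)
    while pending:
        ready = {t for t in pending if set(deps[t]) <= done}
        if not ready:
            raise ValueError("cyclic or unsatisfiable dependencies")
        steps.append(sorted(ready))   # all ready tasks run in parallel
        done |= ready
        pending -= ready
    return steps

before = {"K": [], "L": ["K"], "M": ["L"], "X": ["M"], "Y": ["X"], "Z": ["Y", "M"]}
after = dict(before, X=["N"])        # X now depends on resource N, not on task M

print(schedule(before))                  # X must wait for M
print(schedule(after, available=["N"]))  # X starts immediately, alongside K
```

No workflow was designed by hand here: the steps emerge from the task prerequisites, which is exactly the rule-based behavior the example illustrates.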
The key
difference between this approach and the current trend on business process
modelling is that, in our approach, the expert does not design the workflow, but the rules and PSMs that lie behind it.
In case
human validation is needed due to the criticality of the business process, there
would be no impediment to including a user validation step, without losing the
optimality of a machine-generated workflow. For this purpose, a simulator would
be needed to figure out the workflow in advance.
Another advantage
of this approach is that the system is always on, as we stated at the
beginning, and is reconfigured according to the PSM repositories.
If an agent’s
behavior is not as expected, the coordinator or supervisor could either
undeploy it or upgrade it with the new version of the affected PSM installed in
the corresponding repositories. The low-level domain knowledge could also be
downloaded by the agents from a central database, like a plug-in or dynamically
loaded library, to meet our goal of generic agent deployment.
The next step,
more difficult but feasible, would be to incorporate learning techniques into
the agents (especially in the higher levels of the hierarchy) so that they are
able to update the PSMs and upgrade the operator agents.
Autonomy and
local control of the software agents are compatible with the existence of
globally accessible repositories. In our opinion, ontologies, PSMs, workflows
and other models should be centralized at a system or agency level.
This not only
allows for reuse but also eases the system management, as the agent’s behavior
can be controlled in the same way that procedures established by the directors
control human behavior in an organization.
In this
architecture, the coordinator role is in charge of deciding the number of agents
to be created, depending on the tasks to be done and the load of the system.
Load balancing is implemented at this point.
As we stated,
subordinated agents report to their coordinator but are autonomous to pursue
their goals, yet always meeting the horizontal organizational rules.
Additionally, a credit reward system and time-to-live mechanism is introduced
to help achieve system stability.
When an agent
needs help from others, it uses its acquaintance model, which is enriched with
references that come (usually) top-down, in a hierarchical system similar to
the internet DNS.
The
concretization of this architecture is left as open research work, oriented
to the development of new designs and architectures for multiagent information
systems.
From our point
of view, there is already a solid foundation from the perspective of AI and AOSE
to support the development of enterprise multiagent systems, and there have
already been many successful projects in this field. In the last decade, a
significant amount of research has been done regarding requirements engineering,
process engineering, agent-oriented architectures, platforms and methodologies,
service-oriented architectures, ontology description languages, automatic
ontology mapping, etc.
So in our
opinion the main challenge now is to be able to apply this knowledge to real
life situations, demonstrating its potential and paving the way for industrial
adoption.