Verifying requirements for resource-bounded agents
EPSRC grant EP/E031226
Principal investigator: Natasha Alechina, co-investigator: Brian Logan
Visiting fellows: Piergiorgio Bertoli, Chiara Ghidini, Luciano Serafini
Summary
The project will provide theoretical foundations and practical tools for
analysing the resource requirements of systems of reasoning agents, for
example agents which reason using ontological rules to provide
a web service. The resources we shall consider include the time,
memory, and communication bandwidth required by the agents to select the
next action to perform.
The trend towards ever smaller agent platforms means that resource
utilisation is becoming an increasingly important factor in agent design
and deployment.
However, the complex, often distributed, derivations implied by modern
agent designs make it hard for agent developers to predict system resource
requirements a priori. The development of formal frameworks and
practical verification tools to exploit them is therefore key to the
successful development of provably correct agent designs for emerging
resource-limited agent paradigms such as mobile agents and platforms
such as PDAs and smart phones.
The project builds on previous joint work between the investigators and
the visiting researchers, which used model-checking techniques to verify
resource requirements.
Background
Consider an agent that has a finite knowledge base and some rules of
inference which allow it to derive new information from its knowledge base.
It is intuitively clear that some derivations require more time and/or
memory than others (e.g., to store all the relevant information, or to store
intermediate results), and that two agents with the same knowledge base and
the same set of inference rules, but with different amounts of
memory, may not be able to answer the same queries or may take different
amounts of time to answer them. If two agents need to communicate in order to answer a query, then the number and size of messages that must be exchanged will depend on the query, and the time and memory available to the agents.
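The effect of a memory bound on derivability can be illustrated with a small sketch (the knowledge base, rules, and semantics here are purely illustrative, not the models developed in our previous work): memory is a bound on the number of formulas held at once, and adding a formula to full memory overwrites a stored one.

```python
from collections import deque

def derivable(kb, rules, goal, memory):
    """Can an agent with `memory` cells ever hold `goal` in memory?

    kb     -- formulas the agent may read (load) into memory
    rules  -- list of (premises, conclusion) pairs, premises a frozenset
    memory -- maximum number of formulas held in memory at any time
    """
    start = frozenset()
    frontier, seen = deque([start]), {start}
    while frontier:
        state = frontier.popleft()
        if goal in state:
            return True
        # candidate additions: KB reads, or conclusions of enabled rules
        new = set(kb) | {c for prem, c in rules if prem <= state}
        for f in new - state:
            if len(state) < memory:           # free cell: just add f
                succs = [state | {f}]
            else:                             # full: f overwrites a formula
                succs = [(state - {d}) | {f} for d in state]
            for s in succs:
                if s not in seen:
                    seen.add(s)
                    frontier.append(s)
    return False

# Deriving E requires holding C and D together, each of which requires
# holding A and B together: feasible with 3 cells, but not with 2.
kb = {"A", "B"}
rules = [(frozenset({"A", "B"}), "C"),
         (frozenset({"A", "B"}), "D"),
         (frozenset({"C", "D"}), "E")]
print(derivable(kb, rules, "E", memory=3))   # True
print(derivable(kb, rules, "E", memory=2))   # False
```

With three cells the agent can keep C in memory while rederiving D; with two cells, storing C forces it to overwrite A or B, after which the second rule can never fire.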
The computational resources required by a reasoning agent to achieve a
goal or answer a query are of considerable theoretical and practical
interest. From a theoretical point of view, the problem is related to the
questions investigated in proof complexity
[Haken85], [Alekhnovich02], of the lower bounds on the size of
proofs in deductive systems, and of lower bounds on memory required to
verify them. From a practical point of view, the question of whether
an agent has sufficient resources (memory, time or communication
bandwidth) to achieve its goal(s) is clearly a major concern for the
agent developer. As agent tasks become more open ended, the amount of
resources required to achieve them becomes harder to predict a priori.
For example, the reasoning capabilities assumed of agents by many web
service applications are non-trivial (e.g., reasoning over complex
ontologies or about business processes described by a set of business
rules), and the time, memory and communication requirements are
correspondingly difficult for the agent developer to determine a
priori. At the same time, trends towards mobile agents and agents
which run on mobile devices such as PDAs and smart phones call for more
processor- and memory-efficient agent designs (e.g., the Micro-FIPA-OS
platform). Such devices typically have a relatively small amount of
physical memory (and no virtual memory), which must be shared between
the OS, the agent platform and other applications running on the
device. While increased bandwidth and more powerful handheld devices
will undoubtedly become available, the rapid growth in, e.g., the
number and complexity of ontologies, seems likely to outstrip any
increases in hardware capabilities, at least for the foreseeable
future.
Despite the importance of the topic, there has been relatively little work
in this area.
There is an extensive body of work on epistemic logics
(logics of knowledge and belief), for
example [Fagin95]. However, most epistemic logics assume that the
agent's knowledge is deductively closed, and therefore do not try to model
the time and space restrictions on the agent's ability to derive
consequences of its beliefs.
Where communication is considered, it is usually modelled via axioms that
instantaneously transmit knowledge from one agent to another (e.g.,
Ki A -> Kj A: if agent i knows that A, then agent j
knows that A; see e.g. [Fagin95]).
The reality is rather different: communication requires bandwidth and
takes time.
There is a growing body of work where the agent's deduction steps are
explicitly modelled in the logic, for example
[Duc:97], [Alechina04], [Agotnes2005], which makes it
possible to model the time it takes the agent to arrive at a certain
conclusion. A different kind of limitation, on the depth of belief
reasoning allowed, is studied in [Fisher99]. Both time and space limitations
on the agent's knowledge were considered in step logics
[ElgotDrapkin91]. However, rather than verifying the space required to
solve a given problem, [ElgotDrapkin91] is concerned with restricting
the size of short-term memory in order to isolate any possible
contradictions, thereby avoiding the problem of swamping: deriving all
possible consequences from a contradiction.
The problem of formal verification of multi-agent systems has led to a
growing body of work, especially in the area of multi-agent model-checking
[Benerecetti98], [Lomuscio04]. This work,
however, is again mainly focused on logically omniscient agents, that
is, agents who instantaneously believe all the logical consequences of
their basic beliefs, and no time and space limitations are taken into
account.
Research in proof complexity investigates
worst case upper and lower time and memory bounds for deductive systems
(for example, resolution). However, although general
bounds on time and space complexity may be known for some of the systems
we propose
to investigate, it is still important to know more precise requirements for
specific knowledge bases, which may be substantially different from the
worst case bounds. The situation is similar to, for example, research on
response times of rule-based programs. Although theoretical bounds on
termination for rule-based programs are known, there is still a significant
practical interest in precise bounds for response time of specific rule
sets.
In our previous joint work [Alechina06b], [Albore06],
[Alechina06f], we
have taken some preliminary steps towards the automated verification of
the resource requirements of reasoning agents. In
[Alechina06b] and [Alechina06f], we investigated whether an agent
with a knowledge base KB has sufficient memory to derive a given formula
A. We represent a reasoning agent as a finite state machine in which
the states correspond to the formulas currently held in the reasoner's
memory and the transitions between states correspond to the application of
an inference rule. We introduced a logical language in which we can
formulate properties of memory-bounded reasoners, and specified and
verified properties of a rule-based agent and an agent reasoning in
classical logic. To check whether a reasoner has enough memory
to derive a formula A, we specify the FSM as input to the model-based
planner MBP [Cimatti03] and check whether the reasoner has a
plan (a choice of memory allocations and inference rule applications), all
executions of which lead to states containing A. We also investigated
examples of trade-offs between time and memory requirements for rule-based
reasoners (larger memory enables shorter derivations).
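The condition that all executions of a plan lead to states containing A is the strong-planning condition of [Cimatti03]. For an explicit (non-symbolic) model it can be checked by a backward fixpoint over "winning" states; the following sketch uses hypothetical states and actions, not MBP's symbolic machinery:

```python
def strong_plan_exists(states, actions, goals, init):
    """Backward fixpoint: a state is 'winning' if some action is applicable
    in it and ALL possible outcomes of that action are already winning.
    actions: {name: {state: set_of_successor_states}}, nondeterministic.
    Returns True iff every initial state is winning."""
    win = set(goals)
    changed = True
    while changed:
        changed = False
        for s in states - win:
            for trans in actions.values():
                succs = trans.get(s)
                if succs and succs <= win:
                    win.add(s)
                    changed = True
                    break
    return set(init) <= win

# Hypothetical model: action 'a' in s0 nondeterministically yields s1 or
# s2; action 'b' reaches the goal s3 from either, so a strong plan exists.
states = {"s0", "s1", "s2", "s3"}
actions = {"a": {"s0": {"s1", "s2"}},
           "b": {"s1": {"s3"}, "s2": {"s3"}}}
print(strong_plan_exists(states, actions, {"s3"}, {"s0"}))   # True
```

If action 'b' were applicable only in s1, the nondeterministic outcome s2 would be a dead end and no strong plan would exist.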
In [Alechina06b], we assumed that backtracking when
reasoning by cases does not require extra memory. In more recent work
[Albore06], we extended our model of a classical reasoner to
incorporate a more realistic account of memory use for backtracking.
However, while this work represents a significant advance on the state
of the art in epistemic logics, many key questions remain unanswered.
Specifically in our work to date, we have considered only single
agents (and no communication costs), which reason using relatively
simple logical formalisms, and have mostly focused on a single
resource (memory).
We also assumed that each formula has constant size. While this
assumption is routinely made when analysing the space complexity of
systems, such as resolution, which lack an unrestricted conjunction
introduction rule [Alekhnovich02], it is too strong for systems which
allow conjunction introduction: any set of formulas can then be conjoined
into a single formula, which counts as constant size, so space complexity
becomes constant.
In addition, our model-checking
work has adopted simple direct encodings of finite state machines into the
representational language of the model-based planner MBP,
which we use for automatic verification. While sufficient for small
problems, this approach is unlikely to scale to the verification of
non-trivial agent systems.
Aims and objectives
The aim of the proposed project is to define a framework for the
representation, specification and verification of resource-bounded agents.
This will involve addressing the limitations of our preliminary joint work,
namely the issues of expressiveness and scalability. We will
extend our approach to allow the verification of agents which reason using
richer logical formalisms (specifically description logics) and to consider
the interaction between different resources (time, memory and communication
bandwidth). In addition we will address the problem
of scalability, by developing new encoding schemes which internalise the
agent's reasoning strategy and memory model within the model checker. The
resulting enhanced tools will allow a more realistic evaluation of the
feasibility of our approach for larger problems.
While the
temporal aspects of reasoning have been considered in previous work, to the
best of our knowledge, there has been no detailed treatment of memory
requirements and no systematic investigation of resource trade-offs in
resource-bounded reasoners. The development of such formal frameworks and
practical verification tools to exploit them is key to the successful
development of provably correct agent designs for emerging
resource-limited agent paradigms such as mobile agents and platforms
such as PDAs and smart phones.
The objectives of the project are:
- To define epistemic logics for the representation of real
(resource-bounded) reasoning/planning agents in a cooperative setting.
We will consider agents with different reasoning capabilities (e.g.,
agents reasoning in propositional logic, description logics, or
rule-based agents).
- To define computational models for the logics above. In particular
we will define models to express reasoning strategies used by the agents and
alternative memory models required to support different kinds of
reasoning strategies.
- To investigate trade-offs between multiple resource bounds
(memory, time and communication bandwidth).
- To develop tools to support the automatic verification of resource
bounded agents.
- To demonstrate the scalability and expressiveness of our approach by
applying the framework and tool to one or more example scenarios, for
example, agents which reason using ontological rules/description logic
to provide a web service.
Methodology
1. Defining logics
We will build on our preliminary work with the researchers at ITC-irst
and others
[Alechina06b], [Albore06], [Agotnes06], [Alechina06e], [Alechina06f].
The main methodological point here is that we interpret beliefs
syntactically (as formulas, not as sets of possible worlds).
The language of the logics will contain a syntactic belief operator,
temporal modalities to describe the transition system, and other modalities
similar to the ones introduced in [Agotnes06] to express
memory limitations.
Preliminary results show that such logics are computationally more
tractable than standard temporal epistemic logics.
2. Defining computational models
To model an agent reasoning about a knowledge base KB and a formula
A in logic L, we will represent the states of the agents as
assignments to finitely many formulas (all subformulas of KB and
A). Transitions correspond to applications of inference rules of L
to formulas which have a value true or false in the state. The
constraint on memory corresponds to a restriction that at most n
formulas can have a value true or false in any state (for the case
when formulas are assumed to have constant size); or that the combined
size of all formulas defined in a state can be at most n. The
approach to modelling multi-agent systems and communication will be
based on our approach to a simpler case of rule-based agents in
[Alechina06e]. The environment can be modelled as a distinguished
agent.
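The two memory constraints can be made concrete on a state represented as a partial truth assignment (the formulas, the crude symbol-count size measure, and the single rule shown are illustrative):

```python
def and_intro(state, a, b):
    """Conjunction introduction: if a and b are both assigned True,
    return the successor state in which (a & b) is also True."""
    if state.get(a) is True and state.get(b) is True:
        succ = dict(state)
        succ[f"({a} & {b})"] = True
        return succ
    return None                    # rule not applicable in this state

def mem_count(state):
    # constant-size assumption: memory use = number of defined formulas
    return len(state)

def mem_size(state):
    # size-sensitive measure: total number of symbols over all formulas
    return sum(len(f.split()) for f in state)

s = {"p": True, "q": True}
t = and_intro(s, "p", "q")
print(t)                           # {'p': True, 'q': True, '(p & q)': True}
print(mem_count(t), mem_size(t))   # 3 5
```

Under mem_count the derived conjunction costs one cell regardless of its length, whereas under mem_size its cost grows with the size of its conjuncts; a transition is then admissible only if the chosen measure of the successor state is at most n.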
3. Investigating trade-offs
In our earlier work, we found examples of propositional formulas which
require less time (computational steps) to prove with larger memory, and
are still provable but require longer derivations with smaller memory.
A key objective of this project is a systematic investigation of such
trade-offs. We will investigate trade-offs between memory and
communication costs, and between communication costs and time measured
as the number of transitions required by the multi-agent system to
achieve the goal, provided the reasoners execute in parallel. Our
approach will be informed by work in proof complexity on the
relationships between the size of proofs and space required to verify
them. We will also draw on the notions of communication complexity
[Yao79] to represent communication costs as a function of the
number or size of formulas which reasoners have to exchange to solve a
common task.
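Communication cost as a function of the number of formulas exchanged can be sketched as a shortest-path problem over joint belief states (the rules and the distribution of beliefs are hypothetical; local inference is free, each transmitted formula costs one message, and message size is ignored):

```python
import heapq
from itertools import count

def deductive_closure(beliefs, rules):
    """Free local closure of one agent's beliefs under the shared rules."""
    beliefs = set(beliefs)
    changed = True
    while changed:
        changed = False
        for prem, concl in rules:
            if prem <= beliefs and concl not in beliefs:
                beliefs.add(concl)
                changed = True
    return frozenset(beliefs)

def min_messages(b1, b2, rules, goal):
    """Fewest formulas the agents must exchange (in either direction)
    before agent 2 can derive `goal`; Dijkstra over joint states."""
    tie = count()                  # tie-breaker so the heap never compares sets
    start = (deductive_closure(b1, rules), deductive_closure(b2, rules))
    dist = {start: 0}
    pq = [(0, next(tie), start)]
    while pq:
        cost, _, (s1, s2) = heapq.heappop(pq)
        if goal in s2:
            return cost
        if cost > dist[(s1, s2)]:
            continue
        # transmit one formula in either direction, then close locally
        moves = ([(s1, deductive_closure(s2 | {f}, rules)) for f in s1 - s2] +
                 [(deductive_closure(s1 | {f}, rules), s2) for f in s2 - s1])
        for nxt in moves:
            if cost + 1 < dist.get(nxt, float("inf")):
                dist[nxt] = cost + 1
                heapq.heappush(pq, (cost + 1, next(tie), nxt))
    return None

# Agent 1 knows A, agent 2 knows B; deriving G needs both, so a single
# message (sending A to agent 2) suffices.
rules = [(frozenset({"A", "B"}), "G")]
print(min_messages({"A"}, {"B"}, rules, "G"))   # 1
```

Reintroducing the memory bound on each agent's state, or weighting each transmission by formula size, turns the same search into the memory/bandwidth and bandwidth/time trade-offs described above.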
4. Developing an automatic tool
In our previous work, the verification of resource requirements
exploited the brute-force breadth-first search algorithms provided by
MBP. While this guarantees that the proofs obtained
are of minimal length, it presents two major drawbacks. First, in spite
of MBP's effective symbolic machinery, scalability is limited by the
huge number of states that are visited using such a search strategy.
Second, we are not able to model the particular reasoning strategy
adopted by a given reasoning agent; our results implicitly refer to the
smartest possible agent for the problem under consideration.
We will design a language in which we can express the different
reasoning strategies that may be used by reasoning agents, and will
extend MBP so that this additional input constrains the search space.
The idea of driving search by hints or
strategies has been explored in the area of classical planning.
TLPlan [Kabanza] and TALPlanner [Kvarnstrom] use
temporal logics to describe rules that control forward chaining. Linear
temporal logic (LTL) has also been used in [Albore] to constrain the
search performed by MBP in partially observable planning domains.
However, it is not straightforward to adapt their techniques for
our purpose, because they assume a deterministic model, and LTL may not
be suitable for representing search strategies for tree-structured proofs.
We will need to identify a general and flexible language to represent
such strategies.
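How such a strategy prunes the search can be sketched by comparing unconstrained forward search with a search in which a (hypothetical) strategy filters the rule applications explored in each state:

```python
from collections import deque

def search(kb, rules, goal, strategy=None):
    """Breadth-first forward search over belief sets. `strategy`, if given,
    filters which rule applications may be explored in a state. Returns
    the number of states visited before the goal is derived."""
    start = frozenset(kb)
    frontier, seen = deque([start]), {start}
    visited = 0
    while frontier:
        state = frontier.popleft()
        visited += 1
        if goal in state:
            return visited
        apps = [(p, c) for p, c in rules if p <= state and c not in state]
        if strategy is not None:
            apps = [app for app in apps if strategy(app, goal)]
        for _, c in apps:
            succ = state | {c}
            if succ not in seen:
                seen.add(succ)
                frontier.append(succ)
    return None

# Hypothetical rule set: one chain a0 -> a1 -> a2 -> a3 reaches the goal;
# the remaining rules only generate irrelevant conclusions.
chain = [(frozenset({f"a{i}"}), f"a{i+1}") for i in range(3)]
noise = [(frozenset({"a0"}), f"junk{i}") for i in range(4)]
relevant = {"a1", "a2", "a3"}

blind = search({"a0"}, chain + noise, "a3")
guided = search({"a0"}, chain + noise, "a3",
                strategy=lambda app, goal: app[1] in relevant)
print(blind, guided)   # the guided search visits far fewer states
```

Internalising the strategy inside MBP's symbolic search, rather than applying it as an explicit filter as here, is precisely the engineering challenge described above.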
The scalability of our current verification approach is also negatively
influenced by the fact that we explicitly represent the memory model as a
part of the planning domain. The resulting combinatorial explosion is
therefore due in large part to the fact that a single epistemic state can be
associated with several different configurations of, e.g., memory usage.
We therefore intend to `internalise' the representation of the memory
model used in the search within MBP's algorithms, so that the model given to
MBP abstracts away from resource-related constraints. These will be
considered exclusively during the search, separately from the
handling of epistemic states. The effective combination of logical and
resource representations within the search algorithms is far from trivial,
and requires a careful redesign of MBP to take full advantage of symbolic
techniques. However work on integrating arithmetics and logics within
symbolic architectures, such as in the MathSat system [Bozzano] have
shown that substantial performance improvements can be achieved.
5. Evaluation
We will evaluate the scalability and expressiveness
of our approach using two case studies, which will determine the relationship
between the size of a knowledge base and the time required by MBP to verify
resource requirements for answering typical queries. Our preliminary plan for
the case studies is to test optimisations to MBP on propositional
reasoning examples, such as business rules, and to test ontological
reasoning capabilities on a set of ontological definitions and rules,
such as the formalisation of the MIT Process Handbook in [Grosof03].
However, since at the moment there are several different proposals for
incorporating rules into ontology languages, and
it is not clear which one will achieve widespread acceptance in the community,
we will determine the exact language and content of the example ontology
during the second year of the project.
We plan to make use of the HPC cluster at Nottingham to
run the computationally intensive model-checking experiments entailed
by the case studies.
References
- [Agotnes06]
T. Agotnes and N. Alechina.
Knowing minimum/maximum n formulae.
In Proc. 17th European Conference on Artificial Intelligence
(ECAI 2006), 2006.
- [Agotnes05]
T. Agotnes and M. Walicki.
Complete axiomatizations of finite syntactic epistemic states.
In Proceedings of the 3rd International Workshop on Declarative
Agent Languages and Technologies (DALT 2005), July 2005.
- [Albore06]
A. Albore, N. Alechina, P. Bertoli, C. Ghidini, B. Logan, and L. Serafini.
Model-checking memory requirements of resource-bounded reasoners.
In Proc. 21st National Conference on Artificial Intelligence
(AAAI 2006). AAAI Press, 2006.
- [Albore]
A. Albore and P. Bertoli.
Safe LTL Assumption-based Planning.
In Proc. International Conference on Planning and Scheduling
(ICAPS'06), 2006.
- [Alechina06f]
N. Alechina, P. Bertoli, C. Ghidini, M. Jago, B. Logan, and L. Serafini.
Model-checking space and time requirements for resource-bounded
agents.
In Proc. 4th Workshop on Model Checking and Artificial
Intelligence (MoChArt 2006), 2006.
- [Alechina06b]
N. Alechina, P. Bertoli, C. Ghidini, M. Jago, B. Logan, and L. Serafini.
Verifying space and time requirements for resource-bounded agents.
In Peter Stone and Gerhard Weiss, editors, Proc. 5th
International Joint Conference on Autonomous Agents and Multi-Agent Systems
(AAMAS 2006), pages 217--219, Hakodate, Japan, May 2006. IEEE Press.
- [Alechina06e]
N. Alechina, M. Jago, and B. Logan.
Modal logics for communicating rule-based agents.
In Proc. 17th European Conference on Artificial Intelligence
(ECAI 2006), 2006.
- [Alechina04]
N. Alechina, B. Logan, and M. Whitsey.
A complete and decidable logic for resource-bounded agents.
In Proc. 3rd International Joint Conference on Autonomous Agents
and Multi-Agent Systems (AAMAS 2004), pages 606--613, 2004.
- [Alekhnovich02]
M. Alekhnovich, E. Ben-Sasson, A. A. Razborov, and A. Wigderson.
Space complexity in propositional calculus.
SIAM J. Comput., 31(4):1184--1211, 2002.
- [Benerecetti98]
M. Benerecetti, F. Giunchiglia, and L. Serafini.
Model Checking Multiagent Systems.
Journal of Logic and Computation, Special Issue on Computational
and Logical Aspects of Multi-Agent Systems, 8(3):401--423, 1998.
- [Bozzano]
M. Bozzano, R. Bruttomesso, A. Cimatti, T. Junttila, P. van Rossum, S. Schulz,
and R. Sebastiani.
MathSAT: A tight integration of SAT and mathematical decision
procedures.
Journal of Automated Reasoning, to appear.
- [Cimatti03]
A. Cimatti, M. Pistore, M. Roveri, and P. Traverso.
Weak, Strong, and Strong Cyclic Planning via Symbolic
Model Checking.
Artificial Intelligence Journal, 147(1,2):35--84, July 2003.
- [ElgotDrapkin91]
J. Elgot-Drapkin, M. Miller, and D. Perlis.
Memory, reason and time: the Step-Logic approach.
In Philosophy and AI: Essays at the Interface, pages 79--103.
MIT Press, Cambridge, Mass., 1991.
- [Fagin95]
R. Fagin, J. Y. Halpern, Y. Moses, and M. Y. Vardi.
Reasoning about Knowledge.
MIT Press, Cambridge, Mass., 1995.
- [Fisher99]
M. Fisher and C. Ghidini.
Programming Resource-Bounded Deliberative Agents.
In Proc. 16th International Joint Conference on Artificial
Intelligence (IJCAI'99), pages 200--206. Morgan Kaufmann, 1999.
- [Grosof03]
B. N. Grosof and T. C. Poon.
SweetDeal: representing agent contracts with exceptions using XML
rules, ontologies, and process descriptions.
In Proc. 12th Int. World Wide Web Conference, WWW2003, pages
340--349. ACM, 2003.
- [Kabanza]
F. Kabanza, M. Barbeau, and R. St-Denis.
Planning Control Rules for Reactive Agents.
Artificial Intelligence, 95(1):67--113, 1997.
- [Lomuscio04]
M. Kacprzak, A. Lomuscio, and W. Penczek.
Verification of multiagent systems via unbounded model checking.
In Proc. 3rd International Joint Conference on Autonomous Agents and
Multiagent Systems (AAMAS 2004), 2004.
- [Kvarnstrom]
J. Kvarnstrom and P. Doherty.
TalPlanner: A temporal logic based forward chaining planner.
Annals of Mathematics and Artificial Intelligence, 30:119--169,
2001.
- [Yao79]
A. C.-C. Yao.
Some complexity questions related to distributive computing
(preliminary report).
In Conference Record of the 11th Annual ACM Symposium on Theory
of Computing, pages 209--213. ACM, 1979.