DisCSP - The Distributed Constraint Satisfaction Problem

1. Introduction.

Constraint programming is a programming paradigm used to describe and solve large classes of problems, such as search, combinatorial and planning problems. Lately, the AI community has shown increasing interest in distributed problems that can be solved by modeling them with constraints and agents. The idea of sharing the parts of the problem among agents that act independently and collaborate to find a solution by exchanging messages has proved useful. It has also led to the formalized problem known as the Distributed Constraint Satisfaction Problem (DCSP) [4][5].

Definition 1. - The CSP model. The constraint-based model (CSP - Constraint Satisfaction Problem), existing for centralized architectures, consists of:

- n variables x_1, x_2, ..., x_n, whose values are taken from finite, discrete domains D_1, D_2, ..., D_n, respectively;

- a set of constraints on their values.

Solving a CSP means finding an assignment of values to all the variables such that all the constraints are satisfied.
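As a small illustration (the variables, domains and constraints below are invented for the example), a CSP and a naive exhaustive solver can be sketched as follows:

```python
from itertools import product

# A minimal CSP sketch: variables x1..x3 with finite domains and binary constraints.
# All names (variables, domains, constraints) are illustrative, not from the text.
variables = ["x1", "x2", "x3"]
domains = {"x1": [1, 2, 3], "x2": [1, 2, 3], "x3": [1, 2, 3]}
# Each constraint is a predicate over a pair of variables.
constraints = [
    (("x1", "x2"), lambda a, b: a != b),
    (("x2", "x3"), lambda a, b: a < b),
]

def solve_csp():
    """Exhaustively search for an assignment satisfying every constraint."""
    for values in product(*(domains[v] for v in variables)):
        assignment = dict(zip(variables, values))
        if all(check(assignment[u], assignment[w]) for (u, w), check in constraints):
            return assignment
    return None  # no solution exists

print(solve_csp())  # prints the first satisfying assignment, e.g. {'x1': 1, 'x2': 2, 'x3': 3}
```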

Definition 2. - The DCSP model. A distributed constraint satisfaction problem (DCSP) is a CSP in which the variables and constraints are distributed among autonomous agents that communicate by exchanging messages. In what follows we consider that each agent A_i has allocated a single variable x_i.

The agents are autonomous, and their communication relies on the following notions:

Definition 3. - The assignment. An assignment for a variable x_i is a pair (x_i, v), where v is a value from the domain of x_i.

Definition 4. - The agent_view list. The agent_view of an agent A_i is the set of the most recent assignments received by A_i for distinct variables.

Definition 5. - The nogood list. A nogood is a set of assignments for distinct variables that has been found to be inconsistent.
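For illustration purposes only, the notions of Definitions 3-5 can be represented with the following simple data structures (the names and values are assumptions):

```python
# Illustrative data structures for Definitions 3-5 (names and values are assumptions).

# An assignment for variable x_i is a pair (variable, value).
assignment = ("x2", 5)

# The agent_view of an agent: the newest assignment received for each distinct variable.
agent_view = {"x1": 3, "x2": 5}

# A nogood: a set of assignments found to be inconsistent, stored here as a dict.
nogood = {"x1": 3, "x2": 5}

def nogood_holds(nogood, agent_view):
    """A nogood is still relevant only while every assignment it mentions
    matches the current agent_view."""
    return all(agent_view.get(var) == val for var, val in nogood.items())

print(nogood_holds(nogood, agent_view))  # True for the data above
```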

There exist complete asynchronous search techniques for solving DCSPs, such as ABT (Asynchronous Backtracking) [1][5], Asynchronous Search with Aggregations (AAS) [3] and DisDB (Distributed Dynamic Backtracking) [1]. There is also the AWCS (Asynchronous Weak-Commitment Search) algorithm [5], which records all the nogood values. This technique allows the agents to revise a wrong decision by dynamically changing the agent priority order. The ABT algorithm has also been generalized into a unifying framework called the ABT kernel [1], from which the two major techniques ABT and DisDB can be obtained. These techniques stand out for the diversity of ideas they apply to make the agents work asynchronously and concurrently, while also ensuring completeness or better efficiency.

2. Asynchronous Backtracking.

According to the literature, the Asynchronous Backtracking (ABT) algorithm, existing for the DCSP model, is considered the first complete algorithm for asynchronous search. It is the first complete algorithm that is asynchronous, distributed and concurrent, in which the agents can run in a concurrent and asynchronous way; it was published in [5]. The algorithm is based on sending nogood messages among agents in order to perform intelligent backtracking and to ensure the completeness of the algorithm.

In this algorithm [5], each agent instantiates its variable concurrently and sends the value to the agents with which it is directly connected, using direct communication channels that work according to the FIFO principle. A static global order among the agents is also assumed, in which agent A_i has a lower priority than A_j if i > j. In this way A_j can impose its preferred values first.

The Asynchronous Backtracking algorithm uses three types of messages:
• the ok message, which contains a variable-value assignment and is sent by an agent to the evaluating agents so that they can check whether the value is acceptable;
• the nogood message, which contains a list (called nogood) with the assignments found to be inconsistent, and is sent when the evaluating agent detects an unsatisfied constraint;
• the add-link message, sent to announce the need to create a new direct link.
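As a purely illustrative encoding (the class and field names are assumptions, not taken from a particular ABT implementation), the three message types could be represented as follows:

```python
from dataclasses import dataclass

# Illustrative encodings of the three ABT message types (names are assumptions).

@dataclass
class OkMessage:          # "ok" message: proposes a variable-value assignment
    sender: str
    variable: str
    value: int

@dataclass
class NogoodMessage:      # "nogood" message: carries a set of inconsistent assignments
    sender: str
    nogood: dict          # e.g. {"x1": 3, "x2": 5}

@dataclass
class AddLinkMessage:     # "add-link" message: requests a new direct link to the sender
    sender: str
```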
ABT requires constraints to be directed. A constraint causes a directed link between the two constrained agents: the value-sending agent, from which the link departs, and the constraint-evaluating agent, to which the link arrives. To make the network cycle-free there is a total order among agents, which is followed by the directed links.
Each agent keeps its own agent view and nogood store. Considering a generic agent self, the agent view of self is the set of values that it believes to be assigned to agents connected to self by incoming links. The nogood store keeps nogoods as justifications of inconsistent values.
Agents exchange assignments and nogoods. When self makes an assignment, it informs those agents connected to it by outgoing links. Self always accepts new assignments, updating its agent view accordingly. When self receives a nogood, it is accepted if it is consistent with self's agent view, otherwise it is discarded as obsolete. An accepted nogood is added to self's nogood store to justify the deletion of the value it targets. When self cannot take any value consistent with its agent view, because of the original constraints or because of the received nogoods, new nogoods are generated as inconsistent subsets of the agent view, and are sent to the closest agent involved, causing backtracking. If self receives a nogood mentioning another agent not connected with it, self requests the addition of a link from that agent to itself. From this point on, a link from the other agent to self will exist. The process terminates when quiescence is reached, meaning that a solution has been found, or when the empty nogood is generated, meaning that the problem is unsolvable.
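To make this description more concrete, the following sketch outlines, in a simplified and purely sequential form, how a single agent could process ok and nogood messages. The class name, helper functions and data representation are assumptions made for illustration; this is not the full ABT algorithm of [5].

```python
# Simplified, illustrative sketch of one ABT-style agent's message handling.
class ABTAgent:
    def __init__(self, name, domain, constraints, outgoing, send):
        self.name = name                  # this agent's variable name
        self.domain = list(domain)        # candidate values
        self.constraints = constraints    # predicates over (agent_view, own value)
        self.outgoing = outgoing          # lower-priority agents to inform
        self.send = send                  # send(recipient, message) provided by the system
        self.agent_view = {}              # newest known assignments of higher-priority agents
        self.nogoods = []                 # stored nogoods (each a dict of assignments)
        self.value = None

    def handle_ok(self, sender, value):
        # A received assignment is always accepted into the agent_view.
        self.agent_view[sender] = value
        self.check_agent_view()

    def handle_nogood(self, nogood):
        # Accept only nogoods consistent with the current agent_view; otherwise obsolete.
        if all(self.agent_view.get(v, val) == val
               for v, val in nogood.items() if v != self.name):
            self.nogoods.append(nogood)
            self.check_agent_view()

    def consistent(self, value):
        if not all(c(self.agent_view, value) for c in self.constraints):
            return False
        # A stored nogood rules out `value` if it matches the agent_view and contains it.
        for ng in self.nogoods:
            if ng.get(self.name) == value and all(
                    self.agent_view.get(v) == val
                    for v, val in ng.items() if v != self.name):
                return False
        return True

    def check_agent_view(self):
        if self.value is not None and self.consistent(self.value):
            return
        for v in self.domain:
            if self.consistent(v):
                self.value = v
                for agent in self.outgoing:
                    self.send(agent, ("ok", self.name, v))
                return
        # No consistent value: backtrack by sending the agent_view as a nogood.
        # (Simplified target choice; full ABT sends it to the closest agent involved.)
        if self.agent_view:
            target = sorted(self.agent_view)[-1]
            self.send(target, ("nogood", dict(self.agent_view)))
```

In a real implementation the message dispatch would be asynchronous and the backtracking target would be the lowest-priority agent appearing in the generated nogood, as described above.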

3. The ABT Family

Starting from the asynchronous backtracking algorithm (ABT), several derived techniques were proposed in [1], based on it and known as the ABT family. They differ in the way they store nogoods and in the way they detect obsolete information; most of them use additional communication links between initially unconnected agents for this purpose. These techniques are based on a common core (called the ABT kernel), from which several known techniques, including asynchronous backtracking, can be obtained by adding a mechanism for eliminating obsolete information among the agents. In [1], [2] the starting point is a simple procedure that includes the main characteristics of the asynchronous search algorithms and is intended for solving distributed constraint problems; this procedure was introduced for the first time in [1]. Starting from this procedure, which forms the unifying framework, one can obtain the known algorithms or variants close to them: asynchronous backtracking (ABT), Distributed Dynamic Backtracking (DisDB) and the Distributed Backtracking algorithm (DIBT).
The ABT kernel algorithm requires, like ABT, that constraints are directed – from the value-sending agent to the constraint-evaluating agent – forming a directed acyclic graph. Agents are ordered statically in agreement with constraint orientation. Agent i has higher priority than agent j if i appears before j in the total ordering. Considering a generic agent self, Γ-(self) is the set of agents constrained with self appearing above it in the ordering. Conversely, Γ+(self) is the set of agents constrained with self appearing below it in the ordering.
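Assuming the constraint graph and the total ordering are given explicitly, the sets Γ-(self) and Γ+(self) could be computed as in the following small sketch (the agents and constraints shown are invented for the example):

```python
# Illustrative computation of Gamma-(self) and Gamma+(self) from a constraint graph
# and a total agent ordering (all data here is made up for the example).
ordering = ["A1", "A2", "A3", "A4"]                 # A1 has the highest priority
constraint_edges = {("A1", "A3"), ("A2", "A3"), ("A3", "A4")}

def neighbours(agent):
    return {b if a == agent else a for (a, b) in constraint_edges if agent in (a, b)}

def gamma_minus(agent):
    """Agents constrained with `agent` that appear before it (higher priority)."""
    pos = ordering.index(agent)
    return {n for n in neighbours(agent) if ordering.index(n) < pos}

def gamma_plus(agent):
    """Agents constrained with `agent` that appear after it (lower priority)."""
    pos = ordering.index(agent)
    return {n for n in neighbours(agent) if ordering.index(n) > pos}

print(gamma_minus("A3"))  # {'A1', 'A2'}
print(gamma_plus("A3"))   # {'A4'}
```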
The ABT kernel algorithm is a new ABT-based algorithm that does not require adding communication links between initially unconnected agents. The ABT kernel algorithm is sound but may not terminate (the ABT kernel may store obsolete information). Several solutions for eliminating this obsolete information among agents were suggested in [1], [2]; they are summarized below.
A first way to remove obsolete information is to add new communication links to allow a nogood owner to determine whether this nogood is obsolete or not. These added links were proposed in the original ABT algorithm.
A second way to remove obsolete information is to detect when a nogood could become obsolete. In that case, the hypothetically obsolete nogood and the values of unrelated agents are forgotten. These two alternative ways lead to the following four algorithms:
• Adding links as preprocessing: ABTall (DIBT). This algorithm adds all the potentially useful new links during a preprocessing phase. New links are permanent.
• Adding links during search: Yokoo's ABT. This algorithm adds new links between agents during search. A link is requested by self when it receives a Back message containing unrelated agents above self in the ordering. New links are permanent.
• Adding temporary links: ABTtemp. This algorithm adds new links between agents during search, like ABT. The difference is that the new links are temporary: a new link remains until a fixed number of messages have been exchanged through it, after which it is removed (a bookkeeping sketch follows this list).
• No links: ABTnot - Distributed Dynamic Backtracking. No new links are added among the agents. To achieve completeness, this algorithm has to remove obsolete information in finite time. To do so, when an agent backtracks, it forgets all nogoods that could hypothetically become obsolete.
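As an illustration of the bookkeeping implied by the temporary links of ABTtemp, the sketch below counts the messages exchanged over each added link and removes the link once a fixed budget is spent. The budget value and the data structures are assumptions, not taken from [1].

```python
# Illustrative bookkeeping for temporary links (ABTtemp-style); not taken from [1].
TEMP_LINK_BUDGET = 5          # assumed fixed number of messages before removal

temporary_links = {}          # (sender, receiver) -> remaining message budget

def add_temporary_link(sender, receiver):
    temporary_links[(sender, receiver)] = TEMP_LINK_BUDGET

def message_sent(sender, receiver):
    """Decrement the budget of a temporary link; remove it once the budget is spent."""
    key = (sender, receiver)
    if key in temporary_links:
        temporary_links[key] -= 1
        if temporary_links[key] == 0:
            del temporary_links[key]

add_temporary_link("A2", "A5")
for _ in range(5):
    message_sent("A2", "A5")
print(temporary_links)        # {} -- the temporary link has been removed
```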

4. AWCS

The AWCS algorithm [5] is a hybrid algorithm obtained by combining the ABT algorithm with the weak-commitment search (WCS) algorithm that exists for CSP. It can be considered an improved ABT variant, not necessarily because it reduces the number of nogood values, but because it changes the priority order dynamically. It deliberately records all the nogood values (which are fewer) in order to ensure the completeness of the algorithm and to avoid certain unstable situations.
The authors show in [6], [7] that this new algorithm can be built by dynamically changing the priority order. The AWCS algorithm uses, like ABT, the two types of ok and nogood messages, with the same meaning. There is, however, a major difference in the way the ok message is treated. When receiving an ok message, if the agent cannot find a value for its variable that is consistent with the values of the variables that have a higher priority, the agent not only creates and sends a nogood message, but also increases its priority so that it becomes the maximum among its neighbors.
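A minimal sketch of this priority update, assuming priorities are plain integers and that the agent knows the current priorities of its neighbors, could look as follows:

```python
# Illustrative AWCS-style priority update (names and data are assumptions).
priorities = {"A1": 0, "A2": 2, "A3": 1}     # current priorities known to agent A1
neighbours = {"A2", "A3"}                     # neighbours of agent A1

def raise_priority(agent):
    """On failure, the agent raises its priority above that of all its neighbours."""
    priorities[agent] = max(priorities[n] for n in neighbours) + 1

raise_priority("A1")
print(priorities["A1"])   # 3 -- now higher than every neighbour
```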
The AWCS algorithm can be improved by applying a learning scheme. In [3], Hirayama and Yokoo introduce the notion of nogood learning, which consists in deriving and recording nogood values. The authors present and analyze a scheme called "resolvent-based learning", which applies to the AWCS algorithm and gives good results regarding the number of cycles needed for problem solving.
The nogood learning technique introduced in [3] is a new method of learning nogood values, applicable to DCSP and based on adapting look-back techniques to the DCSP framework. The idea is that, for each possible value of the failing variable, a nogood that forbids that value is selected, and then a new nogood is built by combining (resolving) the selected nogoods. The authors compare this new nogood to the notion of resolvent from propositional logic, hence the name of the method: resolvent-based learning.
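A rough sketch of this idea, under the assumption that nogoods are represented as dictionaries of assignments, is given below: for each value of the failing variable one nogood forbidding it is selected, and the new nogood is obtained by uniting the selected nogoods after removing the failing variable.

```python
# Illustrative resolvent-style nogood construction (representation is an assumption).
failing_var = "x3"
domain = [1, 2, 3]

# One selected nogood per value of the failing variable; each nogood is a dict of
# assignments that includes the assignment (failing_var, value) it forbids.
nogoods_per_value = {
    1: {"x1": 2, "x3": 1},
    2: {"x2": 4, "x3": 2},
    3: {"x1": 2, "x2": 4, "x3": 3},
}

def resolve(nogoods, var):
    """Union of the selected nogoods with the failing variable removed,
    analogous to a resolvent in propositional logic."""
    new_nogood = {}
    for ng in nogoods.values():
        for v, val in ng.items():
            if v != var:
                new_nogood[v] = val
    return new_nogood

print(resolve(nogoods_per_value, failing_var))   # {'x1': 2, 'x2': 4}
```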

[1]. C. Bessiere, I. Brito, A. Maestre, P. Meseguer, Asynchronous Backtracking without Adding Links: A New Member in the ABT Family. Artificial Intelligence, 161:7-24, 2005.

[2]. Muscalagiu, I., Jiang, H., Popa, H. E. Implementation and evaluation model for the asynchronous techniques: from a synchronously distributed system to an asynchronous distributed system. Proceedings of the 8th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC 2006), Timisoara, Romania, IEEE Computer Society Press, (2006) 209--216.

[3]. M. C. Silaghi, D. Sam-Haroud, B. Faltings. "Asynchronous Search with Aggregations", in Proceedings of the 17th National Conference on Artificial Intelligence (AAAI'00), Austin, Texas, 2000, pp. 917-922.

[4]. G. Solotorevsky, E. Gudes, A. Meisels. "Modeling and solving distributed constraint satisfaction problems (DCSPs)", in Proceedings of Constraint Processing-96, New Hampshire, October 1996.

[5]. M. Yokoo, E. H. Durfee, T. Ishida, K. Kuwabara. The distributed constraint satisfaction problem: formalization and algorithms. IEEE Transactions on Knowledge and Data Engineering 10(5), pp. 673-685, 1998.