Resolution proof compression by splitting

From Wikipedia, the free encyclopedia

In mathematical logic, proof compression by splitting is an algorithm that operates as a post-process on resolution proofs. It was proposed by Scott Cotton in his paper "Two Techniques for Minimizing Resolution Proofs".[1]

The Splitting algorithm is based on the following observation:

Given a proof of unsatisfiability $\pi$ and a variable $x$, it is easy to re-arrange (split) the proof into a proof of $x$ and a proof of $\neg x$, and the recombination of these two proofs (by an additional resolution step) may result in a proof smaller than the original.

Note that applying Splitting to a proof $\pi$ using a variable $x$ does not invalidate a later application of the algorithm using a different variable $y$. In fact, the method proposed by Cotton[1] generates a sequence of proofs $\pi_1, \pi_2, \ldots$, where each proof $\pi_{i+1}$ is the result of applying Splitting to $\pi_i$. During the construction of the sequence, if a proof $\pi_j$ happens to be too large, $\pi_{j+1}$ is set to be the smallest proof in $\{\pi_1, \pi_2, \ldots, \pi_j\}$.
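
This iteration can be sketched as follows in Python; pick_variable, split_proof and proof_size are hypothetical helpers standing for the variable-selection heuristic and the splitting operation described below, and size_limit is an assumed tuning parameter rather than a value from the paper.

    def compress_by_splitting(proof, iterations, size_limit):
        """Iteratively apply Splitting, continuing from the smallest
        proof produced so far whenever the current proof grows too large."""
        proofs = [proof]
        for _ in range(iterations):
            current = proofs[-1]
            if proof_size(current) > size_limit:
                # the last proof is too large: take the smallest proof so far
                next_proof = min(proofs, key=proof_size)
            else:
                v = pick_variable(current)            # heuristic choice (see below)
                next_proof = split_proof(current, v)  # split on v and recombine
            proofs.append(next_proof)
        return min(proofs, key=proof_size)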

To achieve a better compression/time ratio, a heuristic for variable selection is desirable. For this purpose, Cotton[1] defines the "additivity" of a resolution step (with antecedents $p$ and $n$ and resolvent $r$):

$\operatorname{add}(r) := \max(|r| - \max(|p|, |n|), 0)$

Then, for each variable $v$, a score is calculated by summing the additivity of all the resolution steps in $\pi$ with pivot $v$, together with the number of these resolution steps. Denoting each score calculated this way by $\operatorname{add}(v, \pi)$, each variable is selected with a probability proportional to its score:

$p(v) = \frac{\operatorname{add}(v, \pi_i)}{\sum_x \operatorname{add}(x, \pi_i)}$
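
This heuristic can be sketched as follows, treating the proof as an iterable of resolution nodes with fields pivot, clause, left and right (an illustrative representation, not notation from the paper):

    import random

    def additivity(node):
        """add(r) = max(|r| - max(|p|, |n|), 0) for a resolution step r."""
        return max(len(node.clause)
                   - max(len(node.left.clause), len(node.right.clause)), 0)

    def pick_variable(proof_nodes):
        """Score each pivot variable by the summed additivity of its
        resolution steps plus the number of such steps, then sample a
        variable with probability proportional to its score."""
        scores = {}
        for node in proof_nodes:
            if node.pivot is not None:               # skip input clauses
                scores[node.pivot] = (scores.get(node.pivot, 0)
                                      + additivity(node) + 1)
        variables = list(scores)
        weights = [scores[v] for v in variables]
        return random.choices(variables, weights=weights, k=1)[0]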

To split a proof of unsatisfiability $\pi$ into a proof $\pi_x$ of $x$ and a proof $\pi_{\neg x}$ of $\neg x$, Cotton[1] proposes the following:

Let $l$ denote a literal and $p \oplus_x n$ denote the resolvent of clauses $p$ and $n$, where $x \in p$ and $\neg x \in n$. Then, define the map $\pi_l$ on nodes in the resolution DAG of $\pi$:

$\pi_l(c) := \begin{cases} c, & \text{if } c \text{ is an input} \\ \pi_l(p), & \text{if } c = p \oplus_x n \text{ and } (l = x \text{ or } x \notin \pi_l(p)) \\ \pi_l(n), & \text{if } c = p \oplus_x n \text{ and } (l = \neg x \text{ or } \neg x \notin \pi_l(n)) \\ \pi_l(p) \oplus_x \pi_l(n), & \text{if } x \in \pi_l(p) \text{ and } \neg x \in \pi_l(n) \end{cases}$

Also, let $o$ be the empty clause in $\pi$. Then, $\pi_x$ and $\pi_{\neg x}$ are obtained by computing $\pi_x(o)$ and $\pi_{\neg x}(o)$, respectively.
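
The map $\pi_l$ and the final recombination step can be sketched as follows, under the same illustrative node representation, with literals encoded as integers ($x$ for the variable, $-x$ for its negation):

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Node:
        clause: frozenset               # clause derived at this node
        pivot: Optional[int] = None     # None for input clauses
        left: Optional["Node"] = None   # antecedent containing the pivot x
        right: Optional["Node"] = None  # antecedent containing -x

    def resolve(p, n, x):
        """Resolvent of clauses p and n on pivot x (x in p, -x in n)."""
        return frozenset((p - {x}) | (n - {-x}))

    def split(node, lit, memo=None):
        """Compute pi_l(c) for l = lit, following the case analysis above."""
        if memo is None:
            memo = {}
        if id(node) in memo:
            return memo[id(node)]
        if node.pivot is None:                    # input clause: kept unchanged
            result = node
        else:
            x = node.pivot
            p = split(node.left, lit, memo)       # mapped positive antecedent
            n = split(node.right, lit, memo)      # mapped negative antecedent
            if lit == x or x not in p.clause:
                result = p
            elif lit == -x or -x not in n.clause:
                result = n
            else:                                 # both pivot literals survive
                result = Node(resolve(p.clause, n.clause, x), x, p, n)
        memo[id(node)] = result
        return result

    def split_proof(empty_clause_node, x):
        """Split on variable x and recombine pi_x and pi_-x by one
        additional resolution step on x."""
        px = split(empty_clause_node, x)    # proof of a clause containing x
        pn = split(empty_clause_node, -x)   # proof of a clause containing -x
        if x not in px.clause:
            return px                       # already a refutation by itself
        if -x not in pn.clause:
            return pn
        return Node(resolve(px.clause, pn.clause, x), x, px, pn)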

Notes

  1. Cotton, Scott. "Two Techniques for Minimizing Resolution Proofs". 13th International Conference on Theory and Applications of Satisfiability Testing, 2010.
