Automated Reasoning

UCLA CS 264A

Scott Mueller

Date created: Sep 30, 2020

Cards (96)

Week 1

(9 cards)

Linear resolution strategy

Front

Resolve two nodes only if one of them is the root or an ancestor of the other

Back

Propositional logic sentence completeness

Front

A sentence is complete iff it is true at exactly one world

Back

Propositional logic sentence consistency

Front

A sentence is consistent iff there is at least one world at which it is true

Back

Boole's expansion theorem, often referred to as the Shannon expansion

Front

$$\Delta = (P \land \Delta|P) \lor (\lnot P \land \Delta|\lnot P)$$
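
For example, expanding \(\Delta = A \lor B\) on \(P = A\):

$$\Delta = (A \land (A \lor B)|A) \lor (\lnot A \land (A \lor B)|\lnot A) = (A \land \text{true}) \lor (\lnot A \land B)$$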

Back

Deduction theorem

Front

\(\alpha\text{ implies }\beta\ (\alpha \models \beta)\text{ iff }\alpha \Rightarrow \beta \text{ is valid}\)

Back

If \(\alpha\) is a sentence that doesn't mention \(P\), then \(\Delta \models \alpha\) iff what?

Front

$$\Delta \models \alpha\text{ iff }\exists P \Delta \models \alpha$$

Back

Propositional logic sentence validity

Front

A sentence is valid iff it is true at every world

Back

Existential quantification

Front

$$\exists P \Delta = (\Delta|P) \lor (\Delta|\lnot P)$$

AKA:

  • Forgetting \(P\) from \(\Delta\)
  • Projecting \(\Delta\) on all variables but \(P\)

$$\Delta \models \exists P \Delta$$

If \(\alpha\) is a sentence that does not mention \(P\), then \(\Delta \models \alpha\text{ iff }\exists P \Delta \models \alpha\)
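
For example, forgetting \(B\) from \(\Delta = (A \lor B) \land (\lnot B \lor C)\):

$$\exists B \Delta = \Delta|B \lor \Delta|\lnot B = C \lor A,$$

which is exactly the resolvent of the two clauses on \(B\).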

Back

Refutation theorem

Front

\(\alpha\text{ implies }\beta\ (\alpha \models \beta)\text{ iff }\alpha \land \neg \beta \text{ is inconsistent}\)

Back

Week 2

(12 cards)

Directed resolution

Front

With variable ordering \(P_1, P_2, \ldots, P_n\):

  1. \(P_j\)-resolvents are generated only after all \(P_i\)-resolvents with \(i < j\) have been generated
  2. The generation of \(P_j\)-resolvents cannot involve clauses that mention a variable \(P_i\) with \(i < j\)

Refutation complete for any variable ordering.

Back

SATII algorithm

Front

Variables named \(P_1, \ldots, P_n\)

SATII(Δ, d = 0: depth):
	if Δ = {}: return {}
	if Δ = {{}}: return FAIL
	if L = SATII(Δ|P_{d+1}, d+1) ≠ FAIL:
		return L ∪ {P_{d+1}}
	if L = SATII(Δ|¬P_{d+1}, d+1) ≠ FAIL:
		return L ∪ {¬P_{d+1}}
	return FAIL
Back

Assertion level

Front

Second highest decision level appearing in the asserting clause, or \(-1\) if no such level exists

Back

Asserting clause

Front

Conflict-driven clause obtained by negating a conflict set; it contains exactly one variable setting at the last decision level

Back

First UIP

Front

UIP closest to contradiction

Back

Decision tree algorithm for ordered buckets

Front
genDecTree(Γ_i, …, Γ_j; term α = true):
	if Γ_i|α, …, Γ_j|α are all empty:
		return node labelled true
	if Γ_j|(P_j ∧ α) contains the empty clause:
		H ← false
	else:
		H ← genDecTree(Γ_i, …, Γ_{j-1}; P_j ∧ α)
	if Γ_j|(¬P_j ∧ α) contains the empty clause:
		L ← false
	else:
		L ← genDecTree(Γ_i, …, Γ_{j-1}; ¬P_j ∧ α)
	return node P_j with high/low children H/L
Back

Conflict set

Front

Collection of variable settings that can be blamed for a contradiction

Back

Empowering clause

Front

Clause entailed by the knowledge base that, when added to the knowledge base, empowers unit resolution to discover a contradiction after setting a particular variable

Back

DPLL+ algorithm

Front
DPLL+(Δ):
	D ← ()  {empty decision sequence}
	Γ ← {}  {empty learned clauses}
	while true do:
		if unit resolution finds a contradiction in (Δ,Γ,D):
			if D = (): return false
			else:
				α ← asserting clause
				m ← assertion level of clause α
				D ← first m+1 decisions in D
				add clause α to Γ
		else:
			if some literal l has neither l nor ¬l
				implied by unit resolution from (Δ,Γ,D):
				D ← D;l
			else:
				return true
Back

DPLL algorithm

Front
DPLL(Δ: clausal form):
(I,Γ)=UNIT-RESOLUTION(Δ)
if Γ={}: return I
if {}∈Γ: return FAIL
choose a literal l in Γ
if L=DPLL(Γ∪{{l}})≠FAIL: return L∪I
if L=DPLL(Γ∪{{¬l}})≠FAIL: return L∪I
return FAIL
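
A minimal runnable sketch of this procedure in Python, under assumed conventions (a CNF is a set of frozensets of nonzero ints, negation by sign; unit_resolution is a simple quadratic stand-in for UNIT-RESOLUTION):

def unit_resolution(delta):
    """Apply unit resolution: return (implied literals, simplified clauses)."""
    delta = set(delta)
    implied = []
    while True:
        unit = next((c for c in delta if len(c) == 1), None)
        if unit is None:
            return implied, delta
        (lit,) = unit
        implied.append(lit)
        # Drop satisfied clauses; remove the falsified literal elsewhere.
        delta = {c - {-lit} for c in delta if lit not in c}

def dpll(delta):
    """Return a satisfying list of literals, or None for FAIL."""
    implied, gamma = unit_resolution(delta)
    if not gamma:
        return implied
    if frozenset() in gamma:
        return None
    lit = next(iter(next(iter(gamma))))   # choose a literal in Γ
    for choice in (lit, -lit):
        result = dpll(gamma | {frozenset([choice])})
        if result is not None:
            return result + implied
    return None

# e.g. dpll({frozenset({1, 2}), frozenset({-1, 3})}) → a model such as [1, 3]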
Back

SATI algorithm

Front

Variables named \(P_1, \ldots, P_n\)

SATI(Δ, d = 0: depth, n: num vars):
	if d = n and Δ = {}: return {}
	if d = n and Δ = {{}}: return FAIL
	if L = SATI(Δ|P_{d+1}, d+1, n) ≠ FAIL:
		return L ∪ {P_{d+1}}
	if L = SATI(Δ|¬P_{d+1}, d+1, n) ≠ FAIL:
		return L ∪ {¬P_{d+1}}
	return FAIL
Back

Unique implication point (UIP)

Front

Variable setting at the highest decision level which lies on every path from the latest decision to the contradiction

Back

Week 3

(8 cards)

Unsatisfiable core

Front

Subset of the original CNF's clauses that is unsatisfiable

For each C ∈ Δ:
	Δ' = Δ \ {C}
	if Δ' is UNSAT:
		Δ = Δ'
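
A sketch of this deletion loop in Python, assuming a SAT oracle is_sat(clauses) is supplied from elsewhere (a hypothetical wrapper around any CNF solver):

def minimize_core(delta, is_sat):
    """Shrink an UNSAT clause list delta toward a minimal unsatisfiable core."""
    core = list(delta)
    for c in list(core):
        trial = [d for d in core if d is not c]
        if not is_sat(trial):   # still UNSAT without c, so c is not needed
            core = trial
    return core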
Back

Compensation clauses (unweighted)

Front

Adjusts incorrect costs induced by classical resolution. Resolving \(x \lor C_1 = (l_1, \ldots, l_m)\) and \(\lnot x \lor C_2 = (o_1, \ldots, o_n)\) yields compensation clauses:

$$x \lor C_1 \lor \lnot o_1\\x \lor C_1 \lor o_1 \lor \lnot o_2\\\vdots\\x \lor C_1 \lor o_1 \lor o_2 \lor \cdots \lor \lnot o_n\\\lnot x \lor C_2 \lor \lnot l_1\\\lnot x \lor C_2 \lor l_1 \lor \lnot l_2\\\vdots\\\lnot x \lor C_2 \lor l_1 \lor l_2 \lor \cdots \lor \lnot l_m$$

Back

Random K-SAT CNF generation

Front

\(m\) clauses of length \(k\) are randomly generated over \(n\) variables. When \(\frac{m}{n}\) is small, \(P(\text{SAT})\) is close to 1; when \(\frac{m}{n}\) is large, \(P(\text{SAT})\) is close to 0. The \(P(\text{SAT}) = 0.5\) transition occurs at \(\frac{m}{n} \approx 4.24\) (for \(k = 3\)).

Back

GSAT local search algorithm

Front
GSAT(Δ,Tm,Tr): {Tm,Tr→moves,restarts}
i←1
while i≤Tr:
	i←i+1
	w*←random world
	c*←num clauses Δ falsified by w*
	j←1
	while j≤Tm:
		j←j+1
		if c*=0: return w*
		else
			w*←rand neighbor falsifying
				smallest num clauses Δ
			c*←#clauses falsified by w*
return FAIL
Back

Counting DPLL (CDPLL)

Front

Uses exhaustive DPLL

CDPLL(Γ, n: number variables):
	if Γ = {}: return 2ⁿ
	if {} ∈ Γ: return 0
	choose literal l in Γ
	(I⁺,Γ⁺) = UNIT-RESOLUTION(Γ∪{{l}})
	(I⁻,Γ⁻) = UNIT-RESOLUTION(Γ∪{{¬l}})
	return CDPLL(Γ⁺, n−|I⁺|) + CDPLL(Γ⁻, n−|I⁻|)
Back

MaxSAT resolution (weights \(1\))

Front

Apply directed resolution rules, resolving clauses on \(x_i\) only if they don't contain complementary literals on some \(x_j \neq x_i\). Resolve over \(x\) by removing the original clauses \(x \lor C_1=(l_1,\ldots,l_m)\) and \(\lnot x \lor C_2=(o_1,\ldots,o_n)\) and adding the resolvent and compensation clauses.

With \(k\) empty clauses as a result, \(k\) is guaranteed to be the minimal cost, and the remaining clauses \(\Gamma\) are guaranteed to be satisfiable. Every world satisfying \(\Gamma\) has cost \(k\).

Back

Asserting clause learning algorithm in modern SAT solvers for certifying UNSAT

Front

Guaranteed to derive the first UIP asserting clause from the conflict

lc = clause that became empty from unit resolution
while lc is not asserting:
	l = literal in lc falsified last
	lc = resolvent of lc and ¬l's reason
return lc  {learned clause}
Back

2-literal watching scheme

Front

A literal is asserted if it evaluates to true under the current variable setting, resolved if it evaluates to false, and free otherwise. L₁ and L₂ are interchangeable:

For each clause, watch two literals L₁, L₂
Once L₁ is resolved, search for an unresolved literal L to watch
If none is found, check the other watched literal L₂
If L₂ is free, the clause is unit and L₂ is implied
If L₂ is resolved, all literals are resolved → empty clause
If L₂ is asserted, the clause is subsumed
Back

Week 4

(32 cards)

Simple-disjunction

Front

Children of each or-node are leaves that share no variables, aka clause

Back

DNNF clausal entailment

Front

By the refutation theorem, \(\Delta \models \alpha\) iff \(\Delta \land \lnot\alpha\) is inconsistent. Since the negation of a clause is a term, condition the knowledge base on \(\lnot\alpha\) and check the result for inconsistency. This also works on a partially decomposable NNF if the clause contains the variables that prevented full decomposability.

Back

MODS language

Front

DNF whose terms are exactly the satisfying models

Back

NNF determinism

Front

For each disjunction \(C\), every two disjuncts of \(C\) are logically contradictory

Back

E-MAJ-SAT

Front

Is there an \(\mathbf{X}\)-instantiation under which the majority of \(\mathbf{Y}\)-instantiations are satisfying?

Back

Prime implicates (PI)

Front

CNF such that:

  • No clause subsumes another
  • If a clause is implied by the CNF, it must be implied by a clause in the CNF

Polytime for:

  • CO
  • CE
  • ME
  • VA
  • IP
  • SE
  • EQ
Back

DNNF minimum cardinality

Front
  1. Assign cardinality 1 to negative variables and 0 to positive variables
  2. or-nodes ← minimum cardinality of disjuncts
  3. and-nodes ← sum cardinality of conjuncts
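
A sketch of these steps in Python, over an assumed tuple representation of DNNF nodes — ("lit", l), ("true",), ("false",), ("and", children), ("or", children):

import math

def min_card(node, cache=None):
    """Minimum cardinality of the DNNF rooted at node (linear with caching)."""
    cache = {} if cache is None else cache
    if id(node) in cache:
        return cache[id(node)]
    kind = node[0]
    if kind == "true":
        val = 0
    elif kind == "false":
        val = math.inf
    elif kind == "lit":
        val = 1 if node[1] < 0 else 0                   # step 1
    elif kind == "or":
        val = min(min_card(c, cache) for c in node[1])  # step 2
    else:  # "and"
        val = sum(min_card(c, cache) for c in node[1])  # step 3
    cache[id(node)] = val
    return val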
Back

Model counting (#SAT)

Front

How many satisfying assignments?

Back

DNNF language

Front

Decomposable subset of NNF. Polytime for:

  • CO (linear)
  • CE
  • ME
Back

f-NNF languages

Front

Flat subset of NNF: the height of each sentence is at most 2. CNF and DNF are examples.

Back

FBDD language

Front

Free BDD subset of BDD and d-DNNF. Has test-once property.

Back

DNNF model enumeration (ME)

Front

Linear in the number of models because CD + CE is linear. Condition on \(X\), check consistency, condition on \(\lnot X\), check consistency, and recurse.

Back

BDD language

Front

Binary decision diagram, subset of d-NNF

Back

MAJ-SAT

Front

Are the majority of instantiations satisfying?

Back

s-NNF language

Front

Smooth subset of NNF. No extra polytime capabilities.

Back

NNF decomposability

Front

For each conjunction \(C\), the conjuncts of \(C\) do not share variables

Back

Prime implicants (IP)

Front

DNF such that:

  • No term subsumes another
  • If a term implies the DNF, it must imply a term in the DNF

Polytime for:

  • VA
  • IP
  • SE
  • EQ
Back

Structured decomposability

Front

Each and-node splits its variables among its branches according to the same v-tree: each branch's variables map to a separate sub-tree of a v-tree node. Enables conjoining DNNF circuits in polytime.

Back

OBDD language

Front

Ordered BDD subset of FBDD. Polytime for (all 8 queries, 6 inherited):

  • SE
  • EQ
Back

Weighted model counting (WMC)

Front

Summed weights of satisfying assignments. Each literal (positive and negative variant) gets a weight; a model's weight is the product of the weights of its literals, and WMC sums these over all models.
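
A small worked example with arbitrary weights: take \(\Delta = A \lor B\) with \(w(A)=0.3\), \(w(\lnot A)=0.7\), \(w(B)=0.6\), \(w(\lnot B)=0.4\):

$$\text{WMC}(\Delta) = \underbrace{0.3 \cdot 0.6}_{AB} + \underbrace{0.3 \cdot 0.4}_{A \lnot B} + \underbrace{0.7 \cdot 0.6}_{\lnot A B} = 0.72$$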

Back

sd-DNNF minimum cardinality models algorithm

Front

Use DNNF minimum cardinality algorithm and throw out non-minimal cardinality disjuncts

Back

DNNF sub-circuit

Front

Choose one disjunct for each or-node

Back

d-DNNF language

Front

Subset of d-NNF and DNNF. Polytime for:

  • VA
  • IP
  • CT
  • EQ?
Back

d-NNF language

Front

Deterministic subset of NNF. No extra polytime capabilities.

Back

DNNF satisfiability

Front

Set all variables to true because SAT can be propagated to leaves:

  • SAT(A ∨ B) iff SAT(A) or SAT(B)
  • SAT(A ∧ B) iff SAT(A) and SAT(B) (because decomposability)
  • SAT(X) = true
  • SAT(¬X) = true
  • SAT(true) = true
  • SAT(false) = false
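
A sketch of this propagation in Python, over the same assumed tuple representation of nodes used in earlier sketches:

def dnnf_sat(node):
    """Satisfiability of a DNNF by propagating SAT values to the root."""
    kind = node[0]
    if kind in ("true", "lit"):
        return True          # a lone literal is always satisfiable
    if kind == "false":
        return False
    if kind == "or":
        return any(dnnf_sat(c) for c in node[1])
    # "and": sound only because conjuncts share no variables (decomposability)
    return all(dnnf_sat(c) for c in node[1])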
Back

DNNF existential quantification

Front

AKA projection; replace all literals of the variables being existentially quantified with true, which is sound for the same reason the satisfiability check is

Back

NNF decision node

Front

Labeled true, false, or is an or-node having the form \((X \land \alpha) \lor (\lnot X \land \beta)\), where \(X\) is a variable and \(\alpha\) and \(\beta\) are decision nodes. \(\text{dVar}(\text{decision node})\) denotes the variable \(X\).

Back

MAJ-MAJ-SAT

Front

Is there a majority of \(\mathbf{X}\)-instantiations under which the majority of \(\mathbf{Y}\)-instantiations are satisfying?

Back

Simple-conjunction

Front

Children of each and-node are leaves that share no variables, aka term

Back

sd-DNNF language

Front

Subset of s-NNF and d-DNNF

Back

DNF language

Front

Subset of f-NNF and DNNF with simple-conjunction

Back

NNF smoothness

Front

For each disjunction \(C\), each disjunct of \(C\) mentions the same variables

Back

Week 5

(10 cards)

Cutset in caching for OBDD

Front

\(\text{Cutset}_i\) is the set of clauses mentioning a variable \(\leq v_i\) and a variable \(> v_i\)

Back

OBDD Apply function

Front

\(F_{x_i=1}\) and \(F_{x_i=0}\) are \(F|x_i\) and \(F|\lnot x_i\), respectively:

Apply(F, G, *):
	if F, G are sinks: return F * G
	if (F, G) in cache:
		return cache[F, G]
	else:
		let x_i be the 1st var in F, G
		h = Apply(F_{x_i=1}, G_{x_i=1}, *)
		l = Apply(F_{x_i=0}, G_{x_i=0}, *)
		if h = l: return h
		n = unique-node(x_i, h, l)
		cache[F, G] = n
		return n
Back

sd-DNNF counting graph with constraints

Front

Set variable leaves to \(1\) or \(0\) according to the constraints. When a constraint on a variable is added or removed, multiply the change from the leaf's previous value by its partial derivative and add the result to the previous count.

Back

OBDD transformations

Front
  • CD
  • SFO
  • ∧B
  • ∨B
  • ¬
Back

Decision-DNNF language

Front

Subset of d-DNNF where or-nodes are always decision nodes, but not necessarily and-nodes or leaves

Back

Theorem relating number of sub-functions to number of reduced OBDD nodes

Front

The number of distinct sub-functions that depend on \(x_j\), obtained by fixing \(x_1, \ldots, x_{j-1}\) to constants, equals the number of reduced OBDD nodes labeled \(x_j\)

Back

Reduced OBDD

Front

Canonical OBDD: there is a unique reduced OBDD for a fixed variable order. Apply:

  • Deletion rule: if both children of a node X are the same node Y, delete X and redirect its parents to Y
  • Merge rule: if two nodes labeled X have identical high and low children, merge them into one node that inherits both sets of parents
Back

sd-DNNF weighted model counting algorithm

Front

Multiply at and-gates and add at or-gates just like unweighted model counting, but with literal weights at the leaves

Back

OBDD conjoin algorithm

Front

Because of Boole's/Shannon's expansion, condition OBDDs \(f\) and \(g\) on the first variable (per the variable order) appearing in either. That variable becomes a decision node whose high child is \(f \land g\) with both conditioned on the positive literal, and whose low child is \(f \land g\) with both conditioned on the negative literal. Recurse on the children.

Back

sd-DNNF model counting algorithm

Front

All variable leaves and true have count 1, false has count 0; or-nodes add the counts of their disjuncts and and-nodes multiply the counts of their conjuncts
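
A sketch in Python over the same assumed tuple node representation, relying on smoothness and determinism:

import math

def model_count(node):
    """Model count of a smooth d-DNNF over the variables it mentions."""
    kind = node[0]
    if kind in ("true", "lit"):
        return 1
    if kind == "false":
        return 0
    counts = [model_count(c) for c in node[1]]
    if kind == "or":
        return sum(counts)        # determinism: disjuncts share no models
    return math.prod(counts)      # decomposability: conjuncts are independent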

Back

Week 6

(4 cards)

SDD apply operation

Front

With \((\mathbf{X}, \mathbf{Y})\)-partition of \(f\):

$$(p_1, q_1) \ldots (p_n, q_n)$$

and \((\mathbf{X}, \mathbf{Y})\)-partition of \(g\):

$$(r_1, s_1) \ldots (r_m, s_m),$$

the \((\mathbf{X}, \mathbf{Y})\)-partition of \(f \circ g\):

$$\forall i,j: (p_i \land r_j, q_i \circ s_j)\ |\ p_i \land r_j \neq \text{false}.$$

Therefore, apply is \(O(m \times n)\), polytime.

Back

SDD compression

Front

\((\mathbf{X}, \mathbf{Y})\)-partition is compressed if no equal subs:

$$f = (g_1, h_1) \ldots (g_n, h_n)\\h_i \neq h_j\text{ for all }i \neq j$$

\(f(\mathbf{X}, \mathbf{Y})\) has a unique compressed \((\mathbf{X}, \mathbf{Y})\)-partition

Back

Number of vtrees over \(n\) variables?

Front

$$\begin{aligned}n! \times C_{n-1} &= n! \times \frac{(2(n-1))!}{n!(n-1)!}\\&= \frac{(2(n-1))!}{(n-1)!}\end{aligned}$$

Back

Number of dissections over \(n\) variables of a vtree

Front

$$C_{n-1} = \frac{(2(n-1))!}{n!(n-1)!}$$

Back

Week 7

(21 cards)

System description in model-based diagnosis

Front

Knowledge-base that describes how a system is supposed to behave

Back

Classifier bias

Front

Classifier is biased iff one of its decisions is biased. In other words, each sufficient reason of one of its decisions contains at least one protected feature.

Back

IP form

Front

Representation of knowledge-base as a disjunction of its prime implicants

Back

Monotone classifier

Front
  • Positive instance remains positive if some features flipped from \(-\) to \(+\)
    • \((+, -, -, +) \Rightarrow (+, -, +, +), (+, +, -, +), (+, +, +, +)\)
  • Negative instance remains negative if some features flipped from \(+\) to \(-\)
    • \((+, -, -, +) \Rightarrow (+, -, -, -), (-, -, -, +), (-, -, -, -)\)
  • Example: Susan's correct answers include Jack's correct answers
    • Susan should pass if Jack passed

Back

Health state \(\psi\)

Front

Term over all health variables \(ok_1, \ldots, ok_n\). It is a diagnosis iff \(\psi\) is consistent with \(\Delta \land \alpha\).

Back

Classifier robustness

Front

What is expected robustness over all possible inputs?

$$\text{model-robustness}(f) = \frac1{2^n} \sum_\mathbf{x} \text{robustness}_f(\mathbf{x})$$

Back

Computing min-cardinality diagnoses

Front

Represent health condition \(\text{health}(\Delta, \alpha)\) as a DNNF and do min-cardinality in linear time

Back

Monotone circuit

Front

All occurrences of a variable in a circuit are either always positive or always negative. Existential quantification in linear time.

Back

Positive instance vs negative instance of propositional formulas

Front

Positive instance is a world \(w\) such that \(w \models \Delta\)

Negative instance is a world \(w\) such that \(w \models \lnot \Delta\)

Back

Instance robustness

Front

How many features do we need to flip so that the classifier's decision is flipped?

$$\text{robustness}_f(\mathbf{x}) = \min_{\mathbf{x}': f(\mathbf{x}') \ne f(\mathbf{x})} d(\mathbf{x}, \mathbf{x}')$$

Linear time in OBDD

Back

Implicate implicant duality

Front

\(\alpha\) is a prime implicate of \(\Delta\) iff \(\lnot\alpha\) is a prime implicant of \(\lnot\Delta\)

Back

Decision bias

Front

Decision on instance \(X\) is biased iff it can be different on an instance \(Y\) that disagrees with \(X\) on protected features only

Equivalently, a decision is biased iff each of its sufficient reasons contains at least one protected feature

Back

Normal in regards to system observations

Front

For system description \(\Delta\) with \(n\) components, system observation \(\alpha\) is normal iff \(\Delta \land \alpha \land ok_1 \land \ldots \land ok_n\) is consistent

Back

Health variable

Front

Variable for each component that captures the state of that component

Back

Complete reason

Front

Disjunction of all sufficient reasons

Back

Health condition \(\text{health}(\Delta, \alpha)\)

Front

$$\exists X_1, \ldots, X_m (\Delta \land \alpha)$$

Back

Compute reason circuit from decision-DNNF

Front

Linear time; the result is equivalent to the complete reason. The instance must be positive; otherwise, work with the negation of the decision-DNNF.

  1. Add consensus nodes to each or-node
    1. \((\lnot A \land \alpha) \lor (A \land \beta) \Rightarrow (\alpha \land \beta)\)
  2. Kill each and-\(\alpha\) incompatible with instance
Back

System observation in model-based diagnosis

Front

Term over non-health variables (variables other than \(ok_i\))

Back

Hamming distance between instances

Front

Number of variables on which the instances disagree

Back

Minimum cardinality explanation on positive instance

Front

Minimum cardinality here means minimizing the number of necessary positive features. Linear time for DNNF (both \(\Delta\) and \(\lnot\Delta\) in DNNF form are required).

  1. Condition on negative features
  2. Assign \(\infty\) to false, \(1\) to positive literals, and \(0\) to negative literals and true
  3. Compute minimum cardinality
  4. Kill disjuncts with non-minimal values
  5. Enumerate
Back

Sufficient reasons with PI explanations

Front

Given a positive instance, select compatible prime implicants. Given a negative instance, select compatible prime implicants of negation of boolean function. Example:

  • Boolean function \((A\!+\!\overline{C})(B\!+\!C)(A\!+\!B)\)
  • Prime implicants \(AB, AC, B\overline{C}\)
  • Instance \(AB\overline{C}\), Decision \(1\)
  • Sufficient reasons \(AB, B\overline{C}\)

  • Prime implicants of negation \(\overline{A}C, \overline{B}\overline{C}, A\overline{B}\)
  • Instance \(\overline{A}BC\), Decision \(0\)
  • Sufficient reasons \(\overline{A}C\)
Back