 # CSCI 161 - Unit 2

Context-free grammars and PDAs

Martin Rios-Cardenas

Date created: Oct 26, 2020

## Cards (71)

PDAs

(11 cards)

δ(q_s,a,x) = {(q_r,y)}
Front

We have a state q_s from which we are transitioning.

We have an input character a that we read.

We have a symbol x that we pop.

Then,

we have the next state q_r

and we have the character y that we push.

Back

Why can't we simply design NFAs as PDAs?

Front

You definitely can. Let's go back to this problem

L = {0^n 1^n | n >= 0}

We can design a two state NFA where the start state A transitions to accept state B if we read a 0. Since this is an NFA, we have no memory stack. This means in terms of PDA, we push and pop an epsilon, which means the "memory stack" isn't modified. We do this for all transitions and loops in the NFA.

However, the issue is that such a machine cannot recognize our language accurately: it lets a string like 0001111 slip by, which isn't part of our language. To eliminate this uncertainty, we use the memory stack to make sure we accept exactly the strings that are part of our language. This means keeping track that the number of 0s is equal to the number of 1s.

Back
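The pebble idea above can be sketched as runnable code (our own sketch, not from the deck; the name in_L and the "pebble" marker are ours). A list plays the role of the pebble stack, which is exactly the memory an NFA lacks:

```python
def in_L(s: str) -> bool:
    """Check membership in L = {0^n 1^n | n >= 0} with a pebble stack."""
    stack = []
    seen_one = False
    for ch in s:
        if ch == "0":
            if seen_one:            # a 0 after a 1 can never be in L
                return False
            stack.append("pebble")  # push a pebble for each 0
        elif ch == "1":
            seen_one = True
            if not stack:           # popping an empty stack: too many 1s
                return False
            stack.pop()             # pop a pebble for each 1
        else:
            return False
    return not stack                # accept only if every pebble was popped

print(in_L("000111"))   # True
print(in_L("0001111"))  # False: the string the plain NFA would let slip by
```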

What can you push into the PDA stack?

Front

We can push literally anything. In the first example we were presented, we pushed and popped only pebbles, but that doesn't mean we can't push the symbols in our alphabet. We can push and pop those too, because there are no restrictions on what can be pushed onto or popped off the stack.

However, it's important we push the $ on the stack to record the beginning and end of a string.

Back

What is the formal definition of a PDA?

Front

A PDA is a 6-tuple P = (Q, Σ, Γ, δ, q0, F) where

Q: set of states
Σ: alphabet
Γ: set of symbols allowed in the stack
δ: Q x Σ_ε x Γ_ε --> P(Q x Γ_ε)
q0: the start state, an element of Q
F: the accept state(s), a subset of Q

Back

Explain what's going on here.

Front

We are transitioning from state q_s to state q_r: we read an a, we pop an x, and we push a y.

Back

What does the Γ symbol mean in a PDA?

Front

It represents the set of characters allowed in the stack.

Back

How can we intuitively design a PDA as an FA with a memory stack, given this problem?

L = {0^n 1^n | n >= 0}

Front

We have an NFA that looks like this: two states with loops for 0 and 1 respectively, and epsilon transitions to an accept state. However, we need a memory stack so we can be selective about the symbols we use in the machine; it helps keep track of what we have read. This stack uses push and pop operations to add and remove symbols.

In this example, the stack pushes all zeros it sees (which come first). We can think of these as pebbles being pushed onto the stack. Then, when it sees a 1, it pops a pebble, until we are left with exactly none. If we try to pop an empty stack, or we're left with pebbles on the stack, then we know the string is not part of this language. If we reach the accept state, we are fine; otherwise, the string is rejected. We also include a symbol $ to mark the bottom of the stack. This is useful for knowing when we're at the beginning and end of a string: the first thing we push is a $, and the last thing we pop is the $.

If we name all the states, we can build a computational history with the PDA.

We also need to record our memory stack as well.

Back

What is the Chomsky Hierarchy?

Front

The idea is that we introduce pushdown automata and context-free grammars as part of the scope of context-free languages. We can construct a PDA from a CFG and vice versa. There is a correspondence between NFAs and regexes on the regular side, and between PDAs and CFGs on the context-free side.

Back

In Delta, what is the Q’s purpose in the PDA?

What about the Q in P(Q x Gamma_epsilon)?

What about the Gamma_epsilon in P(Q x Gamma_epsilon)?

Front

The first Q is the current state, where the transition starts.

Q in P(Q x Gamma_epsilon) is the next state, where the transition ends.

The first Gamma_epsilon is the symbol we pop.

Gamma_epsilon in P(Q x Gamma_epsilon) is the symbol we push.

Back

Design the correct form of a PDA for

L = {0^n 1^n | n >= 0}
Front

Instead of the basic transitions, we're adding a new way to transition: as we read each symbol of the string, we also push and pop symbols from the stack's allowed symbols.

For example, A --> B reads epsilon, pops epsilon, and pushes $. In this diagram, let's go over what's going on.

B: We loop on 0's, popping epsilon and pushing a pebble. We can think of it as simply adding pebbles.

B --> C: When we see anything that's not a 0, we read, push, and pop epsilon. We can think of it as a free transition without any stack movement.

C: We loop on 1's, popping a pebble and pushing epsilon. Since we pushed a pebble for every 0 we saw, popping a pebble for every 1 makes sure exactly all of them are popped. This tells us that we have the same number of 0s and 1s.

C --> D: When we read anything that's not a 1, we pop $ and push epsilon to reach state D.

D: Accept state. We cannot have anything on the stack except epsilons. If we don't have the exact number of 0s and 1s in the exact order, we will reach a dead state.

Back
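The A/B/C/D machine above can be sketched as a direct simulation (our own code, not from the deck; for this particular language a deterministic pass suffices, even though PDAs are nondeterministic in general):

```python
# Transitions follow the diagram: A --ε,ε→$--> B; B loops on 0,ε→pebble;
# B --ε,ε→ε--> C; C loops on 1,pebble→ε; C --ε,$→ε--> D (accept).
def accepts(s: str) -> bool:
    stack = ["$"]                        # A -> B: read ε, pop ε, push $
    i = 0
    while i < len(s) and s[i] == "0":    # loop at B: push a pebble per 0
        stack.append("pebble")
        i += 1
    # B -> C is a free ε-transition with no stack movement
    while i < len(s) and s[i] == "1":    # loop at C: pop a pebble per 1
        if stack[-1] != "pebble":
            return False                 # popping without a pebble: dead state
        stack.pop()
        i += 1
    if i != len(s):
        return False                     # leftover input, e.g. a 0 after a 1
    return stack == ["$"]                # C -> D: read ε, pop $, accept

print(accepts("0011"))     # True
print(accepts("0001111"))  # False
```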

CFGs

(22 cards)

What are ambiguously defined strings in CFGs?

Front

A string is ambiguously defined by a CFG if it has at least 2 leftmost derivations in the grammar.

Back

Is this a violation of the CFG rules?

S-->(S1|S2)S3
Front

Yes. A rule's right-hand side must be a plain string of variables and terminals: we can only concatenate, without parentheses or grouping.

The correct way would be

S --> S1S3 | S2S3
Back

When is a language context free?

Front

A language is context free iff some CFG generates it.

Back

Is this grammar ambiguous?

Language: Arithmetic expressions over ∑ = {a,+,x,(,)}
G5: <EX> --> <EX> + <EX> | <EX> x <EX> | (<EX>) | a
Front

We know a grammar is ambiguous if it can generate a string that has two leftmost derivations.

Let's use the string: a + a x a

Let's find two leftmost derivations.

1. <EX> => <EX> + <EX> => a + <EX> => a + <EX> x <EX> => a + a x <EX> => a + a x a
2. <EX> => <EX> x <EX> => <EX> + <EX> x <EX> => a + <EX> x <EX> => a + a x <EX> => a + a x a

These derivations tell us the following.

1. Our string a + a x a is derived ambiguously
2. Our grammar is ambiguous because it generates a + a x a ambiguously
Back

Is the string 001 ambiguously generated by the following grammar? Is the grammar unambiguous?

Language: {0^m 1^n | m ≥ n}
CFG: S -> ZT
T -> 0T1 | epsilon
Z -> 0Z | epsilon
Front

No, because 001 doesn't have 2 or more different leftmost derivations.

If we try more strings, we are unable to generate different parse trees, since all of them follow the same structure. Thus, this grammar is unambiguous.

Back

Is this example a CFL?

{0^n 1^n | n ≥ 0}
Front

Yes, because it's generated by S --> 0S1 | epsilon. We can derive different strings with different numbers for n.

For all n ≥ 0, S derives 0^n 1^n: if n = 0, then S => epsilon and 0^0 1^0 = epsilon,

and if n > 0, S yields u1 = 0S1, which yields u2 = 00S11, ... , which yields un = 0^n S 1^n, which finally yields 0^n 1^n

(and these are the only strings derived by S)

We can use a tree to illustrate CFGs and understand how we can derive these strings. We can think like recursion.

Back

Is this language inherently ambiguous?

Arithmetic expressions over ∑ = {a,+,x,(,)}
Front

We know an inherently ambiguous language is one generated only by ambiguous grammars. If we can generate a language from an unambiguous grammar, the language is not inherently ambiguous.

We already have a grammar G4 that generates this language as an unambiguous grammar. Therefore, our language is not inherently ambiguous.

Back

Is this grammar unambiguous?

G4: <EX> --> <EX> + <T> | <T>
<T> --> <T> x <F> | <F>
<F> --> (<EX>) | a
Front

We know a grammar is ambiguous if it generates a string ambiguously.

We know a string derived ambiguously has two leftmost derivations of the same string.

Let's try our string: a + a x a

If we try to find two leftmost derivations, we get the following:

1. <EX> => <EX> + <T> => <T> + <T> => <F> + <T> => a + <T> => a + <T> x <F> => a + <F> x <F> => a + a x <F> => a + a x a
2. <EX> => <T> => <T> x <F> => <F> x <F> => (<EX>) x <F> => (<EX> + <T>) x <F> => (<T> + <T>) x <F> => (<F> + <T>) x <F> => (a + <T>) x <F> => (a + <F>) x <F> => (a + a) x <F> => (a + a) x a

Notice the second attempt derives a different string, (a + a) x a, not a + a x a. We cannot find two leftmost derivations of the same string, so a + a x a is unambiguously generated under G4. If we try more strings, we won't be able to find any ambiguous ones. Therefore, our grammar is unambiguous, since it only generates strings unambiguously.

Back

What are rules in CFGs?

Front

We have rules that generate the strings that are part of a certain language.

We typically write them with the variable to the left, a right arrow, and a string mixed with variables and terminals.

We cannot have terminals on the left.

Also, the left-hand side of the first rule is the start variable, unless denoted otherwise.

ex.

These are rules
S-->epsilon
S-->0S1
Back

Give an ambiguous CFG for

L = { a^i b^j c^k | i,j,k ≥ 0 and i = j or i = k }
Front

Let's take a divide and conquer approach to this problem.

We want a CFG that generates the same amount of a's and b's with c being like a c* and able to generate any amount of c's

We want a variable that generates the a's and b's and a variable that generates the c's.

Let's use these rules:

S --> MC

M --> aMb | epsilon // will function like 0^n 1^n

C --> cC | epsilon // like C*

However, we also want to generate strings with the same amount of a's and c's and b as b*.

This means we need a union with our previous language and the new language we're about to create.

Before we begin, we need to create new rules for our new language.

Rules:

N --> aNc | B

B --> bB | epsilon

We also need to modify our start rule:

S --> MC | N

Thus, we have the following rules:

S  --> MC | N

M --> aMb | epsilon

C --> cC | epsilon

N --> aNc | B

B --> bB | epsilon

Now, we can check whether our new grammar is ambiguous.

Let's use the string abc.

We need two leftmost derivations of abc, coming from two different parse trees.

We can derive abc through S => MC (leftmost: S => MC => aMbC => abC => abcC => abc) and also through S => N (leftmost: S => N => aNc => aBc => abBc => abc), so abc is ambiguously generated.

Thus, our CFG is ambiguous.

Back
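We can back this up with a brute-force search (our own sketch, not from the deck; GRAMMAR and count_leftmost are names we chose, and uppercase letters are treated as variables). It expands the leftmost variable at every step and counts complete leftmost derivations of the target:

```python
# Grammar from the card; "" encodes epsilon.
GRAMMAR = {
    "S": ["MC", "N"],
    "M": ["aMb", ""],
    "C": ["cC", ""],
    "N": ["aNc", "B"],
    "B": ["bB", ""],
}

def count_leftmost(form: str, target: str) -> int:
    """Count leftmost derivations of `target` starting from sentential `form`."""
    # Split off the terminal prefix before the first (uppercase) variable.
    i = 0
    while i < len(form) and not form[i].isupper():
        i += 1
    prefix, rest = form[:i], form[i:]
    if not rest:                          # no variables left: full derivation
        return 1 if form == target else 0
    if not target.startswith(prefix):     # terminal prefix must match target
        return 0
    terminals = sum(1 for ch in form if not ch.isupper())
    if terminals > len(target):           # terminals never disappear: prune
        return 0
    var, tail = rest[0], rest[1:]
    return sum(count_leftmost(prefix + rhs + tail, target)
               for rhs in GRAMMAR[var])

print(count_leftmost("S", "abc"))  # 2: once via S => MC, once via S => N
```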

What are variables?

Front

Variables are symbols that represent rules. They are represented by upper-case letters. You can use them to substitute in rules as well.

ex where we have S as a variable

S--> epsilon
S--> 0S1

Back

Is this example context free? Why?

{0^n 1^n | n ≥ 0}
Front

The example is context free because we can derive it by

S --> 0S1 | epsilon

Back

Do CFGs have leftmost derivations?

Front

Yes. Every derivation can be rearranged into a leftmost derivation, and some strings under some grammars even have multiple leftmost derivations.

Back

What are terminals?

Front

Terminals are symbols that are part of the string we generate. Typically denoted by lowercase letters and other symbols. We include them in rules mixed with variables or by themselves.

ex.

We have the symbols 0 and 1 here as terminals (epsilon denotes the empty string, not a terminal).
S-->epsilon
S-->0S1
Back

What is the atom CFG for CFGs that are like regexes?

Front

S-->a, the language {a}.

Back

What is the formal definition of a context-free grammar?

Front

A 4-tuple (V, ∑, R, S) where:

V: finite set of variables
∑: finite alphabet of terminals, disjoint from V
R: finite set of rules
Each rule is of the form
Var (element of V) --> string in (V U ∑)*
S: start variable that is an element of V

In addition, for any strings u,v,w that are elements of ( V U ∑ )*, if there is a rule

A-->w,

we say uAv yields uwv, denoted uAv => uwv.

Also, if there are intermediates u1,…,uk that are elements of ( V U ∑ )* where k ≥ 0 s.t.

u => u1 => u2 => … => uk => v, we say u derives v, denoted u =>* v.

Back

How do you come up with a canonical way of linearizing a parse tree into a chain of derivations?

Front

We use depth-first search.

Back

Why do we call CFGs grammars?

Front

Since a grammar is a set of rules for putting strings together, we know it corresponds to a language. A CFG generates a language, but it is not itself the language. All we need to specify a grammar is the following:

• a set of variables
• a set of terminals
• a list of rules

With grammar, we can generate strings that are part of a language. In CFGs, we generate strings that are part of a context-free language (CFL).

Back

What is the CFG for

(a U b) c*
Front

S --> S1S2

S1 --> a|b

S2 --> cS2 | epsilon

or

S --> aS1 | bS1

S1 --> cS1 | epsilon

Back
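As a sanity check (our own sketch, not part of the deck), we can enumerate the short strings the first grammar generates and compare them against the regular expression (a|b)c*. S1 and S2 are renamed A and B so that one character equals one variable:

```python
import re

# First grammar from the card, with S1 -> A and S2 -> B; "" encodes epsilon.
RULES = {"S": ["AB"], "A": ["a", "b"], "B": ["cB", ""]}

def generated(limit):
    """All terminal strings of length <= limit derivable from S."""
    out, seen, stack = set(), {"S"}, ["S"]
    while stack:
        form = stack.pop()
        i = next((j for j, ch in enumerate(form) if ch.isupper()), None)
        if i is None:                  # no variables left: a terminal string
            out.add(form)
            continue
        for rhs in RULES[form[i]]:     # expand the first variable
            new = form[:i] + rhs + form[i + 1:]
            # prune: terminals never disappear during a derivation
            if sum(ch.islower() for ch in new) <= limit and new not in seen:
                seen.add(new)
                stack.append(new)
    return out

strings = generated(4)
assert all(re.fullmatch(r"(a|b)c*", s) for s in strings)
print(sorted(strings))  # ['a', 'ac', 'acc', 'accc', 'b', 'bc', 'bcc', 'bccc']
```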

Are CFLs closed under the regular operations?

Front

Yes. If we draw a diagram of the Chomsky hierarchy, we know the regular languages form a small circle inside the CFL circle. Like the regular languages, CFLs are closed under union, concatenation, and star.

Here's the formal proof.

Union:

Pf: Given CFGs G1 and G2, construct G w/ L(G) = L(G1) U L(G2) by adding the new rule S --> S1 | S2.

ex. S1 --> 0(S1)1 | epsilon

where L1 = {0^n 1^n | n ≥ 0}

S2 --> 1(S2)0 | epsilon

where L2 = {1^n 0^n | n ≥ 0}

We want our string to be derived by either S1 or S2. Thus, we add the rule S --> S1 | S2 to commit to either S1 or S2 as the "start variable."

Our grammar now has the following:

S --> S1 | S2

S1 --> 0(S1)1 | epsilon

S2 --> 1(S2)0 | epsilon

Therefore, G can generate the language L = L1 U L2.

Concatenation:

Pf: Given CFGs G1, G2, construct G w/ L(G) = L(G1) o L(G2) and w/ new rule S --> S1S2

ex. L1 o L2 = { 0^n 1^n 0^m 1^m | n,m ≥ 0 }

This works since L1 and L2 match their 0s and 1s independently: n and m can differ.

Star:

Pf: Given CFG G1, construct G w/ L(G) = L(G1)* and w/ new rule S --> S1S | epsilon

ex. S1 --> 0(S1)1 | epsilon

where L1 = {0^n 1^n | n ≥ 0}

Back

What's the CFG for this problem?

{0^m 1^n | m ≥ n}
Front

We know m ≥ n, so we can break it down into three parts

0^(m-n)

0^n

1^n

We can replace m-n with k where k ≥ 0.

This means 0^k is 0*

Thus, we have 0*{0^n 1^n}

From here, we can generate the rules. If you like, you can build a tree to see how the CFG would work.

S --> (S1)(S2)

S1 --> 0(S1) | epsilon

S2 --> 0(S2)1 | epsilon

If you need proof these rules work, let's derive 001.

We know m ≥ n, and since we already know about 0*, we can see that in this string m = 2 (00).

Then, n = 1 (1).

If we drew a parse tree, this would be the result.

Back

What is the general definition of the language of a CFG?

Front
L(G) = {w is an element of ∑* | S =>* w}
Back
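This definition translates directly into a brute-force enumerator (our own sketch, not from the deck; language is a name we chose). For the running example S --> 0S1 | epsilon it recovers exactly the strings 0^n 1^n up to the length bound:

```python
def language(rules, start, limit):
    """Strings w with len(w) <= limit such that start =>* w (brute force)."""
    out, seen, todo = set(), {start}, [start]
    while todo:
        form = todo.pop()
        i = next((j for j, ch in enumerate(form) if ch in rules), None)
        if i is None:                  # no variables left: form is in L(G)
            out.add(form)
            continue
        for rhs in rules[form[i]]:     # expand the first variable
            new = form[:i] + rhs + form[i + 1:]
            terminals = sum(ch not in rules for ch in new)
            if terminals <= limit and new not in seen:  # terminals never vanish
                seen.add(new)
                todo.append(new)
    return out

print(sorted(language({"S": ["0S1", ""]}, "S", 6), key=len))
# ['', '01', '0011', '000111']
```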

Chomsky Normal Form

(7 cards)


What is the Cocke-Younger-Kasami algorithm?

Front

The Cocke-Younger-Kasami (CYK) algorithm parses a string under some CNF grammar, if possible. It keeps track of all possible derivations of the input string's substrings under the provided grammar. Its efficiency shows that "does it compile?" is a poly-time decidable question for any language whose syntax can be described by a CFG.

The pseudocode is annoying to understand so we'll leave it alone. Instead, we will parse 001 from the following CFG:

S -> ZM | t_0 Z | 0 | t_0 V | t_0 t_1 | epsilon
M -> t_0 V | t_0 t_1
Z -> t_0 Z | 0
V -> M t_1
t_0 -> 0
t_1 -> 1

We have an input string I of 3 characters a1, a2, a3.

We have a grammar of 6 variables, which we number R1...R6.

We have a 3D array of booleans, P[n,n,r], with all elements initially false.
For the array, we draw a 3x3 box.

For every square in the box, if we can derive that square's substring, we set it to true by writing in the variables that derive it.

On the left, we have the length of the substrings labeled.

On the top, we have the terminals labeled.

We are trying to find out whether the bottom-left corner is true or false, to see whether the string is part of the language and derived by the CFG.

Here's how it should look.

The way we fill in each square is the following:

l = 1, 0: We can derive 0 from either S, Z, or t_0.

l = 1, 1: We can derive 1 only from t_1

l = 2, 00: We can derive 00 from either S or Z

l = 2, 01: We can derive 01 from either S or M

l = 3, 001: We can derive 001 from S

We are pretty much trying to see if every variable is able to derive the corresponding substring from each square.

During the process, we don't care if S can derive from anywhere except the bottom-left corner. Yet, we need to list S regardless.

It's helpful to draw arrows from the variables in a square to the lower-row variables they derive the substring from.

For example, at l = 2, s = 1 (substring 00), Z is derived via Z --> t_0 Z from the t_0 at l = 1, s = 1 and the Z at l = 1, s = 2. We would draw two arrows from this Z to those entries.

We would continue to draw arrows and check whether the bottom-left corner connects, through the necessary arrows, to all squares in row 1.

We can draw a parse tree from here.

Therefore, 001 is in our language.

Back
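The table-filling procedure above can be sketched as code (our own implementation, not from the deck; t_0 and t_1 are written t0 and t1, and rules are encoded as tuples with () standing for epsilon):

```python
# CNF grammar from the card; () encodes epsilon (allowed for S only).
RULES = {
    "S": [("Z", "M"), ("t0", "Z"), ("0",), ("t0", "V"), ("t0", "t1"), ()],
    "M": [("t0", "V"), ("t0", "t1")],
    "Z": [("t0", "Z"), ("0",)],
    "V": [("M", "t1")],
    "t0": [("0",)],
    "t1": [("1",)],
}

def cyk(w: str, rules, start="S") -> bool:
    """Return True iff start =>* w under the CNF rules (bottom-up table)."""
    if w == "":
        return () in rules.get(start, [])
    n = len(w)
    # table[l-1][s]: variables deriving the substring of length l starting at s
    table = [[set() for _ in range(n)] for _ in range(n)]
    for s, ch in enumerate(w):                        # row l = 1
        for var, bodies in rules.items():
            if (ch,) in bodies:
                table[0][s].add(var)
    for l in range(2, n + 1):                         # rows l = 2..n
        for s in range(n - l + 1):
            for split in range(1, l):                 # left piece has length split
                left = table[split - 1][s]
                right = table[l - split - 1][s + split]
                for var, bodies in rules.items():
                    for body in bodies:
                        if len(body) == 2 and body[0] in left and body[1] in right:
                            table[l - 1][s].add(var)
    return start in table[n - 1][0]

print(cyk("001", RULES))  # True
print(cyk("010", RULES))  # False
```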

Can every CFL be generated by a CFG in Chomsky normal form?

Front

We will prove this.

PF: Start w/ any CFG. We will systematically identify and resolve form problems.

We have problems with

S --> 0S1 | epsilon

where the start variable is on the right side. In other words, we have start recursion problems.

We also have problems with the long RHS rules since we are only allowed to have two variables, one terminal, or one epsilon.

We also can't allow mix of terminals and variables.

First, we will create intermediate epsilon and unit rules to resolve the start recursion problem. In other words, we're using an algorithm to Chomsky normalize a CFG.

Algorithm for Chomsky normalizing a CFG:

1. Remove start recursion.
1. Add a new start variable and a new rule S_0 --> S.
2. This will let us keep S and give us a start variable that goes to it.
2. Consolidate epsilon
1. For any rule A-->epsilon where A != start variable, remove the rule and for any occurrence of A on the RHS of any rule, add a new rule with A removed.
2. This will make us remove the epsilon on S and add a new rule 01 to S's right side since we wanted S removed. However, we need to continue recognizing 0S1 since it could've given us 01 with S-->epsilon.
3. We also add epsilon to S_0 since you could previously reach epsilon with S.
3. Collapse unit rules
1. For any unit rule A-->B, we remove it, and wherever we have B --> u, we add a new rule A-->u
2. Since we have S_0 --> S, we need to remove it. We do so and replace S with 0S1 | 01, which are the rules generated by S. We still keep S.
4. Decomposing long rules
1. For any long rule A --> v1 v2 … vk where k ≥ 3 and each vi is an element of (V U ∑), remove it and add variables A1, ..., A(k-2) with rules A --> v1A1, A1 --> v2A2, ... , A(k-2) --> v(k-1)v(k). The idea is that the final rule A(k-2) --> v(k-1)v(k) closes off the decomposition.
2. In our example, we have the long rule 0S1. Since 0S1 has length 3 (v1 = 0, v2 = S, v3 = 1), we add one new variable with the rule A1 --> v2 v3, i.e., A1 --> S1, and the original rule becomes A --> v1 A1.
3. Thus, we get a new rule V --> S1, and we rewrite the long rule using this new variable.
5. Clean up terminals
1. For any terminal u that appears in a RHS of length 2, add a new variable U with the rule U --> u and replace u with U in every such RHS.
2. In other words, whenever you see a terminal next to another symbol on the RHS, replace the terminal with a variable whose rule produces it.
3. In our example, we replace all such terminals. We get Z --> 0, N --> 1.
6. Final result
1. S_0 --> ZV | ZN | epsilon
2. S --> ZV | ZN
3. V --> SN
4. Z --> 0
5. N --> 1
Back
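We can sanity-check the final result with a brute-force enumerator (our own sketch, not from the deck; derivable is a name we chose, and S_0 is written S0). Both grammars should generate exactly the same strings up to a small length bound:

```python
def derivable(rules, start, limit):
    """All terminal strings of length <= limit derivable from start.

    Sentential forms are tuples of symbols; a symbol is a variable iff it
    is a key of `rules`.
    """
    out, seen, todo = set(), {(start,)}, [(start,)]
    while todo:
        form = todo.pop()
        i = next((j for j, sym in enumerate(form) if sym in rules), None)
        if i is None:                  # no variables left: a terminal string
            out.add("".join(form))
            continue
        for rhs in rules[form[i]]:     # expand the first variable
            new = form[:i] + rhs + form[i + 1:]
            terminals = sum(sym not in rules for sym in new)
            if terminals <= limit and new not in seen:
                seen.add(new)
                todo.append(new)
    return out

original = {"S": [("0", "S", "1"), ()]}   # S --> 0S1 | epsilon
cnf = {                                   # the final result above
    "S0": [("Z", "V"), ("Z", "N"), ()],
    "S":  [("Z", "V"), ("Z", "N")],
    "V":  [("S", "N")],
    "Z":  [("0",)],
    "N":  [("1",)],
}

assert derivable(original, "S", 6) == derivable(cnf, "S0", 6)
print(sorted(derivable(cnf, "S0", 6), key=len))  # ['', '01', '0011', '000111']
```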

When is a CFG in Chomsky normal form?

Front

When every rule is of the form:

A--> BC w/ B,C being elements of V and {S} is only on the left side

In other words, we only have two variables on the right side and our start variable is only on the left side. We also can't have B and C be the start variable.

A--> a w/ a being an element of the alphabet

In other words, we only have one terminal.

S--> epsilon

In other words, the start variable points to epsilon.

Back


Convert this CFG to Chomsky normal form

S --> ZM
M --> 0M1 | epsilon
Z --> 0Z | epsilon
Front

Let's follow the rule of the CNF algorithm

1. Replace the start variable.
1. In our example, there is no start recursion: S never appears on a right-hand side (its only rule is two plain variables). So we don't need a new start variable.
2. Remove epsilon rules
1. M has a rule that goes to epsilon. We remove it from M and since S has a rule with M involved, we add a rule with only Z (since we "removed M"). Since we need a way to stop the recursion, we add 01 as a rule for M.
S --> ZM | Z where S->Z if n = 0 and S -> ZM if n > 0
M --> 0M1 | 01 where M generates 0^n 1^n | n ≥ 1
Z --> 0Z | epsilon

2.  We also have to remove the epsilon from Z, keeping the base case Z --> 0. Since Z appears in S's rules, we add versions of those rules with Z removed: ZM gives M, and the solo Z gives epsilon, so we add M and epsilon to S.

S --> ZM | Z | M | epsilon where {0^m 1^n | m ≥ n}, S->ZM if m-n>0, S->Z if m>0 and n=0, S->M if m=n and n>0, epsilon if n=m=0
M --> 0M1 | 01 where M generates 0^n 1^n | n ≥ 1
Z --> 0Z | 0 

3.    Remove unit rules

1.    Since we have unit rules with our start variable like S --> Z and S --> M, we need to replace them with the rules those variables generate.

S --> ZM | 0Z | 0 | 0M1 | 01 | epsilon
M --> 0M1 | 01
Z --> 0Z | 0

4.    Decompose long rules

1.  In theory, we need to decompose the long rules one by one. However, if all the rules we need to decompose are the same, we can just do it simultaneously

S -> ZM | 0Z | 0 | 0V | 01 | epsilon
M -> 0V | 01
Z -> 0Z | 0
V -> M1

5.     Decompose terminals

1. We replace all 0's that appear mixed with other symbols with one variable, and likewise all 1's with another.
S -> ZM | t_0 Z | 0 | t_0 V | t_0 t_1 | epsilon
M -> t_0 V | t_0 t_1
Z -> t_0 Z | 0
V -> M t_1
t_0 -> 0
t_1 -> 1
Back

Given this grammar

P -> 0P0 | 1P1 | epsilon

convert to Chomsky normal form and use the CYK algorithm to see if 0110 is in our grammar.

Front
1. We will convert

P -> 0P0 | 1P1 | epsilon

to Chomsky normal form.

1. Add a new start variable, since P cannot both be the start variable and appear on the RHS.
S -> P
P -> 0P0 | 1P1 | epsilon
2. Remove all epsilon rules.
1. We need to remove epsilon wherever it appears.
2. In our case, we remove epsilon from P and place it on S, since S --> P could previously reach epsilon. We also add the base cases for P: 00 and 11.
S -> P | epsilon
P -> 0P0 | 1P1 | 00 | 11
3. Remove unit rules.
1. We need to remove all unit rules.
2. We remove P from S and replace it with all the rules for P.
S -> 0P0 | 1P1 | 00 | 11 | epsilon
P -> 0P0 | 1P1 | 00 | 11
4. Replace long rules.
1. We need all long rules replaced, since RHS's may only be two variables, one terminal, or epsilon.
2. In our case, we can just replace P0 and P1.
S -> 0A | 1B | 00 | 11 | epsilon
P -> 0A | 1B | 00 | 11
A -> P0
B -> P1
5. Replace terminals.
1. We need all terminals replaced, with the exception of solo terminals.
2. We replace 0 and 1.
S -> T_0 A | T_1 B | T_0 T_0 | T_1 T_1 | epsilon
P -> T_0 A | T_1 B | T_0 T_0 | T_1 T_1
A -> P T_0
B -> P T_1
T_0 -> 0
T_1 -> 1

Now we can run CYK on 0110.

We have the string length of 4 so we'll create a 4x4 grid (5x5 for order).

We fill boxes with an x when their substring cannot be parsed.

We fill in the first row since we have rules for each terminal/character of the string.

Now we read from the next row.

We have to read from the top and top-right corners first and see if you can parse the substring with any rule.

For example, in s=1, l=2, we have no rules to parse 01. We move to the next box and then lower the row.

As we move down, we make sure that every box is in a row that when adding both boxes, their length is equal to the original box.

For example, we can parse A at l = 3, s = 2, since there is a P we can parse at l = 2 and a T_0 at l = 1, and their lengths combine to 3.

What we look for is if we can parse S from the lowest box. In this case we can.

Since we can't draw parse trees, here's how the parsing goes.

S => T_0 A => T_0 P T_0 => T_0 T_1 T_1 T_0 => 0 T_1 T_1 T_0 => 0 1 T_1 T_0 => 0 1 1 T_0 => 0110

Thus, 0110 is in our language.

Back

CFG -> PDA

(3 cards)

If a language is context free, then some pushdown automaton recognizes it.

Front

Let's start with a CFG and build a PDA that starts with S and simulates the derivation of a string by making variable substitutions on the stack while reading the input string.

Ex.

Stack simulating deriving 0011 under S -> 0S1 | epsilon

In this stack simulation, we first pushed a $ to represent the bottom of the stack. We then pushed an S, since S is our start variable. We popped the S and pushed 0S1 from right to left (1, then S, then 0). We read a 0, which prompts us to pop the 0. We pop the S and push 0S1 again. We read a 0 and pop it from the stack. We pop the S and push epsilon. We read a 1 and pop it. We read the other 1 and pop it. We pop the $ to reach our accept state.

Now that we have simulated the stack, we have a better idea on how to construct the PDA.

We typically start by adding the start state.

We then read an epsilon, pop an epsilon, and push the $ to reach our next state. We then read an epsilon, pop an epsilon, and push the S to reach our looping state. Note: we can push and pop epsilons as many times as we like; there is no limit with epsilons.

We have a looping state responsible for two core functions:

• Replace every variable at the top of the stack according to each of its rules
• Match input characters with the terminals at the top of the stack

We remain on the looping state, deriving, until we have matched all our terminals with our input characters.

First, replace a variable A with string w for every rule of the form A --> w. In our case, we can replace with either S --> epsilon or S --> 0S1 by making a loop on the looping state. We start with S --> 0S1:

1. We create a new state that you can access by reading epsilon, popping S, and pushing epsilon
2. We create a new state that you can access by reading epsilon, popping epsilon, and pushing 1
3. We create a new state that you can access by reading epsilon, popping epsilon, and pushing S
4. We return to the looping state by reading epsilon, popping epsilon, and pushing 0

Thus, we have pushed 0S1 onto the stack.

In order to replace with S --> epsilon,

• we make a simple loop that we access by reading epsilon, popping S, and pushing epsilon, or
• we can pass through the state created previously (reading epsilon, popping S, pushing epsilon) and then read epsilon, pop epsilon, and push epsilon.

However, for the generic CFG-to-PDA conversion, we do the first option. Thus, we have replaced the S with epsilon on the stack.

To match the input characters with the terminals at the top of the stack:

• We add a simple loop where we read 0, pop 0, and push epsilon
• We add a simple loop where we read 1, pop 1, and push epsilon

After popping everything from the stack except the $, we read an epsilon, pop the $, and push epsilon to reach the accepting state.
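The derivation loop just described can be sketched as a small nondeterministic simulation (our own code, not from the deck; pda_accepts and the rule encoding are ours, and a depth bound stands in for the full nondeterministic search):

```python
def pda_accepts(w, rules, start="S", depth=200):
    """Simulate the CFG-to-PDA construction: DFS over (input pos, stack).

    The stack is a tuple with its top first; "$" marks the bottom.
    """
    def go(i, stack, fuel):
        if fuel == 0:
            return False
        if stack == ("$",):
            return i == len(w)       # pop $ and accept iff input is consumed
        top, rest = stack[0], stack[1:]
        if top in rules:             # loop state: replace the variable
            return any(go(i, tuple(rhs) + rest, fuel - 1)
                       for rhs in rules[top])
        # top is a terminal: it must match the next input character
        return i < len(w) and w[i] == top and go(i + 1, rest, fuel - 1)

    return go(0, (start, "$"), depth)

RULES = {"S": [("0", "S", "1"), ()]}  # S --> 0S1 | epsilon
print(pda_accepts("0011", RULES))  # True
print(pda_accepts("0010", RULES))  # False
```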
Thus, some PDA recognizes our context-free language.

Back

Given a CFG

S -> SS | T
T -> aTb | ab

design a PDA.

Front

Since we are designing a general PDA, let's get the basics over with.

1. We have a start state.
2. We transition to the next state by reading epsilon, popping epsilon, and pushing $.
3. We transition to the looping state by reading epsilon, popping epsilon, and pushing S, since S is the start variable.
4. After we are done pushing and popping terminals and variables and are left with $ on our stack, we can draw the transition to the accept state by reading epsilon, popping $, and pushing epsilon.

For these rules, we will have separate chains of states for looping. However, if one rule needs the same number of states as another, it can reuse those states; we just label the second rule's transitions alongside the first's.

For S->SS

1. From the looping state, we reach another state by reading epsilon, popping S, and pushing S
2. We go back to the looping state by reading epsilon, popping epsilon, and pushing S.

We now have SS in our stack

For S->T

1. From the looping state, we make a simple loop back to the looping state by reading epsilon, popping S, and pushing T.
2. Alternatively, we can reach another state by reading epsilon, popping S, and pushing epsilon, and then return to the looping state by reading epsilon, popping epsilon, and pushing T.

We now have T in our stack.

For T->aTb

1. From the looping state, we read an epsilon, pop a T, and push b to transition to another state
2. We transition to another state by reading an epsilon, popping an epsilon, and pushing a T.
3. We transition back to looping state by reading an epsilon, popping an epsilon, and pushing an a.

We now have aTb in our stack.

For T->ab

1. From the looping state, we read an epsilon, pop a T, and push b to transition to another state.
2. We transition back to the looping state by reading an epsilon, popping epsilon, and pushing a.
3. We can also route through a third state where we read an epsilon, pop an epsilon, and push an epsilon.

We now have ab in our stack.

Now we need the simple loops when reading the terminals. We assume all the terminals are on top when being popped and we are able to pop all the input characters from the stack.

1. If we read an a, we pop an a, and we push epsilon
2. If we read a b, we pop a b, and we push epsilon.

After we have popped everything except the $, we transition to the accept state by reading epsilon, popping $, and pushing epsilon.

Back

How can a PDA accept a string w?

Front

A PDA accepts a string w = w1...wm, where each wi is an element of Σ_ε (the alphabet plus epsilon), if there exist a state sequence r0, r1, ..., rm of elements of Q (the set of states) and a transcript s0, s1, ..., sm of elements of Γ* (stack contents) that records the stack after every move, s.t.

1. r0 = q0 (the start state) and s0 = epsilon
2. (r(i+1), b) is an element of δ(ri, w(i+1), a), where a, b are stack characters (a is popped, b is pushed, and either can be epsilon) and t is the rest of the stack, with si = at and s(i+1) = bt, for i = 0, ..., m-1
3. rm is an element of F (an accept state)

The language of the PDA is L(P) = {w is an element of Σ* | P accepts w}

Back

PDA -> CFG

(3 cards)

Give the general idea for Lemma 2.27

Front

For a PDA P = (Q, Σ, Γ, δ, q0, {qf}) satisfying the Lemma 2.27 assumptions, construct a CFG G = (V, Σ, R, S) with V = {Apq | p, q are elements of Q}, S = A(q0 qf), and R = {

• App -> ε for all p that are elements of Q
• Apq -> Apr Arq for all p, q, r that are elements of Q
• Apq -> a A(rs) b for all p, q, r, s that are elements of Q

} where, in addition, a, b are elements of Σ_ε s.t. there exists a u that is an element of the stack symbol set with

• (r, u) an element of δ(p, a, ε)
• (q, ε) an element of δ(s, b, u)

Back

Consider the process from a PDA to a CFG for the following language

{0^n 1^m | m ≥ n}

Front

Let's have a sanity check. We break the language down into a matching component 0^n 1^n and an extra component (a one-sided recursion) that generates 1*.

Then, we can write it with three variables:

• A start variable that says: do a matching component, then do an extra component
• A matching component (variable) that does the 2-sided recursion and epsilon
• An extra component that generates 1's

We want to start with a PDA that has these two phases done in sequence.
Our start variable tells us that. We want to first match 0s and 1s, then generate as many 1s as we want. What we want is the following:

• A start state
• An accept state
• Push $ at the beginning and pop it at the end
• Only push or pop (neither both or none)
• Accept the string when the stack is empty
• A state where we read a 0 and we push an x to the stack
• A state where we read a 1 and we pop x from the stack
• An intermediate state between the previous two states where we push and pop an @ to transition
• A state where we read 1's and keep pushing and popping @ between this looping state and another state
• A state where we transition from the first state where we read 1's to the second state where we read the same thing as well

We already showed you that PDA.

Now let's give you the equivalent CFG:

Since we know Asa needs an intermediate variable transition, we have Azk.

Asa -> epsilon Azk epsilon

We also need an intermediate variable for Azk

Azk -> AznAnk | AzkAkk

Let's expand these matching components further.

Azn -> 0Azn1 | Amm
Amm -> epsilon

For Ank, since we loop at k, we can allow recursion:

Ank -> AnkAkk | Add
Akk -> 1Aee
Aee -> epsilon 

Thus, we have the following CFG

Asa -> epsilon Azk epsilon
Azk -> Azn Ank | AzkAkk
Azn -> 0Azn1 | Amm
Amm -> epsilon
Akk -> 1Aee | epsilon
Aee -> epsilon

Note: Azk -> AzkAkk is our extra component, which gives us a one-sided recursion overall.
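As a quick sanity check, the grammar above can be brute-forced. This is a sketch under an assumption: since the notes leave Add unspecified, the epsilon-only variables (Amm, Aee, Add) are collapsed away, and single letters stand in for the Apq names (S = Asa/Azk, Z = Azn, N = Ank, K = Akk).

```python
from collections import deque

# Simplified rules: S -> ZN | SK,  Z -> 0Z1 | eps,  N -> NK | eps,  K -> 1 | eps
RULES = {
    "S": ["ZN", "SK"],
    "Z": ["0Z1", ""],
    "N": ["NK", ""],
    "K": ["1", ""],
}

def derivable(bound: int) -> set:
    """All terminal strings derivable from S via sentential forms of
    length <= bound (leftmost derivation, BFS)."""
    found, seen, queue = set(), {"S"}, deque(["S"])
    while queue:
        form = queue.popleft()
        i = next((k for k, c in enumerate(form) if c in RULES), None)
        if i is None:                     # no variables left: a terminal string
            found.add(form)
            continue
        for rhs in RULES[form[i]]:
            new = form[:i] + rhs + form[i + 1:]
            if len(new) <= bound and new not in seen:
                seen.add(new)
                queue.append(new)
    return found

# Compare against the target language {0^n 1^m | m >= n}, up to length 4
words = derivable(10)
target = {"0" * n + "1" * m for n in range(5) for m in range(5) if m >= n}
assert {w for w in words if len(w) <= 4} == {w for w in target if len(w) <= 4}
```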

Back

Any PDA has an equivalent CFG.

Front

Let's recall how we used to convert from DFA to regex

• We had the paths from the DFA become pieces for a regex
• ex
• p -> r -> q where p->r if a, r->r if b, r->q if c
• ab*c

We want to convert from PDA to CFG where

• the paths of the PDA become the rules of a CFG with empty stacks at the beginning and end

If we transition from a state p to a state q

• We want a variable Apq that derives a string w s.t w gets you from p to q starting and ending w/ an empty stack.

We want a start variable Asa, which can derive everything we can read from s to a. If Asa can derive everything we read, then we have the language of the PDA.

We have three assumptions about the starting PDA:

• P only accepts inputs on an empty stack. We won't reach the accept state until our stack is empty.
• P only has one accept state
• Each transition is a push XOR pop.
• We won't allow push and pop to occur simultaneously nor can we transition unless we push or pop

Formal Definition: Without loss of generality, assume the PDA only accepts strings on an empty stack, has only one accept state, and is such that every transition either pops from the stack or pushes to it (not both).

Example:

Let's assume we have a PDA for

{0^n 1^n | n ≥ 0}

We have the starting state s and the accept state a.

We want to create a rule Asa that derives every string of the language, beginning and ending with an empty stack.

However, we need an intermediate state that will help us transition from s to a

Let's choose state z as the intermediate state.

Asa -> AszAza

However, we have a nonempty stack when we transition through z. Thus, we ignore that rule.

Next, let's choose state m as our intermediate state.

Asa -> AszAza | AsmAma 

However, we are still left with a nonempty stack. Thus, we ignore that rule.

If we think this through, we can begin and end with only a $ on our stack from state z to state n. We would want to push $ before and pop $ after these states. Thus, we have

Asa -> AszAza | AsmAma | epsilon Azn epsilon
Azn -> 0Azn1 | epsilon Amm epsilon
Amm -> epsilon

which is a CFG.

Note: Amm -> epsilon can start and end at m with an empty stack and no input.
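The surviving rules can be checked directly. Below is a sketch of a recursive membership test mirroring Azn -> 0Azn1 | Amm with Amm -> epsilon (the dead Asz/Asm branches are dropped, an assumption of this sketch):

```python
# Recursive membership test mirroring the grammar
#   Azn -> 0 Azn 1 | Amm,   Amm -> epsilon
def in_Azn(w: str) -> bool:
    if w == "":
        return True                     # Azn -> Amm -> epsilon
    return (len(w) >= 2 and w[0] == "0" and w[-1] == "1"
            and in_Azn(w[1:-1]))        # Azn -> 0 Azn 1

assert all(in_Azn("0" * n + "1" * n) for n in range(6))
assert not in_Azn("001") and not in_Azn("10")
```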

Back

Context Free Pumping Lemma

(12 cards)

How do we contradict the context free pumping lemma?

Front

Suppose A is context free with pumping length p.

Then by the context free pumping lemma, every s in A of length ≥p has a pumpable decomposition.

Let s=ab(cd)^{p}#^{p}c^{p}ef

Consider ANY decomposition  s=uvxyz with |vxy|≤p, |vy|>0.

Now, let's analyze the language:

• We want to "move around" |vxy| on the string and see where it contradicts the pumping lemma
• We want to place it in all cases where if we pump up or down, we decompose a string not in the language
• We also know v, x, and y can each be any characters that are grouped together
• One observation is that if v or y includes a, b, e, or f, we can't pump down because we would be missing one of those characters
• Another observation is that if vy includes only c's or d's, then only the (cd)'s on the left or the c's on the right duplicate
• We can't have both the left (cd)'s and the right c's be part of vxy, since we have p pound signs in the middle, violating the |vxy| ≤ p requirement
• Another observation is that if vxy is all # signs, pumping up gives more # signs than c's on the right

The key observation is that with |vxy| ≤ p and |vy| > 0, we can't pump up both the (cd)'s and the c's on the right.

1. Case 1: v or y includes the a, b, e, or f. Then uxz is missing an a, b, e, or f, so it isn't in A
2. Case 2: v or y includes some c's or d's. Then we only duplicate c's or d's to the left of the #'s, or c's to the right. One side gets extra characters, which makes it unbalanced against the other side, so uvvxyyz is not in A
3. Case 3: v and y include only #'s. Then uvvxyyz has more #'s than c' s on the right, so it's not in A

These are the ONLY three possibilities for s = uvxyz with |vxy|≤p, |vy|>0, so there does not exist a bounded, nontrivial decomposition that pumps, contradicting the pumping lemma. Hence A is not context free (it is context sensitive).
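For a small illustrative p, the case analysis can be confirmed exhaustively. This is a sketch: in_A and pumps are hypothetical helpers, and the infinite "for all i" condition is approximated by testing i in {0, 2, 3}.

```python
def in_A(w: str) -> bool:
    """Membership in A = {ab (cd)^n #^n c^n ef | n >= 0}."""
    return any(w == "ab" + "cd" * n + "#" * n + "c" * n + "ef"
               for n in range(len(w) + 1))

def pumps(s: str, p: int) -> bool:
    """True iff SOME decomposition s = uvxyz with |vxy| <= p and |vy| > 0
    keeps u v^i x y^i z in A for all tested i."""
    n = len(s)
    for a in range(n + 1):                         # u = s[:a]
        for b in range(a, min(a + p, n) + 1):      # v = s[a:b]
            for c in range(b, min(a + p, n) + 1):  # x = s[b:c]
                for d in range(c, min(a + p, n) + 1):  # y = s[c:d], |vxy| <= p
                    if b == a and d == c:
                        continue                   # enforce |vy| > 0
                    u, v, x, y, z = s[:a], s[a:b], s[b:c], s[c:d], s[d:]
                    if all(in_A(u + v * i + x + y * i + z) for i in (0, 2, 3)):
                        return True
    return False

p = 3                                              # small illustrative length
s = "ab" + "cd" * p + "#" * p + "c" * p + "ef"
assert in_A(s) and not pumps(s, p)                 # no decomposition pumps
```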

Back

Define context free pumping lemma.

Front

If B is context free, then there exists a pumping length p > 0 s.t

• for all string s in the language of length at least p
• there exists a decomposition s = uvxyz s.t
• the length of vxy (our window) is less than or equal to p
• Note: this window need not occur at the beginning of the string, but it fits within some fixed p-sized window
• the length of vy is greater than 0, and
• Note: at least one of v or y is nontrivial (nonempty)
• x can be trivial
• one of v or y has to actually exist
• v and y don't have to sit next to each other; x lies between them
• for all i ≥ 0,
• s = u v^i x y^i z is in our language
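A concrete (forward) instance of the definition, sketched for the context free language L = {0^n 1^n | n ≥ 0} with an illustrative p = 2: for s = 0^p 1^p we can take v and y to be the innermost 0 and 1.

```python
def in_L(w: str) -> bool:
    """Membership in L = {0^n 1^n | n >= 0}."""
    n = len(w) // 2
    return w == "0" * n + "1" * n

p = 2
u, v, x, y, z = "0" * (p - 1), "0", "", "1", "1" * (p - 1)
assert len(v + x + y) <= p and len(v + y) > 0   # the two side conditions
# pumping up or down keeps the string balanced, hence in L
assert all(in_L(u + v * i + x + y * i + z) for i in range(5))
```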

Back

Example of a forward application of the context free pumping lemma

Front

We have a language B where

B = {ab (cd)^n # c^n ef | n ≥ 0}

Like how you give an FA to prove a regular language, you give a CFG to prove this language is context free.

We can also give a PDA, but the CFG has a better connection with the context free pumping lemma.

With our CFG, we start the following

• A start variable S with an early component A that occurs once, a middle component C that has the matching going on, and a late component F that occurs once.
S -> ACF
• We want A to go to our first two characters and F to our last two characters.
A -> ab
F -> ef
• Our C is generated by recursion where we keep the C on the RHS and with cd on the left of C and c on the right of C. Eventually, our recursion will stop with a # sign
C -> cdCc | #
• If we want to make our CFG look regular, we can generate our d's separately
C -> cDc | #
D -> dC

Now that we have setup our CFG, we can start by figuring out the pumping length p.

Since the smallest pumpable string we can generate is s = abcd#cef (one copy of each repeating piece), we have a pumping length p = 8.

• In other words, we have the repetition going on with cd and c. Since we need a copy of each, we also need the other characters around in order to pump. Thus, our smallest pump-able string in our language is
• ab (b/c we always have a copy to begin with) [u]
• cd [v]
• # [x]
• c [y]
• ef [z]

The pieces that are gonna serve as our context free repetition are the cd on the left and the c on the right. We want to pump these strings, so we'll call them v and y respectively.

The rest we can fill and make sure our string is fully generated.

The following correspond to each case

• i  = 1 case
• s will always correspond to i = 1
• s = uvxyz
• s = abcd#cef
• i = 0
• uxz corresponds to i = 0
• uxz = ab#ef
• i = 2
• uvvxyyz corresponds to i = 2
• uvvxyyz = abcdcd#ccef
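The three cases above can be checked mechanically. A sketch, where in_B is a hypothetical membership helper for B:

```python
def in_B(w: str) -> bool:
    """Membership in B = {ab (cd)^n # c^n ef | n >= 0}."""
    return any(w == "ab" + "cd" * n + "#" + "c" * n + "ef"
               for n in range(len(w) + 1))

u, v, x, y, z = "ab", "cd", "#", "c", "ef"
assert in_B(u + x + z)                    # i = 0: ab#ef
assert in_B(u + v + x + y + z)            # i = 1: abcd#cef
assert in_B(u + v * 2 + x + y * 2 + z)    # i = 2: abcdcd#ccef
```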

Here's the proof idea, by example picture:

We visualize a parse tree that generates this string so we can see where the pumping is occurring.

Fundamentally, every string in a context free language has some parse tree associated with its context free grammar.

We have a parse tree that derives A, C, and F from S. A generates ab and F generates ef.

Now, we want to recreate the parse tree that derives the string s.

We would derive cDc.

Then, we would derive D from cDc with D -> dC

Then, we would derive C from dC with pound.

That is how we get the string for s.

We can break it down and see how we get the five pieces.

Now since we have the parse tree, we notice how we have two C's. We know that D can generate C. Thus, we can make another copy of C generating D and D generating C. This is how we pump up to i = 2.

If we can pump up, we can also pump down to i = 0.

We simply remove the section where C generates D and instead, replace with a C that generates #.

Back

Definition of the regular pumping lemma

Front

If language A is regular, then

• there exists a pumping length p > 0. s.t.
• for all string s where the length of s is ≥ p, there exists a decomposition s = xyz s.t
• the length of xy is less than or equal to p,
• the length of y > 0, and
• for all i ≥ 0, x y^i z is in the language A

Picture Proof:

If our language is regular, then some DFA recognizes it.

If we start at a starting state, we would move through all the states with a string long enough to pass through all and reach an accept state.

So if that string is at least as long as the number of states of the machine, then its computation history visits more states than the machine has (we pass through states at steps zero through n). In other words, by the Pigeonhole Principle, one of those states is going to be repeated at some point in the first p characters.

We find that point of repeat, then call the first component before getting to the cycle x, then call the cycle y, and then call the last component z.

Example application (forward direction):

We assume our language is regular. Then, we just illustrate how if you take a long enough string, then there is a cycle somewhere.

We'll take for example the language A

A = ab(cd)*ef

We say our first characters have to be an a and a b, then have as many alternating c's and d's as we want, and then end with an e and an f.

Here's an FA for this.

This FA is an NFA that captures all the information we need, and we can confirm the pumping length is p = 6.

Thus, our smallest string we can take with the pumping lemma is

• s = abcdef

We can then say the following

• x = ab
• y = cd
• z = ef

Now, for i = 0, we have s =xz

• xz = abef

which can take us to our accept state.

If i = 2, we have s = xyyz

• xyyz = abcdcdef

which can take us to our accept state.

So we have our y cycle in general, which corresponds to our cd cycle in our NFA.
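The cycle can be verified with a quick sketch using Python's re module:

```python
import re

# The regular language A = ab(cd)*ef, with x = "ab", y = "cd", z = "ef"
A = re.compile(r"ab(cd)*ef")
x, y, z = "ab", "cd", "ef"

# pumping the cd cycle any number of times stays in the language
assert all(A.fullmatch(x + y * i + z) for i in range(5))
assert A.fullmatch("abef")            # i = 0
assert not A.fullmatch("abcdcef")     # breaking the cycle leaves the language
```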

Thus, the pumping lemma's conditions are confirmed for our regular language. (Note: pumping alone does not prove a language regular; the FA does that.)

Back

Given a language L where

L = {ab(cd)^n # c^n ef | n ≥ 0}

show that it contradicts the regular pumping lemma.

Front

Suppose L is regular with pumping length p.

Then by regular pumping lemma, every s in L of length ≥p has a pumpable decomposition.

We're gonna test if that's true.

Let s  = ab(cd)^{p}#c^{p}ef

By pumpability, there must exist some decomposition that satisfies the technical conditions and that pumps.

Consider ANY decomposition s =xyz with |xy| ≤p, |y|>0.

We want to show that this decomposition doesn't pump.

Note: Normally, we like to have a string s=a^p... so we can argue that necessarily y=a^k for k>0,

but this time, we don't know for sure what y looks like, so we will enumerate all the cases.

Note: Your key idea can help give you partial credit.

The key observation is that if |xy|≤p, then y is to the left of #.

1. Case 1: y includes the a or the b
• In this case, xz is missing an a or a b, so it's not in the language.
• (We can't pump down)
2. Case 2: y doesn't include a or b, so it's entirely c's and d's to the left of the #
• In this case, xyyz has more than p c's or d's to the left of the # and only p c's on the right, so it's not in L.
• (We can't pump up)

These are the ONLY two possibilities for s = xyz with |xy|≤p, |y|>0, so there does not exist a bounded,

nontrivial decomposition that pumps, contradicting the pumping lemma. Hence, L is nonregular.
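For a small illustrative p, the two cases can be confirmed exhaustively. A sketch, where in_L and pumps are hypothetical helpers and the infinite "for all i" is approximated by i in {0, 2}:

```python
def in_L(w: str) -> bool:
    """Membership in L = {ab (cd)^n # c^n ef | n >= 0}."""
    return any(w == "ab" + "cd" * n + "#" + "c" * n + "ef"
               for n in range(len(w) + 1))

def pumps(s: str, p: int) -> bool:
    """True iff SOME s = xyz with |xy| <= p and |y| > 0
    keeps x y^i z in L for all tested i."""
    for j in range(min(p, len(s)) + 1):             # x = s[:j]
        for k in range(j + 1, min(p, len(s)) + 1):  # y = s[j:k], nonempty
            x, y, z = s[:j], s[j:k], s[k:]
            if all(in_L(x + y * i + z) for i in (0, 2)):
                return True
    return False

p = 3                                    # small illustrative pumping length
s = "ab" + "cd" * p + "#" + "c" * p + "ef"
assert in_L(s) and not pumps(s, p)       # no bounded decomposition pumps
```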

Back

If a language pumps with a context free pumping lemma, is that a good reason to say L is a CFL?

Front

No. You use the pumping lemma to prove a language is not context free.

Back

Prove this language is not context free

D = { ww | w is a binary string }
e.g 110110
Front

Let's try s = 0^p 0^p.

• But any decomposition with |vy| even pumps (the pumped string is still an even-length run of 0s, which is of the form ww)

Let's try s = 1 0^p 1 0^p

Let's try s = 1^p 0^p 1^p 0^p

• If our v is in the first 0-block and our y is in the second 0-block, then x contains 1^p, which is too wide, violating |vxy| ≤ p
• If our v and y are 1's that are part of the first 1^p, then
• uvvxyyz = 1^(p+k) 0^p 1^p 0^p, which offsets our midpoint and moves some 0s to the right of the midpoint; thus it is not of the form ww and not in our language

In general, there are two cases for s = uvxyz with |vxy|≤p, |vy|>0:

1. Case 1: vxy straddles the midpoint. Pumping down removes 0s from Block 2 or 1s from Block 3:

uxz = 1^p 0^k1 1^k2 0^p with k1 < p or k2 < p, which is not in the language.

2. Case 2: vxy is entirely in the first half or the second half. Pumping up shifts the midpoint, so the two halves no longer match and the result is not of the form ww.
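Both cases can be illustrated concretely. A sketch, using a small illustrative p and a hypothetical helper is_ww:

```python
def is_ww(s: str) -> bool:
    """Membership in D = {ww | w a binary string}."""
    h = len(s) // 2
    return len(s) % 2 == 0 and s[:h] == s[h:]

p = 4                                        # small illustrative pumping length
s = "1" * p + "0" * p + "1" * p + "0" * p
assert is_ww(s)

# Case 1: vxy straddles the midpoint -- pumping down breaks Blocks 2 and 3
u, v, x, y, z = "1" * p + "0" * (p - 1), "0", "", "1", "1" * (p - 1) + "0" * p
assert not is_ww(u + x + z)                  # 1^p 0^(p-1) 1^(p-1) 0^p

# Case 2: vxy inside the first half -- pumping up shifts the midpoint
u, v, x, y, z = "", "1", "", "", "1" * (p - 1) + "0" * p + "1" * p + "0" * p
assert not is_ww(u + v * 2 + x + y * 2 + z)  # 1^(p+1) 0^p 1^p 0^p
```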

Back

Theorem 2.34: The Pumping Lemma for context-free languages

Front

If A is a context free language, then there is a number p (the pumping length) where, if s is any string in A of length at least p, then s may be divided into 5 pieces s = uvxyz satisfying the conditions

1. For each i ≥ 0, u v^i x y^i z is in A
2. |vy| > 0
3. |vxy| ≤ p

A simplified version:

For any context free language A,

there exists a pumping length p where

every string s of length ≥ p has a decomposition s = uvxyz s.t.

1. u v^i x y^i z is a string in A for each i ≥ 0
2. v or y is nonempty
3. |vxy| ≤ p

Even more simplified:

In a context free language, every string past a certain threshold length can be 2-sided pumped to reach any number of repetitions.

Contrapositive:  If for any threshold there is a long enough string in A that cannot be (1-/2-sided) pumped then A is not regular/context-free.

Let's clarify what's going on in the pumping lemma.

• When we pump, we are taking i copies of the pumpable pieces
• vxy is bounded
• v or y is nontrivial

To prove the contrapositive, we pick a particular long string and show that every decomposition of it that is

• bounded (|vxy| ≤ p)
• and nontrivial (|vy| > 0)

has some value i where it doesn't pump.

This means our language is not regular/context free.

Back

What is the general template for an argument that language A is context-sensitive?

Front

Suppose A is context free with pumping length p. Pick a specific string s in A of length ≥ p. Consider ANY decomposition s = uvxyz with |vxy| ≤ p and |vy| > 0, enumerate the cases, and show each one fails to pump for some i ≥ 0. This contradicts the pumping lemma, so A is not context free (it is context sensitive).

Back

If a language pumps with a context free pumping lemma, is that a good reason to say L is a CFL?

Front

No.

Back

How do you prove a language is not context free (context sensitive)?

Front

We contradict the context free pumping lemma to show the language is not a context free language.

Back

Chomsky Hierarchy

(13 cards)

Why is

L_r = {a^n b^m | n,m ≥ 0}

regular and context free?

Front

L_r is regular because we can generate a regex with

• a*b*

that satisfies this language. We also know all regular languages are context free.

Back

Why is L_c = {a^n b^n | n≥0} context free but not regular?

Front

We need to show both.

1. Give a CFG or a PDA to show L_c is context free:

S -> aSb | epsilon

2. Contradict the regular pumping lemma to show L_c is not regular: pumping unbalances the number of a's and b's.

Back

How do we know regular languages are closed?

Front

We have the following closure properties:

• We can write A1 ° A2
• We can write A1 U A2
• We can write A1*

for all A1,A2 that are regular languages to show a language A is regular.

Back

What are the closure properties for context free languages?

Front

If we have two context free languages A1 and A2, we can use the following properties to show a language A is context free:

• A1 U A2
• A1 ° A2
• A1*
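For instance, closure under union has a one-line construction: add a fresh start variable pointing at both grammars' start variables. A sketch, assuming the two grammars' variable sets are disjoint and rules are stored as dicts from variable to list of right-hand sides:

```python
# Union of two CFGs: fresh start rule S -> S1 | S2
def union_cfg(rules1: dict, s1: str, rules2: dict, s2: str):
    merged = {**rules1, **rules2}     # disjoint variable sets assumed
    merged["S"] = [s1, s2]            # fresh start variable
    return merged, "S"

# e.g. a grammar for {a^n b^n} U {c^n d^n}
g1 = {"X": ["aXb", ""]}               # X generates a^n b^n
g2 = {"Y": ["cYd", ""]}               # Y generates c^n d^n
rules, start = union_cfg(g1, "X", g2, "Y")
assert start == "S" and rules["S"] == ["X", "Y"]
assert "X" in rules and "Y" in rules  # both sub-grammars are kept
```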
Back

Are all regular languages context free?

Front

Yes, we can build a CFG or a PDA for any regular language.

Back

How do we prove a language is not context free (context sensitive) ?

Front

We contradict the context free pumping lemma to show the language is not a context free language.

Back

Is L2 = {a^n b^m c^n d^m | n,m ≥ 0}
a CFL b/c we have two things that match?

Front

If we attempted a PDA and we pushed one symbol onto the stack for the a's we read and another symbol for the b's we read, we couldn't pop the first symbols from the stack when we read c's b/c the other symbols are in the way. Thus, we have matching that overlaps. L2 is not a CFL.

However, we need pumping lemma contradiction

Take s = a^p b^p c^p d^p

For any s = uvxyz with |vxy|≤p and |vy|>0, we have one of the following

• Case 1: v or y include an a
• Case 2: v or y include a b
• Case 3: v or y include a c
• Case 4: v or y include a d

Since |vxy| ≤ p, v and y touch at most two adjacent blocks, so in every case pumping unbalances the a's against the c's or the b's against the d's, and the result is not in L2.

Back

How do we prove a language is not context free?

Front

We contradict the context free pumping lemma to show the language is not a context free language.

Back

How do we prove a language A is regular?

Front
• We give a DFA/NFA/regex to show A is a regular language
Back

Why is A = { a^n b^n c^n | n ≥ 0} neither regular nor context free?

Front

We contradict the CF pumping lemma to prove A is neither regular nor context free.

There is an imbalance of a's, b's, and c's b/c we can't keep an equal amount of each character when pumping.

Back

How do we prove a language is context free but not regular?

Front

We give the following:

• Give a CFG or a PDA to show the language is context free
• Contradict the regular PL to show B is not regular

Back

Where does

B = {a^n b^n c^m | n,m ≥ 0}

and

C = {a^n b^m c^n | n,m ≥ 0}

fall in the Chomsky hierarchy?

Front

B is context free b/c a PDA recognizes it.

B is not regular b/c it fails the regular PL.

C is context free b/c we can generate with a CFG

C is not regular b/c it fails the regular PL.

Back