Consider the language
\[ L = \{ \texttt{a}^i\texttt{b}^j\texttt{c}^i\texttt{d}^j \ |\ i,j\geq 0 \}\ .\]
In other words, \(L\) is the language of strings consisting of \(\texttt{a}\)’s followed by \(\texttt{b}\)’s followed by \(\texttt{c}\)’s followed by \(\texttt{d}\)’s, in which the number of \(\texttt{a}\)’s matches the number of \(\texttt{c}\)’s and the number of \(\texttt{b}\)’s matches the number of \(\texttt{d}\)’s. Use the pumping lemma for context-free languages (Theorem 2.34 on page 125) to show that \(L\) is not a context-free language:
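As a sanity check on the definition (not part of the proof), membership in \(L\) can be sketched in Python; the regex simply enforces the \(\texttt{a}^*\texttt{b}^*\texttt{c}^*\texttt{d}^*\) block order before the counts are compared:

```python
import re

def in_L(s: str) -> bool:
    """Membership test for L = { a^i b^j c^i d^j : i, j >= 0 }."""
    m = re.fullmatch(r"(a*)(b*)(c*)(d*)", s)
    if m is None:
        return False  # wrong symbols, or blocks out of order
    a, b, c, d = (len(g) for g in m.groups())
    return a == c and b == d

# e.g. "aabccd" (i = 2, j = 1) and "" (i = j = 0) are in L,
# while "abdc" and "aabcd" are not.
```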
Note: This question is just for Dr. Tate’s section (section 2). Dr. Tong will provide an alternative question for her section. Consider the following CFG:
\[ \begin{array}{l} S \rightarrow T\dashv \\ T \rightarrow \texttt{(}T\texttt{)}\ |\ TT\ |\ \varepsilon\\ \end{array} \]
This CFG generates exactly the same language as the CFG in Example 2.55 of the textbook: the set of all strings of properly nested parentheses. Example 2.55 showed that the grammar given there is a DCFG. For this problem, show that the grammar above is not a DCFG. To do this, draw the full DFA \(DK\) and show precisely where it fails the DK-test conditions.
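To make the target language concrete, proper nesting (the strings derivable from \(T\), with the endmarker \(\dashv\) omitted) can be checked with a simple depth counter; this sketch is only an illustration of the language, not of the DK-test:

```python
def balanced(s: str) -> bool:
    """True iff s is a properly nested string of parentheses."""
    depth = 0
    for ch in s:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:      # a ')' with no matching '('
                return False
        else:
            return False       # only parentheses are allowed
    return depth == 0          # every '(' was eventually closed
```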
Consider Turing machine \(M_1\) from Example 3.9 (page 173). Give the sequence of configurations that \(M_1\) enters when started with each of the following input strings (note that this is similar to Exercise 3.2 in the book, just with different inputs — see the provided solution to 3.2a for guidance):
01
0#0
0#1#0
01#00
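Recall that \(M_1\) decides \(\{w\texttt{\#}w \mid w\in\{\texttt{0},\texttt{1}\}^*\}\) by zig-zagging across the \(\texttt{\#}\) and crossing off matched symbols. The following Python sketch mimics only \(M_1\)'s accept/reject behavior (it does not produce the configuration sequences the problem asks for, but it tells you which traces should end in the accept state):

```python
def accepts(s: str) -> bool:
    """High-level mimic of M_1: accept exactly the strings of the
    form w#w with w over {0,1}.  M_1 itself zig-zags across the tape,
    crossing off one matched pair of symbols per pass."""
    if s.count("#") != 1:
        return False           # M_1 rejects unless there is exactly one '#'
    left, right = s.split("#")
    if any(ch not in "01" for ch in left + right):
        return False
    return left == right       # symbol-by-symbol matching succeeds

# Of the four inputs above, only "0#0" is accepted.
```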
(Hint: For this problem and the next one, start by fully understanding how the Turing machine in Example 3.9 works!) Consider the language
\[ L = \{ \texttt{a}^n\texttt{b}^n\texttt{c}^n\ |\ n\geq 0\} \]
This was the first language that the textbook proved is not context-free using the pumping lemma, in Example 2.36 on page 128. Since \(L\) is not context-free, it can’t be recognized by any PDA. However, you can create a TM that recognizes \(L\), which is what you’ll do for this problem.
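A common TM strategy for this language (in the spirit of Example 3.7's crossing-off technique) is: check the input has the shape \(\texttt{a}^*\texttt{b}^*\texttt{c}^*\), then repeatedly cross off one \(\texttt{a}\), one \(\texttt{b}\), and one \(\texttt{c}\) until no \(\texttt{a}\)'s remain, rejecting if the counts ever disagree. The Python sketch below simulates that idea on a list standing in for the tape; it is a guide for your own design, not a formal TM:

```python
import re

def decide(s: str) -> bool:
    """Crossing-off sketch for L = { a^n b^n c^n : n >= 0 }."""
    if not re.fullmatch(r"a*b*c*", s):
        return False                    # symbols out of order: reject
    tape = list(s)
    while "a" in tape:
        for sym in "abc":               # cross off one of each symbol
            if sym in tape:
                tape[tape.index(sym)] = "x"
            else:
                return False            # ran out of b's or c's: reject
    # leftover b's or c's mean the counts were unequal
    return "b" not in tape and "c" not in tape
```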
Give an implementation-level description of a Turing machine that decides language \(L\) (note: “implementation-level” means written-out steps, like at the beginning of Example 3.7, Example 3.11, or in the textbook’s provided solution to Exercise 3.8a).
Draw out a full Turing machine description for a machine that decides language \(L\).
In some situations, we want to run multiple Turing machines, say \(M_1\) and \(M_2\), on the same input \(w\). However, what if \(M_1\) destroys the original input (like machine \(M_1\) in Example 3.9)? How do you run \(M_2\) on the original input \(w\) if \(w\) has been destroyed by running \(M_1\)? This causes a problem for the (incorrect!) solution for Exercise 3.15a given in the textbook.
Let’s correct this problem by creating a TM that makes a copy of input \(w\) first, so \(M_1\) can work on the copy and leave the original input intact. In particular, we want to create a TM that starts with string \(w\in\{\texttt{0},\texttt{1}\}^*\) on its tape, and when it finishes running has \(w\texttt{@}w\) on its tape with the tape head positioned before the first character in the copy of \(w\) (\(\texttt{@}\) is just a new tape symbol).
Design a TM that does this. Start by sketching out how it works with an implementation-level description of the steps, and then draw out the full TM. Note that this machine is a “subroutine” used as part of a larger machine, so it has no “accept” or “reject” states, only a “finished” state.
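One workable strategy, prototyped here in Python with a list standing in for the tape: write \(\texttt{@}\) after \(w\), then repeatedly mark the leftmost unmarked symbol (using primed tape symbols \(\texttt{0}'\), \(\texttt{1}'\)), carry it past \(\texttt{@}\) to the end of the tape, and finally unmark everything. The marked-symbol names and the returned head position are illustrative choices, not the only correct design:

```python
def copy_input(tape: list) -> int:
    """Given tape = list(w) for w in {0,1}*, rewrite the tape as w@w
    and return the index where the head finishes (the first symbol of
    the copy).  Mimics the mark / carry / unmark TM strategy."""
    n = len(tape)
    tape.append("@")                          # write the separator after w
    for i in range(n):
        sym = tape[i]
        tape[i] = {"0": "0'", "1": "1'"}[sym]  # mark the symbol being copied
        tape.append(sym)                       # carry it to the end of the tape
    for i in range(n):
        tape[i] = tape[i][0]                   # final pass: restore 0'/1' to 0/1
    return n + 1                               # head rests just after '@'

# e.g. starting from list("01"), the tape ends as list("01@01")
# with the head at index 3, the first symbol of the copy.
```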