
G52MAL Coursework Support Page

Contents

  • Course Help
  • Coursework Feedback
  • Common Mistakes
  • Hints and Tips
  • Frequently Asked Questions

Course Help

If you're having trouble with the coursework, or understanding the lectures, there are a few things you can try:
  1. Ask your tutor either during or after the tutorial.
  2. Ask the lecturer either during (if you are brave) or after the lecture (if you are a coward).
  3. Read the appropriate section of the lecture notes.
  4. Consult the frequently asked questions, below.
  5. Get a copy of the recommended textbook Introduction to Automata Theory, Languages and Computation (2nd or 3rd Edition). Read the relevant section and see if that helps. See the main web-page for the page references to each topic.
  6. If your question does not give away the answer to a coursework question, then post it on the G52MAL web-forum.

Coursework Feedback

When marked, your coursework will be available for collection in the tutorial. Please make sure that you pick up the coursework in person, since the tutor may have questions about your answers. Failing to address them satisfactorily may cause you to lose marks.

Model solutions for each week's coursework will be released shortly after the hand-in deadline. I suggest you look at these.

Common Mistakes

  1. Do not use red pen; that is reserved for marking. If you want to use different colours to clarify your answer, use black and blue (or even green if need be).
  2. Do not attach a copy of the question sheet with your answers.
  3. Make sure you put the correct module code on the coversheet of your solutions. An incorrect code causes a great deal of hassle when we try to track down missing scripts.
  4. The most common error by far is omitting the marking of initial and accepting states. In a transition table the initial state should be marked with an arrow, and the accepting states should be marked with a star. In a transition diagram, the initial states should have unlabelled arrows pointing to them, while the accepting states should have either an additional circle around them, or an unlabelled arrow pointing away from them.
  5. Be careful of the distinction between the alphabet (Σ) and the set of words over that alphabet (Σ*). Σ is a set of symbols, whereas Σ* is a set of words.
  6. Similarly, be careful of the distinction between the transition function (δ) and the extended transition function (δ̂). Do not write one when you mean the other.
  7. ε (the empty word) is not a symbol in the alphabet; it is a special symbol that denotes the word of zero length. Zero is an even number, and consequently ε is a word with an even number of symbols.
  8. Do not confuse the empty word (denoted ε) with the empty set (denoted {} or ∅). The language of the empty regular expression (also denoted ∅) is the empty language (∅). This is not the same as {ε} (which is a language consisting of one word, that word being the empty word).
  9. For all alphabets Σ, it is always the case that ε ∈ Σ*. However, ε may or may not be an element of any particular language, depending upon the language definition.
  10. If w is a word, then wε, εw, and w all denote the same word. When concatenating any word w with the empty word ε, the result is w. You should always perform such reductions when enumerating words.
  11. If you have multiple DFAs, then when you use a transition function (δ) (or other component of the DFA) you need to specify which one you are referring to by using the name of the DFA as a subscript. For example, if your DFA was called D, you would refer to its transition function as δD. The same principle applies to NFAs, CFGs, PDAs, and so forth.
  12. A DFA has a transition for every symbol in the alphabet from every state. It also has precisely one start state. If this is not the case, then it is not a DFA. (And if you're supposed to be constructing a DFA then you've done something wrong. Most probably, you haven't read FAQ #3 below.)
  13. There is a lot of confusion between transition tables for NFAs and DFAs, and the subset construction.

    A transition table represents the transition function δ by having the state input to the function in the left-most column, the symbol input in the top row, and the output state(s) in the body of the table.

    Now, δ for DFAs takes a single state as input and produces a single state as output. Therefore the entries in the left-most column should be single states, as should the entries in the body of the table. Whereas for an NFA, the input is a single state (and symbol), but the output is a set of states. Consequently, the entries in the left-most column should be single NFA states, but the entries in the body of the table should be sets of NFA states.

    When performing the subset construction, you are constructing a transition table for a new DFA that is equivalent to the NFA you started with. Thus each entry (left column and body) should be a single state for it to be a valid DFA transition table. However, each state in this DFA represents a set of states in the NFA, and is named after that set of states. Thus each entry in the table will be named as a set of NFA states, and one of these states will probably be the empty set.
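
    As an optional illustration, the difference between the two transition functions, and the subset construction step, can be sketched in Haskell (the states, the example transitions, and the use of lists to represent sets are all assumptions made for this sketch, not part of the coursework):

    import Data.List (nub, sort)

    type State  = Int
    type Symbol = Char

    -- A small, made-up NFA transition function over states 0 and 1:
    -- one state (and a symbol) in, a *set* of states out.
    deltaN :: State -> Symbol -> [State]
    deltaN 0 'a' = [0, 1]
    deltaN 1 'b' = [1]
    deltaN _ _   = []      -- no transition: the empty set of states

    -- Subset construction: each DFA state is a set of NFA states, and its
    -- transition on a symbol is the union of the NFA transitions of its members.
    -- One state (a set of NFA states) in, one state out, as a DFA requires.
    deltaD :: [State] -> Symbol -> [State]
    deltaD qs x = sort (nub (concat [ deltaN q x | q <- qs ]))

    -- For example: deltaD [0,1] 'a' == [0,1], and deltaD [] 'b' == [].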

  14. Concatenation of languages is not a commutative operation. For example:
    {a}{b,c} = {ab,ac}
    {a}{b,c} ≠ {ba,ca}
    {b,c}{a} = {ba,ca}
    
  15. There seems to be some confusion over * for regular expressions. * means zero or more iterations of the entire regular expression it applies to. Thus the language of (ab)* is {ε, ab, abab, ababab, ...}. * has higher precedence than + or concatenation. Thus a+b* = a + (b*), but a+b* ≠ (a + b)*. Similarly, ab* = a(b*), but ab* ≠ (ab)*. (ab)* is not the same as a*b* (which would be any number of a's followed by any number of b's). Similarly, (a*b*)* ≠ a*b*; however, (a*b*)* = (a + b)* (try writing out the words of both languages up to about length 2 or 3 and you'll see that this is the case).
  16. Be careful to distinguish between regular expression symbols and words in a language. Languages are written as sets of words, and use set notation. Regular expressions are constructed from +, * and parentheses. Do not use set notation for regular expressions, or regular expression notation inside a set. We highlight the alphabet symbols that appear in a regular expression by writing them in bold or by underlining them. We do this to distinguish between them and alphabet symbols in a word, so do not do the same for words in a language as we then lose this distinction.
  17. The symbol for the "directly derives" relation (⇒) is not the same symbol as the arrow used to define CFG productions (→). Make sure you use ⇒ when you mean "directly derives".
  18. When drawing derivation trees, don't repeat symbols. The next level down the tree should have replaced a nonterminal with one or more terminal symbols (or a single ε), and should not be repeating any symbols (except due to recursion in the grammar of course). In particular, repeating parentheses at multiple levels of the tree is a very common mistake. Parentheses are (in many grammars) terminal symbols with no special status over any other, and should be treated as such.

Hints and Tips

  1. When drawing transition diagrams, give some thought to how you lay out the states. Diagrams with a minimal number of crossed arrows are generally clearer, making it easier to see how the automaton functions.
  2. When performing a formal derivation (or other calculation), lay it out one step per line, with a hint in curly braces between each line explaining what step has been taken. This hint could be an equation, or a reference to an equation (or property) by name or number. If using numbers, you need to have listed and numbered the equations you are referring to somewhere.
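
    For example, a calculation of δ̂ for a hypothetical DFA D (assuming δ̂ is defined by δ̂(q, ε) = q and δ̂(q, xw) = δ̂(δ(q, x), w), and that δD(q0, a) = q1 and δD(q1, b) = q2) might be laid out as:

      δ̂D(q0, ab)
    =   { definition of δ̂, second case }
      δ̂D(δD(q0, a), b)
    =   { δD(q0, a) = q1 }
      δ̂D(q1, b)
    =   { definition of δ̂, second case }
      δ̂D(δD(q1, b), ε)
    =   { δD(q1, b) = q2 }
      δ̂D(q2, ε)
    =   { definition of δ̂, first case }
      q2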

Frequently Asked Questions

Index

  1. Do all languages contain the empty word?
  2. How do we construct a DFA when we have more than one initial state in the NFA?
  3. When we have a set of states in an NFA from which we can go nowhere, how do we construct an equivalent DFA?
  4. I've managed the transition table and transition diagram, how do I now construct the DFA?
  5. Do we need to produce minimal NFAs?
  6. How do we construct DFAs from a language description?
  7. How do we apply the extended transition function when we have more than one initial state?
  8. When using the Pumping Lemma, is it enough to prove it for one value of k?
  9. When proving the Pumping Lemma, what if there is a cycle within 'y'?
  10. I can't get rid of epsilon when deriving a word in the grammar.
  11. Why does the `directly derives' relation produce nonterminal symbols?
  12. How do I enforce correct associativity of binary operators in a CFG?
  13. Why does my PDA only accept the empty word?
  14. Why can't we always use limited backtracking for parsing?
  15. Will the module topics be distributed over the questions in the same way as in previous years?

FAQ

  1. Do all languages contain the empty word, or only if stated in the definition of the language?

    Only if stated in the definition of the language. In terms of finite automata, an automaton accepts the empty word only if at least one of its initial states is also an accepting state.

  2. When using the subset construction, if the NFA has multiple initial states, how do I construct a corresponding DFA as it is only allowed one initial state?

    When using the subset construction to transform an NFA into a DFA, remember that each state in the DFA represents a set of states from the NFA. So the initial state of a DFA is the set of all initial states from the NFA.

  3. When using the subset construction, if we have a set of states in an NFA from which we can go nowhere, how do we construct an equivalent DFA?

    Each state in the DFA is a set of states from the NFA. If, for a given symbol, we can reach no state from the current set of states in the NFA, then the set of possible states is the empty set. This is still a valid set of states, however, and needs to appear as a state in the corresponding DFA. You also need to include the transitions from this state, as a DFA must have a transition from every state for every possible input symbol. However, you will notice that these transitions always point back to the state representing the empty set.

  4. When using the subset construction, I've managed the transition table and the transition diagram, but I don't know how to construct the DFA after that?

    The transition diagram (and really the transition table as well, if properly annotated with information on start and accepting states) is one representation of the DFA! Yes, one can go the last little bit and write it out as a five-tuple of the set of states, the alphabet, and so on, but there is little point (unless explicitly asked) since all (essential) information is contained in the transition diagram.
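
    For reference, that five-tuple is usually written D = (Q, Σ, δ, q0, F): the set of states, the alphabet, the transition function, the start state, and the set of accepting states, all of which can be read off an annotated transition table or diagram.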

  5. When constructing NFAs, do we need to produce minimal transition diagrams?

    Minimising NFAs is not as easy as minimising DFAs. You can (and will often be asked to) remove states from an NFA from which an accepting state cannot be reached, but this does not guarantee a minimal NFA. (It can save you a lot of work though.) However, for the purposes of this module you will not be required to minimise NFAs directly. You will only have to minimise DFAs using the table filling algorithm; so in the case of an NFA you will be asked to convert it to a DFA first.

  6. I need to construct a DFA, but all I have is a description of a language. How can I do this?

    Sometimes you can intuitively deduce a DFA from a language specification, but sometimes it is not immediately obvious how to go about it. In such cases, you will need to construct the DFA in stages, until you can identify which states can be merged.

    1. Start by considering the empty word. You can create an initial state, and mark it as accepting or rejecting depending upon whether the empty word should be accepted by the language.
    2. Then create a new state for each symbol in the alphabet, with an arrow to each from the initial state, bearing the appropriate symbol. You should determine if each of these states should be accepting or not, and label them as such.
    3. Then repeat step (2) but for each of the newly created states, adding transitions from them for every possible symbol, to a new set of states.
    4. You should continue this process (which will lead to an exponential increase in the number of states for all alphabets of more than one symbol) until you deduce which states in the DFA are equivalent. This will require a moment of inspiration, and is by no means easy for many languages.
    5. Merge equivalent states in the DFA together.
    6. If there are no missing transitions then you have constructed a DFA, and can then minimise it intuitively, or formally using the table filling algorithm.
    7. If there are still missing transitions, then you should continue to add new states for these transitions, and then merge states together, until there are no missing transitions. If you are unable to do this, then the language may not be regular.
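
    As a small illustration of where this process can end up, consider (as an assumed example, not a coursework language) the language of words over {a, b} containing an even number of a's. The procedure above, followed by merging equivalent states, leads to a two-state DFA, which could be encoded in Haskell as:

    data Q = Even | Odd deriving (Eq, Show)   -- parity of the number of a's read so far

    delta :: Q -> Char -> Q
    delta Even 'a' = Odd
    delta Odd  'a' = Even
    delta q    _   = q                        -- a 'b' does not change the parity

    accepts :: String -> Bool
    accepts w = foldl delta Even w == Even    -- start state Even, accepting states {Even}

    -- For example: accepts "abba" == True, and accepts "ab" == False.
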
  7. I need to calculate δ̂ for a word in an NFA, but the NFA has more than one start state. Where do I start?

    If you compare the definitions of δ̂ for DFAs and NFAs, you will notice that in the case of a DFA it maps a state (and a word) to a state, but in the case of an NFA it maps a set of states (and a word) to a set of states.

    • δ̂D ∈ QD × Σ*D → QD
    • δ̂N ∈ ℘(QN) × Σ*N → ℘(QN)

    As δ̂N takes a set of states as input, not a single state, it would be invalid to begin with just a single state. If the NFA only has one start state q0, you would begin with the set containing q0. Thus when an NFA has several start states, you would apply δ̂N to the set containing those start states.
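
    A (hypothetical) Haskell sketch of δ̂N, representing sets of states as lists, might look like this; deltaN stands for the single-step transition function of whichever NFA you are working with:

    import Data.List (nub)

    -- deltaHatN deltaN qs w: the set of states reachable from the set qs by reading w.
    deltaHatN :: Eq q => (q -> s -> [q]) -> [q] -> [s] -> [q]
    deltaHatN _      qs []      = qs
    deltaHatN deltaN qs (x : w) = deltaHatN deltaN (nub (concat [ deltaN q x | q <- qs ])) w

    -- With several start states, evaluate deltaHatN deltaN with the set of all of them.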

  8. When using the Pumping Lemma to prove a language is not regular, is it enough to prove it for one specific `k'?

    The pumping lemma says:

    Given a regular language L, there exists a constant natural number n such that every word w in the language with length at least n (that is, |w| ≥ n) can be split into three words, w = xyz, satisfying:

    1. y ≠ ε
    2. |xy| ≤ n
    3. ∀ k ∈ ℕ. xyᵏz ∈ L

    Notice that the third condition quantifies over all natural numbers k. Thus if the language is regular, xyᵏz ∈ L will hold for every k. So if we assume that L is a regular language, we only need to find a single value of k for which xyᵏz ∉ L to obtain a counterexample. This disproves the assumption, thereby proving that L is not regular by contradiction.

    I'd suggest trying very small values of k, such as 0 or 2, for ease of working. In most cases, any value of k should suffice. (But not k = 1, can you see why?)
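
    To see the overall shape of such a proof, here is a standard example (not one of the coursework languages): let L = {aⁱbⁱ | i ∈ ℕ}. Suppose L were regular, and let n be the constant from the pumping lemma. Choose w = aⁿbⁿ, so |w| ≥ n. In any split w = xyz with y ≠ ε and |xy| ≤ n, both x and y lie entirely within the leading a's, so y = aʲ for some j ≥ 1. Taking k = 0 gives xy⁰z = xz = aⁿ⁻ʲbⁿ, which has fewer a's than b's and so is not in L. This contradicts condition 3, so L is not regular.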

  9. When proving the Pumping Lemma, what if there is a cycle within 'y'? Then surely |xy| may be greater than n?

    When choosing x, y and z, we choose y such that it is the first cycle. By this we mean that there can be no other cycles within y, nor any cycles within x.

    So, were there to have been a cycle within y, then that is the cycle we would have chosen as the first cycle.

  10. The word I'm trying to derive shouldn't have an epsilon in it, but whenever I try and derive it I always end up with one.

    The symbol epsilon is not a symbol in the alphabet. It instead denotes the empty word, which is the word made up of zero symbols. Whenever the empty word gets concatenated to any other word (whether that be another empty word, a single symbol, or a sequence of two or more symbols), the epsilon symbol can be removed. It is only required when representing an empty word.
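
    For example, with hypothetical productions S → aAb and A → ε, we have S ⇒ aAb ⇒ ab: replacing A by the empty word leaves just ab. Write ab, not aεb.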

  11. The `directly derives' relation would appear to allow words containing nonterminals, but isn't it supposed to only produce words that are accepted by the grammar (terminals only)?

    The "directly derives" relation ⇒ means derived in exactly one production step. It is a relation from a string made up of terminals and nonterminals to another string made up of terminals and nonterminals. This is expressed mathematically as:

    ⇒ ⊆ (N ∪ T)* × (N ∪ T)*

    So A ⇒ B is valid if we can derive B from A in one step by applying a production rule. A string is derivable if we can derive it from the start symbol by applying ⇒ zero or more times (written ⇒*). However, just because a string is derivable doesn't mean it is a word in the language. Only derivable strings consisting only of terminal symbols are words in the language.
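
    For example, with the (hypothetical) grammar S → aSb | ε, we have S ⇒ aSb ⇒ aaSbb ⇒ aabb. The intermediate strings aSb and aaSbb are derivable, but they contain the nonterminal S and so are not words in the language; aabb, consisting of terminals only, is.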

  12. How do I enforce correct associativity of binary operators in a CFG?

    Generally, if we say a binary operator is associative, we mean that

    ∀ a b c. a ⊗ (b ⊗ c) = (a ⊗ b) ⊗ c

    However, when dealing with CFGs, the term is used with a slightly different meaning. We describe a binary operator as associating in a certain direction, based on the structure of the derivation for bracketless expressions involving multiple uses of that operator. This is best seen by example. Consider the expression

    a ⊗ b ⊗ c

    We say that ⊗ associates to the left if we want to consider the expression to have the structure

    (a ⊗ b) ⊗ c

    and that ⊗ associates to the right if we want to consider the expression to have the structure

    a ⊗ (b ⊗ c)

    We can also describe binary operators as non-associative if we desire bracketless expressions involving multiple uses of the operator to be invalid. For example,

    a ≤ b ≤ c

    When it comes to writing a grammar to enforce these associativity directions, they generally have the following forms:

    • For non-associative binary operators:

      R → E ≤ E | E

    • For binary operators that associate to the left:

      E → E ⊕ T | T

    • For binary operators that associate to the right:

      T → F ⊗ T | F

    To understand why this is the case, try drawing some derivation trees for simple expressions. First try to derive them with the desired structure, then try to derive them such that they associate in the other direction. You will soon see that only one derivation is possible (making the grammar unambiguous), and that it is the desired associativity.
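
    For instance, assuming a, b and c can each be derived from T, the expression a ⊕ b ⊕ c can only be derived from the left-associative production as E ⇒ E ⊕ T ⇒ E ⊕ T ⊕ T ⇒* a ⊕ b ⊕ c; in the corresponding derivation tree, the topmost ⊕ has the subtree for a ⊕ b as its left operand and c as its right operand, which is exactly the structure (a ⊕ b) ⊕ c.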

    If the language has parentheses, then it will usually be the case that it should be possible to use them to override the default associativity rules. However, because the parentheses have to be explicitly included in the word, the grammar can remain unambiguous. Parentheses usually have the highest precedence, and their production refers back to the nonterminal with the lowest precedence, as in:

    R → E ≤ E | E
    E → E ⊕ T | T
    T → F ⊗ T | F
    F → ( R ) | P
    

    The standard convention is that parentheses can contain any expression. This allows for somewhat unintuitive words in the case of non-associative binary operators, such as (x≤3)≤y, but usually such words are accepted by the grammar. When implementing a compiler, it is the (later) type-checking phase that rejects invalid expressions of that form.

  13. My PDA only seems to accept the empty word, but it looks like it should do more. Is this correct?

    While it's entirely possible that your PDA only accepts the empty word, there is also a common mistake that you could be making.

    Remember that ε-transitions do not read a symbol from the input word; they can be applied whenever the stack symbol matches. A common misconception is that they can only be applied when the stack is empty; this is not the case. And a common (but not certain) consequence of this mistake is that the PDA may appear to only accept the empty word.
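
    For example, using the usual δ(state, input symbol or ε, top-of-stack symbol) notation, a transition in δ(q, ε, Z) (with q and Z purely illustrative) may be taken whenever Z is the topmost stack symbol, regardless of what else is on the stack and regardless of how much input remains.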

  14. Why can't we always use limited backtracking for parsing?

    Here's an example that illustrates the problem with limited backtracking.

    Consider

    A → aA | ε

    with parsing function
    type Token = Char
    
    parseA             :: [Token] -> Maybe [Token]
    parseA ('a' : ts)  =  parseA ts
    parseA ts          =  Just ts
    
    and

    C → ab

    with a parsing function
    parseC                   :: [Token] -> Maybe [Token]
    parseC ('a' : 'b' : ts)  =  Just ts
    parseC _                 =  Nothing
    
    Then we write a parsing function for

    S → AC

    in the usual way:
    parseS    :: [Token] -> Maybe [Token]
    parseS ts =  case parseA ts of
                   Nothing  -> Nothing
                   Just ts' -> parseC ts'
    
    Is it going to work? Consider the word "ab". It can be derived from "S" by using the productions "A → ε" and "C → ab". But
    parseS "ab" = Nothing
    
    Why? Because
    parseA "ab" = Just "b"
    
    That is, it consumed the first "a", and then
    parseC "b" = Nothing
    
    So maybe we should write parseA the other way around?
    parseA            :: [Token] -> Maybe [Token]
    parseA ts         =  Just ts
    parseA ('a' : ts) =  parseA ts
    
    Now,
    parseA "ab" = Just "ab"
    
    and
    parseC "ab" = Just ""
    
    So
    parseS "ab" = Just ""
    
    Success. But consider
    parseS "aab"
    
    This is now going to fail. Why? Because this time, we need parseA to consume exactly one "a". That is, the first time round, it should succeed according to the production

    A → aA

    and the second time round, it should succeed according to the production

    A → ε

    This example shows that there is no right order in which to try the productions for A. So limited backtracking is doomed to fail one way or another in general. It will work for certain cases, though, so it is not useless, but it has to be used with care.
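
    For comparison, here is a sketch of what full backtracking could look like, using the standard `list of successes' technique (an illustration only, not the parser style used in the lectures): each parsing function returns a list of every possible remaining input, rather than at most one.

    parseA' :: [Token] -> [[Token]]
    parseA' ts@('a' : ts') = parseA' ts' ++ [ts]   -- try A -> aA, and also A -> epsilon
    parseA' ts             = [ts]                  -- only A -> epsilon applies

    parseC' :: [Token] -> [[Token]]
    parseC' ('a' : 'b' : ts) = [ts]
    parseC' _                = []

    parseS' :: [Token] -> [[Token]]
    parseS' ts = concat [ parseC' ts' | ts' <- parseA' ts ]

    -- Now parseS' "ab" == [""] and parseS' "aab" == [""]; a word is accepted
    -- whenever "" (the empty remainder) appears among the results.
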
  15. In the exam, will the module topics be distributed over the questions in the same way as in previous years?

    Question 1 (the compulsory multiple choice one) spans the entire module. The remaining questions tend to be more focused. But I'm not saying they are completely focused, and I cannot comment on the exact distribution of topics over questions.

    I would strongly discourage speculative, selective revision of the kind "if I know DFAs and NFAs only, I'll be all set, as I then would have passed last year's exam". (That is just an example: I'm not saying that was the case, as I have not checked.)


Last updated 3rd October 2011.
