1. Higher-order functions
2. Environment diagrams
3. Sequences
4. Recursion
5. Data abstraction
6. Time complexity
7. Mutability
8. Mutable data structures
9. Object-oriented programming
10. Iterables, iterators and generators
11. Scheme
12. Streams
13. Logic
14. Python syntax and semantics
15. Student guides
16. Composition
17. Debugging
18. Miscellaneous
Environment Diagram Rules
Environment Diagrams are very important in our understanding of how the computer interprets our code.
We will test you on this in every exam.
It will never go away.
Given that, master it as quickly as you can! :)
Below are the rules I follow when drawing environment diagrams. If you understand and faithfully follow these rules when drawing them, you'll never get them wrong.
One thing you haven't learned yet is nonlocal. You can skip that particular step for now (step 2 of Assignment).
Post here if you have any questions!
You can also take a look at this link for some examples of environment diagrams: http://albertwu.org/cs61a/notes/environments
For a different perspective on the rules, check out: http://markmiyashita.com/cs61a/sp14/environment_diagrams/rules_of_environment_diagrams/
A handout with detailed instructions on drawing environment diagrams is also available here (linked on the bottom of the course homepage): http://inst.eecs.berkeley.edu/~cs61a/sp14/pdfs/environment-diagrams.pdf
Environment Diagram Rules
=========================

Creating a Function
-------------------
1. Draw the func <name>(<arg1>, <arg2>, ...)
2. The parent of the function is wherever the function was defined (the frame we're currently in, since we're creating the function).
3. If we used def, make a binding of the name to the value in the current frame.

Calling User-Defined Functions
------------------------------
1. Evaluate the operator and operands.
2. Create a new frame; the parent is whatever the operator's parent is. Now this is the current frame.
3. Bind the formal parameters to the argument values (the evaluated operands).
4. Evaluate the body of the operator in the context of this new frame.
5. After evaluating the body, go back to the frame that called the function.

Assignment
----------
1. Evaluate the expression to the right of the assignment operator (=).
2. If nonlocal, find the frame that has the variable you're looking for, starting in the parent frame and ending just before the global frame (via Lookup rules). Otherwise, use the current frame. Note: if multiple frames have the same variable, pick the frame closest to the current frame.
3. Bind the variable name to the value of the expression in the identified frame. Be sure to override the variable name if it had a previous binding.

Lookup
------
1. Start at the current frame. Is the variable in this frame? If yes, that's the answer.
2. If it isn't, go to the parent frame and repeat step 1.
3. If you run out of frames (you reach the global frame and it's not there), complain.

Tips
----
1. You can only bind names to values. No expressions (like 3+4) allowed in environment diagrams!
2. Frames and functions both have parents.
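As a concrete example to trace with these rules, consider the following snippet (the names make_adder, adder, and add3 are made up for illustration, not from the handout):

```python
def make_adder(n):
    # Creating a function: draw "func adder(x)"; its parent is the
    # frame for this call to make_adder, since that's where it's defined.
    def adder(x):
        # Lookup: x is found in the current frame; n isn't, so we
        # look in the parent frame (make_adder's frame) and find it there.
        return x + n
    return adder

add3 = make_adder(3)  # Calling make_adder opens a new frame binding n = 3
result = add3(4)      # adder's frame binds x = 4; its parent supplies n = 3
print(result)         # 7
```

Drawing the diagram, you should end up with a global frame (make_adder, add3, result), one frame for the call to make_adder, and one frame for the call to add3 whose parent is the make_adder frame.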
Why does the tuple slice [::-1] work while [0:3:-1] doesn't?
I thought the -1 after the second colon meant that the interpreter is going to read the indices "backwards".
The syntax of slicing is tup[start:end:step]:
- start from index start and end just before index end, incrementing the index by step each time
- if no step is provided, step = 1
- if step is positive, default values if not provided: start = 0, end = len(tup)
- if step is negative, default values if not provided: start = -1 (the last element), end = one position before the start of the sequence
>>> (1, 2, 3)[::-1]    # start at index -1, end one position before the start
(3, 2, 1)
>>> (1, 2, 3)[0:3:-1]  # start at 0 and go to 3, but step is negative, so this doesn't make sense and an empty tuple is returned
()
This is a helpful visualization from http://en.wikibooks.org/wiki/Python_Programming/Strings#Indexing_and_Slicing:
To understand slices, it's easiest not to count the elements themselves. It is a bit like counting not on your fingers, but in the spaces between them. The list is indexed like this:

Element:   | 1 | 2 | 3 | 4 |
Index:     0   1   2   3   4
          -4  -3  -2  -1
More info about slicing at http://stackoverflow.com/a/13005464/2460890.
Slicing with negative step
If the third example returns an empty tuple because you can't take negative steps from 0 to 4, shouldn't the second example also return an empty tuple?
Can someone explain why each example returns the respective answers?
>>> x = (1, 2, 3, 4)
>>> x[0::-1]
(1,)
>>> x[::-1]
(4, 3, 2, 1)
>>> x[0:4:-1]
()
>>> x[1::-1]
(2, 1)
(For reference, the notation is x[start:end:step])
Python does something very strange when the step is negative: if you omit the start and end arguments, Python fills them in with what makes sense for a negative step. In the simple case of x[::-1], Python fills in the start with len(x)-1 and the end with -(len(x)+1). The end term is strange, but remember that the end index isn't included. We therefore can't use 0, but we can't use -1 either, since that clearly refers to the last element of the tuple. We need to fully wrap the negative index around, to refer to the position "before" the 0th index. This way, Python will start at the end of the tuple and proceed to the beginning.
That's why x[0:4:-1] doesn't make sense: how can we start at 0 and end at 4, if we're proceeding backwards?
And that's why x[0::-1] makes sense (albeit in a strange way): Python proceeds from index 0 toward the position before the start of the tuple. The start index is included, which is why you see the 1 pop up.
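We can check this filling-in directly; the following small sketch spells out the defaults by hand and confirms each case above:

```python
x = (1, 2, 3, 4)

# [::-1] behaves as if we had written start = len(x) - 1, end = -(len(x) + 1):
assert x[::-1] == x[len(x) - 1 : -(len(x) + 1) : -1] == (4, 3, 2, 1)

# [0::-1] starts at index 0 and walks backward, so only element 0 is included:
assert x[0::-1] == (1,)

# [0:4:-1] asks for a backward walk from 0 up to 4, which is impossible:
assert x[0:4:-1] == ()
```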
Let me know if that was confusing!
Iterables, iterators and generators
Tail Recursion in Python
In this page, we’re going to look at tail call recursion and see how to force Python to let us eliminate tail calls by using a trampoline. We will go through two iterations of the design: first to get it to work, and second to try to make the syntax seem reasonable. I would not consider this a useful technique in itself, but I do think it’s a good example which shows off some of the power of decorators.
The first thing we should be clear about is the definition of a tail call. The "call" part means that we are considering function calls, and the "tail" part means that, of those, we are considering calls which are the last thing a function does before it returns. In the following example, the recursive call to f is a tail call (the use of the variable ret is immaterial because it just connects the result of the call to f to the return statement), and the call to g is not a tail call because the operation of adding one is done after g returns (so it's not in "tail position").
def f(n):
    if n > 0:
        n -= 1
        ret = f(n)
        return ret
    else:
        ret = g(n)
        return ret + 1
1. Why tail calls matter
Recursive tail calls can be replaced by jumps. This is called "tail call elimination," and it is a transformation that can limit the maximum stack depth used by a recursive function, with the benefit of reducing memory traffic by not having to allocate stack frames. Sometimes, recursive functions which wouldn't ordinarily be able to run due to stack overflow are transformed into functions which can.
Because of the benefits, some compilers (like gcc) perform tail call elimination, replacing recursive tail calls with jumps (and, depending on the language and circumstances, tail calls to other functions can sometimes be replaced with stack massaging and a jump). In the following example, we will eliminate the tail calls in a piece of code which does a binary search. It has two recursive tail calls.
def binary_search(x, lst, low=None, high=None):
    if low is None:
        low = 0
    if high is None:
        high = len(lst) - 1
    mid = low + (high - low) // 2
    if low > high:
        return None
    elif lst[mid] == x:
        return mid
    elif lst[mid] > x:
        return binary_search(x, lst, low, mid - 1)
    else:
        return binary_search(x, lst, mid + 1, high)
Supposing Python had a goto statement, we could replace the tail calls with a jump to the beginning of the function, modifying the arguments at the call sites appropriately:
def binary_search(x, lst, low=None, high=None):
  start:
    if low is None:
        low = 0
    if high is None:
        high = len(lst) - 1
    mid = low + (high - low) // 2
    if low > high:
        return None
    elif lst[mid] == x:
        return mid
    elif lst[mid] > x:
        (x, lst, low, high) = (x, lst, low, mid - 1)
        goto start
    else:
        (x, lst, low, high) = (x, lst, mid + 1, high)
        goto start
which, one can observe, can be written in actual Python as
def binary_search(x, lst, low=None, high=None):
    if low is None:
        low = 0
    if high is None:
        high = len(lst) - 1
    while True:
        mid = low + (high - low) // 2
        if low > high:
            return None
        elif lst[mid] == x:
            return mid
        elif lst[mid] > x:
            high = mid - 1
        else:
            low = mid + 1
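As a quick sanity check of the iterative version (the function is repeated here, with a made-up input list, so the snippet stands alone):

```python
def binary_search(x, lst, low=None, high=None):
    if low is None:
        low = 0
    if high is None:
        high = len(lst) - 1
    while True:
        mid = low + (high - low) // 2
        if low > high:
            return None
        elif lst[mid] == x:
            return mid
        elif lst[mid] > x:
            high = mid - 1
        else:
            low = mid + 1

lst = [1, 3, 5, 7, 9, 11]        # hypothetical sorted input
assert binary_search(7, lst) == 3     # found at index 3
assert binary_search(1, lst) == 0     # found at index 0
assert binary_search(4, lst) is None  # not present
```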
I haven’t tested the speed difference between this iterative version and the original recursive version, but I would expect it to be quite a bit faster because of there being much, much less memory traffic.
Unfortunately, the transformation makes the resulting code harder to prove correct. With the original recursive algorithm, correctness is almost trivial to show by induction.
Programming languages like Scheme depend on tail calls being eliminated for control flow, and it’s also necessary for continuation passing style.
2. A first attempt
Our running example is going to be the factorial function (a classic), written with an accumulator argument so that its recursive call is a tail call:
def fact(n, r=1):
    if n <= 1:
        return r
    else:
        return fact(n-1, n*r)
If n is too large, this recursive function will overflow the stack, despite the fact that Python can deal with really big integers. On my machine, computing fact(1000) results in a sad RuntimeError: Maximum recursion depth exceeded.
One solution is to modify fact to return objects which represent tail calls and then to build a trampoline underneath fact which executes these tail calls after fact returns. This way, the stack will only ever contain two frames: one for the trampoline and one for the current call to fact.
First, we define a tail call object which reifies the concept of a tail call:
class TailCall(object):
    def __init__(self, call, *args, **kwargs):
        self.call = call
        self.args = args
        self.kwargs = kwargs
    def handle(self):
        return self.call(*self.args, **self.kwargs)
This is basically just the thunk lambda : call(*args, **kwargs), but we don't use a thunk because we would like to be able to differentiate between a tail call and returning a function as a value.
The next ingredient is a function which wraps a trampoline around an arbitrary function:
def t(f):
    def _f(*args, **kwargs):
        ret = f(*args, **kwargs)
        while type(ret) is TailCall:
            ret = ret.handle()
        return ret
    return _f
Then, we modify fact to be
def fact(n, r=1):
    if n <= 1:
        return r
    else:
        return TailCall(fact, n-1, n*r)
Now, instead of calling fact(n), we must invoke t(fact)(n) (otherwise we'd just get a TailCall object).
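Putting the pieces together (the definitions above are repeated so this sketch runs standalone), we can check that the trampoline handles depths far beyond the default recursion limit:

```python
class TailCall(object):
    def __init__(self, call, *args, **kwargs):
        self.call = call
        self.args = args
        self.kwargs = kwargs
    def handle(self):
        return self.call(*self.args, **self.kwargs)

def t(f):
    def _f(*args, **kwargs):
        ret = f(*args, **kwargs)
        while type(ret) is TailCall:  # keep bouncing until a real value comes back
            ret = ret.handle()
        return ret
    return _f

def fact(n, r=1):
    if n <= 1:
        return r
    else:
        return TailCall(fact, n-1, n*r)

import math
assert t(fact)(10) == math.factorial(10)

# 10000 iterations, far beyond the default recursion limit, yet only the
# trampoline frame plus one fact frame exist at any moment:
big = t(fact)(10000)
```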
This isn't that bad: we can get tail calls of arbitrary depth, and it's Pythonic in the sense that the user must explicitly label the tail calls, limiting the amount of unexpected magic. But can we eliminate the need to wrap t around the initial call? I myself find it unclean to have to write that t, because it makes calling fact different from calling a normal function (which is how it was before the transformation).
3. A second attempt
The basic idea is that we will redefine fact to be, roughly, t(fact). It's tempting to just use t as a decorator:
@t
def fact(n, r=1):
    if n <= 1:
        return r
    else:
        return TailCall(fact, n-1, n*r)
(which, if you aren't familiar with decorator syntax, is equivalent to writing fact = t(fact) right after the function definition). However, there is a problem: the fact in the returned tail call is bound to t(fact), so the trampoline will recursively call the trampoline, completely defeating the purpose of our work. In fact, the situation is now worse than before: on my machine, fact(333) causes a RuntimeError: Maximum recursion depth exceeded.
For this solution, the first ingredient is the following class, which defines the trampoline as before, but wraps it in a new type so we can distinguish a trampolined function from a plain old function:
class TailCaller(object):
    def __init__(self, f):
        self.f = f
    def __call__(self, *args, **kwargs):
        ret = self.f(*args, **kwargs)
        while type(ret) is TailCall:
            ret = ret.handle()
        return ret
and then we modify TailCall to be aware of TailCaller:
class TailCall(object):
    def __init__(self, call, *args, **kwargs):
        self.call = call
        self.args = args
        self.kwargs = kwargs
    def handle(self):
        if type(self.call) is TailCaller:
            return self.call.f(*self.args, **self.kwargs)
        else:
            return self.call(*self.args, **self.kwargs)
Since classes are function-like and return their constructed object, we can just decorate our factorial function with
@TailCaller
def fact(n, r=1):
    if n <= 1:
        return r
    else:
        return TailCall(fact, n-1, n*r)
And then we can call fact directly with large numbers!
Also, unlike in the first attempt, we can now have mutually recursive functions which all perform tail calls. The first-called TailCaller will handle all the trampolining.
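For instance, here is a sketch of mutual recursion through the trampoline. The functions is_even and is_odd are invented for illustration, and the two classes are repeated so the snippet is self-contained:

```python
class TailCall(object):
    def __init__(self, call, *args, **kwargs):
        self.call = call
        self.args = args
        self.kwargs = kwargs
    def handle(self):
        # Unwrap a TailCaller so we call the raw function, not a new trampoline:
        if type(self.call) is TailCaller:
            return self.call.f(*self.args, **self.kwargs)
        else:
            return self.call(*self.args, **self.kwargs)

class TailCaller(object):
    def __init__(self, f):
        self.f = f
    def __call__(self, *args, **kwargs):
        ret = self.f(*args, **kwargs)
        while type(ret) is TailCall:
            ret = ret.handle()
        return ret

@TailCaller
def is_even(n):
    return True if n == 0 else TailCall(is_odd, n - 1)

@TailCaller
def is_odd(n):
    return False if n == 0 else TailCall(is_even, n - 1)

# 100001 alternating hops, far deeper than the recursion limit allows,
# all handled by the single trampoline in the first call:
assert is_even(100001) is False
assert is_odd(100001) is True
```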
If we wanted, we could also define the following function to make the argument lists for tail calls be more consistent with those for normal function calls:
def tailcall(f):
    def _f(*args, **kwargs):
        return TailCall(f, *args, **kwargs)
    return _f
and then fact could be rewritten as
@TailCaller
def fact(n, r=1):
    if n <= 1:
        return r
    else:
        return tailcall(fact)(n-1, n*r)
One would hope that marking the tail calls manually could be done away with entirely, but I can't think of any way to detect whether a call is a tail call without inspecting the source code. Perhaps an idea for further work is to convince Guido van Rossum that Python should support tail recursion (which is quite unlikely to happen).
 This is compiler-writer speak. For some reason, “elimination” is what you do when you replace a computation with something equivalent. In this case, it’s true that the call is being eliminated, but in its place there’s a jump. The same is true for “common subexpression elimination” (known as CSE), which takes, for instance,
a = b + c
d = (b + c) + e

and replaces it with

a = b + c
d = a + e
b+c is eliminated from the second statement, but it’s not really gone...
The optimization known as “dead code elimination” actually eliminates something, but that’s because dead code has no effect, and so it can be removed (that is, be replaced with nothing).
 In Scheme, all loops are written as recursive functions since tail calls are the pure way of redefining variables (this is the same technique Haskell uses). For instance, to print the numbers from 1 to 100, you’d write
(let next ((n 1))
  (if (<= n 100)
      (begin
        (display n)
        (newline)
        (next (+ n 1)))))
where next is bound to a function of one argument, n, whose body is the body of the let statement. If that 100 were some arbitrarily large number, the tail call to next had better be handled as a jump, otherwise the stack would overflow! And there's no other reasonable way to write such a loop!
Continuation passing style is commonly used to handle exceptions and backtracking. You write functions of the form
(define (f cont)
  (let ((cont2 (lambda ... (cont ...) ...)))
    (g cont2)))
along with functions which take multiple such f's and combine them into another function which also takes a single cont argument. I'll probably talk about this more in another page, but for now notice how the call to g is in tail position.
 This is basically a curried version of TailCall.
 That is, Schönfinkelized.
Python syntax and semantics
Remember the differences between return and print.
return can only be used in a def statement. It returns a value from a function. Once Python evaluates a return statement, it immediately exits the function.
def foo1(x):
    return x

def foo2(x):
    print(x)

>>> foo2(1)  # In foo2, we print 1 ourselves using the print function
1
>>> foo1(1)  # Here, the Python interpreter prints the return value of foo1 (crucial to understand!)
1
>>> foo1(1) + 1
2
>>> foo2(1) + 1  # foo2 returns None, so None + 1 is an error
1
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for +: 'NoneType' and 'int'
How function decorators work
I'm having difficulties understanding what exactly a function decorator is. Can someone elaborate and potentially provide me with an example other than the one in the readings?
So imagine you wanted your functions to print their arguments before they executed them. Here's one way to do this.
def loud(fn):
    def new_fn(*args):
        print(args)
        return fn(*args)
    return new_fn
Here's a function loud that takes in a function and returns a new function that, when called, prints out its arguments and then does what the old function does.
def sq(x):
    return x * x

>>> sq(4)
16
>>> sq = loud(sq)  # replace the old square with our loud one
>>> sq(4)
(4,)
16
A function decorator does the same thing as the above. Assuming loud is defined, we can do this:
@loud
def sq(x):
    return x * x

>>> sq(4)
(4,)
16
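Here is one more example in the same spirit (entirely made up, not from the readings): a decorator that counts how many times a function is called, which is handy for seeing how much work tree recursion does.

```python
def count_calls(fn):
    def counted(*args):
        counted.call_count += 1   # tally every call, including recursive ones
        return fn(*args)
    counted.call_count = 0
    return counted

@count_calls
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

fib(5)
print(fib.call_count)  # 15: tree recursion makes 15 calls to compute fib(5)
```

Because the decorator rebinds the name fib to counted, the recursive calls inside fib's body also go through the counter.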
How to learn Computer Science
If you've never programmed before, or if you've never taken a class quite like 61A before, things right now might be scary. Everything is strange and new, and there's quite a lot to take in all at once. So if you're having a hard time so far, here are a few articles that might help.
Note: these articles are pretty long, so feel free to read them in multiple sittings.
At the beginning, everything seems a bit scary in CS. Michelle Bu, a Berkeley alum and a crazy good hacker, shares one of her experiences when she was a wee n00b in 21 Nested Callbacks.
Start here! "A Beginner's Guide to Computer Science" Written by Berkeley's own James Maa. James is known for his killer walkthroughs (check out his Productivity guide). This article gives you some background on learning CS and then provides a practical guide on how to learn effectively.
How do we learn? Mark Eichenlaub explains in this Introduction to Learning Theory. This is quite possibly the best introduction to Learning Theory.
Sometimes, you're stuck and you end up really, really frustrated. Marcus Geduld explains Why do we get frustrated when learning something?
General style guidelines from 61A website
Are we required to add any comments to our code to say what a function does, etc.? And does clarity of code count for this project, in which case should we write comments at the end of not-so-clear statements? Thanks.
Docstrings of each function are already provided. If you add a helper function, you should write a docstring for it.
The style guide on the course website advises: "Your actual code should be self-documenting -- try to make it as obvious as possible what you are doing without resorting to comments. Only use comments if something is not obvious or needs to be explicitly emphasized"
You should always aim to make your code "self-documenting," meaning it is clear what your code is doing without the aid of comments. You should try to keep the number of comments to a minimum, but if there are lines which you think are unclear/ambiguous, feel free to add a comment.
All projects in this class contain a 3-point component that is judged solely on your code's "composition", i.e., whether your code is clear, concise, and easy to read.