Category: Pedagogy

# A Tablespoon of Salt

I enjoy cooking – I find it really relaxing. There’s something nice about getting to switch to autopilot and follow a recipe after a long day of work.

A few years back I found a soup recipe (I can’t remember which one, unfortunately) that looked really, really good. I invited some friends over for dinner on the promise that I’d trade them food and shelter for good conversation. Nowadays I think I have a pretty good handle on the basic steps of making soup, but at the time I was still learning to cook and found myself following recipes more or less verbatim.

One of the first steps in this particular recipe was to cook a bunch of aromatics over low heat, along with some salt to draw out the water and speed things up. The recipe recommended adding in a teaspoon of kosher salt. I misread the directions and instead added a tablespoon of table salt.

Needless to say, although everything smelled amazing, when it came time to serve it up we found that it was essentially inedible. I had to apologize for keeping people waiting, but we managed to salvage the evening by heading downtown to a restaurant where the food was made by trained professionals.

So what went wrong here?

There were two immediate errors. First, I had mixed up tablespoons and teaspoons, which made my measurement off by a factor of three. Second, I used table salt instead of kosher salt; table salt’s smaller crystals pack more tightly, effectively multiplying the oversalting by something like an extra 25 – 40%. All told, I had added around four times the amount of salt the recipe called for. Oops.

But let’s take a step back. Why didn’t I notice that I was about to make this mistake? Well, I was new to cooking, and I hadn’t made enough recipes to realize that a full tablespoon of table salt is a lot of salt. That’s three times the daily recommended salt intake for a person. (That point is particularly relevant, since I was making food for three people!) In a sense, I hadn’t built up an intuition that would cause me to pause for a minute, say that something didn’t feel right about adding that much salt, and then double-check the recipe.

It also didn’t help that I had never really stopped to think about the difference between kosher salt and table salt. I had just thought that salt was salt was salt and that was that. It had never occurred to me that different types of salt had different properties and different use cases, or that recipes would specify different types for a good reason. (Or, rather, that they often are different and often are there for good reason. In some cases, salt is salt is salt. Just not this case.)

It also didn’t help that I hadn’t yet built up enough cooking experience to realize just how powerful salt is. There are some ingredients you can accidentally triple without breaking things. If I had used too much oil in this recipe, I probably wouldn’t have noticed, because the last step was to immersion blend everything and chances are it would have (mostly) emulsified in. It just would have been super rich. Salt is not one of those ingredients, but I hadn’t learned that yet.

And finally, it really didn’t help that this was a recipe I’d never tried out before. At the time I was making this recipe, I hadn’t yet developed enough experience to be able to stop and taste something partway through a recipe and figure out whether it needed some sort of adjustment. Even in this case, when I was thinking “wow, those aromatics by themselves are really unpleasantly salty!” I didn’t have the context to know “and that’s not what they’re supposed to taste like!”

The reason I’m mentioning this story is that it began with me making what, in some sense, is a pretty minor error: using the wrong amount of salt. But really, this error indicated that I still had a lot to learn about cooking, namely:

• A tablespoon of salt is a lot of salt – probably way more than what you’d ever need in soup for three people.
• Kosher salt is not the same as table salt, and the two aren’t one-for-one interchangeable.
• Oversalting something can totally wreck it, and you have to be very careful about how much salt you add to something.
• You should not invite people over and serve them an untested recipe!

Over the years, both taking and teaching classes, I’ve found that there are certain mistakes you can make that seem minor, but which totally break whatever it is that you’re working on. Oftentimes, the reason those mistakes seem minor is that the person who made them lacks a deeper understanding of some key concept or concepts, and the true error is really something more fundamental. I call these errors “tablespoon of salt errors.” A tablespoon of salt error is one where

• the error appears superficial to the person who made it;
• the error is sufficiently serious that it’s difficult, if not impossible, to recover from it without starting over; and
• the error reflects a fundamental process error or a lack of understanding of some key concept.

For example, when I was an undergrad, I was taking a class on advanced object-oriented programming techniques in C++. One of the assignments asked us to simulate objects moving around in a shipping network. I had a helper function that checked whether the weight of a packet was zero and then returned one of two different values based on this. The code essentially looked like this:

```cpp
WeightClass weightClassFor(unsigned weight) {
    return weight = 0 ? ZeroWeightClass : NonzeroWeightClass;
}
```

There’s a big error here. Specifically, I’m using a single equals sign (“assign weight to be zero, then evaluate to zero”) rather than a double equals sign (“compare weight against zero”). This code always returns NonzeroWeightClass, which isn’t correct.

I spent many hours working on this project, submitted it, and earned a solid C for it. The error was localized specifically to the above function, and changing that one equals sign would have fixed the issue and passed all the functionality tests.

In the only time I can ever remember doing so as an undergrad, I went to the TAs to beg and plead for some partial credit back, since after all, it was a one-character error! It’s pretty clear what I meant to do, after all.

But the thing is, this wasn’t a one-character error. Somehow, I had written some code that didn’t work, and I hadn’t sufficiently tested things before I submitted it. In reality, the root causes of the error were more accurately things like these:

• I wasn’t compiling with warnings on the maximum setting. A good compiler probably could have caught this and said something like “so… you’re sure you want to have a comparison statement that always does the same thing?”
• I hadn’t tested things thoroughly enough. This code clearly doesn’t work, but I had written a whopping zero unit tests for my code and therefore didn’t have a way of noticing.

This is a classic tablespoon of salt error. It seems like a minor error (I missed an equals sign). It broke everything pretty seriously (a quarter of the tests failed, despite the rest of the code being essentially right). And it indicated a serious process error on my part (where were the unit tests? why wasn’t I compiling with warnings as errors?)

I’m now completely convinced that the TAs were justified in taking off a bunch of points, since there was something fundamentally broken with how I approached the assignment. Plus, I now know not to make that error again!

Now that I’m teaching classes rather than taking them, I’m noticing that tablespoon-of-salt errors are extremely common, especially in discrete math and proofwriting, where very, very simple mistakes can totally derail a proof.

For example, consider the following proof about multiples of three. The relevant definition we give to students is that an integer $n$ is a multiple of three if $n = 3k$ for some integer $k$.

Theorem: If $n$ is an integer that is not a multiple of three, then $n^2$ is not a multiple of three.

Proof: Consider an arbitrary natural number $n$ where $n$ is not a multiple of three. This means that $n \ne 3k$ for any integer $k$. Consequently, $n^2 \ne 3(3k^2)$ for any integer $k$. Therefore, if we let $m = 3k^2$, we see that $n^2 \ne 3m$, and therefore $n^2$ is not a multiple of three. $\qed$

There’s a nice rhythm to the proof, and if I were to read this, I’d think that the (hypothetical) student who wrote it made a good-faith effort to solve the problem. I’d also think that they made a pretty big logic error: just because you can’t write $n^2 = 3(3k^2)$ for any integer $k$ doesn’t necessarily mean that you can’t write $n^2 = 3m$ for some integer $m$ that isn’t of the form $3k^2$. Using that same logic, I could prove the (incorrect) claim that if $n$ is not a multiple of nine, then $n^2$ is not a multiple of nine either:

“Theorem:” If $n$ is an integer that is not a multiple of nine, then $n^2$ is not a multiple of nine.

Proof: Consider an arbitrary natural number $n$ where $n$ is not a multiple of nine. This means that $n \ne 9k$ for any integer $k$. Consequently, $n^2 \ne 9(9k^2)$ for any integer $k$. Therefore, if we let $m = 9k^2$, we see that $n^2 \ne 9m$, and therefore $n^2$ is not a multiple of nine. $\qed$

The theorem here is not true (pick $n = 3$, for example), and the proof follows pretty much the same line of reasoning as the one from earlier, so the logic can’t possibly be correct.

Superficially, this error isn’t all that deep. “Oops, I just made a mistake in that last step.” But really, this goes a lot deeper and hits at the challenge of working with arbitrarily-chosen values. If you want to rule out all choices of $m$, you have to really show that no choice of $m$ is going to work, not just find a family of infinitely many numbers that don’t work. There’s also another issue with setting $m = 3k^2$ without actually saying what $k$ is in the first place (is it a placeholder? you generally can’t do that to placeholders.)

This is a tablespoon of salt error. The error seems superficial (either “you can’t use inequalities that way” or “not all integers $m$ can be written as $3k^2$ for some integer $k$.”) The error completely breaks the proof (the same reasoning can be used to prove patently false statements without much modification). And the error indicates a fundamental misunderstanding of some key concept (arbitrarily-chosen values, placeholder variables, etc.)
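
For contrast, a correct argument can’t work with what $n^2$ isn’t; it has to pin down what $n^2$ is. Any integer that isn’t a multiple of three has the form $3k + 1$ or $3k + 2$ for some integer $k$, and squaring each form directly (this sketch is mine, not the course’s reference solution):

```latex
\begin{align*}
n = 3k + 1 &\implies n^2 = 9k^2 + 6k + 1 = 3(3k^2 + 2k) + 1 \\
n = 3k + 2 &\implies n^2 = 9k^2 + 12k + 4 = 3(3k^2 + 4k + 1) + 1
\end{align*}
```

In both cases $n^2$ is one more than a multiple of three, so there is no integer $m$ with $n^2 = 3m$.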

A question that I keep grappling with as an instructor, and which I’m not sure I’ll ever really figure out the best answer to, is how to assign grades to student work with a tablespoon of salt error in it. I remember my own frustration as a student getting slammed for mistakes that I thought were pretty minor. I also remember how valuable those experiences were, though, since they really forced me to admit that there were things I didn’t yet know and they helped me figure out where I needed to study up.

Figuring out how to handle this case in a way that pushes students to do better without making them feel overwhelmed by what they don’t yet know is a challenge. It’s exacerbated in the tablespoon of salt case because the grading feels crazily disconnected from reality.

There’s a case currently up at the US Supreme Court (District of Columbia v. Wesby) where someone hosted a party in a vacant house that they had no business being in. The police came, questioned everyone there, and arrested everyone on suspicion of trespassing. In DC law, trespassing requires a mens rea on the part of the perpetrator – they have to know that they’re not supposed to be there. The question before the Court is about probable cause. The police reported seeing a group of people in a house whose condition was so rough that the partygoers should have known they weren’t supposed to be there. The partygoers reported that they thought they had been invited to a house they were allowed to be in. The disconnect between these narratives is currently being decided by nine very intelligent legal minds and will have a huge impact on law enforcement.

From the perspective of an instructor, a student who made a tablespoon of salt error has a serious gap in their understanding that needs to be corrected. From the perspective of the student, they made a minor error and the graders harshly took off way too many points for it. It’s a clash of narratives, which is never fun.

I’m going to keep tuning my approaches to handling cases like these. I figured I’d post this concept in the meantime in case the metaphor proves useful in other contexts.

# Designing Assignments with Google Forms

Over the last academic year, I’ve gotten a lot of mileage designing online assignments using Google Forms. I’ve typically seen them used for event RSVP systems, but apparently if you’re creative about how you use sections and multiple-choice questions and data validation, you can use Google Forms to ensure that students work through a particular series of exercises, and can even offer targeted feedback along the way.

There are two particular techniques that I’ve gotten the most mileage out of. The first one is an approach I’m calling the multiple choice maze. This is a Google Form that’s a series of multiple choice questions. Each time you provide a wrong answer, the form takes you to a page with either a hint or an explanation about why that answer is incorrect, followed by a link back to the previous page. The form can only be submitted on the last page, which ensures that students have to answer everything correctly before they can submit the assignment.

There are a number of really nice advantages to using a multiple-choice maze for an assignment. First, it’s a highly scalable way to make sure that everyone in the class is on the same page about something. When I last taught CS106B, I used a multiple choice maze to make sure that the students were familiar with the Honor Code policies for the course; we asked a series of multiple choice questions about what actions were and were not permissible, and only allowed students to submit the assignment by getting all the answers right. Second, it’s an effective and lightweight way to make sure students are on top of lecture material. I sent the spring quarter CS103A students a multiple choice maze assignment every Wednesday evening covering material from the week’s two lecture meetings and specifically asked questions that required them to exercise the skills and concepts we’d taught. Because the feedback is automatic and customizable, students can quickly size up whether their understanding of the concepts is where it needs to be and can get refreshers and reminders on the topics where it isn’t. Finally, since everything is handled through Google Forms, once all is said and done we have a reliable way to determine who’s completed the assignment, since we have timestamps for all of the submissions and can ask for names.

Once you’ve gotten the hang of it, it’s not too hard to assemble a multiple choice maze. For example, here’s the internal view of the first question from the form linked above:

Notice that there are no questions in this section and that the section is purely text describing how to approach the problem. (I’ve never taught addition of fractions, so I suppose that this is probably not a very good explanation… sorry!) Importantly, the After section 2 option, which controls which section to visit after seeing this section, is set to Go to section 1, which is the section containing the initial multiple-choice question. That way, if you get the answer wrong, you’ll get this nice helpful text and then get warped back to the question so you can make another guess.

We can take this a step further by creating questions where each wrong answer goes to a separate screen with a follow-up. For example, here’s the back-end view of the second question from the list:

Notice how each wrong answer goes to its own individual section, each of which is designed like the one above to warp back to this question.

The second approach I’ve found useful for making Google Form assignments is what I’m calling a find the hidden number assignment.

Last time I taught CS106B, I wanted students to get more familiarity using the debugger over the course of the quarter and figured a good way to do this would be to create a debugging assignment. For the very first assignment of the quarter, I took a standard program we’ve given out for a while that computes a rolling hash code, then designed an assignment that asked students to single-step through the hash code computation on a specific input. At some point in the single-stepping, I asked students to write down one of the numbers that was generated at an intermediate step. The Google Form for the assignment then required students to enter that number in order to move on to the next part of the assignment (and, therefore, the “Submit” button.)

Here’s a demo of what the form might look like. The secret number is 137.

The main advantage of setting up an assignment like this is that it functions as a lightweight “proof-of-work” system. Students need to show that they did some amount of work – which you can customize – in order to receive credit for the assignment. In my case, I was able to use this system to ensure that (in principle) everyone in the class had successfully gotten their IDE and debugger set up and was able to run programs and step through them in the debugger.

Setting something like this up isn’t too hard and just requires you to enable data validation and to choose the “Equal to” option:

Are these tools perfect for the job? No. But do they work well? Evidence seems to suggest that they do! The multiple choice maze I set up for the Honor Code seemed to work well; we had way fewer copied assignments submitted the quarter we deployed it than what we usually see, and more students seemed to know how to use the debugger.

I’ll probably keep using this technique in the future (once the forms are set up, it’s not at all hard to copy them from quarter to quarter). Let me know if you’re using anything along these lines or find this approach useful or interesting!

# A Case for Undecidability via Self-Reference

One of the major results that’s covered in most intro CS theory courses is that there are problems that are truly undecidable – there’s no general algorithm that can tell you the correct answer in all possible cases.

The typical way that undecidability is taught goes something like this:

• Students first see a concrete example of an undecidable language, like the halting problem. The proof that the halting problem is undecidable typically goes something like this:
  • Assume that there’s a decider $D$ for the halting problem.
  • Using $D$ as a subroutine, produce a machine $H$ that takes as input a TM $M$, then runs $D$ on $\langle M, \langle M \rangle \rangle$, loops if $D$ says $M$ will halt on $\langle M \rangle$, and halts if $D$ says otherwise.
  • Run $H$ on its own encoding $\langle H \rangle$. Contradiction!
• Students then use mapping reducibility as a mechanism for showing that other related languages are undecidable. Those proofs typically look like this:
  • Given as input a TM $M$, construct a TM $D$ that runs $M$ on some input and has some complex behavior if $M$ halts on that input and some other complex behavior if $M$ loops on that input.
  • Use the fact that a TM that could decide whether $D$ has the property in question would decide the halting problem to show that the problem in question is undecidable.

When I first taught CS103, this is the approach I took to introducing undecidability, and I found it to be less than ideal for a number of reasons. First, the construction used to show that the halting problem (or, in my case, the language $A_{TM}$ of the universal TM) is undecidable is extremely tricky. It involves starting with a decider for the language and making a series of highly non-obvious transformations on it – converting it so that instead of taking two inputs, it takes a single input; building a new TM that listens for the result from the decider and then does the opposite; and finally running that TM on itself.

Second, once students had figured out how the construction worked, they had little intuition for what it was exactly about the halting problem that made it undecidable. “I can follow the proof,” they’d say, “but that still doesn’t give me a high-level intuition for how you’d eyeball this problem and have a hunch that it’s undecidable.” This is partly because the standard proof that the halting problem is undecidable is fairly technical and requires a number of non-obvious steps.

Third, students often had little intuition for how reducibility had anything to do with the intrinsic difficulty of problems. Part of this might have been specific to CS103: since the course doesn’t have algorithms as a prerequisite, for most students these sorts of impossibility reductions were the first exposure they’d had to reducibility. Many students thought that “reduction” meant “put a TM inside of a TM so that you can put that TM inside of another TM” and didn’t realize that there was a bigger-picture idea at play. Even after mastering the technique of proving undecidability via self-reference, many students expressed frustration that they understood how to execute a proof of undecidability, but had no idea why the reduction they’d done had proved anything.

In Fall 2014, I decided to abandon this route for presenting undecidability and instead opted to use direct self-reference as a mechanism. After tuning and tweaking the presentation over the past three years, I’m very happy with how it’s worked.

This presentation of undecidability works in a number of stages. After first establishing the fundamentals (what decidability and recognizability mean, how languages connect back to the abstract notion of decision problems, how TMs work, etc.), I present a series of claims, in sequence, that build up to our first proof of undecidability:

• Universality is an emergent property of computation. It’s possible to build TMs that compute on other TMs and for TMs to run other TMs. (This introduces $A_{TM}$ and also is a great launching point for talking about why virtual machines, compilers, etc. are possible.)
• Self-reference is an emergent property of computation. I introduce an informal version of Kleene’s recursion theorem by showing how to write a Quine (with the strong caveat that what I’m doing is a proof of concept rather than something students are expected to know how to do!), then show how to write programs that do things like ask how many characters long they are or that print out whether they’re an even-length or odd-length program. I conclude by mentioning that, essentially, any program in any programming language can be modified to include a new function mySource() that returns the source code of the program.
• The combination of these two ideas leads directly to undecidability. Imagine you have a decider for $A_{TM}$, which we can model in software as a function willAccept(program, input) that takes as input the source code of a program and an input, then returns whether the program will accept the input. We then write a program that asks whether it’s going to accept its input and then does the opposite.
• Generalize the above pattern. We can write all sorts of self-referential programs along the above lines to show that many different problems are undecidable without needing to appeal to reducibility.

From what I’ve seen, this approach offers several advantages over the version I outlined above. First, while there’s certainly some creativity involved in coming up with a self-referential program like the one above, the amount of effort required is lower than in the traditional proof that $A_{TM}$ is undecidable. There isn’t a complicated step of taking a two-input decider and turning it into a one-input decider, then flipping the accept and reject states, then running it on itself. This program is more of a “one-step” trick: it asks what it’s going to do (is it going to accept its input?) and then immediately does the opposite.

Second, this approach provides a powerful intuition for why many problems about TMs are undecidable. I often frame these sorts of self-referential constructions in terms of the classical trope of oracles and seers predicting the future. If you have a decider for a language about the observable behavior of TMs, you can think of it as an oracle (in the classical Greek sense rather than the complexity theory sense) that will predict the future by telling you what the program is or is not going to do. With self-reference, programs can then use such a decider to ask questions about what they themselves are going to do in the future, and therefore they can just decide to do the opposite of whatever is supposed to happen next. Students can point directly at the self-referential machine they construct as an explanation about why the problem they’re exploring is undecidable – they can say that this specific program will do the opposite of what it’s supposed to do if the underlying language is undecidable.

Finally, from a CS103 perspective, this approach is pedagogically useful for motivating reductions in a more practically-oriented setting. I touch on reducibility in the context of P and NP by showing it as a way of solving new problems using a solver for an original problem. For example, I show off how to solve the problem “given a box of connectors, can you find a way to plug your laptop’s VGA output into a projector’s HDMI input?” using a solver for the directed reachability problem, and the problem “given a crossword puzzle grid, how many dominoes can you place on it so that each domino covers two white squares and no dominoes overlap?” using a solver for maximum matching. As a first introduction to reducibility, I think these examples build a much better intuition for how reductions function than the computability “put a TM inside a TM inside another TM” reductions students would otherwise encounter first.

From experience, the sorts of questions I’ve gotten from students about this technique tend to be dramatically more focused and interesting than the questions I used to get about reducibility and the standard proof that the halting problem is undecidable. Students often ask about the limits of this technique – why can’t you use self-reference to prove that every language is undecidable, for example? – and about whether it generalizes to show that every language about TMs is undecidable. These are great questions and an excellent launching point for discussions of topics like Rice’s theorem.

There are still some bugs I need to work out of the presentation. I’ve been using C/C++-style pseudocode, which more students feel comfortable with than high-level TM descriptions a la Sipser. This works well, but getting the right syntax down to make the learning as easy as possible is tricky, and I think I still have some more work to do on that front.

There is also the shortcoming that this approach completely bypasses reducibility in the conventional sense. For the purposes of what I do in CS103 I think this is totally fine, though in other contexts this might not work out so well.

Going forward, I’m hoping to get this fully polished and to write a more in-depth explanation of the pedagogical decisions behind it and the supporting materials I’ve used to make everything click.