8-year-old Kid v. 45-year-old Pseudo AI



I’ve variously been on-again, off-again regarding computer/video games. Early on I had this fantasy concept that Leo would only be allowed to play computer games that he programmed himself, and that held for a little while, during which time we created some interesting, although laughably poor, games on the Hopscotch platform. Since then we’ve engaged in several rounds of brief run-ins with puzzle games such as Where’s My Water and Monument Valley, and a couple of others. We finally came to a mutual landing on Minecraft, which is at least slightly creative, and has a lot of programming and math-ed potential. Still, his media (including games) time is quite limited.

Another game that Leo has become enamored with (within the tight content and time controls that we put on him) is Portal, which is an extremely clever puzzle game with amazing graphics. (It has a tiny bit of robot violence, but is generally just a clever puzzle.) One of the best parts of Portal is that your adventure through the “lab” is narrated by an AI companion, named GLaDOS (in Portal 2 it’s Wheatley). There’s some kind of complex backstory, where GLaDOS was a person uploaded into a computer, but Wheatley is a fully self-conscious AI. Of course, what these “AI”s say is completely canned, but it’s also (sometimes) clever and (often) funny.

Somewhat to my surprise, Leo’s interest in the games, combined with the (sometimes/often) clever/funny commentaries from the companion (pseudo) “AI”s has gotten Leo interested in AI more generally. Yesterday, Leo was monkeying with the Chegg flashcard app on my phone (for no reason other than wasting time on a long car ride), and, unbidden, he created some AI flashcards:

I have a bit of personal history with this sort of AI, having been the author (many, many years ago — like, 1973!) of a BASIC version of Eliza that ran on very early PCs, and so became extremely popular. Indeed, I’ll bet that my Eliza is about the most knocked-off (as in copied/modified) program on the planet!

I happen to have a copy of Peter Schorn’s iAltair on my iPhone that coincidentally comes with a close knock-off of my actual old BASIC Eliza! So, since Leo is interested, having done the AI flash cards, I put him on my Eliza, now, um, 2018-1973=45 years later!

Here’s their conversation:


A while back we did a little toe-in-the-water experiment, using Lisp to write (bad) poetry. So, next week we’re going to start writing our own AI!
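For the curious, the core trick in Eliza is just pattern-matching with canned response templates (no understanding at all!). Here’s a tiny illustrative sketch in Python (not my original BASIC code, and these particular rules are made up for illustration):

```python
import random
import re

# A few illustrative Eliza-style rules: a regex and canned response
# templates. {0} is filled in with the captured fragment.
RULES = [
    (r".*\bI am (.*)", ["Why do you say you are {0}?",
                        "How long have you been {0}?"]),
    (r".*\bI feel (.*)", ["Tell me more about feeling {0}.",
                          "Do you often feel {0}?"]),
    (r".*\bmother\b.*", ["Tell me more about your family."]),
]
DEFAULT = ["Please go on.", "I see.", "Very interesting."]

def respond(line):
    """Return the first matching rule's response, else a default."""
    for pattern, responses in RULES:
        m = re.match(pattern, line, re.IGNORECASE)
        if m:
            return random.choice(responses).format(*m.groups())
    return random.choice(DEFAULT)

print(respond("I am sad about my homework"))
```

Real Eliza had a much larger rule set, keyword ranking, and a little memory, but the skeleton is the same, which is exactly why the “AI” is so easy to knock off.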


Car Physics Revisited



A long, long time ago (actually, only back in 2015) we tracked a short road trip and manually computed and plotted the Time/Distance/Velocity/Acceleration. I’ve been wanting to revisit that fun little experiment, and this past weekend we had the opportunity on a 60-mile trip. Leo recorded our distance from start every five miles. This time, instead of manually plotting (which he does a lot of in school), I started him on Excel, which is certainly how he’ll be doing this once he gets beyond 3rd…or maybe 5th grade!

Here’re the results (you can see the master by clicking the pic):


The valley in the middle of the trip is where we stopped for lunch and turned around. And, yes, we really did travel exactly 60 miles, although that wasn’t planned!
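By the way, what Excel (or pencil and paper) is doing here is just repeated finite differencing: velocity is Δdistance/Δtime, and acceleration is Δvelocity/Δtime. A quick Python sketch with made-up sample numbers (not our actual trip log):

```python
def differences(ts, xs):
    """Rate of change between successive samples: (x2 - x1) / (t2 - t1)."""
    return [(x2 - x1) / (t2 - t1)
            for (t1, x1), (t2, x2) in zip(zip(ts, xs), zip(ts[1:], xs[1:]))]

# Hypothetical trip log: minutes from start, and miles from start.
times = [0, 10, 20, 30, 40]
miles = [0, 5, 12, 16, 20]

velocity = differences(times, miles)       # miles per minute on each leg
accel = differences(times[1:], velocity)   # change in velocity per minute
print(velocity)
print(accel)
```

Differencing the distance column once gives velocity, and differencing the velocity column gives acceleration, which is exactly the by-hand procedure from 2015.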


Calculus Craft



Okay, so I’m a little slow! I’ve been working my way toward calculus with Leo for a long time. We’ve explored a great deal among the preliminaries, such as algebra, geometry, and trig, done a little “real life calculus” using cars and rockets, and I’ve even left various fun “calculus coloring books” around, which he’s picked up and variously paged through. We’ve even done some Minecraft-math before that verges on calculus. But until today I hadn’t really broached any real calculus.

Today it hit me that Minecraft provides a perfect setting to discuss integration via Riemann sums (which most of you will remember as the method of rectangles). The reason is obvious: In Minecraft you (generally) only have rectangles, so if you want to know the area under something — say, the roof of a semi-circular building — you don’t have any choice but to build the thing out of blocks. Therefore, the area is most conveniently measured in terms of piles of blocks, i.e., rectangles, i.e., the basis of Riemann sums!

Once I realized this it took only a couple of minutes to come up with some easy examples. To you and me, this just looks like simple intro calculus, but the “patter” — the story I was telling all the way through — is all about Minecraft!


Next I explored the idea that if the blocks were smaller, the approximation would get better and better. Conveniently, the Minecraft blocks are 16 units wide, so you can divide them in half four times and still be working with integers, making it easy to actually work the detailed math.

What’s most interesting about this is that because of Leo’s facility with Minecraft, once he had his Minecraft thinking cap on (which, once on, is extremely hard to get off!), he was able to see right away how the simple version of the rectangle-based integration worked, and was also able to easily think through (approximately) how things would go if the blocks were divided in half, and then half again and again and again!

In the last example (second half of the page) Leo worked the Integral[X^2] by rectangles while I worked it by the usual (X^(N+1))/(N+1) method. (I had to help him a little with his.) And we both got about 42, which is about right! (Actually, it’s 125/3 = 41.666…)
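If you want to check the rectangle version mechanically, here’s a little Python sketch (Python rather than Minecraft blocks!) of the left-endpoint Riemann sum for the x² example, integrating from 0 to 5 and halving the block width just as we did:

```python
def riemann_sum(f, a, b, width):
    """Left-endpoint rectangles of the given width (the Minecraft blocks)."""
    total, x = 0.0, a
    while x < b:
        total += f(x) * width
        x += width
    return total

square = lambda x: x ** 2
for halvings in range(5):
    width = 1.0 / 2 ** halvings
    print(f"block width {width}: area ~ {riemann_sum(square, 0, 5, width)}")
print("exact:", 5 ** 3 / 3)  # the (x^(n+1))/(n+1) answer: 125/3 = 41.666...
```

With full-size blocks the sum is 30; each halving climbs closer to 125/3, which is the “better and better approximation” Leo talked through.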

I think that this (somewhat “duh”) realization of the relationship between Minecraft and Calculus has opened up our whole next level of math fun!


Magic v. The Machines: Septimus Heap, Hyper-Literal AI, and Public Key Encryption



For a long time, now, Leo and I have been listening to audio books of long Fantasy and Steam Punk series, and at some point I’ll write a post — or probably several — about our responses to these, as there’s a lot to say. But occasionally we are able to make a contained and useful lesson out of some tidbit in these series. For example, the wonderful Leviathan series by Scott Westerfeld offers many opportunities ranging from discussion of evolutionary theory to the inventions of Nikola Tesla.

One of the ongoing issues that Leo and I discuss around these various series is the difference between fantasy and science fiction (or steam punk, which is what we tend toward). This is, of course, not a simple question, and the answer is probably as philosophical as it is problematic, so I’m not going to offer any deep theory of it here. But one of the things that is more-or-less clear is that in fantasy you can just make things up that have no bearing at all on possible physical reality, whereas in SciFi/steam punk, there is generally some plausible way in which at least most of the things that play a role in the story could have some plausible reality in our real universe. So, in fantasy you find magic, whereas in SciFi you get machines. Whether the machines are really plausible is debatable — sometimes they’re time machines, for example! But at least there’s no explicit magic in SciFi. SciFi sort of has to follow the rules, whereas fantasy can just make up magic. In fact, this is one of the problems with fantasy: If you can make up random magic, why can’t you just make up magic that solves whatever problem you have? Fantasy writers end up creating endless random constraints on their magic in order to keep the story interesting. (“You can only use the magical Time Turner twice on Tuesdays between the hours of 3 and 4 pm when the moon is blue.” … or whatever …)

The opposite direction is less constrained; you often find steam-punky quasi-machines in fantasy: doors with special complex locks seem to be a favorite, with special keys scattered all about. The series that we’re listening to at the moment is the extremely long, but wonderfully written Septimus Heap series, by Angie Sage (hereafter SH).

There are many interesting things about this series (and about all of the series that we’ve been reading and/or hearing), but I just want to draw out two small details in SH that struck me as particularly interesting and funny, in a somewhat STEM-educational sense.

Hyper-Literal AI Self-Driving Sled Gone Wrong

The climax of the 5th book in the SH series, Syren, turns on a bunch of under-sea tunnels, called “ice tunnels”. It’s not giving away too much to tell you that in order for our heroes to get through these tunnels fast, they utilize a magical sled, which they can call remotely, and which is some sort of AI that they can tell where to go within the ice tunnels. At one point our heroes call the sled, which shows up (just in time, of course!) and upon climbing aboard, they tell the sled to “Head for the nearest hatch!” The sled takes off like a shot, but in the wrong direction! Too late they realize that what they should have said is: “Head for the nearest hatch in the castle!” The literally nearest hatch is in the opposite direction!

Public Key Encryption Saves (actually nearly loses) the Kingdom to the Dark Domain

The resolution of the 6th book of the SH series, Darke, turns on the decoding of a spell that will save everyone from some sort of magical evil, called the “Dark Domain”. The spell is protected by a complicated split-key book cipher, with a few interesting properties. The first interesting property of the cipher is that each part lives on a circular disc — something akin to a DVD, but where you can read the “bits” off it with a strong magnifying glass. The circularity of the discs introduces an interesting problem, which they make a bit of in the story, which is that you don’t know where to start on a circular cipher! You could use the grammar of the language, or try all 49 (in this case) starting points, but Sage pulls out a plausible escape clause: these are many-hundred-year-old spells, and if you get the decoding wrong you could do more harm than good. Hilarity ensues, and, of course, good triumphs over evil at the last possible moment.

(Another slightly funny aspect of this situation — if you’re 8 years old! — is that one of the halves of the split key had been eaten by one of the bad guys. The heroes had to get him to throw it up, and then utilize the disgust-covered disc! I guess it could have been worse! 💩 )

All this is very interesting (or perhaps not), but the point for the present is that Leo and I were able to use this imaginary cipher as a jumping off point for discussion of public key encryption, which is also a split key cipher, the details of which basically followed this excellent Numberphile video. (Numberphile, by the way, is a terrific video series on math … or as the Brits call it, “Maths”.)
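To put a little math where the magic is, here is a toy RSA sketch in Python, following the usual textbook recipe (the same scheme the Numberphile video walks through). The primes here are deliberately tiny, fine for showing an 8-year-old, useless for real secrecy:

```python
# Toy RSA with tiny textbook primes. Real keys use primes
# hundreds of digits long.
p, q = 61, 53
n = p * q                  # public modulus
phi = (p - 1) * (q - 1)    # Euler's totient of n
e = 17                     # public exponent, chosen coprime to phi
d = pow(e, -1, phi)        # private exponent: e*d = 1 (mod phi); Python 3.8+

message = 42
ciphertext = pow(message, e, n)     # anyone can encrypt with (e, n)
decrypted = pow(ciphertext, d, n)   # only the holder of d can decrypt
print(ciphertext, decrypted)
```

The “split key” idea is right there: (e, n) is the public half you can shout from the rooftops, and d is the private half, which you’d better not let any bad guy eat.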


Bayes Rule for 8-year-olds



Leo has been fascinated for some time by the probabilities of things. He’s telling us all the time about the probability that, for example, the light will turn red before we get to it, or a meteor will destroy the earth tomorrow. Of course, he’s totally making up the numbers, although he does know that a probability lies between 0 and 1 (inclusive), that when something can’t possibly happen, it’s p=0.0, and that when it either is sure to happen, or has already happened, it’s p=1.0. He also knows the general principle that the p of n equally-likely events is 1/n, and he sort of gets the p of complex things generally in the right range. (For example, he puts the probability of the meteor at 0.0000…lots of zeros…and then a 1.)

Given that he’s fascinated by probabilities, my main principle of teaching, “Catch ’em while they’re interested!”, suggests taking this to the next level, so today we started into Bayesian probability updating. In order to do this, I had to devise a particularly simple example. Biomedical tests (a typical example) aren’t really gonna work for him. So I invented a little card game, that goes like this:

Take N ordered cards … I’ll start with 3 for these examples … so, say J Q K of the same suit. Could be 1 2 3, but I don’t want to get all confused with the numbers, so actually, I’m just gonna call them A B C. Shuffle them and lay them face down. Now, the question (hypothesis, in Bayesian terminology) is whether they have been laid out in order: ABC.  p(H) at this point is obviously 1/6 (could be any of ABC ACB BAC BCA CAB CBA) — this is our prior.

Okay, so we turn over the first card. In the easy case, it’s not an A. Intuitively, p(ABC|first<>A) should be zero, and indeed, Bayes’ rule gives p(first<>A|ABC) = 0, so the whole equation just obviously becomes zero, regardless of the other terms.

What? Oh, you’ve forgotten Bayes’ rule….Here you go:

p(H|D) = (p(H)*p(D|H))/p(D)

So, one more time, p(D|H) = p(first<>A|ABC) = 0, so regardless of the other numbers in the equation, the result is 0. (Assuming p(D) isn’t zero, which it never is in any real problem.)

Okay, now suppose first = A. What’s the “posterior”, that is the new p(ABC|first=A)? Now we have to ask about p(D) and p(D|H). p(D|H) = p(first=A|ABC) = 1 because it had to be an A if the hypothesis was true. Okay, so what’s p(D). Well, that’s just 1/3rd since there are three letters (ABC).

(You might be lured into thinking that p(D) is 1/2, since you’ve used up one of the letters (A), but that would be p(D|first=A); at this stage we still have all three letters. It’s common to confuse the order of explanation with the order of term evaluation in the equation! p(D) is unconditional at this stage: it is just the probability of the data given the structural properties of the situation. At this stage that is all three letters, so 1/3, whereas at the next stage it’ll be only two letters.)

Recall that p(H) was 1/6th (six possible sequences, as above).

So you end up with:

p(ABC|first=A) = (1/6 * 1)/(1/3) = 3/6 = 1/2

(Remember how to divide fractions? Invert and multiply!)

So there’s now a 1/2 = 0.5 = 50:50 chance that it’s the right sequence, i.e., ABC given that the first card is an A. That is: p(ABC|first=A)=0.5. This accords with intuition since there are now only two possible orders for the B and C cards.

Now we turn over the next card, and see that it’s B. At this stage, p(D) = 1/2 (only two possible letters!), p(D|H) = 1, and p(H) = 1/2, carried over from the previous stage (that stage’s posterior is this stage’s prior!). (This prior could also be written p(ABC|first=A).) So we have p(ABC|first=A & second=B) = (1/2 * 1)/(1/2) = 1. Tada!

And, of course, if we’d turned over C here, again p(D|H) would be 0 and the whole thing would come to a crashing zero again, which, again, correctly accords with intuition!
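The whole card-game calculation is easy to check mechanically. Here’s a Python sketch of the three-card game above, using exact fractions:

```python
from fractions import Fraction
from itertools import permutations

def posterior(prior, p_data_given_h, p_data):
    """Bayes' rule: p(H|D) = p(H) * p(D|H) / p(D)."""
    return prior * p_data_given_h / p_data

# Stage 0: six equally likely orderings of A B C, so the prior is 1/6.
prior = Fraction(1, 6)

# Stage 1: first card is A. p(D|H) = 1, p(D) = 1/3 (three possible letters).
after_first = posterior(prior, Fraction(1), Fraction(1, 3))
print(after_first)

# Stage 2: second card is B. p(D|H) = 1, p(D) = 1/2 (two letters left).
after_second = posterior(after_first, Fraction(1), Fraction(1, 2))
print(after_second)

# Sanity check by brute force: ABC is one of the orderings starting with A.
starts_with_a = [seq for seq in permutations("ABC") if seq[0] == "A"]
print(Fraction(1, len(starts_with_a)))
```

The brute-force count agrees with the chained Bayes updates: 1/2 after the A, and certainty after the B.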

It’s fun to go to very large numbers, which Leo also likes. That’s easy to do using cards by just asking exactly the same question as above, but using all 52 cards, that is, p(ABCDEF…). We know that, given a fair shuffle, this is very very very very small. (Here’s a terrific video about how very very very very small a given ordering from a shuffle really is!) I’ll call this “epsilon” (a number very very very very very close to zero!)

So in this case p(H) = epsilon (~=0), and p(D) = 1/52 the first time through. So although getting an Ace of Spades (assuming that’s what you are counting as the first ordered card) multiplies the probability 52 times, it’s still very very very very small, and will continue to be so for a very very very long time!

So, now I have to start engaging Leo when he generates random probabilities, for example of meteorites hitting the earth tomorrow, to reinforce his Bayesian thinking.

Postscript: Just a couple days after I posted this, the medical case actually came up in conversation. Leo was reading a book about a little girl who became deaf due to meningitis, and he became worried about whether he had meningitis, and how he would know. So we talked about how very unlikely it is in the first place (p(H) [prior] is very small), and how if he just had a headache, that made it more likely, but not a whole lot more likely (Although p(D|H) is high, p(D) for a headache is also pretty high, so they more-or-less cancel.) But that if there were two symptoms, like a stiff neck AND a headache … etc … and we sort of verbally worked through the equation, at least intuitively. (This was a car conversation, so I wasn’t about to whip out a pencil and calculator, much less real incidence data about headaches and meningitis.) He ended up being a lot less worried about having meningitis, so I guess that Bayes’ rule is actually good for kids to understand, at least intuitively.


The Young Lisper Meets Infinite Ravens



When I was a young lisper, of, say, 16 (which at the time was quite young, although these days would be considered quite old!), I read The Little Lisper (TLL). Of course, very few folks actually read TLL to learn Lisp; it’s mostly a curiosity among those who already know Lisp, or at least among quasi-adults trying to learn Lisp efficiently.

Usually, when one reads TLL, one just reads through the whole book in one sitting, the whole exercise taking a whopping hour. When approached in this way, TLL seems a bit silly; after a couple of pages, one says to oneself: “Oh come on! This is silly! Just give me a damned language manual!” (Indeed, I was one who said exactly these words, so I wrote my own!)

Now, however, I think that I — and pretty much everyone — was approaching TLL the wrong way. Obviously it’s not meant for those who know Lisp already. But I also don’t think that it’s meant for adults (or even quasi-adults) who want to learn Lisp efficiently. I contend that TLL is actually meant for CHILDREN (Soylent green is people!), or those of us who can put our mind in the mode of being a child, which I think is actually extremely difficult.

Brief aside: When you apply for a teaching job, they ask you to describe your “educational (or teaching) philosophy”. As one might expect, this is a difficult question, and I’m not going to try to give my whole teaching philosophy. But one of its pillars is this: You only get people’s (esp. children’s!) attention for a couple of minutes at a time, so be sure to do tiny fun things, and build them up over days, weeks, months, and years to reach where you want to go.

So when I decided that it was time for Leo to learn Lisp — specifically, when he turned 8, and was proficient at those “baby languages”, like Scratch and Hopscotch — I deployed my afore(partially)described teaching philosophy: I started with atoms, then lists, and so on, building up Lisp concepts and programming, no “lesson” consisting of more than about 10 minutes, and usually containing only one or two concepts.

This worked exceptionally well with Leo (as you’ll see below). But just today, after a couple of weeks at this, and now writing some actually vaguely-interesting programs, it just hit me that I was doing The Little Lisper ALMOST EXACTLY! Of course, I wasn’t actually using TLL; rather, my teaching method enacted TLL! This is when the aforementioned realization hit me: that Soylent Green is… er… that TLL is written for children (or, again, more likely, adults with child-like minds — which probably describes Lispers of a certain age).

Okay, so this long winding dissertation is all by way of introducing Leo’s first substantial Lisp program, which we have called…


First we create two variables WORDS and PATTERNS:

(setf words '(
          Once upon a midnight dreary  while I pondered  weak and weary
          Over many a quaint and curious volume of forgotten lore
          While I nodded  nearly napping  suddenly there came a tapping
          As of some one gently rapping  rapping at my chamber door
          Tis some visitor  I muttered  tapping at my chamber door
          ))

(setf patterns '(
          (Once upon a midnight dreary  while I pondered  weak and weary  )
          (Over many a quaint and curious volume of forgotten lore  )
          (    While I nodded  nearly napping  suddenly there came a tapping  )
          (As of some one gently rapping  rapping at my chamber door  )
          ( Tis some visitor  I muttered  tapping at my chamber door  )
          (            Only this and nothing more  )
          (    Ah  distinctly I remember it was in the bleak December  )
          (And each separate dying ember wrought its ghost upon the floor  )
          (    Eagerly I wished the morrow  vainly I had sought to borrow )
          (    From my books surcease of sorrow sorrow for the lost Lenore  )
          (For the rare and radiant maiden whom the angels name Lenore  )
          (            Nameless here for evermore  )
          (    And the silken  sad  uncertain rustling of each purple curtain )
          (Thrilled me filled me with fantastic terrors never felt before  )
          (    So that now  to still the beating of my heart  I stood repeating )
          (     Tis some visitor entreating entrance at my chamber door  )
          (Some late visitor entreating entrance at my chamber door   )
          (            This it is and nothing more  )
          ))

(Yeah, yeah, we could have computed words from patterns…cut the 8 year old a break! He’s only been doing Lisp for about an hour total, over a couple weeks!)

Okay, so you can see where this is going:

(defun random-raven (w p)
  ;; For each line of the pattern poem, print a line of the same
  ;; length made of words drawn at random from w.
  (loop for line in p
        do (print (loop for word in line
                        collect (nth (random (length w)) w)))))

Et voila!


And so on! If read with the correct Vincent Price horror movie voice and prosodics, this stuff sounds both great and hysterically funny!

Next week… Eliza?! 🙂


Leo and Jeff’s Zillion Notations



In other posts I’ve talked about Leo being fascinated by giant numbers, and a while back, based on his interest, I proposed a joke number called 11Z, pronounced “eleventy zillion”. Today we put some specific mathematical meat on this notation, in two different ways, which we agreed to call “Leo Numbers” v. “Jeff Numbers”.

Jeff’s Z Numbers:

Both are based on Jeff’s Numbers, which is the following exponentiation ladder:

nZ = n^n^n^n… n times.

For example:

3Z = 3^3^3

Now, there is immediately a problem, which is which way to evaluate the ladder.

Does 3Z = (3^3)^3 = 27^3 = 19,683, or is it 3^(3^3) = 3^27 = 7,625,597,484,987?

I think the conventional mathematical default is actually right-associative (i.e., down the ladder, from the top), but I hate these defaults because they aren’t explicit, so I prefer an explicit notation telling you which way to evaluate the ladder.

So to clarify this, we use arrows as:

3Z→ = (3^3)^3 = 27^3 = 19,683 — called “3 zillion up”

3Z← = 3^(3^3) = 3^27 = 7,625,597,484,987 — called “three zillion down”

Now, Leo, who likes scientific notation and thinks in terms of Googol and Googolplex, thinks of Z notation slightly differently.

Leo’s Z Numbers:

3Z→ = 3e19,683


3Z← = 3e7,625,597,484,987

So, just as Z-down notation is way bigger than Z-up notation, Leo numbers are way bigger than Jeff numbers!
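Since both readings bottom out in ordinary exponentiation, the small cases are easy to check. Here’s a Python sketch of our (made-up, remember!) Z→ and Z← ladders, taking advantage of Python’s arbitrary-precision integers:

```python
from functools import reduce

def z_up(n):
    """nZ-up: evaluate the ladder bottom-up, i.e. ((n^n)^n)^n ..."""
    return reduce(lambda acc, _: acc ** n, range(n - 1), n)

def z_down(n):
    """nZ-down: evaluate the ladder top-down, i.e. n^(n^(n^...))."""
    return reduce(lambda acc, _: n ** acc, range(n - 1), n)

print(z_up(3))    # (3^3)^3
print(z_down(3))  # 3^(3^3)
```

Don’t try z_down(5) unless you have a lot of memory and patience; the down-the-ladder numbers explode spectacularly fast.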

A weird observation on powers of 2…



(…until you think about it for a few seconds, then it’s not so weird anymore.)

Leo’s been fascinated recently by powers of 2, mostly because weird things happen in Minecraft at boundaries of powers of 2, because of floating point overflow and such-like phenomena. (Actually, weird things happen in all computers at boundaries of powers of 2, for the same reason.) As a result of his interest in powers of 2, he spends inordinate periods playing with calculators, and with Wolfram Alpha, and we’ve done a bunch of Lisp, which has built-in Bignums that let you do arbitrary precision arithmetic. This has all actually been very mathematically and computationally educational, but today he noticed something apparently very odd and surprising about powers of 2.

We all know that multiplying by 2 repeatedly gives powers of two, as: 2, 4, 8, 16, … And if you keep dividing by two, the rational representation is, unsurprisingly, the inverse, as: 1/2, 1/4, 1/8, 1/16, …. Leo noticed that the decimal representation of these progressive divisions by 2 seem to be powers of 5, not 2, as: 0.5, 0.25, 0.125, 0.0625, and that in order to get decimal equivalents of the increasing powers of 2 (2, 4, 8,…) you have to divide by 5, not by 2 as: 1/5=0.2, /5=0.04, /5=0.008, /5=0.0016, etc!

A moment’s thought will reveal this to be not very surprising, as 2 and 5 are inextricably bound together in base-10 arithmetic (because 10 = 2*5). So, even though the mathematical reason for this is obvious (after a moment’s thought), it remains slightly weird that 2, 4, 8, … are the same as 2.0, 4.0, 8.0, … but 1/2, 1/4, and 1/8, … are 0.5, 0.25, 0.125, …. Put even more starkly: 2/1, 4/1, 8/1, … invert naturally to 1/2, 1/4, 1/8, … but the decimal representations don’t “invert” in the same easy way: 2.0 -> 0.5, 4.0 -> 0.25, 8.0 -> 0.125, …. My friend, Eric, notes that: “Dividing by 2 is like multiplying by 5 and dividing by 10. Hence the successive powers of 5 and convenient shifting of the decimal point.” This just re-emphasizes the fact that rational representations (i.e., fractions, as 2/1 and 1/2) are very different from decimal, place-value representations; they just work differently, is all there is to it, regardless of the faux amis of 2.0 (that is, decimal 2.0) and 2 (i.e., the number 2), and our shorthand for rationals with unit denominator (as: 2/1 => 2) having similar-looking overt forms.

Continued thoughts: Fractions (rational notation, as in 2=2/1=10/5, etc.) aren’t place-value! The separate components (numerator and denominator) are place value, but the whole thing isn’t! This leads to the weird effect that if you keep dividing by 2 from, say, 4, you get a nice balanced progression: 4, 2, 1, 1/2, 1/4, and so on, whereas when you divide place-value notation by 2s you get 4.0, 2.0, 1.0, 0.5, 0.25, etc. The former (rational — fractional) notation is really just multiplying the denominator by 2 (i.e., dividing by 2), so the above sequence could, more obviously, be written: 4/1, 4/2, 4/4, 4/8, 4/16, etc. (you could, of course, start anywhere you wanted: 8/1, etc.). You can do the same trick in decimal notation by multiplying by, say, 1000: 8000, 4000, 2000, 1000, 500, 250, 125, etc. That looks a lot less odd, and you can see that it becomes “5-ish” when you have to divide what is an odd number: 1!
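The whole observation boils down to 1/2^n = 5^n/10^n: the digits of the decimal representation are exactly a power of 5, with the decimal point shifted n places. A couple of lines of Python confirm it:

```python
# 1/2^n = 5^n / 10^n, so the decimal digits are a power of 5.
for n in range(1, 5):
    decimal = 1 / 2 ** n
    print(f"1/2^{n} = {decimal}   (compare 5^{n} = {5 ** n})")
    # The identity that makes it work: 2^n * 5^n = 10^n.
    assert 2 ** n * 5 ** n == 10 ** n
```

These particular values (0.5, 0.25, 0.125, 0.0625) happen to be exactly representable in binary floating point too, which is why the printout is clean rather than full of round-off fuzz.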


Klein Clothing (Soft Topology)


Leo came to me the other day and told me that he couldn’t put on his pants because they were “Klein pants”. Um…whaaaaa?

Turns out that he had managed to get one leg inserted into the other in such a way that it really did seem like his pants were working something like a Klein Bottle: You could put your legs down different leg holes at the top, and they would come out in the same place!

Now, one obvious way to get this effect is to simply put your legs into the bottoms instead of the top, and then they’ll both come out the single waist. But the tangle that Leo had managed to get his pants into was actually more (unintentionally) sophisticated than that. It really did seem for a moment like there was something impossible about what he’d managed to do with them.

I thought about taking a picture of it, but it’s sort of hard to see (like trying to see what’s special about a Klein bottle that isn’t made of clear glass), so instead I’ll simply explain the way you get this effect: Simply stick one leg down the other leg from the inside. The effect is that if you start by putting both legs into different holes, as normal, they end up coming out the same hole at the bottom! This is initially very weird, but it’s obvious what’s going on once you look at it for a moment…well, so is a Klein Bottle, I guess.

[Edit: A reader points out that all topology is “soft” by definition, in that anything that is specified as a topological assertion is, by definition, flexible! This is why you get to do things like turn spheres inside out, morph mugs, and such-like transformations, as long as you retain their defining properties.]


Fun Minimum Description Length Game


As I’ve noted before, Leo is fascinated by huge numbers. Tonight we were trying to figure out what Tree(3) means, and we got all into recursive graph representations. That was all a little too complex (even for me, at least in trying to explain it to a 7-year-old!), so I decided to try to flip the problem on its head. We created a game called…well, computer scientists would call it the Minimum Description Length game, but we just called it the Shortest Equation game.

You start with a pretty big number, say 748. The goal is to use any normal math operations that a 2nd grader would know (i.e., +-/*, powers and roots, !, and that sort of thing) to create the shortest equation that gets you to that number, using only single-digit numbers (and allowing 10). So, you can’t just say, for example, 748=748, or 700+48, but need to break it down to single-digit numbers (and 10s).

Score each digit, and operation, including parens, as a character. (And we were a little fast and loose about whether you can rely upon order-of-operations, which I usually don’t like, and would rather it never be taught, but then you end up with a LOT of parens!) You can play against one another, or collaboratively.

Here’s our whiteboard after playing 748 for a few turns:


So, Leo started with (7*10*10)+(10*9)+8 for 18 characters, and we got all the way down to 8 chars with 3^3^2+10+9 (which doesn’t have the up-arrows if you use superscript notation…and notice that this is one place where we’re playing a little fast-and-loose with order of operations: we’re reading 3^3^2 as (3^3)^2 = 27^2 = 729, whereas you could also read it as 3^(3^2) = 3^9, so we really should have used two parens, but it’s still only 10 long! 🙂