A weird observation on powers of 2…



(…until you think about it for a few seconds, then it’s not so weird anymore.)

Leo’s been fascinated recently by powers of 2, mostly because weird things happen in MineCraft at boundaries of powers of 2, because of floating-point overflow and such-like phenomena. (Actually, weird things happen in all computers at boundaries of powers of 2, for the same reason.) As a result of his interest in powers of 2, he spends inordinate periods playing with calculators, and with Wolfram Alpha, and we’ve done a bunch of Lisp, which has built-in Bignums that let you do arbitrary-precision arithmetic. This has all actually been very mathematically and computationally educational, but today he noticed something apparently very odd and surprising about powers of 2.

We all know that multiplying by 2 repeatedly gives powers of two: 2, 4, 8, 16, … And if you keep dividing by two, the rational representation is, unsurprisingly, the inverse: 1/2, 1/4, 1/8, 1/16, …. Leo noticed that the decimal representations of these progressive divisions by 2 seem to be powers of 5, not 2: 0.5, 0.25, 0.125, 0.0625, and that in order to get decimals whose digits run through the increasing powers of 2 (2, 4, 8, …) you have to divide by 5, not by 2: 1/5=0.2, 0.2/5=0.04, 0.04/5=0.008, 0.008/5=0.0016, etc.!

A moment’s thought will reveal this to be not very surprising, as 2 and 5 are inextricably bound together in base-10 arithmetic (because 10 = 2*5). So, even though the mathematical reason for this is obvious (after a moment’s thought), it remains slightly weird that 2, 4, 8, … are the same as 2.0, 4.0, 8.0, …, but 1/2, 1/4, and 1/8, … are 0.5, 0.25, 0.125, …. Put even more starkly: 2/1, 4/1, 8/1, … invert naturally to 1/2, 1/4, 1/8, …, but the decimal representations don’t “invert” in the same easy way: 2.0 -> 0.5, 4.0 -> 0.25, 8.0 -> 0.125, …. My friend, Eric, notes that: “Dividing by 2 is like multiplying by 5 and dividing by 10. Hence the successive powers of 5 and convenient shifting of the decimal point.” This just re-emphasizes the fact that rational representations (i.e., fractions, as 2/1 and 1/2) are very different from decimal, place-value representations; they just work differently, is all there is to it, regardless of the faux amis of 2.0 (that is, decimal 2.0) and 2 (i.e., the number 2), and our shorthand for rationals with unit denominator (as: 2/1 => 2) having similar-looking overt forms.

Continued thoughts: Fractions (rational notation, as in 2=2/1=10/5, etc.) aren’t place-value! The separate components (numerator and denominator) are place-value, but the whole thing isn’t! This leads to the weird effect that if you keep dividing by 2 from, say, 4, you get a nice balanced progression: 4, 2, 1, 1/2, 1/4, and so on, whereas when you divide place-value notation by 2s you get 4.0, 2.0, 1.0, 0.5, 0.25, etc. The former (rational, i.e., fractional) notation is really just multiplying the denominator by 2 (i.e., dividing by 2), so the above sequence could, more obviously, be written: 4/1, 4/2, 4/4, 4/8, 4/16, etc. (you could, of course, start anywhere you wanted: 8/1, etc.). You can do the same trick in decimal notation by multiplying by, say, 1000: 8000, 4000, 2000, 1000, 500, 250, 125, etc. That looks a lot less odd, and you can see exactly where it becomes “5-ish”: the moment you have to halve an odd number (125 here, or the 1 in the original sequence)!
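For the programmers in the audience: Leo’s observation is just the identity 1/2^n = 5^n/10^n, and a couple of lines of Python will happily confirm it:

```python
# Halving and powers of 5 are the same thing in base 10: 1/2**n == 5**n / 10**n.
for n in range(1, 5):
    assert 1 / 2**n == 5**n / 10**n
    print(f"1/2^{n} = {1 / 2**n}   (5^{n} = {5**n})")
```

(These comparisons are exact, by the way; reciprocal powers of 2 are among the few decimals that floating point represents perfectly.)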


Klein Clothing (Soft Topology)


Leo came to me the other day and told me that he couldn’t put on his pants because they were “Klein pants”. Um…whaaaaa?

Turns out that he had managed to get one leg inserted into the other in such a way that it really did seem like his pants were working something like a Klein Bottle: You could put your legs down different leg holes at the top, and they would come out in the same place!

Now, one obvious way to get this effect is to simply put your legs into the bottoms instead of the top, and then they’ll both come out the single waist. But the tangle that Leo had managed to get his pants into was actually more (unintentionally) sophisticated than that. It really did seem for a moment like there was something impossible about what he’d managed to do with them.

I thought about taking a picture of it, but it’s sort of hard to see; like trying to see what’s special about a Klein bottle if it weren’t made of clear glass. So instead I’ll simply explain the way you get this effect: Simply stick one leg down the other leg from the inside. The effect is that if you start by putting both legs into different holes, as normal, they end up coming out the same hole at the bottom! This is initially very weird, but it’s obvious what’s going on once you look at it for a moment…well, so is a Klein Bottle, I guess.

[Edit: A reader points out that all topology is “soft” by definition, in that anything that is specified as a topological assertion is, by definition, flexible! This is why you get to do things like turn spheres inside out, morph mugs, and such-like transformations, as long as you retain their defining properties.]


Fun Minimum Description Length Game


As I’ve noted before, Leo is fascinated by huge numbers. Tonight we were trying to figure out what TREE(3) means, and we got all into recursive graph representations. That was all a little too complex (even for me, at least in trying to explain it to a 7-year-old!), so I decided to try to flip the problem on its head. We created a game called…well, computer scientists would call it the Minimum Description Length game, but we just called it the Shortest Equation game.

You start with a pretty big number, say 748. The goal is to use any normal math operations that a 2nd grader would know (i.e., +, -, *, /, powers and roots, !, and that sort of thing) to create the shortest equation that gets you to that number, using only single-digit numbers (and allowing 10). So, you can’t just say, for example, 748=748 or 700+48, but need to break it down to single-digit numbers (and 10s).

Score each digit and operation, including parens, as one character. (And we were a little fast and loose about whether you can rely upon order of operations, which I usually don’t like, and would rather it never be taught, but then you end up with a LOT of parens!) You can play against one another, or collaboratively.

Here’s our whiteboard after playing 748 for a few turns:


So, Leo started with (7*10*10)+(10*9)+8 for 18 characters, and we got all the way down to 8 chars with 3^3^2+10+9 (which doesn’t have the up-arrows if you use superscript notation). And notice that this is one place where we were playing a little fast-and-loose with order of operations: we read 3^3^2 as (3^3)^2 = 27^2 = 729, whereas by the usual right-associative convention it would be 3^(3^2) = 3^9, so we really should have used two parens, but even then it’s still only 10 long! 🙂


Matrix Wars (1.01)



Continuing in our exploration of gamified higher math, Leo and I programmed up a version of Space Invaders in HopScotch that depends on matrix multiplication. It took only a few hours to create a pretty interesting game. Below are some screenshots from the game. It’s a bit hard to explain the game play; I recommend playing it yourself here!


The matrix (Vmn) is constantly being multiplied by the current target (Txy) to create the new values (Nxy):


You click on the V(mn) entries to roll them through -1, 0, and 1, and then when you have an N(xy) result you like, click “Fire” to load the new N(xy) into the target T(xy). Since the matrix multiplication is being performed continuously, as soon as you load the N(xy) into the T(xy), a new N(xy) is re-computed. It’s easy to create matrices that move the target around in more-or-less any way you like.
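In code terms, the whole mechanic is one small matrix-vector multiply per “Fire”; here’s a minimal sketch (the names are mine, not from the HopScotch project):

```python
def step(v, t):
    """Multiply the 2x2 control matrix V by the target vector T to get N."""
    (a, b), (c, d) = v
    x, y = t
    return (a * x + b * y, c * x + d * y)

target = (3, 2)
swap = ((0, 1), (1, 0))    # entries are restricted to -1, 0, and 1
print(step(swap, target))  # (2, 3): swaps the coordinates
```

With entries limited to -1, 0, and 1 you can swap, zero out, or negate either coordinate, which is why the target can be steered more-or-less anywhere.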


Meanwhile, the invader is getting closer and closer. If you hit the invader before it hits earth, you win.



PythonCraft: Chasing the Sort Front



Leo watched an extremely good TED-Ed video about sorting algorithms, and wanted to try it out himself, so we wrote a simple bubble sort in Python, and we deployed it through the PythonCraft API to run in MineCraft. This turned out to be more fun and educational than I thought it was going to be.

To begin with, here’s the code (slightly cleaned up for display here; you’ll have to add the right imports, for example):
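(The listing is in the screenshot; for readers who’d rather copy-paste, here’s a stand-alone sketch in the same spirit, with a plain Python list standing in for the row of MineCraft blocks, my own names, and an off-by-one of the kind discussed below baked in. The real code’s bug may differ in detail.)

```python
def bubble_sort(blocks):
    """Bubble sort a row of block color codes, with an explicit Hold variable."""
    n = len(blocks)
    for i in range(n - 1):
        # Off-by-one: range(n - 2 - i) never compares the last element,
        # so one block gets stranded, unsorted, at the far end.
        for j in range(n - 2 - i):
            if blocks[j] > blocks[j + 1]:
                hold = blocks[j]           # the Hold that sparked the discussion
                blocks[j] = blocks[j + 1]
                blocks[j + 1] = hold
    return blocks

print(bubble_sort([5, 1, 4, 2, 3]))        # [1, 2, 4, 5, 3]
```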


Unsurprisingly, we had the longest discussion about why you need the “Hold” variable in the swap step(s). There are two errors in this code that I’ll reveal below. (The highly motivated reader can try to figure them out; actually, any semi-competent programmer can probably guess what the errors are without even looking at the code!) Regardless, it does the basic job.

So, it’s actually extremely cool when it runs, because you can watch the sort in action, and actually chase down the sort front! Here are a couple of videos of us doing that:

(Note the sheep on the blocks!)

You can see (and I mention) one of the bugs in the video: The “Obi wan” (Off By One) error, which leaves one of the blocks unsorted at the far end of the array.

The other bug is more subtle, until I point it out: We actually intended to sort by rainbow order (ROYGBIV), but that would have required another dereference, and when I tried to explain that, it went right over Leo’s head, him already being well snowed by the Hold/Swap problem, so I simply let it go. As a result, it’s actually sorting by whatever the color codes happen to be in MineCraft, which seem to be random (or at least I can’t tell what the logic is that relates MC color IDs to rainbow order).


NewtonCraft: Rocket Engine Blocks



This year I’ve been making a concerted effort to “climb the mountain of algebra” with Leo through physics (like some sort of Newtonian Julie Andrews*). One way into this that works extremely well in motivational terms is (unfortunately) via MineCraft. Here’s an early example of using MineCraft to motivate some algebra. This week we did an interesting problem on Newton’s second law. (That’s the F=ma one, in case you don’t remember the order!) The pages are reproduced below, and if you click any of them, they link to a downloadable PowerPoint which contains a bunch of “fun” math worksheets, including these (although, of course, “fun” is in the eye of the student).
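To give the flavor of the worksheet computations (the numbers here are made up for illustration; the real ones are in the PowerPoint):

```python
# Newton's second law: F = m * a, so a = F / m.
mass_kg = 1200.0    # hypothetical loaded minecart plus Rocket Engine Block
thrust_n = 3000.0   # hypothetical engine thrust, in newtons
accel = thrust_n / mass_kg
print(accel)        # 2.5 (m/s^2)
```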


By the way, although there really is a MineCraft Mod called “Mekanism”, which has a lot of cool stuff, like lasers and a fusion reactor, it doesn’t actually have an REB … I made that up!

(*) This is an oblique reference to the wonderful movie “In The Loop”. Search for “Climb the mountain of conflict”, and watch the youtube video. I won’t link to it because it’s slightly NSFW (“strong language”, as they say), and definitely NSFK! But if you’re an adult, it’ll make you truly lol!


Fun (and Educational) Fluid Physics



Off of quantum mechanics for a moment, and back to the world of macroscopic (Newtonian) fluid physics. We’ve been playing a lot with fluid (particle-based) simulations lately, looking at lift, drag, turbulence, the Bernoulli effect, eddies, and other such-like chaotic phenomena. (“Macroscopic” would be putting it a bit too strongly; maybe “mesoscopic” is the right word?)

I had in mind actually making a Navier-Stokes game, but it turns out that there are already a bunch of great apps out there that simulate these phenomena perfectly well, and are also very fun! I’ve tried out at least a dozen — I only bother with the free ones! — and two are of special note.

The first is Wind Tunnel (Free), by Algorizk (which I assume is either a name or a pun). This is a simple but extremely nice app with all the right capabilities. You can draw arbitrary shapes, view in particle, smoke, pressure, or speed modes, and calculate overall lift and drag. Here are some examples:


There are a bunch of pre-drawn objects (although not many), and you can draw your own objects. My only complaint about this app is that you can’t rotate solid bodies, like the wing above, in order to experiment with angle-of-attack (that is, the orientation of the object with respect to the flow).

The other great example of a fluid sim is Powder Game. The free version is ad-supported, but the ads are along the bottom, and not too annoying. In addition to being a Newtonian fluid simulator, Powder Game has tons of special types of particles:


This lets you do a ton of very fun experiments, like exploding things and watching the chaotic dynamics on Nitro!


You can spend hours with either of these apps in the perfect paradigm of learning-by-playing.

In another post I’ll talk more about some great Newtonian sims that are truly macroscopic, which is fun (and educational) of a very different sort, at a very different level.


HopScotching towards Quantum Computing



Leo’s FLG (Focused Learning Goal) for this year is to build a real quantum computing mod in MineCraft. (Note that the kids set these goals for themselves at the beginning of the year, and although I might have slightly influenced his choice of project, the “in MineCraft” setting was all Leo!) This came from several sources, aside from just the obvious entanglement of his interest in quantum computers with MineCraft. The main one is that there are two quite cool MineCraft mods: one, called qCraft, that adds a sort of quasi-quantum mechanics, and another, called Mekanism, that adds all sorts of advanced devices, esp. lasers. The qCraft mod actually has quantum computer components, but they are not very elegantly done; we wanted to make it a little more like a real quantum computer by using the Mekanism lasers as the qubit sources, and then add optical components for the gates.

(If you don’t know what the heck I’m talking about, there are innumerable videos on quantum computation on youtube, but if you REALLY want to understand it, I strongly recommend this excellent set of short lectures by Michael Nielsen, which not only nicely demystifies quantum computers, but at the same time demystifies quantum mechanics!)

Anyway, so we needed to get our feet a tiny bit wet toward this pretty massively complex FLG. Unfortunately, direct MineCraft modding is done either in Java or Python. (There are a few little experiments in modding in scratch-like languages, but, much as they are nice tries, they have issues, so I decided not to bother with them, at least for the moment.)

To get our feet wet using a programming paradigm that Leo already knows, we chose HopScotch. Together we created a highly simplified quantum computer game in HopScotch. We only have one gate, a Hadamard gate, and the quantum state is highly simplified, having only the possibilities of being 1 or 0 or 50/50, no complex numbers nor normalizations.

But it’s kinda cool, none-the-less:


The circle represents a qubit that might be a photon, for example, and its color represents its quantum state: green is definite 1, and red is definite 0. It travels a continuous loop from left to right, and then re-appears on the left again, as though it’s on a quantum wire that’s looped around from the end to the start.

When the program starts, the photon’s quantum state is definite 1 (1.0|1>+0.0|0>), and so the photon is green. The M box on the right measures the quantum state, “collapsing” it to 1 or 0. If you just let the program run from the start without doing anything (as above), the measurement gate will just keep reading 1, and incrementing the 1 count. The photon will just stay green.

The hex is a Hadamard gate (H gate), which splits the quantum state in half: 0.5|1>+0.5|0>. (Remember that we’re simplifying here, so there are no complex values or normalizations, and I’m using probabilities instead of amplitudes; I did say “simplified” and “baby steps”, right?!) If you drag the hex into the photon’s path (pic below), the state becomes a mixture of 1 and 0, and the color becomes a (ugly) mixture of red and green:


When the qubit in this 50/50 state gets measured (that is, when it hits the M gate), there’s a 50/50 chance of “collapsing” into a 1 or a 0. It’ll change to either red or green, start again at the left, hit the H gate again, and so on. If you let it run like that for a while, the counts of 1 and 0 will come out the same, statistically speaking.
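If you want the same toy model without HopScotch, the whole simplification fits in a few lines of Python (my own sketch, with probabilities instead of amplitudes, just like the game):

```python
import random

def hadamard(state):
    """Simplified H gate: whatever comes in leaves as the 50/50 mixture."""
    return "50/50"

def measure(state, rng):
    """The M box: collapse 50/50 to 1 or 0; definite states just read out."""
    return rng.choice([0, 1]) if state == "50/50" else state

rng = random.Random(0)
counts = {0: 0, 1: 0}
state = 1                          # start as definite |1> (green)
for _ in range(10000):
    state = hadamard(state)        # photon passes the hex
    state = measure(state, rng)    # photon turns red or green
    counts[state] += 1
print(counts)                      # statistically, about 5000 each
```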

You can try it out yourself on HopScotch on your iPad. (I’m told that HopScotch now allows you to run code in your browser, so you don’t need to actually have HopScotch on an iPad, although I highly recommend HopScotch on an iPad!)


Some pointless Pi/factorial numerology



As I’ve previously mentioned, Leo loves large numbers. While we were looking at factorials with huge numbers of digits in the result — a favorite puzzle being predicting the number of trailing zeros — he noticed that there were a few digits of Pi embedded in one of them. So we wrote a program to find the largest run of Pi embedded in a factorial…or at least as far as we felt like letting the program run, which was up to 26000!

Here’s the log:

Pi digits   smallest n (for n!)
 1                8
 2               32
 3               35
 4              116
 5              147
 6              380
 7             3057
 8             5599
 9            14192

That’s it under 26000! I’ll spare you the 52765 digits of 14192! Suffice it to say that the sequence “314159265” shows up at the 48996th digit!

(Before you complain, of course there are many occurrences of shorter sequences all over the place. This is just the location of the first occurrence of each next longest subsequence, so, for example, there are 3, and 31, and 314’s all over the place, but the first 314 occurs in 35!)

I’m not going to bother showing you the couple of lines of simple Lisp code that it took to program this up. …Exercise for the reader!   🙂
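(Okay, fine: for the Lisp-averse, here’s an equivalent sketch in Python. PI below is just the first ten digits of Pi, hard-coded, which is as far as our search got.)

```python
from math import factorial

PI = "3141592653"  # first 10 digits of Pi, digits only

def longest_pi_run(n):
    """Length of the longest prefix of PI found in the digits of n!."""
    digits = str(factorial(n))
    k = 0
    while k < len(PI) and PI[: k + 1] in digits:
        k += 1
    return k

# Print each n where the record improves (the first few rows of the log above):
best = 0
for n in range(2, 400):
    k = longest_pi_run(n)
    if k > best:
        best = k
        print(k, n)
```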

An Exploration of Molecular Rigidity



As I mentioned a few posts ago, Leo and I have started receiving Science Magazine, and each week, in addition to paging through the issue looking at interesting pictures, I try to find something that we can actually do together that is related to one of the papers (even if the relationship is tenuous).

This week Leo became interested in this paper:


Okay, so I’ll admit that, not only didn’t we actually read the paper, but I can only vaguely understand the abstract! However, what attracted our attention was this cool figure:


Leo has always liked complex molecular structures; since he was three, or so, he’s had a molecular construction kit, so these looked like fun!

What seemed most interesting about these molecules is that some of the structures appear to be rigid, while others appear to be floppy. Molecule 2.2, for example, seems to be floppy, whereas the 4.x structures (and 2.1) appear to be rigid.

Whether or not this ends up being really true (recall that we didn’t actually read the paper — see “tenuous” above! :-)), it’s a great excuse to explore rigid v. floppy structures, which we did!

It turns out to be surprising, and surprisingly simple, to build structures that can be easily transformed from a rigid to a floppy structure, and back. This is a little hard to explain without seeing it, so here’s a video showing off these interesting structures, built out of K’nex. All you need to do to transform it from the rigid form:


To the floppy form:


You’ll have to watch the video to see how the conversion works.

These are actually amazingly fun to manipulate, and it’s quite surprising when you discover that just by changing one link you completely change the flexibility of the structure!

It appears that if you add even one additional link to this particular structure, you convert it into one that only has a floppy conformation, although I can’t prove that. Here’s Leo manipulating a slightly larger structure: