## Archive for the ‘Math in the News’ Category

### Soccer Math

June 19, 2014

The World Cup is happening! It’s inspiring to watch excellent soccer players…inspiring us to write about some excellent math. We’ll venture out on the field with a few soccer and math tidbits.

• The soccer ball that I think of as typical – in other words, the one that I remember from Days of Yore – is an Archimedean solid, made from 20 regular hexagons and 12 regular pentagons.  Specifically, it’s a truncated icosahedron, because it can be built by lopping the corners off of a regular icosahedron.  It’s also a buckminsterfullerene, although that’s only the formal name:  friends can call it a buckyball.  The buckyball was Red Hot News in 1985, because it was a new way of putting carbon atoms together.  Scientists Harold W. Kroto, Robert F. Curl, and Richard E. Smalley* named it after architect  R. Buckminster Fuller*, whose geodesic domes had inspired them to try to create such a carbon cluster.   But this soccer-ball-shaped solid doesn’t limit itself to Ancient Greeks and Modern Scientists, oh no.  It also dabbles in the arts, as shown in the photo below from Labor Park in Dalian, China.
• Soccer balls aren’t the only math-related thing in soccer: there’s also the number of people on a team.  Each team has 23 players, which means that on any given team there is a (slightly better than) 50% chance that two players share a birthday.  With 32 teams playing in the World Cup, you’d expect about half of them to have birthday-sharing teammates, and in fact, as the BBC pointed out earlier this week, exactly 16 of the 32 teams do.    For example, tomorrow (June 20) six players have birthdays, including two (Asmir Begovic and Sead Kolasinac) on the team from Bosnia and Herzegovina.  Now oddly enough, even though you’d expect half the teams to have teammates sharing a birthday, hitting exactly half is actually rather unlikely:  with a 50% chance of two teammates getting to share cake, the probability that exactly 16 of the 32 teams do so is only about 14% – it’s just that at that point it’s equally likely to be more or fewer teams.  Ironically, it’s rather unexpected to actually hit the expected value.
• One final math fact about the World Cup: one of the referees for yesterday’s match between Chile and Spain is actually a former high school math teacher!    Not all that former, either:  Mark Geiger taught in New Jersey alongside his brother, winning the Presidential Award for Excellence in Math and Science Teaching, but eighteen months ago he left teaching in order to referee full-time, hoping for a shot at the World Cup.  Not a bad gig, and he always has those math skills to fall back on if he finds he misses teaching.
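
The two birthday-paradox figures above are easy to check with a few lines of Python – a quick sketch, assuming 365 equally likely birthdays and ignoring leap days:

```python
from math import comb

def p_shared_birthday(n: int, days: int = 365) -> float:
    """Probability that at least two of n people share a birthday."""
    p_all_distinct = 1.0
    for k in range(n):
        p_all_distinct *= (days - k) / days
    return 1 - p_all_distinct

p23 = p_shared_birthday(23)       # just over 50% for a 23-player squad

# Chance that exactly 16 of the 32 squads have a shared birthday (binomial):
p_exactly_16 = comb(32, 16) * p23**16 * (1 - p23)**16

print(f"P(shared birthday among 23) = {p23:.3f}")           # ~0.507
print(f"P(exactly 16 of 32 squads)  = {p_exactly_16:.2f}")  # ~0.14
```

The 0.507 also explains the "slightly better than 50%" hedge: 23 players is the smallest squad size that pushes the shared-birthday probability past one half.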

*Whenever I type “[Occupation] [Person’s Name]” I get the urge to add “renowned” and then go read Da Vinci Code again.

The photo of the sculpture is by Uwe Aranas, Creative Commons License.  And if you didn’t follow the link to the BBC article, “The Birthday Paradox at the World Cup” by James Fletcher, it’s worth a read – it has a lot more detail about the birthday paradox and sports.

### Third Derivatives in the News (Again)

March 29, 2013

The news is no stranger to third derivatives, although it doesn’t sneak in very often – we’ve mentioned before the October 1996 issue of the Notices of the AMS in which Hugo Rossi wrote, “In the fall of 1972 President Nixon announced that the rate of increase of inflation was decreasing. This was the first time a sitting president used the third derivative to advance his case for reelection.”

Well, just like a blog that doesn’t post for so long that you figure it’s basically dead, and then all of a sudden out of nowhere WHOA there’s a new post (Hi everybody!), the third derivative made a new appearance recently.  On the appropriately palindromic (in US notation: 3-10-13) date of March 10, 2013, Paul Krugman wrote in the Opinion pages of the New York Times:

People still talk as if the deficit were exploding, as if the United States budget were on an unsustainable path; in fact, the deficit is falling more rapidly than it has for generations, it is already down to sustainable levels, and it is too small given the state of the economy.

Did you catch that?  The line about the deficit falling more rapidly than it has been?  Let’s take a closer look:

Assume that the National Debt at year t  is the original function:  D(t).  This is positive, since we have debt.

Then the Deficit is the derivative, D'(t).  It’s also positive, because the National Debt is increasing.

If the Deficit is falling, that means that the Deficit is a decreasing function, so the derivative of D'(t) – that is, D''(t) – is negative.  That would mean the Debt is still increasing, but concave down.

But the quote said the Deficit is falling more rapidly than it has been:  the derivative of the Deficit is getting more negative, so to speak.  In other words, the Deficit itself is decreasing and concave down, which means that D'''(t) is negative.
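
To make the sign-chasing concrete, here’s a quick numerical sketch.  The debt curve below is made up purely for illustration (not real budget data); it just satisfies all four conditions from the argument above:

```python
# Central finite-difference approximation of f'(x).
def d(f, x, h=1e-4):
    return (f(x + h) - f(x - h)) / (2 * h)

# Toy debt curve (illustrative only): positive, still rising,
# with a deficit that is shrinking ever faster.
def D(t):
    return 16 + t - 0.01 * t**3

t = 1.0
D1 = d(D, t)                                   # Deficit: positive
D2 = d(lambda x: d(D, x), t)                   # Deficit falling: negative
D3 = d(lambda x: d(lambda y: d(D, y), x), t)   # Falling ever faster: negative

print(D(t) > 0, D1 > 0, D2 < 0, D3 < 0)        # True True True True
```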

And so we have a third derivative!  Welcome back old friend!

### Math Mistakes and Misplaced Measuring

March 14, 2012

In an unfortunate tribute to Pi Day and the importance of mathematics, there was an article in the New York Times yesterday (March 13, 2012) illustrating that the people who need to measure parts don’t always know how:

“The employee responsible for finding a replacement part for a tower crane that ultimately collapsed on the Upper East Side in 2008, killing two workers, testified on Tuesday about his own difficulty with the basic math of measuring key components. Tibor Varganyi, whose formal education ended in the ninth grade in Hungary, struggled how to measure the distance between the roughly 30 bolt holes around a piece of the turntable assembly. He decided to use a ruler.”

The article (“Worker Tells Court He Lacked Math to Measure Crane Part” by Russ Buettner) goes on to explain how the measurements didn’t match up with expectations, so he switched to a protractor, which also didn’t work.  This particular replacement part was never used, and the article is primarily about the prosecution’s argument that the company focused on profits rather than worrying about the lack of expertise or safety, but the description is still worrisome.

That’s depressing.  We’d better recover by looking back at some old Pi Day Sudokus.

### Billions and Billions of Burgers

September 18, 2010

There’s a new restaurant in Manhattan called 4Food which is geared towards Social Networkers, according to this September 16 review on CNNMoney.com:

Customers who step into the restaurant are met by staffers ready to take orders on their iPads, a 240-foot screen featuring live Twitter feeds and Foursquare check-ins, and a menu that offers more than 96 billion customizable burger options.

This paragraph goes on to provide a link to this article, in which Stacy Cowley actually explains the mathematics.  It turns out that there are some items where you can pick only one: 1 bun among 5 choices; 1 scoop of something I don’t normally think of as being on a burger (mac n cheese and sushi are two of the options) among 18 choices, where one of the choices is “No thanks”; and finally 1 burger among 8 choices.  But for the 4 add-ons like lettuce, the 12 condiments, the 7 cheeses, and the 4 meat slices, you can have as many different kinds as you want, so each item independently contributes a factor of 2 (in or out).  This means the total number of combinations becomes

$5 \cdot 18 \cdot 8 \cdot 2^{\left( 4+12+7+4\right)}$

which is 96,636,764,160.
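
The arithmetic is easy to verify:

```python
# Pick-one categories: 5 buns, 18 "scoops", 8 burgers.
pick_one = 5 * 18 * 8

# Pick-any categories: 4 add-ons, 12 condiments, 7 cheeses, 4 meat slices;
# each item is independently in or out, hence a factor of 2 apiece.
pick_any = 2 ** (4 + 12 + 7 + 4)

total = pick_one * pick_any
print(f"{total:,}")              # 96,636,764,160

# Allowing "no bun" and "no patty" bumps the 5 to a 6 and the 8 to a 9:
total_with_skips = 6 * 18 * 9 * pick_any
print(f"{total_with_skips:,}")   # 130,459,631,616
```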

The article contains more specific information about each of the choices, and the many, many comments include additional information – such as the fact that in theory you should be able to order no bun or no meat – although the comments mostly seem to be arguing about whether or not the math is correct (no math mistakes that I see!).  Incidentally, if you allow the option not to have a bun or not to have a patty at all, which seem like reasonable options to include, the 5 in the equation changes to a 6 and the 8 to a 9, giving 130,459,631,616 combinations. And that’s a lot of choice.

### A Newsworthy Ha’penny

August 16, 2010

Here’s a good rule of thumb:  if you’re trying to calculate how much money to send an insurance company, it’s probably a good idea to round up.  That’s a lesson that La Rosa Carrington learned the hard way.

Carrington had health insurance under her job, and when she lost her job she was allowed to continue her health insurance under the federal COBRA law.  Trouble is, she didn’t get a bill, so she estimated the amount she would have to pay:  her payments were “a little over $471.87 per month” (according to The Gazette in Colorado Springs, where the story first appeared on July 6), but because of the 2009 American Recovery and Reinvestment Act she only had to pay 35% of that. Even without a bill from Discovery Benefits, she knew it was important to keep up the payments, especially because she was also undergoing chemotherapy for leukemia, so details like current health insurance coverage were decidedly nontrivial matters. She sent them a check for $165.15.  Trouble is, Discovery Benefits said she owed $165.16, and canceled her coverage.  She called, they refused to budge, until finally a supervisor did the calculation herself and decided that rounding the amount to $165.15 was actually right, or at least reasonable, and the penny was paid [either by the company or by a person in the company; it’s not clear which].

So be warned:  sending in that extra penny might be good insurance for your insurance.

The story could end there, since rounding is all mathematical in and of itself, but there’s a tangent that I’m still wondering about:  what’s the deal with the monthly payments being “a little over” $471.87 each month?  If the annual dues were $5,662.46, for example, then the monthly payments would be $471.87166…, which would round to $471.87, but 35% of the original $471.87166… would actually be $165.15508…, which does round to $165.16 under conventional rounding.  It seems plausible to me that the Benefits Computer was just rounding, and not necessarily rounding up all the time, and that the two rounds differed by a penny, which would make this a story not about rounding up versus rounding down, but about the compounding of rounding errors.  I looked at a few different reports on this, though, and never saw any mention of it, so it’s possible that the Benefits Computer really was rounding everything up, as implied.
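
Here’s a sketch of that compounding-rounding hypothesis in Python, using the hypothetical $5,662.46 annual dues from above (the `decimal` module avoids binary floating-point surprises when working in cents):

```python
from decimal import Decimal, ROUND_HALF_UP

def to_cents(x: Decimal) -> Decimal:
    """Round to the nearest cent, halves up (conventional rounding)."""
    return x.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

monthly = Decimal("5662.46") / 12         # 471.871666..., "a little over" $471.87
subsidized = monthly * Decimal("0.35")    # 165.155083...

# Round once, at the end: the insurer's figure.
print(to_cents(subsidized))                            # 165.16

# Round the monthly amount to cents first, THEN take 35%:
print(to_cents(to_cents(monthly) * Decimal("0.35")))   # 165.15
```

Same inputs, same rounding rule – the only difference is when you round, and that alone accounts for the disputed penny.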

### A billion here, a billion there; soon you’re talking a real portfolio!

May 6, 2010

What is a billion?

• In contemporary USA, it is 10⁹
• In contemporary France, it is 10¹²
• In the UK prior to the early 1970s, it is 10¹²
• In the UK after the early 1970s, it is 10⁹
• In most European countries, it is 10¹²
• In 19th C France, it is 10⁹
• In 14th – 18th C France, it is 10¹²
• It is a modernized spelling of “bymillion”, a word introduced in 1475 by Jehan Adam for a million². (He also coined the term “trimillion” for a million³, and similar vocabulary for higher powers, vestiges of which remain in our number systems.)

Apparently, a billion is also a wickedly large number of shares of stock to be trading at one time. For if you accidentally hit the “b” key instead of the “m” key at your computer, and thus execute a trade in billions of shares instead of millions of shares, you might cause the Dow Jones Industrial Average to drop 9 percent in a matter of moments on a Thursday afternoon. Or so I’ve heard.

### Calculus from Washington

November 1, 2009

The White House is talking about derivatives again!  As in Calculus, though that’s not the word being thrown around.  Christina Romer is the Chair of the Council of Economic Advisers, and a week ago she was quoted in an article in the Christian Science Monitor (from the October 22 JEC hearing) as saying:

Most analysts predict that the fiscal stimulus will have its greatest impact on growth in the second and third quarters of 2009… By mid-2010, fiscal stimulus will likely be contributing little to growth.

That article apparently caused some confusion, so she clarified the situation in The White House Blog:

As a teacher, I should have realized that many people have trouble with the distinction between growth rates and levels….When we go from no stimulus to substantial tax cuts, increases in government spending, and aid to state governments, this has a large effect on the growth rate of real GDP – just as when you press hard on your car’s accelerator and go from 0 to 60, you have a great change in your speed. This sense of acceleration is exactly what we have been experiencing since the start of the year. Fiscal stimulus has been steadily increasing, raising GDP growth by between 2 and 3 percentage points in the second quarter and between 3 and 4 percentage points in the third quarter….. We expect that stimulus will continue to have a positive effect on growth in the fourth quarter of 2009 and well into 2010, though, by design, not by as much as it did in the second and third quarters of 2009. As a result, we expect the largest effect of the stimulus on the levels of GDP and employment to occur well after the largest effects on growth rates.

At some point, the stimulus plateaus at a high level. That is important too. Such continued stimulus may not add much to growth, but it is keeping the levels of GDP and employment much higher than they otherwise would have been – just as keeping pressure on the accelerator keeps the car going at 60 mph.

So here’s another kind of situation to discuss in those calculus classes!  And presumably the words “point of inflection” could also be brought into play, since that is apparently where Christina Romer thinks we are right now.
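
For those calculus classes, the growth-rates-versus-levels distinction can be sketched numerically.  The logistic curve below is made up for illustration (not actual GDP data); it shows the growth contribution peaking at the inflection point even while the level itself keeps climbing:

```python
import math

# Toy logistic curve for the *level* of the stimulus effect (illustrative only).
def level(t):
    return 1 / (1 + math.exp(-(t - 2.0)))      # inflection point at t = 2

def growth(t, h=1e-5):
    """Growth contribution = derivative of the level (central difference)."""
    return (level(t + h) - level(t - h)) / (2 * h)

ts = [i / 10 for i in range(41)]               # t from 0.0 to 4.0
peak_t = max(ts, key=growth)

print(f"growth peaks at t = {peak_t}")         # t = 2.0, the inflection point
print(level(4.0) > level(peak_t))              # the level keeps rising: True
```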

*”again” referring to Hugo Rossi’s quote “In the fall of 1972 President Nixon announced that the rate of increase of inflation was decreasing. This was the first time a sitting president used the third derivative to advance his case for reelection.” from the October 1996 Notices of the AMS.

HT:  smb

### Information Gain in “Manager-Speak”

September 28, 2009

There’s a neat post over at Language Log on determining how likely it is that someone is a manager, given that they say “at the end of the day.”  It is the latest in a recent thread (parts 1, 2, 3) about irritating phrases being associated (often incorrectly) with irritating people (see here for an earlier discussion).

### Multitasking

September 3, 2009

Last week I ran across an article by Randolph E. Schmid that, judging from the Google search I just did, made its way all around the networks [here it is at the Huffington Post], but that I still find fascinating:  people who multitask don’t do it very well.

The article describes how they tested this, and below is one of my favorite quotes, in which study-guy Clifford Nass described the outcome of one of the experiments, in which the participants looked at red and blue rectangles [that’s the Math part of this post] and then had to determine if the red ones had moved, ignoring the blue rectangles entirely.  The researchers thought high multitaskers would be good at this:

But they’re not. They’re worse. They’re much worse….They couldn’t ignore stuff that doesn’t matter. They love stuff that doesn’t matter.

(I love that last line.)

In other tests the high multitaskers couldn’t organize information as well as others, and did about the same as far as memory goes.  In the last test people had to classify letters as vowels or consonants and numbers as even or odd [that’s the Bonus Math part of this post], and it turns out that high multitaskers had a lot of trouble switching between the two tasks, despite what one would think is constant practice at changing focus.

I’m not sure what to do with this information, and no doubt it needs further study yadda yadda yadda.  But I found the concept fascinating as far as teaching goes, and I’m still left wondering what high multitaskers are good at.  Maybe nothing, but I believe deep down inside that multitasking must provide some skills.  Or maybe I’m just hoping.

### Calculus Demonstration: 3D printing

June 18, 2009

I love the concept of 3D printing.  Of course, I also really enjoy teaching Calc II when we talk about slicing and shell formulas and volumes of revolution, because I remember the AHA! moment when I suddenly put together what the formulas were all describing and it all made perfect sense.  (Sadly, this moment came when I was a senior studying for comps, several years after actually taking the class, but still, AHA! moments are glorious.)

In 3D printing, objects are built from the bottom up, cross section by cross section, the same way you’re supposed to envision the pieces when you calculate volumes by slicing.  This article in the Christian Science Monitor last week likens it to building with legos, although my experience with legos is that separate sections are constructed and then put together (you build the walls, then add the furniture, then the roof); that concept might work with printing too, where you print separate components and then put them together.  And what’s amazing is that you can print some pretty complicated things with moving parts.
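
In fact, the slicing idea is easy to simulate: stack thin disks, just like the printer lays down layers.  Here’s a sketch that approximates the volume of a unit sphere this way (midpoint-rule, nothing fancy):

```python
import math

def volume_by_slices(radius: float, n_slices: int = 10_000) -> float:
    """Stack thin disks from bottom to top, like a 3D printer laying layers."""
    dz = 2 * radius / n_slices
    vol = 0.0
    for i in range(n_slices):
        z = -radius + (i + 0.5) * dz      # height of the slice's midpoint
        r_slice_sq = radius**2 - z**2     # cross-section radius squared
        vol += math.pi * r_slice_sq * dz  # disk volume: pi r^2 dz
    return vol

approx = volume_by_slices(1.0)
exact = 4 / 3 * math.pi
print(f"{approx:.6f} vs {exact:.6f}")     # agrees to about 6 decimal places
```

Ten thousand slices already lands within a hair of 4π/3, which is the AHA! of the slicing formula in miniature.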

So what is used for the printing?  The article above describes a layer of powder being put down, with the actual printing done by spraying glue instead of ink.  Wikipedia also describes printers that build with a liquid gel.  But my favorite is printing done with candy.

That’s right:  candy.  Not surprisingly, the CandyFab 6000 and its earlier prototypes are made by the folk at Evil Mad Scientist Laboratories.  Here’s an example of 3D printing:

Isn’t that amazing?  Here’s the machine it was done on:

You can see it making some 3D Candy above.  They keep making improvements, as you can see in the two dodecahedrons below (printed at different times):

Here’s another fancy piece o’ printing:

And finally, here’s a Möbius Strip.  I thought it was a cake at first but no, it’s eight pounds of sugar.

Now that’s some pretty neat printing.  You can even see the “slices” on the surface.

All photos are licensed under Creative Commons; the photos link to their home on Flickr, and you can find even more here, plus more information here.

### The Illusion of Winning — or vice versa

June 8, 2009

(Because “The Winning of Illusion” just didn’t sound as good.)

The Winners of the 2009 Best Visual Illusion of the Year Contest have been announced!  There’s a ball that drops straight down, but if you look to the side while it drops it appears to fall at a different angle; a dove that appears to change color depending on the background; a pair of faces that are identical except for the coloring (the one with more contrast between the face and eyes/mouth appears female, while the one with less contrast appears male); and more.

As a bonus, Arthur Shapiro, one of the creators of the dropping ball, has several other illusions up on his blog.  These can be posted for non-profit educational use, but I couldn’t get the html code to work.  Bummer.

Since a post on illusions would not be complete without a couple of illusions, here are a few.  This first one is called Sander’s Parallelogram or the Sander Illusion (first published by Matthew Luckiesh, but named after psychologist Friedrich Sander), and the two blue diagonals are the same length.  Seriously.

Next up is a grid illusion.  There are white dots at the intersections, but black spots seem to appear:

And finally, here’s one in which the bar in the middle is the same shade of gray throughout, but looks like it’s changing color (courtesy of Dodek, published under GNU-FDL).

(Contest Winners found via New Scientist.)

### Longitude, Part II

June 1, 2009

As mentioned in the last post about longitude, while one group of people were charting stars and hoping to use tables to help out with the determination, others were working the time angle (so to speak).  What those folk needed was a good clock, one that would keep time even if it got bounced around a bit, like on a ship, because people on the ocean were in especial need of figuring out where they were.

So people worked on it.  And worked on it.  And then worked some more.  Prizes were offered, and went unclaimed.  Then a famous shipwreck in 1707 (involving HMS Association, HMS Eagle, HMS Romney, and HM Fireship Firebrand) took the lives of some 1,500 sailors, apparently because of a miscalculated longitude, and Britain was all, “Enough of this!” and in 1714 formed the Commission for the Discovery of the Longitude at Sea, which was a mouthful to say, so everyone just called it the Board of Longitude.  They offered a prize for calculating longitude and didn’t even insist that the longitude be exact:  just within 60 nautical miles for a prize, or within 40 or 30 nautical miles for better prizes.

Uh, nautical miles?  One nautical mile is 1 minute of arc of latitude, so 60 nautical miles would be 1° and 30 nautical miles would be ½°.  Of latitude.  One nautical mile translates to just over 1 regular mile.

Where were we?  Oh yes, in England.  Which is also where John Harrison was.   He was born in 1693, and made clocks out of wood with his younger brother.   One of his great achievements was to design the parts so that they had almost no friction, and therefore didn’t need any oil.  This was a big improvement because 18th century oil quite frankly stunk as far as clocks were concerned.

Harrison decided to make a clock good enough to win the prize.  His first clock, conveniently called H1, was made when he was about 40 years old.  And it worked well during the Official Testing on board a couple ships (because you didn’t think a prize would be awarded without checking how the clock did at sea, did you?) but Harrison wasn’t completely happy with it so instead of the full prize he asked for money to make a second version.  He worked on the next clock (H2) from 1737 to 1740, then decided that was all wrong and began work on H3.  This took 19 years — our man Harrison was nothing if not thorough.  But sadly, H3 wasn’t good enough to win the prize, and meanwhile he began working on — hold your breath everyone — H4.

Incidentally, one of the neat things about Harrison’s clocks is that they weren’t just different versions of the same thing.  It’s not like he said, “Hey, I have a new edition out — no, really, the fact that I changed one tiny thing makes it completely different.”  His clocks really were different, and H4 was down to being a pocket watch, which is mighty convenient for being on board a ship.

Here the story gets complicated.  H4 kept really good time, losing less than a second a day, but the Board of Longitude was all, “Well, maybe, maybe not” and Harrison had to make more copies, and blah blah and yadda yadda yadda and the end result was that he also made a new clock H5, plus his buddy Larcum Kendall made a copy (called K1), but the Board was still, “Umm, well” and people  — by people I mean King George III — got all upset and finally in 1773 the Board said, “OK, you win.”

Interestingly, this recent article from New Scientist says that when they opened up H1 to re-fix something (it had been in disrepair and was fixed more than 40 years ago), the way the parts were manufactured suggested that he had some help with some of the chains and whatnot inside.  It’s a pretty interesting article, and the comments are fun to read (mostly saying things like “Of course he had a bit of help!  He didn’t smelt his own metal, now did he?”) but the best part is the gallery of pictures here.

The photo of H5 is published on wikimedia by racklever under GNU-FDL.  A lot of the information about Harrison is from this site.

### Sunscreen confusion

May 14, 2009

An article in the New York Times describes consumer confusion over the ever-rising SPF numbers (used to rate the efficacy of sunscreen lotions), and their interpretation.

Unfortunately, the NYT adds to the confusion with the following:

The difference in UVB protection between an SPF 100 and SPF 50 is marginal. Far from offering double the blockage, SPF 100 blocks 99 percent of UVB rays, while SPF 50 blocks 98 percent. (SPF 30, that old-timer, holds its own, deflecting 96.7 percent).

Technically they’re right:  doubling the blockage is not the same as halving your radiation exposure.  But in terms of safety, the issue isn’t how much UV exposure you’ve avoided, but rather how much UV actually gets to your skin cells (which would then be a 2% versus 1% comparison).

According to the article, SPF measures how much longer a person wearing sunscreen can be exposed to sunlight before getting a burn, compared to someone wearing no sunscreen.  Someone wearing SPF 50 can remain in the sun 50 times longer than someone with no sunscreen, and so SPF 100 sunscreen provides the wearer with twice as much protection (in terms of time) as SPF 50 sunscreen.

It turns out there is a sense in which SPF100 is not twice as effective as SPF50 in protecting your skin, but it has nothing to do with the 99%/98% comparison.

According to the NY Times, “a multiyear randomized study of about 1,600 residents of Queensland, Australia” found that most users applied at most half of the recommended amount of sunscreen.

“If people are putting on about half, they are receiving half the protection,” said Yohini Appa, the senior director of scientific affairs at Johnson & Johnson, of which Neutrogena is a subsidiary.

But in fact they are receiving far less than half the protection:   a 2007  British Journal of Dermatology study noted that cutting the amount of sunscreen in half did not reduce the effective SPF in half, but rather reduced it geometrically to its square root.

If a person uses half of the recommended amount of an SPF50 sunscreen, they’ll get the protection of an SPF7 (since 7.1 is roughly √50), while similarly underapplying SPF100 sunscreen gets the protection of SPF10.

Apparently, if you’re looking for the protection of an SPF30 product, but like most people tend to under-apply sunscreen, you should be shopping for sunscreen rated as SPF 900.   No word yet on when such products will hit the marketplace.

(One wonders: does this work the other way ’round?  If I apply TWICE the recommended amount of a cheaper SPF8 sunscreen, do I end up with the protection of SPF64 sunscreen?)
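
Taking the square-root rule at face value, the effective SPF under partial application is easy to model.  (This is just the geometric rule described above taken to its logical conclusion, not an official dermatological formula.)

```python
def effective_spf(labeled_spf: float, fraction_applied: float) -> float:
    """Geometric rule: effective SPF = labeled SPF ** fraction applied,
    so half the recommended amount gives the square root."""
    return labeled_spf ** fraction_applied

print(round(effective_spf(50, 0.5), 1))    # 7.1  (roughly the square root of 50)
print(round(effective_spf(100, 0.5), 1))   # 10.0
print(round(effective_spf(900, 0.5), 1))   # 30.0
print(round(effective_spf(8, 2.0), 1))     # 64.0, if the model runs in reverse
```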

### The Playground/Math Association

April 23, 2009

Russ Lopez and his two buddies are the Defenders of the Playground.  I picture them with capes and swords, but actually they’re profs (Lopez from Boston University and the others from Tufts) who just studied the association between elementary school playgrounds and test scores.  According to BU Today today:

When Lopez studied the 2003 results of the fourth-grade English language MCAS (Massachusetts Comprehensive Assessment System), standardized tests that almost all public school students must take, he saw no discernible differences between children at the 70 schools with new playgrounds and children at schools with old playgrounds.

But when he looked at math scores, he saw a very different picture. In schools where fourth graders had new playgrounds, 25 percent more kids passed the math MCAS. And that remained true after he and his team controlled for factors such as demographics and the number of students receiving free or reduced-price lunches.

Of course, as the article goes on to explain, that doesn’t mean that building more playgrounds will automatically raise test scores — there could be other factors in play (so to speak).  But, especially in comparison to the English tests results, it’s certainly an interesting finding and I look forward to reading the follow-up.

The playground photo was taken by drk_faerie.

### Faulty Unit Conversion

April 16, 2009

Language Log has a post up (via HeadsUp the blog) about a Fox News story that had some metric issues:

The tests involved head-on crashes between the fortwo and a 2009 Mercedes C Class, the Fit and a 2009 Honda Accord and the Yaris and the 2009 Toyota Camry. The tests were conducted at 40 miles per hour (17 kilometers per liter), representing a severe crash.
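
For the record: 40 miles per hour is about 64 kilometers per *hour*, and “kilometers per liter” is a fuel-economy unit, not a speed.  The correct conversion is a one-liner:

```python
KM_PER_MILE = 1.609344              # exact, by definition of the statute mile

mph = 40
kmh = mph * KM_PER_MILE
print(f"{mph} mph = {kmh:.1f} km/h")   # 40 mph = 64.4 km/h
```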