“Ask adults from the industrialized world what number is halfway between 1 and 9, and most will say 5. But pose the same question to small children, or people living in some traditional societies, and they’re likely to answer 3. Cognitive scientists theorize that that’s because it’s actually more natural for humans to think logarithmically than linearly.” Fascinating. The human brain is such a magical machine.
I honestly can’t tell if this is serious and I’m missing something or they’re actually trolling me. It’s on web.mit.edu so it should be the former, but sweet mother of God does it look like the latter.
The example with “What’s halfway between 1 and 9” has confirmation bias written all over it. Incidentally, both 3 and 5 are reasonably close to 4.5, although who would round it to 3 is still debatable. Extrapolating that humans prefer, or are somehow predisposed, to think logarithmically is a stretch; one could literally come up with an infinity of cases where this doesn’t hold. I think practical evidence shows this, too — have you actually seen how much engineering freshmen fight with logarithmic plots? Who the hell thinks 10 is midway between 1 and 100? Oh wait — it’s the base *two* logarithm? Based on the fact that, you know, f(x) = log2(x) is close enough to f(x) = x for small x that you can sort of pretend “some traditional societies” — a formulation that would raise eyebrows even on Wikipedia — don’t mind a little rounding there? Really?
The description of the paper itself seems legit (although, in the well-respected tradition of the free flow of ideas, it’s only available for a considerable fee, so I have no intention of actually checking it out myself), but the way it’s covered in the article totally sucks. The paper appears to imply that this kind of rounding doesn’t apply to just any kind of number, and that it also doesn’t apply to just any kind of *information*. Some of our peripheral processing is done logarithmically — think of sound sensitivity, for instance — so it would make sense if that were how the whole chain is wired up. How this is connected to the article’s introduction, other than by the word logarithm, is beyond me.
yeah, as usual, the article and headline seem to be at fault.
Huh! Huh! Huh!
You are so wrong. The question is halfway between 1 and 9, not between 0 and 9. The difference between 1 and 9 happens to be 8, and half of 8 happens to be 4. Therefore there is only one correct answer (4). If the question were halfway between 0 and 9, the exact answer would be 4.5, and rounding up to the nearest unit gives 5.
If I follow you, halfway between 7 and 9 is (9-7)/2 = 1.
The mean of a and b is (a+b)/2
*facepalm*
1 2 3 4 *5* 6 7 8 9
4 on left of 5, 4 on right of 5
1 2*3*
4 5 6
7 8 9
That makes no sense either, because there’s no way 1 2 6 9 looks anything like the other half of 4 5 7 8.
It may just be me, but I can’t even generalize the method (for my own amusement/understanding) they are using to come up with this.
3^0 = 1, 3^2 = 9, so apparently kids and adults who weren’t taught the algebra naturally think of 3^1 = 3 as halfway between the two. IOW, we don’t quantify numbers by counting; we look at orders of magnitude.
This is similar to how people whose native language lacks larger integers (e.g. they have words for: one, two, three, a few, and a lot) have difficulty with math like 17 + 6. They’re always close, but rarely get it exactly right. Traditionally the explanation is that named integers help us remember exact quantities, but it could be that such people are operating under a logarithmic scale.
From a practicality standpoint, the theory makes sense. Orders of magnitude are far more important than exact numbers. But I think they still have a lot of research to do before most people will believe it.
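To make the 3^0 / 3^1 / 3^2 arithmetic above explicit (my own sketch, not something from the paper): the midpoint on a logarithmic scale is the number whose logarithm is the average of the endpoints’ logarithms, which works out to the geometric mean regardless of base:

```latex
\[
\exp\!\left(\frac{\ln a + \ln b}{2}\right) = \sqrt{ab},
\qquad \sqrt{1 \cdot 9} = 3,
\qquad \sqrt{1 \cdot 100} = 10.
\]
```

So the choice of base is a red herring; any logarithm gives the same “halfway” point.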
Am I the only one who immediately said “4.5” for this one?
At least I answered 5 to the question: the difference between 1 and 9 is 8, half of which is 4, but since we’re starting from 1 instead of 0 the answer is 5. A quite typical programming maths problem, actually; you see these kinds of things all the time, and beginners make the exact same mistake you did.
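For what it’s worth, a minimal sketch of that off-by-one trap (my own illustration, not from the article or the paper):

```python
def midpoint(lo, hi):
    """Arithmetic midpoint: the start of the range plus half the span."""
    return lo + (hi - lo) / 2  # algebraically the same as (lo + hi) / 2

print(midpoint(1, 9))  # 5.0 -- half the span (4) offset from the start (1)
print((9 - 1) / 2)     # 4.0 -- the beginner's mistake: half the span alone
```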
Ugh, no. *blushing*
No… 4.5 was my first reaction too. Even if I didn’t know the exact answer, I would have known that it wasn’t a whole number and would have thought either “just above 4” or “just below 5”, no absolutes. Or just “four point something.”
Sometimes, Winston, it’s 4. Sometimes it is 5. Sometimes it is 3. And sometimes it is all of them at once.
I would have thought it was (1 plus 9) divided by 2.
A person who had no simple math skills may have decided to try 3 as a guess.
The correct formula is the geometric mean: sqrt(1*9) = 3.
I wonder what their answer would be if the question were the middle of 1 and 16?
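Out of the same curiosity, a quick sketch (mine, just applying the geometric-mean reading from the comment above) of what the “logarithmic” answers would be for a few ranges:

```python
import math

# Geometric mean = the midpoint on a logarithmic scale (base-independent)
for lo, hi in [(1, 9), (1, 16), (1, 50), (1, 100)]:
    print(f"middle of {lo} and {hi}: {math.sqrt(lo * hi):g}")
# middle of 1 and 9: 3
# middle of 1 and 16: 4
# middle of 1 and 50: 7.07107
# middle of 1 and 100: 10
```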
Properly insulting this news will take longer than I am able to dedicate right now, so let’s just summarize with: this paper is written by PhD students practicing “the techniques of information theory”.
Get a real specialization, you dropouts!
The way that I personally imagine the numbers graphically, 3 would be on the horizontal line that is equidistant from 9 and 1.
Many hunter-gatherer societies don’t even have words for numbers above five or ten. They only need to know general quantities, e.g. a few, many, or a huge amount.
Perhaps this kind of logarithmic operation is also manifesting as genocide blindness. One or two people murdered can make the news, but when it starts turning into a massacre (especially far away), people seem to care less.
People can get fired up over a few thousand people dying in two towers, but think nothing of the orders of magnitude more that resulted from their overreaction.
It is not overly surprising. Sound intensity, in decibels, is measured on a logarithmic scale. Star brightness magnitude, defined in ancient Greece by astronomers using only their unaided eyes, has been fitted in modern times to a logarithmic scale.
I’ve heard in the past that this is because our biological sensors have a logarithmic response to stimuli. In that case, this proof that thought processes, too, follow logarithmic scales would still be a new, relatively surprising result.
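For reference, both scales mentioned above are logarithmic by construction; these are the standard textbook definitions, not something from the article:

```latex
% Sound level in decibels, relative to a reference intensity I_0
\[
L = 10 \log_{10}\!\left(\frac{I}{I_0}\right)\ \mathrm{dB}
\]
% Pogson's modern formalization of the ancient Greek magnitude scale
\[
m_1 - m_2 = -2.5 \log_{10}\!\left(\frac{F_1}{F_2}\right)
\]
```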
Well, if this were true, I’d expect at least some languages to exhibit traits of logarithmic numerals. There are different systems: octal, decimal, duodecimal, you name it, but I’m afraid logarithmic is just not one of them. I call rubbish, in the modern American pseudo-scientific and splendidly-pleased-with-oneself style.
There are logarithmic numerical expressions in all languages – e.g. small, large, huge.
In pre-agrarian societies there is no real need for precise numerals larger than about 5. It is easy enough to divide food by visual means or describe a distance as “three days’ walking”.
Well, how do you know small, large and huge are logarithmic? Can you back it up with any research?
The great majority of languages have native numerals up to at least ten. Which makes sense if you think about counting on your fingers. Anyhow, I fail to see how dividing the food by visual means or measuring relatively short distances in days of walking supports, or actually even relates to the idea of logarithmic perception of numbers.
My guess would be 5. My 9-year-old son said 5, but then said it could also be 5.5. So five seems to be the dominant number in any answer.
More interesting (IMHO):
Yesterday I was listening to a podcast where someone said 0.999999999999… ad infinitum is the same as exactly 1, because there is no other number between those two numbers, so they must be the same.
In practice they are basically the same thing, of course, but in maths it feels like they shouldn’t be; then again, they are.
No, no, in maths too they are the same number. It is only a matter of notation.
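A standard one-line derivation (textbook material, not mine) of why the two notations name the same number:

```latex
\[
x = 0.\overline{9}
\;\Rightarrow\; 10x = 9.\overline{9} = 9 + x
\;\Rightarrow\; 9x = 9
\;\Rightarrow\; x = 1.
\]
```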
This seems like a lot of work to explain a peculiar result, when I see no evidence that the researchers even tried to verify that those answering 3 (or even 5) actually understood the question. The bias could come from having an unexpected interpretation of “half way,” where a more careful definition might yield a different answer.
You, like a lot of the other commenters, missed the point of this research. This research isn’t about testing how smart or dumb people are.
The research is about teasing out the way the brain actually works. That’s why they want people to answer intuitively, without thinking much about it. That way, we can see 1) that there really is a discrepancy between our intuition and learned behaviour, and 2) what form this discrepancy takes.
I still find it difficult to believe that people would actually answer 3. Maybe there are such people, but the article does not quote the relevant statistics from the paper, and the wording does not convince me that it even could. (Anyway, small children can’t count.)
Also, this simple question does not form a very strong basis to build a whole theory on (in all fairness, judging from the article, they have other arguments). I would be interested in what people who answered 3 (if there ARE such people) would answer for other ranges, e.g. 1 and 50 (most likely 10, not 7), 1 and 16, or 1 and 100. I like to play with the thought that as the numbers get bigger, the answers would converge to the arithmetic mean.
I never thought it had anything to do with testing intelligence. What I’m saying is that there’s room for different interpretations of the question. So there are two places that a subject can deviate from expected responses. One is that they understand the question and have a different concept of “in between.” The other is that they didn’t interpret the question in the way that the researchers assume they should.
The point of the research IS about how people, at different ages, have different interpretations of the question.
The next test would be to tell them you’ll pay them between 5 and 10 dollars an hour and see what they come up with. Bet they’d say 10 dollars then.
I notice that virtually no one commenting seems to have read the actual paper. It talks about how children and traditional hunter-gatherers think about numbers.
One of the researchers’ assumptions is that if you were designing a nervous system for humans living in the ancestral environment — with the aim that it accurately represent the world around them — the right type of error to minimize would be relative error, not absolute error. After all, being off by four matters much more if the question is whether there are one or five hungry lions in the tall grass around you than if the question is whether there are 96 or 100 antelope in the herd you’ve just spotted.
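One way to make that assumption concrete (my gloss, not a quote from the paper): minimizing relative error is the same as minimizing absolute error on a logarithmic scale, because a small relative error corresponds to a small difference of logarithms:

```latex
\[
\left|\ln \hat{x} - \ln x\right|
= \left|\ln\!\left(1 + \frac{\hat{x} - x}{x}\right)\right|
\approx \frac{|\hat{x} - x|}{x}
\qquad \text{when } |\hat{x} - x| \ll x.
\]
```

So a brain that encodes quantities on a log scale would get relative-error minimization for free.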
I originally trained as a chemist. Chemists often spend more of their time dealing with logarithmic data than linear data. In fact I’m probably far more comfortable dealing with logarithms (base 10) than linear data.