
I found that looking at the original motivation of logarithms has been more elucidating than the way the topic is presented in grade-school. Thinking through the functional form that can solve the multiplication problem that Napier was facing (how to simplify multiplying large astronomical observations), f(ab) = f(a) + f(b), and why that leads to a unique family of functions, resonates a lot better with me for why logarithms show up everywhere. This is in contrast to teaching them as the inverse of the exponential function, which was not how the concept was discussed until Euler. In fact, I think learning about mathematics in this way is more fun — what original problem was the author trying to solve, and what tools were available to them at the time?
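As a quick numerical sanity check (my own sketch, not from the comment), Python's `math.log` satisfies exactly the functional equation described, f(ab) = f(a) + f(b):

```python
import math

# f(ab) = f(a) + f(b): the additive property Napier needed in order to
# turn large multiplications into additions. Logarithms (in any base)
# are the family of functions with this property.
a, b = 12345.0, 67890.0
lhs = math.log(a * b)
rhs = math.log(a) + math.log(b)
print(abs(lhs - rhs) < 1e-9)  # True
```

Any base works the same way, since log_b(x) = ln(x) / ln(b) only rescales both sides by a constant.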


Toeplitz wrote "Calculus: The Genetic Approach" and his approach of explaining math via its historical development is apparently more widely used: https://en.wikipedia.org/wiki/Genetic_method . Felix Klein remarked: "on a small scale, a learner naturally and always has to repeat the same developments that the sciences went through on a large scale"


We could really take a page from this style for teaching advanced computing. We tend to imagine that architectures just kind of come out of nowhere. Starting with mechanical computing and unit record equipment makes so much of it make more sense.

Plus, unit record equipment was cool.


Very cool. But so many of us didn't pay enough attention to the details. Only two of the people in my first shop attempted channel programming.


I always longed for a book/course on mathematics where topics are in chronological order:

1. ... (mathematical topics at the beginning of history that I am ignorant of)

2. Pythagoras' theorem

3. ...

4. Euclidean geometry

5. ...

6. algebra

7. ...

8. calculus

9. ...

10. set theory

11. ...

12. number theory

13. etc. etc. (you get the point)

Maybe there's already something that lays out topics like this. I haven't searched too hard.


There is Mathematics for the Million by Lancelot Hogben, which not only covers math, but the history of math and why it was developed over the centuries. It starts with numbers, then geometry, arithmetic, trig, algebra, logarithms and calculus, in that order. It's a very cool book.


I was going to say the same! I got it years ago; it's hard to top a math book with a quote from a certain Al Einstein on the back cover singing its praises! Morris Kline's "Mathematics for the Nonmathematician" takes a similar approach, as I believe other books by the author do. Can also recommend "Code" by Charles Petzold and "The Information" by James Gleick; while not comprehensive, they do cover the development of key mathematical insights over time.


I'm sympathetic, but there's no clear historical chronology. For instance, the ancient Egyptians dealt with both algebra and calculus (at least in part) long before Pythagoras. And that's not even starting on China and India, which had very different chronologies.


Choose a chronology that makes sense. We can see how Western ideas built on one another; we have less clarity on how ancient Egyptian or Chinese ideas developed, and therefore they're harder to explain to a learner.

If you're sensitive to that singular world view warping the learner's perspective, you could at each point explain similar ideas from other cultures that pre-date that chronology.

For example, once you've introduced calculus and helped a student understand it, you can then jump back and point out that the ancient Egyptians seemed to have a take on it, explain it, and ask the student to reason: did they get there the same way the Western school of ideas did? Is there an interesting insight in that way of thinking about the world?

Another angle is how ideas evolved. We know Newton and Leibniz couldn't have had access to direct Egyptian sources (hieroglyphs were a lost language in their lifetimes), but Greek ideas would have been rolling around in their heads.


This one was just discussed on HN yesterday with pretty good reviews: https://www.amazon.com/Math-Through-Ages-Teachers-Mathematic...


Here's one that starts with the concept of a straight line and builds all the way to string theory. It's a monumental book, and it still challenges me. Roger Penrose's The Road To Reality.


There are two books which do a fantastic job of this:

Mathematics: From the Birth of Numbers, by Jan Gullberg

and

Mathematics: A Cultural Approach, by Morris Kline


A book that doesn't expect any knowledge of mathematical notation would be a good start. I've bought three math books to get into it and quit all of them within the first chapter.


In a roundabout way, I wonder whether this one fits what you're after:

https://bogart.openmathbooks.org/ctgd/ctgd.html

And more directly, a quick browse showed up a book called:

"Mathematical Notation: A Guide for Engineers and Scientists" which looks like it addresses your issue directly.


The issue is that I don't want to learn all of the notation explicitly up front, but rather step by step, per topic, with real-world use cases...


Could you give a concrete example concerning what sort of notation caused you difficulty in the past? Asking because it seems odd to me that you feel you need to learn "all" the notation to get started.

Starting in elementary school you slowly build up topics, mathematical intuition and notation more or less in unity. E.g. starting with whole numbers, plus and minus signs before multiplication, then fractions and decimal notation. By the end of high school you may have reached integrals and matrices to work with concepts from calculus and linear algebra…

It makes little sense to confront people with notation before the corresponding concepts are taught. So it feels like you may, as a layperson, be running into difficulties with notation that are no longer obvious to more advanced learners.


Set theory comes to mind as an example. I somewhat understood the notation, but books ramp up the pace so quickly.


Wait, let me make sure I understand: do you want to skip the notation altogether, or do you want more support in understanding it?


I want to learn the notation, just not everything at once. I need to be able to see real-world use cases; otherwise I won't be able to remember and apply the notation. What I meant is learning the notation step by step, per topic.


I recently read mathematics for the nonmathematician. https://www.goodreads.com/book/show/281821.Mathematics_for_t...

Although the math in the book is relatively basic I enjoyed it tremendously because it gives the historical development for everything and even describes the characters of different mathematicians, etc. The historical context helps so much with understanding.


Devlin's book, "Mathematics: The Science of Patterns", was similar for me, and you might enjoy it in addition to what you previously read.

Much better than how I was taught in my schooling.


Thanks for the recommendation, looks promising!


It's ontogeny recapitulating phylogeny, all the way down.


If you like this approach, I highly recommend Mathematics: Its Content, Methods, and Meaning by Kolmogorov. He uses this same approach, but applies it to many more concepts in math (about 1,000 pages!). In fact, I think I actually heard about that book on this site, so I guess I'm paying it forward.

This approach was to align with the Soviet philosophy of dialectical materialism, which claims that all things arise from a material need. Not sure I'm fully onboard with the philosophy as a whole, but Kolmogorov's book was really eye opening.


I think this should be front and center. To that end I propose "magnitude notation"[0] (and I don't think we should use the word logarithm, which sounds like advanced math and turns people away from the basic concept, which does make math easier and more fun).

https://saul.pw/mag


> I think this should be front and center. To that end I propose "magnitude notation"[0] (and I don't think we should use the word logarithm, which sounds like advanced math and turns people away from the basic concept, which does make math easier and more fun).

The only reason that "logarithm" sounds like advanced math is because it was so useful that mathematicians, well, used it. Since this terminology is just logarithms without saying the word, if it is more useful it, too, will probably be used by mathematicians, and then it will similarly come to sound like advanced math. So what's the point of running away from a name for what we're doing that fits with what it's actually called, if eventually we'll just have to make up a new, even less threatening name for it?

(I'd argue that "logarithm" is frightening less because it sounds like advanced math than because it's an unfamiliar and old-fashioned-sounding word. I'm not completely sure that "magnitude" avoids both these issues, but it's at least arguable that it suffers less from them.)


It's written like ^6 and said like "mag 6", which sounds like an earthquake (and this is basically the Richter scale writ large). One syllable, sounds cool, easy to type/spell, evokes largeness. "Logarithm" is 3-4 syllables, hard to pronounce, hard to spell, sounds jargon-y.


People virtually never say “logarithm” in use though. They either say “log” or they say “lun” for natural log. Notice that both log and lun are one syllable, easy to pronounce etc.

Magnitude is an existing and important concept in maths - it would be extremely confusing to just overload it to mean something else.


The log of 3.1m is 6.5. How do you say "10^6.5"? I say "mag 6.5" and it is clear. The Richter scale famously uses "mag 6.5" exactly like this. If that was ever confusing, then we've managed to work past it, and this just expands the Richter scale to cover basically everything.


There's nothing particularly special about the Richter scale in that respect. All logarithmic scales (eg dB) work that way. Both the Richter scale and decibels (and other logarithmic scales) are also famous, like other nonlinear scales[1], for being widely misunderstood, so I'm not sure a lot of people would think your way is clearer than the current usage, which is just to say "3.1m" if that's what you mean. That said, I like log scales and logarithms in general, so if you want to campaign for this scale, knock yourself out. I don't like that you're calling it magnitude though, because magnitude means a specific thing (the first coordinate of a vector in polar or spherical form).

[1] eg the Beaufort scale for wind force


I have been writing the same thing by (ab)using the existing unit of measurement known as a bel (B), which is most commonly seen with the SI prefix “deci” (d) as dB or decibel. I write the speed of light as 8.5 Bm/s (“8.5 bel meters per second”), which resembles the expression 20 dBV (“20 decibel volts”) shown at https://en.wikipedia.org/wiki/Decibel.
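The parent's bel figure can be checked numerically; a small Python sketch of my own (not from the comment), using `math.log10` for the bel value:

```python
import math

# A "bel" of a quantity is just log10 of its value in the base unit,
# so the speed of light in "bel meters per second" is log10(c).
c = 299_792_458            # speed of light in m/s
bels = math.log10(c)       # ~8.48, i.e. "8.5 Bm/s" to one decimal
print(round(bels, 1))  # 8.5
```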


If logarithm sounds too advanced, just say log and logs. I think it could work!


Mag is the inverse of log10; e.g. log10(10^6) = 6, so mag 6 is 10^6. We have no current shorthand for the inverse of log10 except "tentothe", which might be serviceable but is not as punchy.


I often wonder about this. I also believe that mathematical pedagogy strives to attract people who are very smart and think in the abstract like Euler, rather than operationally, meaning they will get it intuitively.

For other people, you need to swim in the original problem for a while to see the light.


I think it is a combination of factors. Mathematical pedagogy is legitimate if the end goal is to train mathematicians, so yes it is geared towards those who think in the abstract. (I'm going to ignore the comment about very smart, since I don't think mathematical ability should be used as a proxy for intelligence.)

On the other side, I don't think those who are involved in curriculum development are very skilled in the applications of mathematics. I am often reminded of an old FoxTrot comic where Jason calculated the area of a farmer's field using calculus.


Mathematicians also hate the current version of math education.


Frankly I wish I had known integral calculus going into geometry, I could tell there was a pattern behind formulas for areas and volumes but I couldn't for the life of me figure it out. There are worse ways to remember the formula for the volume of a sphere than banging out a quick integral!


I had known it. Thanks, Dr Steven Giavat. The geometric shapes gave the patterns meaning. I read "Mathematics and the Imagination" and "Mathematics: A Human Endeavor" while I was starting algebra. Also the Time-Life book on math. All very brilliant, because they used the methods that were used to investigate a topic to show how it was discovered. These allowed me to fly ahead in math until I got to trig, which took a long year to get facile with, before I was able to finish my degree.

I had brilliant teachers.

Napier's bones were for adding exponents, hence multiplication. Brilliant, and necessary for the development of the slide rule, the foundation of modern engineering until the pocket calculator.


Math is rarely taught with practical problems in mind — that's engineering!


Therein lies the rub. Treating abstract and the concrete in isolation was always tough sledding for me.

Bouncing between the two is where the action is.

And units: if I had it all to do over, I would pore over the units sooner rather than later.


Absolutely. Units are such a useful idea.

I was recently struggling to model a financial process and solved it with Units. Once I started talking about colors of money as units, it became much easier to reason about which operations were valid.


Strictly speaking this is about dimensional analysis, not units. (When discussing curricula we should be precise!)


I really disagree with the straightforward reduction of engineering to 'math but practical', but I'm finding it hard to express exactly why I feel this way.

The history of mathematical advancement is full of very grounded and practical motivations, and I don't believe that math can be separated from these motivations. That is because math itself is "just" a language for precise description, and it is made and used exactly to fit our descriptive needs.

Yes, there is the study of math for its own sake, seemingly detached from some practical concern. But even then, the relationships that comprise this study are still those that came about because we needed to describe something practical.

So I suppose my feeling is that, teaching math without a use case is like teaching english by only teaching sentence construction rules. It's not that there's nothing to glean from that, but it is very divorced from its real use.


As someone who is studying maths at the moment, I don't recognise this picture at all. Every resource I learn from stresses the practical motivation for things. My book of ODEs is full of problems involving liquids mixing, pollution dispersing through lakes, etc.; my analysis book has a whole big thing about heat diffusion to justify Fourier analysis; the course I'm following online uses differential equations in population dynamics to justify eigenvalues; etc.


Agreed, and it's such a shame! A kid goes to math class and learns, say, derivatives as this weird set of transformations that have to be memorized, and it's only later in physics class that they start to see why the transformations are useful.

I mean, imagine a programming course where students spend the whole first year studying OpenGL, and then in the second year they learn that those APIs they've been memorizing can be used to draw pictures :D


I've never seen an introductory math textbook that didn't point out how position, velocity and acceleration are related by the derivative.


Rules for derivatives require the least memorization


Well, logarithms were motivated by physical entities (celestial bodies), but not by engineering per se.

I think this is already enough context to root the mental effort deeper.


I actually prefer the straightforward "log is the inverse of the exponential". It's more intuitive that way because I can automatically understand 10^2 * 10^3 = 10^5. Hence, if you are using log tables, addition makes sense. I didn't need an essay to explain that.

Take logs, add 2 + 3 = 5 and then raise it back to get 10^5.
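The log-table workflow described above can be sketched in a few lines of Python (my own illustration, not from the comment): look up the log of each factor, add the logs, then "anti-log" the sum.

```python
import math

# Multiply by adding logs, the way a log table (or slide rule) does.
a, b = 100, 1000                           # 10^2 and 10^3
log_sum = math.log10(a) + math.log10(b)    # 2 + 3 = 5
product = 10 ** log_sum                    # anti-log: 10^5
print(round(product))  # 100000
```

A real log table just replaces `math.log10` and `10 **` with forward and backward lookups in a printed table.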


This is how I've always taught logarithms to students I've tutored. I photocopy a table of various powers of ten, we use it in all sorts of ways to solve problems, and then I sneakily present an "inverse power" problem where they need to make the lookup backwards.

Almost every student gets it right away, and then I tell them looking up things backwards in the power table is called taking a logarithm.


That's how I mentally processed them when first learning them years ago. Doing operations on x and y with log(x) = y in the background somehow felt far less intuitive than thinking about 10^y = x.

I really enjoyed this author's work, BTW. Just spent several hours reading the entire first five chapters or so. What an excellent refresher for high school math in general.


We used logarithms routinely for large multiplications, divisions, etc. in 11th and 12th grade. No calculators were allowed. This was in India.


Same here when I was at school in the late 1960s and early 1970s. No one had a calculator.

So we were taught logarithms as a tool first.


This would be an interesting thing to study: How many different ways people learned about logarithms, and how they generally fared in math. I learned about logarithms by seeing my dad use his slide rule, and studying stock charts, which tended to be semi-logarithmic.


Coincidentally I watched this last night https://m.youtube.com/watch?v=7TWKSMtKCmU

It gives the history / motivation behind logarithms, and suddenly it became so much clearer to me. Pretty much multiplying huge numbers by adding exponents; well, I think I've understood that correctly?

I think the reason I'm so interested in programming and computing is that I'm fascinated by the history of it all. It somehow acts as a motivation to understand it.


I rather like Feynman's approach in the lecture Algebra from the Feynman Lectures https://www.feynmanlectures.caltech.edu/I_22.html

He covers the inverse of the exponential, Henry Briggs' log tables and goes on to e^ix = cos x + i sin x

The audio is also available https://www.feynmanlectures.caltech.edu/flptapes.html


By the way, there's another function that can be used to turn multiplication into addition: f(x) = x^2 / 2

a * b = f(a + b) - (f(a) + f(b))


Isn’t x^2 a multiplication?


No, you misunderstood what I meant.

Normally, a sliderule at distance x has the value of log(x) written on it, which allows doing multiplications by moving along the sliderule, since log(ab) = log(a) + log(b).

Now imagine a sliderule onto which values of x^2/2 are written. This also allows you to multiply two numbers, because ab = (a+b)^2/2 - (a^2/2 + b^2/2).


This is how I learned them in middle school — just common logs, as an aid to doing roots, powers and multiplications of big numbers.

We were told in an off-hand way that logs could be to any base, even ‘e’, but not to worry about that for a few years.


This follows directly from the fact that exp(x+y)=exp(x)exp(y).


Yes, but such a property was not available to Napier, and from a teaching perspective, it requires understanding exponentials and their characterizations first. Starting from the original problem of how to simplify large multiplications seems like a more grounded way to introduce the concept.


From a teaching perspective it goes like this: first we learn additions, and to undo additions we have subtractions; then we learn repeated additions i.e. multiplications, and to undo multiplications we have divisions; finally we learn repeated multiplications, i.e. exponentiation, and to undo exponentiation we have logarithms and roots.
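The two distinct inverses at the last step can be made concrete (a sketch of mine, not from the comment): from base^exponent, a root recovers the base and a logarithm recovers the exponent.

```python
import math

base, exponent = 2, 5
power = base ** exponent              # 32

# Exponentiation is non-commutative, so it has two distinct inverses:
root = power ** (1 / exponent)        # undoes the base: ~2.0
log_exp = math.log(power, base)       # undoes the exponent: ~5.0
print(round(root, 9), round(log_exp, 9))  # 2.0 5.0
```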


You see how one of those isn't like the others?


You mean we have both logarithms and roots to undo exponentiation? That's because exponentiation is non-commutative.


Right, I'm not saying it's for no reason, but the asymmetry makes it harder to keep track of which undoes exponentiation in which way.

And logs are frankly more confusing than the other operations because more than anything else they feel like an algebraic expression in the form of an operation. Other operations intuitively feel like a process, whereas logs feel like more like a question.

Maybe that's just because I never learned them super well though, maybe they're not actually that inherently different ¯\_(ツ)_/¯


Where did you pick this up? Is there a book that covers it that way?


Presumably the book from this thread by Charles Petzold will be a great canonical resource, but originally there was a quote by Howard Eves that I came across that got me curious:

> One of the anomalies in the history of mathematics is the fact that logarithms were discovered before exponents were in use.

One can treat the discovery of logarithms as the search for a computation tool to turn multiplication (which was difficult in the 17th century) into addition. There were previous approaches for simplifying multiplication dating back to antiquity (quarter square multiplication, prosthaphaeresis), and A Brief History of Logarithms by R. C. Pierce covers this, where it’s framed as establishing correspondences between geometric and and arithmetic sequences. Playing around with functions that could possibly fit the functional equation f(ab) = f(a) + f(b) is a good, if manual, way to convince oneself that such functions do exist and that this is the defining characteristic of the logarithm (and not just a convenient property). For example, log probability is central to information theory and thus many ML topics, and the fundamental reason is because Claude Shannon wanted a transformation on top of probability (self-information) that would turn the probability of multiple events into an addition — the aforementioned "f" is the transformation that fits this additive property (and a few others), hence log() everywhere.

Interestingly, the logarithm “algorithm” was considered quite groundbreaking at the time; Johannes Kepler, a primary beneficiary of the breakthrough, dedicated one of his books to Napier. R. C. Pierce wrote:

> Indeed, it has been postulated that logarithms literally lengthened the life spans of astronomers, who had formerly been sorely bent and often broken early by the masses of calculations their art required.


In my case, it was by chance.

I had a slide rule in high school. It was more of a novelty item by that point in time; only one of my math teachers even knew what a slide rule was, but that didn't stop me from figuring out how it was used and how it works. It didn't take much to figure out that the sliding action was solving problems by addition, and that the funky scales were logarithmic. In other words: it performed multiplication by adding logs.

That said, I did encounter references to its original applications in other places. I studied astronomy and had an interest in the history of computation.



