Why does the "$\times$" used in arithmetic change to a "$\cdot$" as we progress through education? The symbol seems to only be ambiguous because of the variable $x$; however, we wouldn't have chosen the variable $x$ unless we were already removing $\times$ as the symbol for multiplication. So why do we? I am very curious.
---
As @DavidRicherby implies in a comment below, one should ideally distinguish carefully the history of the dot notation from the possible reasons for adopting it, retaining it, or modifying its usage. Unfortunately, although I am qualified by age (66) to comment on the last 50 years, I am otherwise ill-qualified to deal with the history (for which see when and by whom and also the answers by @hjhjhj57 and @RobertSoupe). So what follows may sometimes seem to mix history and reasons, and at times get the history wrong. It is intended as one individual's take on reasons. Note also that having only lived in the USA for about 3 years (2 years in LA, and 6 months each in NY and Colorado), I am much more familiar with the UK scene than the USA scene, and know almost nothing about other countries.

There are multiple reasons. Perhaps the most important is a desire to make the notation as concise as possible. The change is not really from $a\times b$ to $a\cdot b$; it is from $a\times b$ to $ab$. In many undergraduate algebra books, and at the research and journal level, the "multiplication" operation is just denoted by juxtaposition. But that is also true in some, maybe most, schools for teenagers. Glancing at the 2014 Core Maths papers from one of the leading UK exam boards for the "A-levels" (the final exams for most pupils), they seem to use juxtaposition exclusively. On the other hand, papers for GCSE maths (typically taken at age 16) seem to use $246\times10$.

This is also linked to a desire for speed. There are significantly fewer keystrokes in $ab$ than in either $a\times b$ or $a\cdot b$ if you are using LaTeX. Perhaps more important, being more concise, juxtaposition is easier to read. But as @DavidRicherby points out in a much-upvoted comment below, LaTeX came late to the party, so it may have a minor role in maintaining the status quo, but could not have helped to bring it about.

Another reason is avoiding ambiguity.
For example, $3^25$ is unambiguous because the exponent separates the $3$ and the $5$. But in LaTeX, if you try to write $3\cdot5^2$ by juxtaposition, you have to insert a special space to get $3\ 5^2$, and the outcome is still not entirely satisfactory. But I may pay too much attention to such matters: having published two books in the last two years, I wonder how anyone manages to combine writing math books with a full-time job; the work involved is horrendous!

Another reason may be that 3D vectors are often introduced early, and have two multiplication operations: the dot product and the cross product. So one is forced to use two different symbols to avoid ambiguity. Of course, one could avoid that by using the tensor subscript approach, and how all that is handled has a fashion element to it. For the last few decades, for example, there has been a campaign to move us towards Clifford or "geometric" algebras (where the cross product is frowned on and the wedge product is key).

Note also that $a\cdot b$ often does not represent ordinary multiplication. Of course $3\cdot5$ almost always does, but as one moves through undergraduate work into graduate work, $a\cdot b$ is increasingly used to represent operations other than ordinary multiplication (of integers, reals, etc.).

As @Kundor correctly points out, the OP's real question could be seen as: why teach $5\times 6$ in the first place? I have never tried to teach anyone younger than about 9, but I am fairly sure that trying to use juxtaposition when arithmetic is first introduced would be a non-starter. So the question becomes: why not start with $5\cdot6$, instead of moving to it years later? That seems to me a mixture of history and psychology. I want to keep away from the history if possible, but the psychology does not surprise me.
Making sensible changes to familiar things is hugely difficult when large numbers of people are involved, particularly when it is completely unclear to them how the change will benefit them. I clearly remember the UK's move from the old "pounds, shillings and pence" (with 12 old pence to the shilling, 20 shillings to the pound). It required a massive campaign by the government. In that case it was obvious that a simple 100 new pence to the pound would be much easier, but few people wanted to switch, having got used to the old currency.

Another example is the difficulty we have had in the UK moving from Fahrenheit to Celsius for temperature. All our weather forecasts are now in Celsius (or rather centigrade, the identical system with a different name), but it took years to get most people to accept it. The old system was bizarre (boiling point of water 212, freezing point 32), yet I believe it is still used in the USA!

Or take miles. The SI unit is the km, but there seems no prospect of the UK changing all its road signs to km for the foreseeable future. Remember, this is a country where we drive on the wrong side of the road. When I was commuting backwards and forwards to LA, picking up rental cars at LAX and my own car at LHR, the only way I could find to remember which side was that I had to drive so that I was as near the centre of the road as possible. Mercifully, I never got the wrong kind of car. So changing the status quo is tough.

Time to make an obvious point: MSE is read in many countries, and practices vary widely, often even within countries. @Chieron's much-upvoted comment under the question notes that some schools never use $3\times4$, but start with $3\cdot4$. Similarly, I have tended to focus above on differences relevant to teenagers and undergraduates, but @BenC's answer makes the excellent and easily overlooked point about potential and actual confusion between the centre dot and the decimal point.
Again, @RobertSoupe (in his answer) makes the excellent point (which I managed to overlook entirely) about potential confusion between times $\times$ and the variable $x$ when children move on from learning tables to slightly more advanced maths. See also the comment by @user21820 below.

I would also draw attention to some comments by @snulty. Under the question, he and @MauroALLEGRANZA note that Descartes used $x$ for the unknown and juxtaposition for multiplication (which shows how tricky historical discussion can be unless you are well briefed)! I also highly recommend snulty's answer. I am ill-qualified to comment on its truth, but it certainly sounds highly plausible.

A final observation. One of the simultaneously delightful and frustrating aspects of the academic world is that diktats do not work. To persuade people to change their usage can take generations. Sometimes (as with the great Classical Statistics debacle) one has to wait more than a century to get important changes widely accepted. Notation is particularly tricky: new areas of maths are constantly emerging, and people are constantly hijacking old symbols for new uses, so that at any moment notation often appears inconsistent across the whole field. It is hard to see what can be done to change that. So $a\times b$ can still mean ordinary multiplication, but sometimes it means the vector product, even though there is no boldface or under- or over-lining to make clear that $a,b$ are vectors. So context is always king.
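The keystroke comparison made earlier in this answer can be seen directly in LaTeX source. A minimal illustration (the particular spacing macros shown are just one choice among several):

```latex
% Three ways to typeset the same product, in increasing brevity:
$a \times b$   % explicit times sign
$a \cdot b$    % centre dot
$ab$           % juxtaposition: fewest keystrokes, easiest to read

% Juxtaposition of bare digits needs help, though:
$3 \cdot 5^2$  % unambiguous
$3 \, 5^2$     % thin space -- still not entirely satisfactory
```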
---
There is also an ambiguity between a decimal fraction with a dot, as in $3.5^2$, and multiplication with a centre dot, as in $3\cdot5^2$, particularly if the latter doesn't have spacing around the dot to give context, as in $3\!\cdot\!5^2$. In fact, some textbooks use a centre dot for decimal fractions: for example, Nelkon and Parker's Advanced Level Physics (the sixth edition, published in the UK in 1987, at least, which uses $\times$ for multiplication).
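The two readings really do diverge: a quick check of what "3.5 squared" versus "3 times 5 squared" evaluate to.

```python
# The two readings of the same printed "3.5^2":
decimal_reading = 3.5 ** 2    # "three point five, squared"
product_reading = 3 * 5 ** 2  # "three times five-squared"

print(decimal_reading)  # 12.25
print(product_reading)  # 75
```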
---
The big problem here is that the lowercase letter $x$ and the multiplication cross $\times$, two symbols with different etymologies and different uses, look so much alike, and nowhere was this problem felt more acutely than during the early history of computer programming languages in the time of ASCII. $x \times y$ is clear enough, at least for those of us with good enough eyes, but the multiplication cross is not in the ASCII character set, and so programming languages settled on the asterisk * for multiplication instead.

EDIT: Doing some research after posting this answer, I came across a page from Northeastern University on math symbols. William Oughtred, a 17th-century mathematician, came up with a cross with vertical serifs as a multiplication symbol. Oughtred was rebuked by his now more famous contemporary Leibniz, who wrote that he did not like $\times$ as a symbol for multiplication, as it is easily confounded with the letter $x$, and that he often indicated multiplication by a simple interposed dot.
This reinforces my point about how $x$ and $\times$ have different origins but are confusingly similar in appearance. And by the way, don't ever use $\Sigma$ (uppercase Sigma) as a cheap way to write E when you want to give something a Greek flavor. That Greek letter was chosen as the summation operator because it is an S sound.
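The ASCII point above can be made concrete. Because the character set has no multiplication cross, every mainstream language multiplies with `*`, which conveniently frees the letter `x` to be an ordinary variable name (a small illustrative sketch, not tied to any particular language history beyond the well-known use of `*`):

```python
# ASCII has no multiplication cross, so programming languages
# use the asterisk '*' for multiplication -- leaving 'x' free
# to serve as a plain variable name.
x = 7
y = 6
product = x * y  # '*' is unambiguous; juxtaposition is unavailable,
                 # since 'xy' would parse as a single identifier
print(product)   # 42
```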
---
I'm looking for sources (see edit), but I would imagine that when teaching children to count, add, and multiply, you start with integers, and addition is symbolised like $1+1=2$. Then you try to teach them that multiplication is short for lots of addition, $3+3+3+3=4\times 3$ and $4+4+4=3\times 4$, and there's the obvious similarity between the symbols $\times$ and $+$. At a later stage in school, $3\cdot4$ can look like $3.4$, as in $3\frac{4}{10}$, so I imagine this would be nice to avoid, especially when you're also teaching kids to practise their handwriting, so they might not always put the dot in the exact place you tell them. Then finally, when you want to move on to more advanced things that $\cdot$ and $\times$ can stand for, even just algebra with a variable $x$, you might want to change to a better symbol. I think the other answers take this point of view very well, so I won't mention anything about that.

Edit: In the Irish curriculum, at around third and fourth class, they do multiplication and decimals at roughly the same time. It says to develop an understanding of multiplication as repeated addition and division as repeated subtraction (obviously in whole-number cases).
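The "multiplication is short for lots of addition" idea sketched above, written out literally (the helper name `times` is just for illustration):

```python
# Multiplication as repeated addition, the way it is first taught:
# n x m is n copies of m added together.
def times(n, m):
    total = 0
    for _ in range(n):
        total += m
    return total

print(times(4, 3))    # 12
print(3 + 3 + 3 + 3)  # 12 -- the same thing written out longhand
```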
---
It's because $\cdot$ stands for any binary operation which might look like "multiplication" in some particular setting, or might be a substitute for multiplication. Hence, it is more general in nature.
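This generality is exactly how abstract algebra (and generic code) treats $\cdot$: the same construction works whatever the binary operation happens to be. A small sketch, with `fold` as a hypothetical helper name:

```python
# The centre dot as a stand-in for *any* binary operation:
# the same folding algorithm works whatever 'op' happens to be.
from functools import reduce

def fold(op, values, identity):
    return reduce(op, values, identity)

print(fold(lambda a, b: a + b, [1, 2, 3, 4], 0))  # 10 (op is +)
print(fold(lambda a, b: a * b, [1, 2, 3, 4], 1))  # 24 (op is x)
print(fold(max, [1, 2, 3, 4], 0))                 # 4  (op is max)
```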
---
I believe the reason must be mostly pedagogical.
For the historical part, here are two references which, put together, confirm that Descartes was the first to use $x$ as a variable and to use the juxtaposition convention for multiplication: one about the convention for unknowns and one about multiplicative notation. In fact, both of them cite Cajori's A History of Mathematical Notations as their main reference.
---
This is primarily done to emphasize different multiplication operations in vector and multidimensional calculus. In particular, it emphasizes that the dot product $\cdot$ is mechanically different from the cross product $\times$, although in operations on objects of one dimension they are virtually the same.
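The mechanical difference is easy to exhibit: the dot product of two 3D vectors is a scalar, while the cross product is another vector. A minimal sketch using plain lists:

```python
# Dot product returns a scalar; cross product returns a vector --
# genuinely different operations, hence the two different symbols.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

u, v = [1, 0, 0], [0, 1, 0]
print(dot(u, v))    # 0          (a scalar: the vectors are orthogonal)
print(cross(u, v))  # [0, 0, 1]  (a vector: perpendicular to both)
```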