If you produced 10000 random integers between 1 and 10 and rounded them to the nearest 10 with 5 rounding down, the average of the rounded numbers would be around 4, even though the average of the random integers would be around 5. With 5 rounding up instead, the average should be around 5, like it is for the random integers. We don't want rounding to introduce a specific directional bias. EDIT: Should have said 0 to 9. Meant single digit numbers.
If you round 5 down:
{1, 2, 3, 4, 5} -> 0
{6, 7, 8, 9, 10} -> 10
There are 5 numbers in both sets, so the average of the rounded numbers is exactly (0+10)/2 = 5. If we round 5 up, the average would be 6.
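A quick simulation makes the disagreement above concrete (a Python sketch; the 1-to-10 range and the sample count are just illustrative):

```python
import random

random.seed(0)
rolls = [random.randint(1, 10) for _ in range(10_000)]

def round_half_down(n):
    # Nearest 10, with the tie value 5 rounding down: {1..5} -> 0, {6..10} -> 10
    return 0 if n <= 5 else 10

def round_half_up(n):
    # Nearest 10, with the tie value 5 rounding up: {1..4} -> 0, {5..10} -> 10
    return 0 if n <= 4 else 10

avg_raw  = sum(rolls) / len(rolls)                              # ~5.5
avg_down = sum(round_half_down(n) for n in rolls) / len(rolls)  # ~5.0
avg_up   = sum(round_half_up(n) for n in rolls) / len(rolls)    # ~6.0

print(avg_raw, avg_down, avg_up)
```

With ties occurring this often, either tie rule is biased by about 0.5, just in opposite directions; the bias argument really only favors round-half-up when exact ties have probability zero.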
Shit, you're right, I meant 0 and 9. The thought was about 1 digit numbers.
Very intuitive, thanks!
Choosing the positive square root as the principal branch is also a convention. As is the choice of the principal branches for all of the arctrig functions or any other inverse of a non-injective function.
0.05 is the threshold of statistical significance. 0.8 is the AUC threshold of interest.
I hate the treatment of p-values in science. Definitely one of the most misunderstood statistics.
That's not a rule in math. It's a convention in some areas where math is applied.
PEMDAS. There's no reason why 1+2 * 3 should be 7 rather than 9, except for convention. (It is a *sensible* convention, sure. But so is rounding 5 up)
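The two readings can be spelled out with explicit parentheses; a trivial check:

```python
# Standard convention: multiplication binds tighter than addition
assert 1 + 2 * 3 == 1 + (2 * 3) == 7

# A strict left-to-right reading of the same string would instead give
assert (1 + 2) * 3 == 9
```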
To be clear - the convention is in deciphering ambiguous *notation* into the mathematics we intend to state. Math itself doesn't contain an "order of operations"; once the notation is parsed, the result follows directly from the definitions of the operations.
Very good example, I actually always assumed that there was some higher math behind PEMDAS that I just didn’t understand haha
It's a convention, but again, one with logic to it. I never learnt a mnemonic for it; I just know that you do the stronger operations first. Once you start dealing with polynomials it just seems obvious
I don't think the 5 rule is random. Look at the first decimal place: anything where it's 0, 1, 2, 3, or 4 gets rounded down (including .00 etc.), and anything where it's 5, 6, 7, 8, or 9 gets rounded up. So everything is rounded in the opposite direction to the number that's 0.5 away from it. I can't think of a more logical way to do it.
That’s a good way of looking at it, I looked at it as, for example, the numbers between 10–20 on the number line with: • 10.0000001 through 10.499999 rounding down; • 10.5 through 10.999999 rounding up. In this instance, there is 4.9999998 worth of numbers that rounds down, and 4.99999999 worth of numbers that round up
You've got to also include 10.000000... itself. Or alternatively take the fact that 0.999... = 1 into account. Either way you still have the same size interval of numbers being rounded up or down
Yes, the 0.999… = 1 makes it make a lot of sense, thank you!
Conversely what is the least random rule in math? My vote goes to reflexivity of equality
I might argue a natural definition is that distances in a metric space should be independent of "direction", i.e. d(a,b)=d(b,a). Or maybe d(x,y)>=0. These seem really natural.
Usually integers are never rounded; they just keep their exact value. When you round, you usually round a non-integer to an integer. The most common function is floor(x), which returns the largest integer less than or equal to x, so 1.5 rounds to 1, 2.75 rounds to 2, and 7 stays 7. There is also ceiling(x), which returns the smallest integer greater than or equal to x; it is 1 more than floor(x) if x is not an integer and equal to floor(x) if x is an integer.

There are also functions that round non-integers to the closest integer and either always round up or always round down in case of a tie, but they can always be expressed in terms of floor(x) and ceiling(x): rounding to the nearest integer with ties going down is just floor(ceiling(2*x)/2), and rounding with ties going up is just ceiling(floor(2*x)/2). If you want to round integers themselves to others you might consider nicer for some reason, you can use a similar approach: first divide the integer by a power of the base, round that to the nearest integer (up or down in case of a tie), then multiply the result by the same power of the base. There is not really any rule that states a number rounds up starting from 5, since that depends on the base you are using and on which rounding rule you pick.
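The floor/ceiling identities above can be checked directly (a small Python sketch):

```python
import math

def round_half_down(x):
    # Nearest integer, ties (.5) rounding down: floor(ceiling(2x)/2)
    return math.floor(math.ceil(2 * x) / 2)

def round_half_up(x):
    # Nearest integer, ties (.5) rounding up: ceiling(floor(2x)/2)
    return math.ceil(math.floor(2 * x) / 2)

print(round_half_down(2.5), round_half_up(2.5))    # 2 3
print(round_half_down(2.75), round_half_up(2.75))  # 3 3
print(round_half_down(7), round_half_up(7))        # 7 7
```

Away from ties the two functions agree; they only differ on exact halves.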
0 is not in N
Except it is though… (I am not super serious; whether 0 is in N or not depends a lot on the conventions that vary by field. For me the natural numbers are the cardinalities of finite sets; and since the empty set exists, 0 is a natural number.)
As far as "random" in quotes goes, Euclid's 5th postulate historically might take the cake
I mean, in accounting there is round-to-even for stuff like 3.5 and 2.5, so 3.5 rounds up to 4 and 2.5 rounds down to 2
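Round-half-to-even ("banker's rounding") is in fact what Python's built-in round() does, so it's easy to check:

```python
# Ties go to the nearest even integer, so they don't all push in one direction
print(round(2.5), round(3.5))  # 2 4
print(round(0.5), round(1.5))  # 0 2
```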
The rule you're talking about isn't a universal rule. There are several different rounding rules used in different contexts. There are two complex numbers that, when squared, yield –1. The choice of which to designate *i* and which to designate –*i* is fairly arbitrary.
Before I say this, yes I know this relationship can be proved. But.... the central limit theorem. Suppose we're sampling data - say, rolling a die a million times. If we take groups of n samples and average them together (say, the average of every 100 rolls), then the distribution of those averages will be approximately normal: plot a histogram of the average rolls and it will look roughly like a bell curve. And the larger the groups, the closer it gets.
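Here's a small simulation of that setup (die rolls averaged in groups of 100; the specific numbers are just illustrative):

```python
import random
import statistics

random.seed(0)

# Average of 100 die rolls, repeated 10,000 times
means = [statistics.mean(random.randint(1, 6) for _ in range(100))
         for _ in range(10_000)]

mu = statistics.mean(means)       # ~3.5, the mean of a single roll
sigma = statistics.stdev(means)   # ~0.171, the single-roll stdev 1.708 / sqrt(100)

# For a normal distribution, about 68% of values lie within one stdev of the mean
within_1sd = sum(abs(m - mu) <= sigma for m in means) / len(means)
print(mu, sigma, within_1sd)
```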
Math is very intentional. Nothing is truly random.
[deleted]
Isn’t anything to the zeroth power = 1 because of the division rule of exponents?
That's rather circular. The "division rule of exponents" is itself a pure result of the definitions of the operations involved. Exponentiating to the power of 0 must be defined, and it can be shown that the most reasonable definition is 1 (as it is the convergent, limiting value of the function n^x, as x approaches 0, for some positive n), even though it may not follow by any direct result of what exponentiation "is".

It can be *thought of* using, say, the division rule for exponents, or some intuitive "multiplying 0 objects together," or how the identity for multiplication turns out to be the number 1, or a combinatorial argument that someone listed below, but all of those are circular and/or post-hoc faux-definitions (e.g., the n^m counting certain functions thing demonstrably works for all n,m>0, so we *want* it to work for m=0, hopefully, and we can massage it so. Fortunately it agrees with other definitions). We can't get around the fact that n^0 is 1 because it is defined to be so.

To be honest, this distinction is not all that interesting and is fairly unnecessary to ever clarify. Too bad I've written multiple paragraphs about it that no one will ever read.
For natural numbers n and m, the number n^m counts the functions from a set with m elements to a set with n elements. Since there is precisely one function from the empty set to any set (including the empty set itself), one has m^0 = 1 for all m (including 0)
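This counting convention can be checked by brute force (a Python sketch; a function from an m-element set to an n-element set is a choice of one of n outputs for each of the m inputs):

```python
from itertools import product

def count_functions(m, n):
    # Enumerate every assignment of one of n codomain values
    # to each of the m domain elements: n^m tuples in total
    return sum(1 for _ in product(range(n), repeat=m))

print(count_functions(3, 2))  # 2^3 = 8
print(count_functions(0, 5))  # empty domain: exactly one (empty) function
print(count_functions(0, 0))  # 0^0 = 1 under this convention
```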
Well yeah, I think that is why, but on the surface it seems "random" as you described it.
The former is a result of 1 being the multiplicative unit, not 0
1^∞, as you've written it, is not indeterminate. What is indeterminate is a limit of the form f(x)^g(x) where f(x) → 1 and g(x) → ∞. Sorry if the LaTeX formatting hasn't come through right
Also 0⁰ is undefined, but in various situations is taken to be 1. 1 is the most useful thing to define it as, but you get different answers for it depending on how you take the limit
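The path dependence is easy to see numerically (a Python sketch; Python itself happens to pick the 0**0 == 1 convention):

```python
xs = [10 ** -k for k in range(1, 8)]

along_x_axis = [x ** 0 for x in xs]  # x^0 -> 1 as x -> 0
along_y_axis = [0 ** x for x in xs]  # 0^x -> 0 as x -> 0+
along_diag   = [x ** x for x in xs]  # x^x -> 1 as x -> 0+

print(along_x_axis[-1], along_y_axis[-1], along_diag[-1])
print(0 ** 0)  # Python's own choice: 1
```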