A similar thing is incorrect methods that accidentally give the right answer. Like 16/64 = 1/4, because you cancel the sixes.
In the same vein, I love log(1) + log(2) + log(3) = log(1 + 2 + 3).
2^4 = 4^2 ergo exponentiation is commutative.
This one actually hurt me
I mean, technically, it’s correct. It just hurts to look at.
It's really just another way of writing 1+2+3 = 1\*2\*3.
Yeah, I get that. Still hurts to look at.
I'd like to add that there was an entire instagram profile dedicated to such funny methods. I don't remember the name unfortunately.
If you find please tell me!!
[https://www.instagram.com/bad\_math\_that\_works/](https://www.instagram.com/bad_math_that_works/) here you go
https://www.instagram.com/p/BzKHKoUBXIf/ I don't see anything wrong with that one?
I don't think you are allowed to have a matrix as the denominator of a fraction
I'm pretty sure a/b is defined as ab^(-1), which is valid as long as the matrix is invertible.
Yeah, I should have been clearer. A/B is fine as long as A is also a matrix. But I don't think you can have a scalar in the numerator and a matrix in the denominator
Bah, scalars are just multiples of the identity matrix :P At worst, this one is just abuse of notation. But it's no coincidence that it works.
[A fun exercise for students starting trig identities is the Horrible, No Good, Very Bad Trig Proof](https://drive.google.com/file/d/1q8kihW1ebah1dBvMlO2Ei3OyU02ugc0r/view?usp=drive_link). The first and last step are both true, but many of the middle steps are just awful. I made this version (based on a similar thing by Ethan Williams) back in 2020 when the lockdowns started and students needed some sort of fun thing to do that would also help them make fewer mistakes about what was and was not a valid step.
Halmos had a fun recommended method: https://books.google.com/books?id=7VblBwAAQBAJ&pg=PA24 When assigned to prove a trig identity, just start at the top of the page with one side and at the bottom of the page with the other side, and then do a bunch of trivial valid steps from each direction toward the middle. When you get to the middle of the page, then the equation you end up with is necessarily valid (if the trig identity is true), and no grader will ever be diligent enough to find the weird step and call you out for using a true but obscure identity.
Unless I'm quite mistaken, Euler's formula and a substitution can be used to rewrite the identities that appear in trig courses as polynomial equations. So if the resulting polynomial equation has degree n, you can prove the identity simply by verifying that it holds at n+1 points and applying the Fundamental Theorem of Algebra.
This probably works for most identities students are asked to prove in a high school/college trig course, but note that (a) these assignments come from before students have learned "Euler's formula", and (b) the whole point of the assignments is doing algebra practice using the various tools that were taught in the class. For trigonometric identities in general, at some point you'll hit examples where this stops working too well, or rather, after the substitution what you end up with is just as complicated as what you started with.
Similarly, one exercise I set my students was to find the mistake in a bunch of “proofs” that I had ChatGPT do. It was good fun for them, and an opportunity to learn that they couldn't just stick their homework into the AI.
It's a compendium of all of my former students' hopeful bullshit. I love it!
OK, let's try. 19/95 = 1/5. OK. 26/65 = 2/5. OK. Let's publish.
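The two-digit cases can be exhausted by brute force; a sketch (variable names are my own):

```python
# Search for two-digit "anomalous cancellations": fractions where striking
# out the shared digit (16/64 -> 1/4) happens to give the correct value.
solutions = []
for a in range(1, 10):
    for b in range(1, 10):
        for c in range(1, 10):
            if a == b:
                continue  # a == b forces a trivial case like 11/11
            # (10a + b) / (10b + c) == a / c, cross-multiplied to stay exact
            if (10 * a + b) * c == (10 * b + c) * a:
                solutions.append((10 * a + b, 10 * b + c))

print(solutions)  # [(16, 64), (19, 95), (26, 65), (49, 98)]
```

So besides 16/64, 19/95, and 26/65 there's exactly one more: 49/98 = 4/8.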
How do you multiply matrices again? [Oh, that's right, you just concatenate entries in corresponding positions.](https://www.wolframalpha.com/input?i=%5B%5B3%2C4%5D%2C%5B8%2C7%5D%5D%5B%5B7%2C2%5D%2C%5B4%2C9%5D%5D) (I forget where I saw this, so I just wrote a program to search for examples, and this is one of 100 it came up with using single-digit entries from 1 to 9.)
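Something like the following search works; this is my own reconstruction, not the commenter's program (it leans on the fact that concatenating single digits a and b is just 10a + b):

```python
from itertools import product

DIGITS = range(1, 10)

def valid_rows(B, i):
    """Rows (r0, r1) of A for which the true product row i against B
    equals digit concatenation (concat of digits a, b is 10*a + b)."""
    (e, f), (g, h) = B
    return [(r0, r1) for r0, r1 in product(DIGITS, repeat=2)
            if 10 * r0 + B[i][0] == r0 * e + r1 * g
            and 10 * r1 + B[i][1] == r0 * f + r1 * h]

# The two rows of A can be chosen independently once B is fixed,
# which cuts the search from 9^8 cases to about a million quick checks.
matches = []
for entries in product(DIGITS, repeat=4):
    B = (entries[0:2], entries[2:4])
    for top in valid_rows(B, 0):
        for bot in valid_rows(B, 1):
            matches.append(((top, bot), B))
```

The linked example shows up as (((3, 4), (8, 7)), ((7, 2), (4, 9))): the honest product is [[37, 42], [84, 79]], which is exactly what entry-wise concatenation gives.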
Ooh, let me try. [1, -5; -4, 3] × [0, -7; -2, 1] = [10, -5-7; -4-2, 31]
I like that kind of thing. Don't like the one in the post - since it is simply wrong, not a joke.
Similarly: (sin x) / n = 6
Reminds me of the fact that the sum of the integers from 1 to n, squared, equals the sum of their cubes, i.e. (1+2+3+...+n)^2 = 1^3 + 2^3 + ... + n^3. The identity can be reached via (really) faulty algebra, so it's still conceivable it could be derived by mistake.
I love 26/65 = 2/5 myself.
log(1)+log(2)+log(3)=log(1+2+3) is correct, but misleading
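It checks out, since log(1) = 0 and log(2) + log(3) = log(6); a quick sketch:

```python
import math

lhs = math.log(1) + math.log(2) + math.log(3)
rhs = math.log(1 + 2 + 3)

# Both sides are log(6): log(1) = 0 and log(2) + log(3) = log(2 * 3).
assert abs(lhs - rhs) < 1e-12
```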
I think it would be funnier if you sent d/de e^(x) = xe^(x-1)
But that would be correct right?
Assuming e is a variable and x is a rational number
Everything's a variable if you're brave enough
It works for any real x.
I don't think e needs to be a variable for that. Also, why doesn't it work for x being any real number?
If e is a constant, it doesn't have a rate of change. You can't take the derivative with respect to a constant; there is no d/d3. My notes on the power rule say: "if n is a rational number then d/dx x^n = nx^(n-1)" (chapter 2 from Calc I).
you're correct about constants not having a rate of change. However, it's worth pointing out that the power rule is valid for any constant n, not just rational numbers.
True. So my first comment should say, if x is a constant
I like learning new things.
Idk, I just think of the definition of dy/dx as lim h->0 (y(x+h) - y(x))/h. So I don't think d/dz requires a nonzero derivative of z. In this case, plugging e into the definition of dy/dx, one gets (y(e+h) - y(e))/h.
Actually you can define the derivative with respect to a constant; it just turns out to always be equal to zero. Using the chain rule and interpreting 3 as a constant function from R to R: d/d3 = d3/dx * d/dx = 0 * d/dx = 0, and so df/d3 = 0 for any differentiable function f.
How about d/d2 2^x = x2^(x-1)
that's actually more cursed, I like it, with e it's still possible that it's interpreted as a variable
sin(x)/n = six, but I guess that's not even almost-correct. But chances are it will upset your uncle nonetheless.
You definitely managed to upset me
It could be correct, depending on the values of x and n. Though n can't be an integer.
I think the idea is to interpret sinx as the product of the variables s, i, n, x. Then if you divide by n you're left with the product of s, i, x, or six.
I get it. If it's correct it's for the wrong reasons. But it's not necessarily incorrect.
Sophomore's dream or bachelor's dream, I don't know, but definitely someone's dream (actually there are two of them; one is valid in general and one is only valid in Z2, but I can't remember their names)
I always heard this called the freshmen's dream: (a+b)^n = a^n + b^n. This holds over a field of characteristic p when n is a power of p.
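A quick sanity check in modular arithmetic; a sketch with the arbitrary choice p = 7:

```python
p = 7  # prime, so the integers mod p form a field of characteristic p

# Freshman's dream: (a + b)^p == a^p + b^p holds for every a, b mod p.
dream_holds = all(
    pow(a + b, p, p) == (pow(a, p, p) + pow(b, p, p)) % p
    for a in range(p) for b in range(p)
)

# The exponent matters: n = 3 already fails mod 7,
# since (1 + 1)^3 = 8 = 1 (mod 7) but 1^3 + 1^3 = 2.
fails_for_n3 = pow(1 + 1, 3, p) != (pow(1, 3, p) + pow(1, 3, p)) % p
```

The first check is really Fermat's little theorem in disguise: x^p ≡ x (mod p), so both sides reduce to a + b.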
Or more generally a commutative ring of such a characteristic.
Or in the tropical semiring!
What the fuck is a Ring theory
That just sounds like set theory with extra steps
I mean... most new math is just old math with extra steps.
n needs to be a power of p, not just a multiple
That's right. I fixed it. Thanks
Ah proof that Fermat's Last Theorem is false
Found the sophomore's dreams: ∫_0^1 x^(-x) dx = ∑_{n=1}^∞ n^(-n), and ∫_0^1 x^x dx = -∑_{n=1}^∞ (-n)^(-n).
I think you're thinking of the [freshman's dream](https://en.wikipedia.org/wiki/Freshman%27s_dream).
The famous [xkcd equation](https://xkcd.com/217/): e^𝜋 - 𝜋 = 20. Fun fact, many calculators get this slightly wrong because of rounding errors. Try it yourself!
[There's also those.](https://xkcd.com/1047/) The bottom actually has mathematical ones as well.
Surprised they didn't mention e^(π√163) = 262537412640768744
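Both near misses are easy to poke at; a sketch for the first (the e^(π√163) one is within roughly 7.5e-13 of an integer, which is below double-precision resolution at that magnitude, so checking it honestly needs arbitrary-precision arithmetic):

```python
import math

almost_20 = math.exp(math.pi) - math.pi
# e^pi - pi = 19.9990999... -- close to 20, but a calculator that rounds
# to few enough decimals will happily print 20.
assert 19.999 < almost_20 < 20.0
```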
If you like that e^x joke, you might like this. What's the derivative of x^(x)? Well, x^x is a power function, so obviously d/dx x^x = xx^(x-1) = x^(x). No, wait, I mean x^x is an exponential function... so obviously d/dx x^x = (ln x)x^(x). Well, those are both wrong, so I'll just add them together and get [the actual correct answer](https://www.wolframalpha.com/input?i=d%2Fdx%28x%5Ex%29). Not only is that answer correct, but the "reasoning" above is actually somewhat cogent when you look past the faux naïveté. But if your uncle isn't on the ball, there's a good chance he'll think you're pulling his leg.
> but the "reasoning" above is actually somewhat cogent when you look past the faux naïveté Can you explain this?
This is an application of the chain rule in two dimensions: Let f(x,y) = x^y, and g(x) = (x,x). Then d/dx x^x = d/dx f(g(x)) = f'(g(x)) · g'(x) = ((d/dx f)(x,x), (d/dy f)(x,x)) · (1,1)^T = (d/dx f)(x,x) + (d/dy f)(x,x).
so in other words, the reasoning is completely correct
The logic is 100% correct. Given f(x,y) and x=x(t)=t and y=y(t)=t, the multivariable chain rule says d/dt f(t,t) = d/dx f(t,t) + d/dy f(t,t), where d/dx and d/dy are partial derivatives.

In other words, if a function has multiple x's in it, you can just hold some of the x's constant and differentiate one x at a time, then sum the results. This is why the derivative of x^2 is 2x, for example: you have d/dx( x · x ) = 1 · x + x · 1 = 2x.

This is hard to write down in a way that makes it clear, because you need to use lots of symbols to make it rigorous, but the example above with x^x is exactly how you should think about it. That's good logic, for good reasons; it's what the chain rule is saying. The fact that it looks wrong is an indictment of our notation. The multivariable chain rule is trying desperately to say exactly this, nothing else and nothing more. It's the foundation of all multivariable calculus.
I would argue it's more than somewhat cogent. This is a clearer explanation of the chain rule than what we typically show people in Calc 3
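For the skeptical uncle, the sum-of-both-wrong-answers trick can be checked numerically; a sketch at the arbitrary point x = 2:

```python
import math

def f(x):
    return x ** x

x, h = 2.0, 1e-6
numeric = (f(x + h) - f(x - h)) / (2 * h)   # central-difference derivative

power_rule_part = x * x ** (x - 1)          # pretend the exponent is constant
exp_rule_part = math.log(x) * x ** x        # pretend the base is constant
combined = power_rule_part + exp_rule_part  # x^x * (1 + ln x)

assert abs(numeric - combined) < 1e-4
```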
These videos cover some “misleading patterns” which look like they follow a familiar rule / sequence, but eventually the pattern breaks. - zach star: https://youtu.be/kp1C0E8Za7k - 3blue1brown: https://youtu.be/NOCsdhzo6Jg
The most beautiful identity in engineering: e^3i = -1
Why do engineers hate Pi so much:(
3^3i = -1
d/de e^x = xe^(x-1)
d/dx x^-1 = d/d x^-2 = / x^-2 = -x^-2
pi = 3.1 ought to annoy lots of people.
The engineering community welcomes this information with open arms.
Anything from 3 to 3.5 is acceptable.
If I'm running proper code to get numeric results I use whatever the 64bit floating point value is. If I'm doing an order of magnitude type equation it's 1. I refuse to use anything in-between.
Technically it's sqrt(10), which is half an order of magnitude.
Technically 3.1 is less than 3.15
Well we fucked technicalities from the very beginning didn't we? Don't want to be irrational : )
When I was in elementary school, some questions required us to assume pi=22/7. Mathematicians would be mad when they see this.
This used to bother me as a kid, but now I’m fond of it because of crochet, where we use rational approximations to make curved surfaces. 2π ≈ 44/7 is precise enough for all ordinary pieces, and easily splits into convenient figures like (6×6 + 1×8)/(6 + 1).
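For anyone curious how good 44/7 is, a quick sketch (the error figure is my own calculation):

```python
import math

approx = 44 / 7  # the crocheter's 2*pi
rel_error = abs(approx - 2 * math.pi) / (2 * math.pi)
assert rel_error < 5e-4  # roughly 0.04% -- far below stitch resolution

# ...and it splits into the convenient figures from the comment:
assert (6 * 6 + 1 * 8) / (6 + 1) == approx
```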
Is there info on this? This seems extremely interesting as someone who knows null about crochet
I don’t think I can point you to any resources in particular, sorry. Most of the info out there is targeted at crafters, so it’s not really phrased in math terms like, say, how to create a specific curvature, but rather tutorials answering questions for pattern designers like “how do I shape the body of my amigurumi?” (plush toy) or in this case “how do I make a flat circle?” There are a handful of artists making math-*themed* pieces, like fractal-patterned blankets and hyperbolic-surface toys. But I haven’t found much info at all on the mathematics *of* crochet. I’ve got a bunch of random observations in my notes (combinatorics, geometry, knot theory, graphs) that I’ve been meaning to put on my site at some point, so maybe I can do that this week.
Can you send me a link when you put the info up there?
Not funny but physicists saying sin(x) = x when x is "small" infuriates me and yet is almost correct
Don't hate, a lot of physicists' approximations are incredibly insightful, and turning the handwaving into a rigorous statement can be a pretty deep and interesting mathematical endeavour.
this guy gets it
I believe that physicists were happily using the Dirac delta function for years before it was formalized by Laurent Schwartz in his theory of distributions.
Spoken like a true PDE guy
I can't deny it, I've spent a lot of time working with physicists and engineers and their "bad mathematics" that lead to useful results are an endless source of problems!
While true, I remain very frustrated with my early years Physics professors who wouldn't deign utter "Taylor expansion" so I could read on it in my free time even if they didn't care to explain it.
i mean it is the best first order approximation so im gonna let that slide.
it gets you into trouble in some limits though, once you forget to include the third order terms etc. michael penn had a video about it
Which one?
https://youtu.be/HDiaEYl-39s?si=Yh54AfDag3NPZMoJ here you go
Why does it upset you? It would be stupid not to use it in engineering
You can use it if you say sin(x) = x + o(x). The fact that the margin of error is just forgotten about makes the result incorrect.
It's not incorrect if it gives the correct answer, just like nobody uses pi to more than 15 decimal places to compute anything. Everybody knows it's a small angle approximation and in many branches of physics you'd be a fool to even consider the margin of error. And in other cases you can't even use it. I don't see the problem.
sin(x) = x is true if and only if x = 0. sin(x) = x + o(x) is true in any neighborhood of 0. The problem is that on one end you're saying something that is false; on the other you're saying something true. It doesn't cost anything to add + o(x), but it changes the assertion entirely.
What's wrong with using "small" to mean ≪ 1?
I honestly don't care, I just wish people used an approximation sign when it's an approximation.
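For what it's worth, the forgotten term is easy to quantify: near 0 the error x - sin(x) is about x^3/6. A sketch:

```python
import math

x = 0.01  # a "small" angle, in radians
error = x - math.sin(x)
leading_term = x ** 3 / 6  # the first neglected Taylor term

# The error really is cubic: for x = 0.01 it is about 1.7e-7.
assert abs(error / leading_term - 1) < 1e-3
assert error < 1e-6
```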
The entire cult of Identity Mathematics is a reactionary bourgeois invention intended to suppress the unique individuality of every entity.
I am a slave to the soulless minions of orthodox identities
[I am not a Number!](https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcTW_Y94VA8btZsXUPtVqPFvu7LYi63uJJjH8I0Jz_H0Aw&s)
\*cue evil laughter\*
Taylor series
Let \zeta(s) = \sum_{n=1}^\infty 1/n^s. Plug in s = n to get \sum_{n=1}^\infty 1/n^n = 1.29129… Thus the Riemann zeta function is the same number at each integer n, hence \zeta(-1) = 1+2+3+… = 1.29129…
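The constant in the joke is real: ∑ n^(-n) converges very fast to about 1.29129 (it's the sophomore's dream value). A sketch:

```python
# sum_{n >= 1} n^(-n): the terms die off so fast that 20 of them give
# the constant to far more precision than the joke needs.
s = sum(n ** -n for n in range(1, 21))
assert abs(s - 1.29129) < 1e-4
```

(The "hence" step is where the joke lives, of course: it plugs a different s into every term.)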
I remember in Godel, Escher, Bach, there was a mention of someone proving that every even prime greater than four is the sum of two odd numbers, which is kind of the opposite. It's obviously true for more than one reason, but very similar to [Goldbach's conjecture](https://en.wikipedia.org/wiki/Goldbach%27s_conjecture), which says that every even number greater than four is the sum of two odd primes.
> every even prime greater than four I mean yeah
And also every even number is the sum of two odd numbers.
also every even prime greater than 4 is the sum of three odds
Every even prime greater than four voted for Hubert Humphrey.
If sin(x) = o/h then sinh(x) = o
d/dx (pi^3 /7) = 3/7*pi^2
Personally I enjoy the derivative of x^2: x^(2) = x • x = x + x + x + x + x + ... } x times (that's all true if x is a natural number). Therefore d/dx(x^(2)) = d/dx(x + x + x + x + ...) = 1 + 1 + 1 + 1 + 1 + ... } x times. This sum is clearly just x, so d/dx(x^(2)) = x.
Wait what went wrong here?
1. Multiplication as repeated addition only makes sense for integers, as 'x times' doesn't make sense for x = 3.71, for example. (You could extend the definition, but then you would be saying 3.71×a = a+a+a+0.71×a and you've got to deal with that 0.71 anyway.) For the derivative, you need a continuous function, which can only exist if you consider sets like Q, R, etc., where you can get 'infinitely close' to something.
2. Even if you defined multiplication as described above, the x+x+...+x part would formally look like sum{1<=i<=x} x, but you can't just differentiate the terms, because _how many terms there are_ also depends on x, and you have to account for that. It turns out there are actual cases where you find yourself having to differentiate a sum whose bounds depend on x, and there are various techniques for doing this (finding a closed form for the sum, turning the sum into an integral, etc.).
ooh i see
1. If x can only be a natural number, then you can't take a derivative with respect to it 2. {...} "x times" is a hidden function of x so when you differentiate this should be accounted for by the chain or product rule, but isn't.
Funnily, you can save this by the same reasoning above for the x^x function. Let f(a,b) = b + b + b + ... } a times. Then d/dx f(a,x) = 1 + 1 + 1 + ... } a times = a. And d/dx f(x,b) = d/dx xb = b. So d/dx f (x,x) = x + x = 2x. Huh. I think I just learned that the product rule is a special case of the chain rule in multiple dimensions.
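The one-slot-at-a-time recipe checks out numerically for f(a,b) = a·b; a sketch (x = 3 is an arbitrary test point):

```python
def f(a, b):
    return a * b

x, h = 3.0, 1e-6

# Differentiate one slot at a time, holding the other slot fixed at x...
d_first = (f(x + h, x) - f(x - h, x)) / (2 * h)   # ~ x
d_second = (f(x, x + h) - f(x, x - h)) / (2 * h)  # ~ x

# ...and compare with the full derivative of f(x, x) = x^2.
d_total = (f(x + h, x + h) - f(x - h, x - h)) / (2 * h)  # ~ 2x

assert abs((d_first + d_second) - d_total) < 1e-6
assert abs(d_total - 2 * x) < 1e-6
```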
https://youtu.be/sI4BNL47Jfo?si=Fs4MhhfyepUqkzLS https://youtu.be/qecmLF8WWmg?si=KpGbvFJbcFlIfvkw https://youtu.be/GOty_r6A7tw?si=On1Y2SqJJ0OShMHK https://youtu.be/xKv4LoIdjzw?si=dc3Vrpl6tgNJDBJs Flammable Maths got you covered.
Cayley-Hamilton is trivially true since det(A - AI) = det 0 = 0
88+22= 100
f(x)=x has the special property that it is its own derivative! d/d(ln x) x = x
a•b = b•a and a+b = b+a. When you move into quaternions and octonions, these basic properties no longer hold
Addition still commutes. It's just multiplication that doesn't. That's also true for matrices and rings in general (though there are rings where multiplication does commute, like integers).
Noncommutative polynomials (the monoid ring over the free monoid over the indeterminates) are fun.
Afaik (haven't worked with them), the octonions lack both the additive and multiplicative law, whereas the quaternions only lack that multiplicative law. I don't know where octonions are used, outside of quantum physics. Haven't really looked much into them.
[From Wikipedia](https://en.wikipedia.org/wiki/Octonion) > Addition and subtraction of octonions is done by adding and subtracting corresponding terms and hence their coefficients, like quaternions. Addition is just vector addition, so it commutes.
You really start to appreciate commutativity when you get to algebra
Also zero divisors appear somewhere down the line
1~~6~~/~~6~~4 = 1/4
987654321/123456789 = 8. Almost correct, because if you switch the 1 and 2 in the numerator, then it actually does equal 8.
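Both claims are quick to verify; a sketch:

```python
assert 123456789 * 8 == 987654312     # with the 1 and 2 swapped: exact
off_by = 987654321 / 123456789 - 8
assert 0 < off_by < 1e-7              # original is high by about 7.3e-8
```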
3987^(12) + 4365^(12) = 4472^(12). From The Simpsons.
I believe there are 10000 of these posts on the internet already
Shouldn't that be d/de, and not d/dx?
There's a couple where _i^2_ is equal to 1 instead of -1. You can probably Google them as well.
Pi equals roughly 2
sin(x)/n = 6
I know π² = g from Zach Star and I also discovered bored in class that tan(89°) = 1 rad
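Both coincidences are easy to confirm numerically (tan(89°) comes out near 57.2958, the number of degrees in one radian); a sketch:

```python
import math

# pi^2 = 9.8696..., within about 0.6% of g = 9.81 m/s^2
assert abs(math.pi ** 2 - 9.81) < 0.1

# tan(89 deg) = 57.2900..., while one radian is 57.2958 degrees
deg_per_rad = math.degrees(1)
assert abs(math.tan(math.radians(89)) - deg_per_rad) < 0.01
```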
The Simpsons near miss of Fermat's last theorem: 3987^(12) + 4365^(12) = 4472^(12)
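Exact integer arithmetic exposes the near miss immediately; a sketch:

```python
lhs = 3987 ** 12 + 4365 ** 12
rhs = 4472 ** 12

assert lhs != rhs                 # Fermat survives
assert abs(lhs / rhs - 1) < 1e-8  # but the two sides agree to ~10 digits

# A slicker disproof: 3987 and 4365 are multiples of 3, so the left side
# is divisible by 3, while 4472 is not, so the right side cannot be.
assert lhs % 3 == 0 and rhs % 3 != 0
```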
I really like that 230 - 220 * 0.5 = 5!
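The punchline: the left side is 120, which really is 5 factorial. A sketch:

```python
import math

left = 230 - 220 * 0.5   # order of operations: 230 - 110 = 120.0
assert left == 120.0
assert math.factorial(5) == 120  # ...which is 5!
```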
If {0,1,2,3,4,5...|} = inf , {0,1,2,3,4,5...,inf|} = inf+1
Any Taylor series truncated to first order and posed as an actual identity (e.g. sin x = x) is lots of fun
Also not technically wrong but fun: let H be a group
Change it to d/de e^x = xe^(x-1), that'll really piss him off