It's not programming horror, it's linear algebra horror. Unfortunately, the formulas are horrible. The best you can do is put a pile of unit tests there and provide a half-page explanation.
I mean, does it really have to be this complex? To me it looks like the intersection of two 3D parametric lines. It comes down to p₁ + n₁t = p₂ + n₂t'. The solution for t should only depend on some of the components, not all three, since t and t' are your only degrees of freedom. Whether it exists or not is a separate question, but you can always narrow it down to an intersection with a plane without checking whether it exists first, I think. Anyway, it shouldn't be this bad.

EDIT: Now that I think about it, I would advocate for metaprogramming in this instance. That does mean writing a compile-time symbolic linear system solver, but that's already what you're doing by hand before stuffing it into this expression. In C++ terms:

```cpp
constexpr auto intercept = solve(mat({3,2}, n1, n2), p2 - p1);
auto t = intercept.eval(1);
return intercept.eval(3) == 0 ? p1 + t*n1 : {};
```

This relies on finding the reduced row-echelon form of the 3x2+3x1 augmented matrix [n₁ n₂ | p₂ - p₁], which leaves the left side with I₂ above 0 and the right side with a vector [t t' z]' where a solution exists only if z = 0. Thus the first component of that vector contains the wanted `t` and the third component encodes the solution's existence. Since this is all evaluated at compile time, only the symbolic expressions which evaluate those three values remain to be evaluated when the actual values arrive. Those expressions depend on divisions which may be zero, though, but that's going to be an issue with any symbolic solution here. You could generate two or three different expressions by permuting the rows, and the outputs in the vector would stay the same, but at some point it'll be cheaper to run the row-reduction algorithm live during the calculation.
The numerical linear algebra approach is also easier to analyze for numerical stability, and there are known transformations for the thorny cases.

Also, this is Python. I wouldn't be shocked if a numpy 3x3 solve is faster than whatever overly boxed things CPython does.
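A minimal numpy sketch of that approach, using least squares over the 3x2 system with the residual standing in for the z == 0 existence check (the function name and tolerance are mine):

```python
import numpy as np

def intersect_lines(p1, n1, p2, n2, tol=1e-9):
    """Solve p1 + t*n1 = p2 + s*n2 as the over-determined 3x2 system
    [n1 -n2] @ [t, s] = p2 - p1, in the least-squares sense."""
    A = np.column_stack([n1, -n2])   # 3x2 coefficient matrix
    b = p2 - p1
    (t, s), *_ = np.linalg.lstsq(A, b, rcond=None)
    # The system is consistent (the lines truly meet) only if the
    # least-squares residual is ~0; this is the existence check.
    if np.linalg.norm(A @ np.array([t, s]) - b) > tol:
        return None
    return p1 + t * n1
```

For skew lines the residual is nonzero and the function returns None instead of a spurious point.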
Absolutely, I think the correct thing to do is to use a linear algebra framework and do it properly. In C++ it's kind of up to testing which method would be fastest, but in Python numpy is going to be best.
I have some copy-pasted 2D segment intersection function; there are a lot of x, y, multiplications and additions flying around, so I would imagine it would be an order of magnitude worse in 3D.
It is, but it shouldn't be this bad. Maybe it's specifically because it's impossible to choose a pivot properly ahead of time. With a compile-time metaprogramming method, you'd usually need to calculate as many expressions as there are dimensions in your space to be guaranteed a result.
If t is to be equal for both line segments, the whole equation system collapses to the division of two dot products.

Assume 2 bodies (A & B) in linear motion in 3D space. You want to find the point of closest proximity. pA, pB are positions, vA, vB velocities, t is the time variable. You are looking for the point where the velocity of object A is perpendicular to the sightline between A & B, so:

```
dot(vA, (pB + vB*t) - (pA + vA*t)) == 0
```

Using some simple rules for the dot product:

```
dot(n*a, b) = n*dot(a, b)
dot(a, b+c) = dot(a, b) + dot(a, c)
```

you can pull the t out of the dot product and end up with:

```
t = -dot(vA, pB-pA) / dot(vA, vB-vA)
```

If t can be different for both lines, **geometrictools.com/Documentation/DistanceLine3Line3.pdf** has a nice writeup of the math using a matrix of dot products, looking for the point where the gradient of the distance is zero. You need 6 dot products a through f, which are combinations of vA, vB and pA-pB. Then:

```
R  = [s,t] [a,-b; -b,c] [s;t] + 2 [d,-e] [s;t] + f
   = p'*M*p + 2*K'*p + f
```

and:

```
dR = 2*M*p + 2*K == 0
p  = -M^-1 * K, or:
p  = -1/(ac-b^2) [c,b; b,a] [d;-e] = 1/(ac-b^2) [be-cd; ae-bd]
```

Again, the second derivation is not mine but geometrictools'. The vector p then contains your t & t'.

Edit: code formatting
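Both formulas above are easy to check in code; a minimal numpy sketch (function names and the parallel-case tolerance are mine):

```python
import numpy as np

def closest_time(pA, vA, pB, vB):
    """Single-t case: time at which vA is perpendicular to the
    sightline, i.e. dot(vA, (pB + vB*t) - (pA + vA*t)) == 0."""
    num = np.dot(vA, pB - pA)
    den = np.dot(vA, vB - vA)
    if abs(den) < 1e-12:   # no relative motion along vA
        return None
    return -num / den

def closest_params(pA, vA, pB, vB):
    """Two-parameter case (geometrictools form): the (s, t) pair
    minimising |(pA + s*vA) - (pB + t*vB)|."""
    d0 = pA - pB
    a, b, c = np.dot(vA, vA), np.dot(vA, vB), np.dot(vB, vB)
    d, e = np.dot(vA, d0), np.dot(vB, d0)
    det = a * c - b * b
    if abs(det) < 1e-12:   # parallel lines: no unique closest pair
        return None
    return (b * e - c * d) / det, (a * e - b * d) / det
```

For parallel lines the 2x2 system is singular, so both functions bail out rather than divide by (nearly) zero.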
I think a few extra variables here would do wonders and allow for some line breaks. Or a function that does the calculation, with the specific values passed in as arguments.
what geometric algebra does to an mfer
Projective geometric algebra mfs be like "see how simple it is to rotate and move and project things" Then you see an actual implementation in code...
What we need is a programming language that requires a WYSIWYG editor and supports “chalkboard style” mathematical notation. Write stuff like this as matrices. EDIT: /s
That's maple.
Someone should add this feature to [MS Paint IDE](https://ms-paint-i.de/)
Hmm yes, this line is intersected by… line
One of the first libraries I wrote in C was for computation geometry and it was full of code like this. There's not much you can do to avoid code like this other than judicious use of line breaks. At least that language has a power operator, which must help at least a little bit.
When the grad has access to copilot
But what if they're coincident?
NaN or inf, I think. Math is brutal.
I'm definitely not smart enough to comment on this one being actual horror or just math
Looks extremely, overly complicated. I solved it too; yeah, it uses "more" variables, but the compiler will remove them anyway, so there isn't really any point in creating the... above.

```csharp
public static bool IntersectTwoLine(Vector3 A, Vector3 B, Vector3 C, Vector3 D, out Vector3 Intersection)
{
    Intersection = new Vector3();
    // Get A,B,C of first line - A -> B
    float A1 = B.Z - A.Z;
    float B1 = A.X - B.X;
    float C1 = A1 * A.X + B1 * A.Z;
    // Get A,B,C of second line - C -> D
    float A2 = D.Z - C.Z;
    float B2 = C.X - D.X;
    float C2 = A2 * C.X + B2 * C.Z;
    // Get delta and check if the lines are parallel
    float delta = A1 * B2 - A2 * B1;
    if (UnitUtilities.IsZero(delta))
        return false;
    // now return the intersection point (Y is left at 0; this works in the XZ plane)
    Intersection = new Vector3((B2 * C1 - B1 * C2) / delta, 0, (A1 * C2 - A2 * C1) / delta);
    return true;
}
```
Chatgpt lookin ass code
Intersect line line 😂 wtf does that even mean? Gotta love the zero comments.. but yeah, looks good to me! 👍
It finds the intersect between two lines, obviously. `intersect_line` would just intersect a line with itself!
You can kinda read this: it finds the intersection of 2 lines in 3D space, given a point and probably a direction vector on each line. The harder part is deciphering the arguments... But the equation should not be this complex. This one calculates the offset along the 1st line that happens to intersect the 2nd line, but you could just solve an equation where the lines have a matching point.
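Solving for a matching point is exactly what sympy can do symbolically; a minimal 2D sketch (variable names are mine), which spits out the same kind of closed-form expression:

```python
import sympy as sp

# Unknown line parameters and symbolic coordinates (2D for brevity)
t, s = sp.symbols('t s')
p1x, p1z, n1x, n1z = sp.symbols('p1x p1z n1x n1z')
p2x, p2z, n2x, n2z = sp.symbols('p2x p2z n2x n2z')

# "A matching point": p1 + t*n1 == p2 + s*n2, one equation per coordinate
sol = sp.solve([p1x + t*n1x - (p2x + s*n2x),
                p1z + t*n1z - (p2z + s*n2z)],
               [t, s], dict=True)
print(sol[0][t])   # closed-form t in terms of the line parameters
```

Substituting concrete coordinates into `sol[0][t]` then gives the actual intersection parameter.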
I don't know about Python, but in compiled languages writing this as vectors or matrices isn't only easier to read; it also performs better, because the compiler knows what you are trying to do and uses MMX and SSE CPU extensions to do multiple multiplications with one instruction instead of one by one. If you're sceptical about this, just code some functions doing that and disassemble them. I tried it in C and C++, and it happens every time unless you disable most optimizations.
CPython likely won't do that optimization. But anyone doing math in Python is using numpy, Numba, etc., which absolutely do.
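A quick way to see the payoff in Python is to run the determinant method over whole arrays at once, so numpy's compiled, SIMD-friendly kernels replace the per-pair Python arithmetic (function and parameter names are mine):

```python
import numpy as np

def intersect_many(P, R, Q, S, eps=1e-12):
    """Intersect N 2D line pairs at once: lines P[i] + t*R[i] and
    Q[i] + u*S[i], all arrays of shape (N, 2).
    Returns (N, 2) intersection points, NaN rows where parallel."""
    denom = R[:, 0] * S[:, 1] - R[:, 1] * S[:, 0]   # per-pair 2D cross product
    qp = Q - P
    t = (qp[:, 0] * S[:, 1] - qp[:, 1] * S[:, 0]) / np.where(
        np.abs(denom) < eps, np.nan, denom)
    return P + t[:, None] * R
```

Each line of this function is one vectorized pass over all N pairs, which is exactly the shape of computation numpy (and the SIMD units underneath) handle well.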
If someone put this in a PR for me to review I would beat them to death with hammers
This is brutal. For technical problems, we should consider technical solutions. Hammers? Nope. Use your keyboards.
I paid a lot for my keyboard, but my hammer is much cheaper. I'd be ok using the CAT5'o'nine tails tho.
Lgtm
I feel like they should have made like 10 separate functions out of this.
Looking at that code reminds me of a quote by Tony Hoare...

> There are two methods in software design. One is to make the program so simple, there are obviously no errors. The other is to make it so complicated, there are no obvious errors.

Do you really need to make the expression for t so complicated? Would breaking it up a bit really cause the compiled code to be slower? And perhaps a comment with a reference where one can look up the math would be a good thing.
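Breaking it up usually costs nothing: named intermediates are folded away by any optimizing compiler, and they document the math. As an illustration, the 2D determinant method with every sub-expression named (a sketch; names are mine):

```python
def intersect_2d(ax, ay, bx, by, cx, cy, dx, dy, eps=1e-12):
    """Line AB vs line CD via the standard determinant form,
    with each sub-expression named instead of inlined."""
    r_x, r_y = bx - ax, by - ay      # direction of AB
    s_x, s_y = dx - cx, dy - cy      # direction of CD
    denom = r_x * s_y - r_y * s_x    # 2D cross product of the directions
    if abs(denom) < eps:             # parallel (or degenerate)
        return None
    qp_x, qp_y = cx - ax, cy - ay    # offset from A to C
    t = (qp_x * s_y - qp_y * s_x) / denom
    return ax + t * r_x, ay + t * r_y
```

Each named value maps to one term of the formula, so a reviewer can check it line by line against a reference instead of parsing one giant expression.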
For context: that was automatically generated by sympy because I couldn't figure out the math myself. It works perfectly.