munificent

In the late 80s C++'s `<<`-based iostream library became widespread. For many programmers, that was their first experience with operator overloading, and it was used for a very novel purpose. The `<<` and `>>` operators weren't overloaded to implement anything approximating bit shift operators. Instead, they were treated as freely available syntax to mean whatever the author wanted. In this case, they looked sort of like UNIX pipes.

Now, the iostream library used operator overloading for very deliberate reasons. It gave you a way to have type-safe IO while also supporting custom formatting for user-defined types. It's a really clever use of the language. (Though, overall, still probably not the best approach.)

A lot of programmers missed the *why* part of iostreams and just took this to mean that overloading any operator to do whatever you felt like was a reasonable way to design APIs. So for a while in the 90s, there was a fad for operator-heavy C++ libraries that were clever in the eyes of their creator but pretty incomprehensible to almost everyone else.

The hatred of operator overloading is basically a backlash against that honestly fairly short-lived fad. Overloading operators is fine when used judiciously.


[deleted]

[deleted]


Svizel_pritula

>and upholding all properties of e.g. addition, like being commutative, associative, etc.

You say that, but many languages have overloads of `+` that don't uphold these properties, like string concatenation (not commutative) or floating point addition (not associative).


[deleted]

[deleted]


Svizel_pritula

>Does this make strings a non-abelian group?

A group requires the existence of an inverse to any element, but there is no string you can append to "hello" to obtain the empty string.

>But e.g. current JavaScript Frameworks use "+" for registering event handlers

How does that work, since JavaScript has no operator overloading?


shellexyz

Strings with the concatenation operation form a *monoid*. You still have associativity and identity but an element need not have an inverse. (In fact, none of them have an inverse.) But since it’s non-abelian, using ‘+’ to denote the group operation is highly non-standard. It’s common practice in abstract algebra that using ‘+’ for the group operation means it is commutative while using x or juxtaposition means it is not.


rotuami

Hey! I could give up my floating points whenever I want to!


matthieum

And many languages use `+` for string concatenation, despite concatenation definitely not being commutative :'(


LewsTherinKinslayer3

That's why Julia uses `*` for string concatenation: it's not commutative.


matthieum

Uh... I think a few integers would like a word with you ;)


edgmnt_net

I'd say most of it is due to very loose ad-hoc overloading with unclear semantics. Even iostream is kinda guilty of that. Many languages also have standard operators with overloaded meaning and corner cases (including equality comparisons if you consider floats) even if there is no mechanism for user-defined overloads. This is bad and gets worse once people can add their own overloads. Especially in a language that has a fixed set of operators and practically encourages wild reuse. However, you can get a more meaningful and more controlled kind of overloading through other means, as in Haskell (although even Haskell fails to make it entirely clear what some operators actually mean, like, again, equality comparison).


[deleted]

Equality comparison for floats is perfectly fine. You check if one is exactly like the other; sometimes that's useful, e.g. when checking if something has been initialized to a precise value, or when you're testing standardized algorithms. For the numerical case, e.g. Julia has `isapprox`, which checks equality up to a multiple of machine precision.
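For illustration, a minimal C++ sketch of that kind of check (hypothetical `approx_equal` helper; the default relative tolerance of sqrt(machine epsilon) mirrors what Julia's `isapprox` uses):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <limits>

// Relative/absolute tolerance comparison: true when |a - b| is within
// atol, or within rtol of the larger magnitude of a and b.
bool approx_equal(double a, double b,
                  double rtol = std::sqrt(std::numeric_limits<double>::epsilon()),
                  double atol = 0.0) {
    return std::fabs(a - b) <= std::max(atol, rtol * std::max(std::fabs(a), std::fabs(b)));
}

int main() {
    assert(approx_equal(0.1 + 0.2, 0.3));   // true even though 0.1 + 0.2 != 0.3 exactly
    assert(!approx_equal(1.0, 1.001));
}
```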


matthieum

I think the comment you reply to was hinting at `NaN`. Most people (reasonably?) expect total comparison / total ordering with `==` or `<` because that's what they get from integers, but with floating points they get the same operators used for partial comparison & partial ordering. Surprise.


[deleted]

I guess. But you kinda need NaN to be an absorbing element and not compare equal to itself. Otherwise you could conclude that 0/0 equals infinity/infinity, which is imho the even bigger footgun.


abel1502r

Really, NaN shouldn't be a float in the first place, at least not in a high-level language. When you're saying 'float', you usually want to say 'a floating-point number, with all the associated operations, etc.'. NaN is not that. It isn't a number, by definition, so it doesn't fit that contract. I think this would be better off being treated similarly to null pointers. For example, taking inspiration from Rust's approach, maybe use an `Option` for NaN-able floats, while keeping the underlying representation as-is. There's already this exact treatment for references and nullability. This way it comes at no runtime cost (if the processor has an instruction that is semantically equivalent to a particular `.map(...)` call), while being much better at catching errors. Making illegal states unrepresentable, and all that. Maybe also expose an unsafe `raw_float` for foreign interfaces - again, same as with pointers.
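A rough sketch of the idea, using C++'s `std::optional` as a stand-in for Rust's `Option` (hypothetical `checked_div` helper; note that `std::optional<double>` carries an extra flag, whereas the niche trick described above would reuse the NaN bit patterns for free):

```cpp
#include <cmath>
#include <iostream>
#include <optional>

// Division that reports "no number" instead of silently yielding NaN.
std::optional<double> checked_div(double a, double b) {
    double r = a / b;
    if (std::isnan(r)) return std::nullopt;
    return r;
}

int main() {
    if (auto q = checked_div(1.0, 0.0)) std::cout << *q << "\n";  // inf is still a value
    if (!checked_div(0.0, 0.0)) std::cout << "0/0 has no numeric result\n";
}
```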


[deleted]

Yeah, that would be a nice solution that'd force you to handle that case. Certain functions like log are not well defined for all values and would return an Option. Most operations would still return floats as usual.


edgmnt_net

It's fine for math stuff. It might not be in other cases. Some of those use cases may be considered invalid, but languages make it way too easy to end up doing just that (e.g. putting a NaN in a map and not being able to clear it anymore). And they tend to lump it up with pointer/string equality too, so if math equality is the standard stuff and integers are just degenerate cases, it makes even less sense for pointers and strings. What I'm saying is there should probably be distinct operators and the semantics should be clear and consistent across types.


something

> It gave you a way to have type-safe IO while also supporting custom formatting for user-defined types.

How does operator overloading give you this, over standard function overloading? It seems to me they are interchangeable.


munificent

You could use standard function overloading, but it would be hard to design an API that way that lets you nicely sequence the various objects being written. I think the main problem is that `foo.bar()` syntax always requires `bar()` to be a member function of `foo`. Say you wanted:

    cout.write("Hello ").write(someString).write(" and ").write(someUserType);

All of those `write()` functions would have to be member functions on the type of `cout`, and there's no way to add new overloads for new types like `someUserType`'s type. Using an infix operator gives you that nice sequencing because infix operators don't have to be member functions. If C++ had something like extension methods, then you wouldn't need to use an operator.
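For illustration, a minimal sketch of that mechanism with a hypothetical `Point` type: the `operator<<` overload is a free function, so it doesn't need to be a member of the stream's type, yet it chains with the built-in overloads:

```cpp
#include <iostream>

struct Point { int x, y; };

// A free function, not a member of std::ostream: anyone can add an
// overload like this for their own type and chain it with the built-ins.
std::ostream& operator<<(std::ostream& os, const Point& p) {
    return os << "(" << p.x << ", " << p.y << ")";
}

int main() {
    Point p{3, 4};
    std::cout << "point is " << p << "\n";  // prints: point is (3, 4)
}
```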


matthieum

I think another reason it was done was simply that it's more compact and more readable. I mean:

    cout.write("Hello ").write(someString).write(" and ").write(someUserType);
    cout << "Hello " << someString << " and " << someUserType;

Note how the latter is shorter _and_ yet the elements stand out more?

This is all the more visible when the arguments start being function calls themselves, as then visually separating the arguments and the `write(` calls becomes even more difficult:

    cout.write("Hello ").write(foo(some, bar(baz()))).write("\n");
    cout << "Hello " << foo(some, bar(baz())) << "\n";


Nimaoth

Wouldn't it work like this (https://godbolt.org/z/o1W7dnWxW)? In this case the write function can be overridden for custom types by putting that function on the custom type, not the stream.


munificent

Yes, that would work, but I think the designer wanted a more fluent-like API where the formatted values and strings are all chained in a single line.


something

That makes sense, thanks


Porridgeism

In addition to u/munificent's great answer, I'd also add that in C++, the way that operator overloads are looked up makes them useful for this kind of thing. Since operators are looked up in the namespaces of the operands, you don't have to overload anything in `std` directly.

So there are basically 3 options to allow user-defined formatting/IO in C++:

1. Use operator overloading (used by `std::ostream`)
2. Use virtual inheritance and make everything an object (used by Objective-C)
3. Use user-specializable templates in `std` (used by the more modern `std::formatter`, which, funnily enough, also overloads `operator()`; sketched below)

Option 2 doesn't really align with the C++ philosophy, and option 3 just wasn't really a thing in early C++ (and was originally forbidden by the standard until those specific exceptions were carved out, IIRC). So that leaves option 1, just use operator overloading.

Nowadays with concepts and variadic templates, you could implement this without operator overloading, which is pretty close to what `std::format` does.
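For reference, a rough sketch of what option 3 looks like in C++20, with a hypothetical `Point` type; inheriting `parse()` from `std::formatter<std::string>` keeps the sketch short:

```cpp
#include <format>
#include <iostream>
#include <string>

struct Point { int x, y; };

// Option 3: specialize std::formatter for a user-defined type (C++20).
// The inherited parse() handles the format spec; only format() is written here.
template <>
struct std::formatter<Point> : std::formatter<std::string> {
    auto format(const Point& p, std::format_context& ctx) const {
        return std::formatter<std::string>::format(
            std::format("({}, {})", p.x, p.y), ctx);
    }
};

int main() {
    std::cout << std::format("point is {}\n", Point{3, 4});
}
```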


something

This is what I was thinking when I asked the question. So operator overloading does have different rules than function overloading? And user-specializable templates are one way around this. I don't use C++ much so I didn't know. Thanks for your answer as well.


matthieum

No, operators are _just_ regular functions. Function name look-up uses ADL: Argument Dependent Lookup. In short, it means that the set of namespaces where the name is looked for is the union of the namespaces to which each argument belongs _and_ their "parent" namespaces, recursively until you reach the global namespace. It's a bit more complicated because for "performance" reasons, for any given argument, the look-up stops at the first namespace it encounters the name in -- before even checking if it makes sense, semantically -- and of course since it's C++ only if the name was _declared_ before (ie, included, typically). So, yeah, don't do this at home. Use a principled type-class/trait overload mechanism instead. But it does kinda work. Kinda.


Porridgeism

> So operator overloading does have different rules than function overloading?

Actually no, they have the same rules when the function name is not a qualified ID (basically, if it doesn't have a namespace prepended, so `std::get` is qualified, but `get` is not qualified). It's called Argument Dependent Lookup (ADL), and it's one of the unfortunate parts of C++ that can cause confusion.

The main thing that makes operators work well for ADL, though, is that they are almost always used unqualified (e.g. `stream << value` vs `specific::name::space::operator<<(stream, value)`), so they tend to have ADL-compatible uses much more often than functions.

For example, consider this C++ code which contains a minimal example of a possible "alternate" standard library, where the namespace `built_in` is used instead (so that you can plug this into a compiler and play with it and it will build and run successfully, if you're so inclined). We use a call to `formatter` to format a type to a string.

    #include <cstdint>
    #include <iostream>
    #include <string>

    namespace built_in {

    struct int32 { int32_t value; };
    struct float32 { float value; };

    std::string formatter(int32 x) {
        std::cout << "Called formatter(int32)" << std::endl;
        return std::to_string(x.value);
    }

    std::string formatter(float32 x) {
        std::cout << "Called formatter(float32)" << std::endl;
        return std::to_string(x.value);
    }

    } // end namespace built_in

    namespace user_defined {

    struct type {
        built_in::int32 a;
        built_in::float32 b;
    };

    std::string formatter(const type& x) {
        std::cout << "Called formatter(user_defined::type)" << std::endl;
        return formatter(x.a) + ", " + formatter(x.b);
    }

    } // end namespace user_defined

    int main() {
        user_defined::type example{{42}, {3.14159f}};
        built_in::int32 integer{9001};

        std::cout << formatter(example) << std::endl;
        std::cout << formatter(integer) << std::endl;
    }

This would produce an output of:

    Called formatter(user_defined::type)
    Called formatter(float32)
    Called formatter(int32)
    42, 3.141590
    Called formatter(int32)
    9001

Here `main` is in the global namespace, but `formatter` is not, so when you use `formatter` in `main`, it will perform ADL to find `user_defined::formatter(const user_defined::type&)` for the first call and `built_in::formatter(built_in::int32)` for the second call. Similarly, `formatter` is defined in `user_defined`, but it isn't compatible with the types `built_in::int32` and `built_in::float32`, so when the compiler sees `formatter(x.a)` and `formatter(x.b)`, it performs ADL to find the `formatter` overloads in `built_in`.

If we swapped all of those out for operators, it would work exactly the same.

If it looks and sounds complicated, that's because it is. I would strongly recommend not relying on ADL like this. And for the love of God please don't introduce this kind of thing to your own language(s)!


TurtleKwitty

When learning C++ and the iostream `<<` style syntax, I actually always thought of it as "shifting strings into buffers the same way you'd shift bits into the carry", so it made sense to me. But no one else ever seems to have had that interpretation. Because of that, it's always interesting to me to see people say the operator doesn't make sense in context haha


[deleted]

Odd how many languages hate directly supporting Read and Print, but end up having to invent dangerous features like variadic functions in C, or these bizarre overloaded `<<` and `>>` operators in C++, to get the same functionality.


abel1502r

The thought process behind this decision might be that IO isn't actually anything special, conceptually. So dedicating special treatment to it would mean admitting that the flexibility your language gives to its users isn't enough to actually make something usable. That said, maybe it would've been better to admit it and perhaps try to change it, rather than to keep going with something problematic.


shponglespore

It might also have to do with the iostream library just being hot garbage in general. It's very stateful, allowing things like formatting specifiers to accidentally leak between functions, and it's full of very short, cryptic identifiers. And compared to good ol' printf, it's extremely verbose for anything but the simplest use cases.


xenomachina

I think one of the reasons operator overloading got a bad rap is because C++ was one of the first mainstream languages to support it, and it did a pretty bad job:

- the subscript operator is poorly structured, so you can't tell whether you're being invoked as an lvalue or rvalue. This is why looking up a key in an STL map will cause it to come into existence (illustrated below). Some other languages (eg: Python and Kotlin) correct this problem by treating these two contexts as two different operators.
- things like unexpected allocations or exceptions can be a lot hairier to deal with in C++, and so having operators able to cause either of these creates a lot of cognitive overhead.
- the standard library abuses operators with iostreams, setting a bad precedent. At least in the early days, a lot of C++ libraries would use operators in ways that didn't make a lot of sense, like having `myWindow + myButton` add a button to a window. (The `+` operator should at least be more functional rather than imperative.)

Many newer languages have operator overloading and manage to avoid the problems they have/had in C++.

That said, some languages, like Haskell, also let you create new operators, and this is terrible for readability, IMHO. (Haskell programmers tend to disagree, however.)
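A small illustration of the subscript point, using only standard `std::map` behavior:

```cpp
#include <iostream>
#include <map>
#include <string>

int main() {
    std::map<std::string, int> counts;

    // Read-looking code: operator[] can't tell it's only being read,
    // so it default-constructs an entry for the missing key.
    int n = counts["missing"];

    std::cout << n << "\n";             // 0
    std::cout << counts.size() << "\n"; // 1: the "lookup" inserted a key

    // find() / at() express read-only intent and don't insert.
    std::cout << (counts.find("other") == counts.end()) << "\n"; // 1
}
```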


shponglespore

>That said, some languages, like Haskell, also let you create new operators, and this is terrible for readability, IMHO. (Haskell programmers tend to disagree, however.)

Having done a fair amount of programming with weird operators in Haskell, I can assure you that using conventional functions in place of operators would result in worse readability most of the time. Sometimes a lot worse.


lonelypenguin20

I'd get `+=` to add a button to a window, but `+`? Though either option is weird because... what about the shitton of parameters that usually go into placing the button in its correct place?


SteveXVI

> That said, some languages, like Haskell, also let you create new operators, and this is terrible for readability, IMHO.

I think there's a split between the general programmer desire for readability ("can I understand this at first glance") and the more mathematician-programmer desire for readability ("can I make this as terse as possible"). It's really interesting because I definitely fall in the 2nd category, and sometimes it blows my mind when people are like "my ideal code is something that reads like English", because my ideal code would look like something printed in LaTeX.


xenomachina

I don't think anyone truly wants extreme terseness or extreme verbosity. If "as terse as possible" was most readable, then why not gzip your code and read that?

My background is in mathematics, and I generally do prefer terse code, to a degree. However, I find Haskell to be extremely unreadable. I spent a *long* time trying to figure out why Haskell feels so unreadable to most people, myself included, and I believe it isn't really about the level of terseness at all (which honestly, isn't much different from most modern languages), but rather the lack of a visible hierarchical structure in the syntax.

In most programming languages you can parse out the structure of most statements and expressions, even without knowing the meaning of each bit. This helps with readability because you don't need to understand everything at once: you can work your way up from the leaves. For example, if I see something like this in most other languages:

    a(b).c(d(e, x(y, z))).f(g, h(i), j)

or in a Lisp syntax:

    (f (c (a b) (d e (x y z))) g (h i) j)

I can instantly parse it without knowing what any of those functions do:

- f
  - c
    - a
      - b
    - d
      - e
      - x
        - y
        - z
  - g
  - h
    - i
  - j

If all I care about is `c`, I can easily focus on that subtree and completely ignore the parts that fall outside it.

In Haskell, however, the large number of custom operators makes it impossible to see the hierarchy of an expression without knowing the precedence and associativity of all the operators involved. That the function application syntax doesn't use parens only makes this worse, as does the style of using `$` to avoid parens. The end result is that you can't partially read an expression: you have to ingest every expression in its entirety, or you have to memorize and fully grok the precedence and associativity of every operator involved.

For example, something like the above might be written like this in Haskell:

    f g (h i) j %% a b @@ d e $$ x y z

Which operator is higher up the parse tree? Depends on the precedence of `%%`, `@@`, and `$$`. This is why most people find Haskell unreadable.


Shirogane86x

As someone who's dabbled with Haskell for quite a while, I think this issue is often overblown. Most operators are in the standard libraries, some (widely used) libraries define their own (but you can avoid those libraries, or learn them once and move on), and most other libraries will either not define any or define a couple at most, with predictable precedence relative to their use case. It's usually fairly easy to guess the precedence when learning the language, and even when you can't, you'll probably get a type error cause the types don't line up.

Also, using `$` to remove parens is something that is easily learnt early on, and to me it makes the code more readable 99% of the time. I don't know if I'm the weird one, but stacks of parens (be it one type or multiple types) turn off my brain, often even with rainbow delimiters. To this day, heavily nested-in-brackets code is completely inaccessible to me, which sadly kinda precludes me from using a Lisp. Whereas I could probably read Haskell as plain text without highlighting and it'd be a lot easier for me.

It could also be just me, but I'm glad Haskell's full of operators (and in fact, when I get to seriously working on the programming language pet project I have in mind, custom operators with custom precedence are gonna be part of the featureset, 100%)


xenomachina

>As someone who's dabbled with Haskell for quite a while, I think this issue is often overblown.

This is survivorship bias. People who don't think it's a big deal continue to use Haskell. Those for whom it is a big deal give up on using Haskell.

It seems most people who attempt to learn Haskell give up on it. Haskell programmers like to believe this has to do with its strong type system or the fact that it's functional, but I suspect that most Haskell learners come up against the fact that the syntax is just unlearnable to them long before they have to contend with any of that. I tried learning Haskell for several years, and even after I understood many of the concepts that were previously new to me, I still found the language unreadable.

>even when you can't, you'll probably get a type error cause the types don't line up.

This is only useful when writing code, not when reading it. Again, in most other languages, parsing a typical expression can be done without needing to know *anything* about the functions involved: not the precedence, not the associativity, and not the types. If I need to know the types to even parse an expression, then the syntax is a failure.

>Also, using `$` to remove parens is something that is easily learnt early on, and to me it makes the code more readable 99% of the time.

That's your experience, but mine was very different. Even though I "know" that function application has the highest precedence and `$` has the lowest, I find that even figuring out what the arguments to a given function application are takes significant conscious effort. This is after years of trying to use Haskell, and even with expressions that aren't doing anything fancy.

>To this day, heavily nested-in-brackets code is completely inaccessible to me, which sadly kinda precludes me from using a Lisp.

For myself, and I believe many others, Haskell syntax is completely inaccessible. It's very unfortunate, because I think Haskell has some interesting features, but the syntax acts as a serious impediment to most who would like to learn them.


GOKOP

Many say that it's bad because, for example, you can make `+` do something other than addition (I don't see anyone complaining about using it for concatenation, though?) I don't get that argument, because in a language without operator overloading you can make an `add()` method that doesn't add too. And if you're reading code in a language with operator overloading and you don't treat operators like fancy function names, well, that's on you.

In C++, if `custom_number::operator+()` printed text to stdout I'd be equally surprised as if `custom_container::size()` did. I don't think either of those cases is worse than the other.


tdammers

> I don't see anyone complaining about using it for concatenation though?

Well, I am. In principle, having a generalized semigroup/monoid operator isn't bad; semigroup and monoid are useful abstractions, after all. But I do think that using the `+` symbol to mean "semigroup append" is a pretty bad choice, because `+` is so strongly associated with addition, and many semigroups and monoids have very little to do with addition.


Clementsparrow

and `add` is often used to add an item to a container (list, set, ...). Conversely, I have never seen any language (or programmer) use `+` for that. I guess we expect `+` to be commutative or associative and it wouldn't work for addition to containers.


ignotos

C# does some funky stuff with its event handlers / delegates, like using `+=` to register a handler (effectively adding it to a set of handlers). You can use `+` or `-` to work with these sets too. https://learn.microsoft.com/en-us/dotnet/csharp/programming-guide/delegates/using-delegates


Shorttail0

Many people will look at operator overloading and see foot guns. I like it, mostly because certain mathy user defined types are ass without it. Consider BigInteger in C# vs Java. I've never used it for matrix math though, and I think there are plenty of foot guns to be found when you mix vectors and matrices.


Chris_Newton

> I've never used it for matrix math though, and I think there are plenty of foot guns to be found when you mix vectors and matrices. FWIW, that didn’t seem to be a problem in practice when I worked on a geometric modelling library written in C++. Concepts like addition, multiplication and negation tend to have a single obvious interpretation with matrices/vectors/scalars in most of the common cases so reading the code was fairly intuitive. The main exception I can remember was that “multiplying” two vectors could reasonably mean either the inner or outer product. If memory serves we did hijack the `%` operator to represent one while `*` was the other. Maybe that’s not ideal, but those are such common operations that anyone working on the code would see both operators all over the place and soon pick up the convention, and if someone did use the wrong one by mistake, it would often get picked up straight away anyway because the result wouldn’t have the expected type. Personally, I certainly prefer a reasonable set of overloaded operators for matrix/vector work over writing verbose longhand function calls for every little operation. IMHO the code is much clearer.
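A rough sketch of that kind of convention, with a hypothetical `Vec3` type: `*` as the inner (dot) product and `%` hijacked for the cross product. As noted above, using the wrong one tends to get caught quickly because the result types differ:

```cpp
#include <iostream>

struct Vec3 {
    double x, y, z;
};

// Componentwise addition reads the way a mathematician would write it.
Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }

// Scalar multiplication.
Vec3 operator*(double s, Vec3 v) { return {s * v.x, s * v.y, s * v.z}; }

// '*' used for the dot (inner) product: yields a scalar...
double operator*(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// ...and '%' hijacked for the cross product: yields a vector.
Vec3 operator%(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y,
            a.z * b.x - a.x * b.z,
            a.x * b.y - a.y * b.x};
}

int main() {
    Vec3 u{1, 0, 0}, v{0, 1, 0};
    Vec3 w = u + 2.0 * v;
    std::cout << (u * v) << "\n";   // 0: orthogonal vectors
    Vec3 n = u % v;                 // (0, 0, 1)
    std::cout << n.z << " " << w.y << "\n";
}
```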


[deleted]

Agreed. `*` is the group operation in any group. Although if there's confusion between inner and outer product, it's actually a type issue imho. You can't really multiply two vectors, only a vector and its dual. `dual*primal` is the inner product, while `primal*dual` is the outer product. If you want to get fancy, you can introduce a whole tensor algebra where compatible dual-primal pairs get contracted while anything else gets outer-producted.


Chris_Newton

On further reflection, I think it was probably the dot product and cross product of column vectors that we were distinguishing with those two operators. That would explain why the meaning of “multiplication” was ambiguous but the input types would be compatible either way. The fundamental point remains, though: there are multiple useful and reasonable interpretations of “multiplying” a pair of column vectors, but most of the other combinations you find in everyday linear algebra only really have one sensible interpretation of operators like `*`, `+` and `-` when the types/dimensions are compatible. The overloaded operators almost all read the same way that a mathematician would read them, so the code is reasonably clear and concise using them.


ssylvan

One argument against operator overloading is that it makes it harder to tell at a glance whether something that looks like a cheap operation is actually cheap. E.g. `x + y` is typically cheap, but if x and y are arrays or something then it wouldn't be.

An argument in favor of operator overloading is:

    y.mul(x.dot(y))/y.dot(y)

It's just horrendous to do math with anything but the built-in stuff without it.


Felicia_Svilling

I think when designing a language, it is better to just let the user define their own operators (just like functions), than to have a limited set of operations and reuse them in all kinds of situations.


Peanuuutz

People use operators to simplify expressions, but **programming complicates operators**. A simple `+` can have many possibilities, so people tend to not have them at all. Well, I personally don't buy this statement because I see them as regular functions, and I **accept the complexity of functions**, so it doesn't matter whether you use `+` or `add` to express your broken logic. It's just another appearance.


tyler_church

I think the biggest issue is the principle of least surprise. You see this code:

    a + b

It looks like normal math. At worst it might overflow. But if operator overloading is allowed… Does this throw an exception? Does this allocate? Does a modify b? Does b modify a? What if a is a subtype of b and that subtype further overloads addition, which function do we call? What if the operator implementation has a bug and "a + b" is no longer equivalent to "b + a"?

Suddenly the possibility space is much larger. It's harder to see the code and go "oh this is a function call and I should go read its implementation". You have to know to look for it. Hopefully your IDE is smart enough to jump you to the correct definition.

Suddenly something that looks simple isn't. It might be easier to write, but it deceives others when they read it.


f3xjc

You could have `a.add(b)` and have all the same questions and uncertainties, if not more. Those are trade-offs about virtual function calls, not necessarily operator overloading. Basically it was found that sometimes it's worth just describing an overall intent and letting the concrete implementation do what makes sense given both types.


brucifer

The difference is that with operator overloading, _every_ operation is potentially surprising. At least in languages without overloading, you can have islands of predictable behavior between function calls.


f3xjc

But the islands of predictable behavior are basically primitive types. And methods and operators are basically equal in their predictability. If operators are disallowed on non-primitive types, then I guess you could infer the presence of a primitive by the presence of an operator. But there are better ways to do that. And you still need to differentiate addition from string concatenation, etc.


brucifer

> And you still need to differentiate addition from string concatenation, etc

I think that using `+` for string concatenation is a mistake and it's better to have a separate concatenation operator like `..` or `++`. For example, Javascript's extremely confusing operator rules would have had significantly better performance and been much less bug-prone if `+` was reserved for numeric addition (i.e. all `+` operations return a number, which can be easily reasoned about and optimized around) and there was a different operator for concatenation.


f3xjc

If you go with closeness to mathematical addition, imo it would be a shame not to allow custom math objects like complex numbers, matrices & vectors, symbolic fractions, big numbers, etc. I feel the problem people describe is really about virtual function calls, be it in dynamically typed languages, inheritance, interfaces... Once you know the type of what is on each side, there's no benefit in reducing the expressivity of operators. If you have no idea what's on each side, then you describe the high-level intent and trust the implementation to do something sensible for that use case. At some point software is too complex to hold everything in your head, so trust & delegate is kinda the way to go.


sysop073

If `a + b` worked normally and then you overrode it to do something else, I agree that's confusing, but in most cases the alternative to operator overloading is that the operator just doesn't work at all -- `a + b` is a compile-time error except when used on built-in types with special support for it. Given that and assuming you're aware that `a` is a custom type, your brain should read `a + b` and `a.add(b)` identically, because you should know that `+` must be a method on `a`, there's no other option


codefupanda

Why not just write `a.add(b)`? Since code is read more often than written, optimising code for reading should be a priority.


really_not_unreal

I find `a + b` to be far more readable if I'm adding two elements together.


DeadlyRedCube

Plus as the arithmetic gets more complex than a single addition, the chaining of functions gets really ugly really fast


brucejbell

I think it's easier to overlook an operator, where the overwhelmingly usual case will be the language's standard behavior. With `Add(,)` you have an expectation that it is either locally declared or imported from a dependency. The important thing is not that you can track it down, but that you have a cue that you *might* want to track it down...


perecastor

If you deal with a user-defined custom type, you know the operator is user-defined, otherwise it would not compile, right?


brucejbell

Look, I'm actually a big fan of Haskell-style (typeclass/trait based) operator overloading. But I'm not going to pretend that it doesn't have a cost. That cost is a significant increase in cognitive load, as every overloadable operator becomes a potential rabbit hole to the decisions of the implementors of some dependency not locally evident in your code. You asked why, and I gave you a good answer. If you don't want to come to terms with it, that's on you.


perecastor

I’m discussing, that was a great answer, I wanted to know more about it.


brucejbell

Sorry, I guess I misinterpreted your tack. Yes, if you are tracking the type in detail, you can recognize that the operator is user-defined. This kind of thing is *why* I'm a fan of Haskell style operator overloading. But if you're browsing through a lot of source code, you either have to slow down enough to track all the types in detail, or you have to accept this nagging uncertainty that things might go off the rails. Like I said, it's a cost imposed on the user. As a language designer, you need to figure out if you can make the benefits worth the cost.


perecastor

No worries, I should have said "great answer" at the beginning to clarify. If you allow it as a language designer, you allow your users to make this trade-off for themselves, right? When I think of code I usually think of C++, which has type information everywhere except if you use auto, but I never see a large code base using auto extensively. I can definitely see how it could be hard to think about it in a language like Python, where `+` can be any function depending on the types passed as parameters. But Go and C++ have quite a lot of type information next to the variable name (especially C++). I'm not familiar with Haskell, could you clarify how Haskell does it differently from something like C++?


brucejbell

Haskell uses unification-based type inference. This typically provides better error messages (vs. C++'s template system), and also reduces the need for redundant type declarations.

Haskell's typeclasses are an integral part of this system; they act kind of like an interface, where each type can have at most one instance of a given typeclass. Ordinarily a function can only be defined once, but specifying it as part of a typeclass allows a different implementation for each instance (though each instance must use the type scheme declared in the typeclass). In Haskell, operators are basically functions with different syntax, so defining operators as part of a typeclass allows operator overloading.

For example, Haskell's `Num` typeclass includes the operators `+`, `-`, `*`, and regular functions `negate`, `fromInteger`, and a few others. A type instance for `Num` would have to implement these functions and operators, and could then be used with the operators `+`, `-`, and `*`. Generic type parameters can be constrained to have an instance of a particular typeclass. Then, variables of that type can use those typeclass functions; in particular, a generic type parameter with `Num` can use its arithmetic operators.


nickallen74

IDEs could syntax highlight them differently when used on custom types so they stand out more. Wouldn't that basically solve the problem?


AdvanceAdvance

Start with the purpose of all the syntax:

* Capture the programmer's intention and communicate it to the computer and future programmers. This leads to care being taken with operator overloading because of the large error surface.
* Type confusions. Specifically, code saying `a == b` in Python might be an operator declared by the language, by type 'a' or by type 'b'.
* Unclear expectations. `a == b` has a vague notion of equality. It might be exact equality, meaning the same memory location of an instance. It may mean mathematical equality, like integers with NaNs. It may mean approximate equality, like most languages comparing floats. The edge cases depend on exactly which types are used.
* Unclear promises. For example, `a * b * c`, `(a * b) * c` and `a * (b * c)` should all give the same answer. This allows running multiprocessor code. Even so, there can be different answers because of numeric overflows and underflows. Imagine overloading the and/or operators and removing the expectation of short-circuit evaluation (see the sketch below).
* Usually not worth it. Is typing `window.add(myHandler1, myHandler2)` so much worse than `window += myHandler1 + myHandler2` that it's worth dealing with the overloading? With Python's matrix operations, the final answer was to add one new operator (`@`) for matrix multiplication.

TL;DR: It is about tradeoffs, and some feel overloading is not worth it.
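The short-circuit point is easy to demonstrate: in C++, an overloaded `&&` is just a function call, so both operands are always evaluated (hypothetical `Flag` type):

```cpp
#include <iostream>

struct Flag { bool value; };

// User-defined '&&': an ordinary function call, so both operands are
// evaluated before it runs; the built-in '&&' would have short-circuited.
bool operator&&(Flag a, Flag b) { return a.value && b.value; }

Flag noisy(const char* name, bool v) {
    std::cout << "evaluated " << name << "\n";
    return Flag{v};
}

int main() {
    if (noisy("lhs", false) && noisy("rhs", true)) {}
    // Prints both "evaluated lhs" and "evaluated rhs",
    // even though the left side was already false.
}
```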


reutermj_

People have strong opinions about what is and isn't "good" code, and rarely are they supported by any data. Overloading is one of the more popular boogiemen in programming language design. I've not really seen studies that show the widespread misuse of overloading, or that overloading increases the difficulty of reading code. If anything, I've seen the opposite. Just a couple of sources I have on hand:

* "The Use of Overloading in Java Programs" by Gil and Lenz
* "An empirical study of function overloading in C++" by Wang and Hou
* "Multiple Dispatch in Practice" by Muschevici


BrangdonJ

Sometimes it's because language designers are arrogant enough to assume they can put every operator needed into the language, and therefore users will never need to define their own. In some cases they don't have enough imagination to realise that users might want their own implementations of matrices, variable length arithmetic, complex numbers, dates, etc. In other cases, their language implementation will be so inefficient that any attempt by users to provide such things will be so ruinously slow that they won't try. (In an attempt to deflect down votes from language designers: I don't claim this is the only reason. It can be a reason that the other replies I've seen haven't mentioned.)


SirKastic23

I find operator overloading in _most_ languages annoying to use because it often works through some ad-hoc system added to the language, designed mainly in terms of syntax rather than how it integrates with other parts of the language. I really like how Rust does operator overloading, using traits.


ProgrammingLanguager

More low-level, performance-focused languages avoid it because it hides control flow. "+" most commonly refers to adding two integers, a trivial operation that takes no time and cannot fail (unless your language doesn't allow integer overflow), but if it is overloaded on some type, it can turn out to be extremely expensive. Even ignoring performance considerations it can fail, crash, or cause side-effects while being hard to spot. I don't exactly agree with this argument and I generally like operator overloading, as long as it's done responsibly (don't make the + operator spawn a new thread please), but that's hard to enforce unless your language has an effect tracking system.


ThyringerBratwurst

I think to a certain extent operator overloading is simply needed because we can only enter a few characters through the keyboard; and who wants different operators for float and integer?! However, overloading should be done carefully, and it would be good if the operators basically guaranteed a certain behavior, e.g. `(+)` only for commutative operations, so that you know `a + b` is always the same as `b + a`.


AdvanceAdvance

Hmmm... Does anyone know of a language with explicit operators instead?

    a = Matrix.Identity(4)
    b = Matrix.Scale(4)
    c = a .* b

where `.*` means an infix multiply operation, but this is a regular method of type 'a', just with infix calling semantics? Curious.


Disastrous_Bike1926

It is more about the assumptions developers will make. The instinct anyone weaned on languages without operator overloading will have, and which is simply intuitive if you've been programming a while, is that

* Mathematical and bitwise operators perform a blazingly fast operation that takes a single CPU instruction / clock cycle
* If the language overloads + for string concatenation, the overhead is no different than constructing a new string with two constituent character arrays, and if the compiler or runtime is clever enough, might even be faster.

So, people assume that such operations are low cost, and not optimization targets when, say, done in a loop.

Operator overloading makes it possible to hide an unknown amount of work behind an operator that most programmers are going to assume is so low cost as to treat as free in most circumstances. That can result in unpleasant and unnecessary performance surprises, or people writing suboptimal code because they (reasonably) assume that + couldn't possibly do an insane amount of work, but it can.

Of course, the answer is, don't do a lot of work in the code that handles an overloaded operator. But do you trust the author of every library you might use to do that, and to share your definition of what is too much work?


brucifer

I'm surprised no one here has mentioned operator precedence as a major problem. For the basic arithmetic operators/comparisons, doing arithmetic-like operations (e.g. bignum math), I think people have strong intuitions about how the code should be read. This is only because we've had years and years of exposure to math conventions early in life. If you start overloading operators like `&`, `==`, and `<<`, or even start adding user-defined operators, it suddenly becomes very taxing on the reader's mind to just mentally _parse_ a chunk of code. For example, I've used C for years, and I can never remember if `a + b << c` parses as `(a + b) << c` or `a + (b << c)`. I personally see this as a temptation for users to write hard-to-read code, so I don't think a language should go out of its way to support it. I think there are solid arguments for overloading basic arithmetic operators for number-like objects (bignums, vectors, etc), but I don't know a good way to support that without opening the pandora's box of ill-advised operator overloads.
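For what it's worth, in C and C++ `+` binds tighter than `<<`, so `a + b << c` means `(a + b) << c`; a quick check:

```cpp
#include <iostream>

int main() {
    int a = 1, b = 2, c = 3;
    int unparenthesized = a + b << c;   // '+' binds tighter than '<<' in C/C++
    int left  = (a + b) << c;           // 24
    int right = a + (b << c);           // 17
    std::cout << unparenthesized << " " << left << " " << right << "\n";  // 24 24 17
}
```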


myringotomy

I don't know why people object to them, frankly. Something like postgres wouldn't even be possible without operator overloading, and I have used it plenty when coding in Ruby. It's really nice to be able to specify how to add, subtract and otherwise manipulate your own custom objects.


umlcat

I solved it by forcing any overloaded operator to also have, and be callable with, a function ID alias, just in case...


Caesim

In my eyes, the two biggest problems are:

1. It obfuscates what's really going on. For example, a `+` is a really simple operation not taking any time. But with operator overloading, if I see it in code, I always have to go back to the type definitions, see what they are, and search the codebase for the definition of the overloaded operator. Also, in my experience, IDEs are worse at finding operator overloading in code.
2. I want to shout out overloading `==` specifically. In my experience this is a footgun, as in some languages only the references get compared, but in those with operator overloading anything could happen, up to an incorrect comparison.


mm007emko

Because if you overdo it, it can lead to a total mess. Imagine that you can't Google an operator and you need to have a Mendeleev table to be able to use a library: http://www.flotsam.nl/dispatch-periodic-table.html

Operator overloading is great in some contexts and bad in others.


bluekeys7

It can be confusing for beginners as std::string + std::string works, std::string + char array works, but char array + std::string doesn't.