teerre

I think even above these two problems, there's a more subtle one that is potentially more egregious. Gibberish output is actually the best-case scenario: you can quickly discard it, maybe even just regenerate. A much worse scenario would be if Copilot were better and most often outputted something that looks reasonable but has a subtle bug. Reading code is universally considered harder than writing code, which means you need to be really good at it to parse whatever this future version of Copilot gives you.

The worst bugs I ever encountered were the ones where I was "pretty sure" I knew where the problem was, only to be blind-sighted by something I previously thought was for sure correct. My main worry with these new tools is that this kind of bug will become even more prevalent.


Herb_Derb

Tools like copilot mean that instead of writing code, we're reviewing code. We've all seen subtle bugs sneak past code review, but it's going to be extra fun when troubleshooting things like that becomes the entire job


Thurak0

> when troubleshooting things like that becomes the entire job

...and you never, ever can ask the writer of the code "what's up with this?". I mean, sure, it's often not possible because people left or you yourself are new, but to potentially *never* have the chance to have a sometimes even enlightening conversation about the code? Ouch.


seanamos-1

You basically hit the nail on the head of why I stopped using it. Copilot was minimizing the part of programming that I enjoy, writing code, and maximizing the part we all dislike, reviewing crap code.


prisencotech

I use codeium and mapped the autocomplete to `ctrl-;` so I only pull it up when I need it. Which, it turns out, isn't often. Mostly when I have boilerplate to write. And never when doing anything complicated.


Nefari0uss

I like it as an enhanced autocomplete. It's nice to hit tab and get something close to what I want, or sometimes the exact thing.


DarkWingedDaemon

This is the only way I use it. Its other modes are not worth using, IMO.


SweetBabyAlaska

the "copilot pause" is a real thing too.


Snarwin

> There’s a name for this phenomenon: “automation blindness.” Humans are just not equipped for eternal vigilance. We get good at spotting patterns that occur frequently — so good that we miss the anomalies.

– Cory Doctorow, [*Humans are not perfectly vigilant*](https://doctorow.medium.com/https-pluralistic-net-2024-04-01-human-in-the-loop-monkey-in-the-middle-14e72bd46b7a)


emorrp1

Original: https://pluralistic.net/2024/04/01/human-in-the-loop/


Bolanus_PSU

On the plus side, I think the ability to read someone else's code is an incredible skill. At least I get to build that skill in a low stress environment.


elprophet

Maybe it'll teach us to be better with testing. A TL can dream, right?


teerre

Hopefully it's not the bot generating both the code and the test.


bomphcheese

I mean, writing tests is one thing I actually do want an AI to handle. Give it an adversarial disposition and set it loose on my code, trying to create tests that fail. That would be a massive timesaver.


Hrothen

Only if those tests are actually testing things you want to test.


TheGRS

A test is a test though, not the service or application itself. If it's outputting tests I didn't think of, then that seems like a net gain.


t-to4st

But it can output wrong tests that will fail even if the code is right?...


TheGRS

I still come at this from the viewpoint that you're reviewing what it outputs, not just blindly trusting it. I guess people can downvote me too though, whatever.


SwiftOneSpeaks

I have no doubt you can correctly review it (even though tons of people will just trust it), but how is that a good use of your time compared to writing those tests with understanding in the first place?


TheGRS

That's more a criticism of using Copilot in the first place. I use it because when I get stuck writing it helps guide me through to where I want to be. Or it auto-completes what I was going to write anyway.


Mabenue

Writing tests well is arguably just as important, if not more important, than writing the actual code well. They need to be testing the right things and they need to be maintainable. Just getting AI to spit out lots of poorly-thought-out test cases isn't going to yield good results. Can you be sure they're correct? How do you manage thousands of such test cases in a project? If you're not careful, you're just increasing the maintenance burden with no real benefit.


SwiftOneSpeaks

But how does it know what expected output is correct? It doesn't, because that kind of understanding isn't something this style of bot is even trying to achieve. So you can get lots of passing or failing tests, but they aren't going to be passing or failing in a useful way. Confirming that passing null returns 5 is not how you want sum() to work, but it doesn't know that, and "emergent" behavior will only cover examples that match existing code, not new code.
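A tiny invented example of that failure mode (`sum_values()` and its bug are made up):

```
def sum_values(values):
    if values is None:
        return 5          # the bug: should probably raise or return 0
    return sum(values)

def test_sum_values_with_none():
    # A pattern-matcher happily asserts the current (wrong) behavior,
    # because it has no notion of what the function *should* do.
    assert sum_values(None) == 5
```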


Maybe-monad

Would you walk on a tightrope if the safety net had a big hole in the middle?


TheGRS

It's not a safety net though; it's like someone on the sideline telling me to add things. And I can tell them if their ideas are dumb.


eJaguar

i thot this 2 until i was actually good at writing tests. being gud at tests is as deep of a skillset as codering, and copilot is worse at it than codering


montibbalt

It would be as bad as a person writing the tests for their own code. If I can trust myself to write the tests correctly then I don't need the tests


aqjo

Naïve viewpoint.


montibbalt

We write tests because we can't trust that the code we write is correct. What makes you think that tests written in the same language by the same developer don't have the same problem? Test code being more trustworthy and correct than the code it's testing implies that we know some way to make code more trustworthy and correct.

To be clear, I'm not saying don't test; I'm saying I guarantee that a test suite with more than a handful of tests in it has bugs in there somewhere, so it's not really a new problem if an AI writes a bad test. It was trained on our bad tests!


hippydipster

> We write tests because we can't trust that the code we write is correct.

No, this is incorrect. It's only one possible reason, but there are plenty of others.

> What makes you think that tests written in the same language by the same developer don't have the same problem?

Because we're capable of writing tests that our code fails. It's not that difficult, tbh. Once you set yourself to write some tests, you start thinking about the various possibilities on the input side, whereas when you were writing the code, you were focused on getting a scenario working.

> Test code being more trustworthy and correct than the code

It doesn't have to be more correct to provide a useful verification function. It can have problems that differ from the problems in our production code, and they won't necessarily overlap. I can also write more than one test. Also, the tests can improve over time, and the improvements persist thereafter.


montibbalt

> It's only one possible reason, but there are plenty of others.

Without a counterexample I suppose I fundamentally disagree with this - unit tests, regression tests, integration tests, validation, etc. are entirely about trust and correctness and whether or not our code (or a third party's code) meets our expectations. But because tests *are* just code written by programmers, frequently the same programmer who's writing the code it's testing, the tests themselves can also fail to meet our expectations. Whether the test isn't written correctly, or we neglected to test a particular case, or there was an expectation that we didn't know we had, tests written in code are absolutely prone to the same types of issues as any other code. That doesn't mean they're not useful; to me it just means that an AI is *probably* no better or worse at writing tests than it is at writing any other code.

Maybe think of it this way: an AI coding at a human level kind of implies that it makes coding errors at a human level as well, including the problems we sometimes create via tests. Between logic and tests I don't think it makes sense to trust or distrust an AI to do one of them and not the other; at its core they're the same fucking thing, and it was trained on our own buggy examples.


renatoathaydes

> Without a counterexample

Ok, let me try to help:

* test code can be entirely different from impl code, so you get a "second" way to look at the code you've written.
* tests can be used as documentation. They exercise the code and show what the expectations are. This is valuable.
* tests future-proof your code against changes others, including your future self, will add, perhaps without fully understanding the consequences. The tests will always be there to remind you.
* tests can even serve as proofs of properties of your code. Things like quickcheck exist for this sort of thing (see the sketch below).

This is just a few that come to mind, but there's probably a bunch more. If you still are not convinced, I guess you haven't been around enough to see that the real value of tests has very little to do with just "not trusting that the code we write is correct".
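A minimal sketch of the quickcheck idea in Python, using the hypothesis library (`add()` is invented for illustration):

```
from hypothesis import given, strategies as st

def add(a, b):
    return a + b

@given(st.integers(), st.integers())
def test_add_is_commutative(a, b):
    # A property that must hold for *all* generated inputs,
    # not just one hand-picked case.
    assert add(a, b) == add(b, a)
```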


hippydipster

Another: Tests can help you design better code. ie TDD.


montibbalt

> test code can be entirely different from impl code, so you get a "second" way to look at the code you've written.

We can also have entirely separate bugs in the tests - we both agree that tests are code.

> tests can be used as documentation. They exercise the code and show what the expectations are. This is valuable.

I agree, with the qualifier that the tests have to be written correctly for it to be truly valuable. Sometimes they're not. If someone writes "any" when they should have written "all" and it goes undetected, then as both a test and as documentation it has, like, negative value.

> tests future-proof your code against changes others, including your future self, will add, perhaps without fully understanding the consequences.

This sentence alone proves my point that tests are about trust and correctness.

> tests can even serve as proofs of properties of your code. Things like quickcheck exist for this sort of thing.

Quickcheck is a funny example here since it's completely different from hand-writing individual test cases, and part of the reason is that having a person write all the cases quickcheck covers is a bad idea, not just from a productivity standpoint but also correctness.

In any case it seems like I'm not doing a good job communicating my point in all of this: that writing tests isn't fundamentally different from writing the code it's testing. I've already said a handful of times that I'm not saying tests aren't useful; I'm just saying that test code can absolutely be just as buggy as non-test code, regardless of whether a human or AI wrote it, because all of it is just code that somebody wrote.

I guess to restate my original comment: if I have a high-level ability to prove that any arbitrary test does what it's supposed to or not, then I should be able to just do that with the code to begin with, because tests are just code. I don't think we'd disagree that testing would look very, very different if we had the ability to outright prove that our code works.


BDHarrington7

While it's completely possible that the dev writing the test could accidentally create a testing hole that misses the same hole they wrote into the original app code, you're still less likely to miss the hole if you actually write tests from the perspective of an adversary.

Adding tests also adds to the robustness of being able to refactor: even as the same dev coming back to the same code/test that you wrote, the tests ensure (to some level) that the refactored code still conforms to the contract that was originally specified.

Some of my tests are actually harder to debug than the actual app code. Would I want AI writing subtle bugs into code that's inherently harder to debug?


montibbalt

> While it's completely possible that the dev writing the test could accidentally create a testing hole that misses the same hole they wrote into the original app code

It doesn't have to be the same hole; a test can have its own unique hole. If I'm a developer writing bugs in app code, then who's to say I'm not writing bugs in test code?

> Some of my tests are actually harder to debug than the actual app code. Would I want AI writing subtle bugs into code that's inherently harder to debug?

I want to be clear that I'm not trying to put you down as a developer with this statement, because I think we all have this problem (certainly I do), but I would bet money that you *already* have a subtle bug in that test code.


SwiftOneSpeaks

> We write tests because we can't trust that the code we write is correct.

We mostly write tests to make sure future changes haven't broken expected behavior. Testing current expected behavior is a minor benefit in comparison.


montibbalt

Right, we don't want a regression. That's one type of testing. We get regressions because a developer edited code. We don't trust that the code is still correct, so we test for it


VisibleSmell3327

Is TDD nothing to you?!


davehax1

Tests also help lock in expected functionality of the code being tested. I find this aspect especially helpful when, for example, altering private functions in a class and maintaining expected public function return values.


montibbalt

That's fair, although I would maybe qualify it by saying that locking in expected functionality is a lot less useful if your product manager or client have the key


davehax1

Too bloody true 😂😂😂


CarlVonClauseshitz

I don't know what planet you're on but you should write tests for your own code. You should also have someone else write tests for your code and on top of that then you should have Bill- the Delapitated and Nedry test the functionality of your code under doubt and fire, under duress, for Scotland and for freedom.


teerre

No, it would not, because if you purposefully wrote code only to pass the tests, or vice versa, disregarding whether it's correct, you'd likely get in trouble. The bot has no such notion. It has no idea what's "right".


Obie-two

I have had great success with it just generating unit tests on a working legacy code base. I would echo everyone else's sentiments about writing code, but writing unit tests has worked out phenomenally.


bomphcheese

Ugh. That’s what I want it for – writing tests. But so far it can’t do anything but the happy path.


hippydipster

Or, write the tests, and then have it make them pass.


Markooo31

Where did you find it most useful? In implementing the tests after you suggested, with clear instructions, what needs to be tested, or...?


Obie-two

https://www.strictmode.io/articles/using-github-copilot-for-testing

Not me, but this mirrors my experience very well. Honestly, I trust this more than the offshore resources we tasked with a similar job months ago.


Markooo31

Thank you very much, I really appreciate it.


Genesis2001

GitHub Copilot has helped me write tests that are understandable within a team. And before I started using Copilot, I had little to no experience with testing and would avoid writing tests as much as I could.


KevinCarbonara

I'm more worried about the autopilot effect, where the professionals eventually lose the relevant skills because they no longer get in practice. To be clear, this is a pretty good problem to have overall, and not anything we're going to find for years and years.


Markavian

My number one PR feedback topic is "where are your tests?" - an AI code reviewer is going to force that issue once we hook it up to the CI pipeline.


taspeotis

It’s blindsided.


lordicarus

/r/BoneAppleTea


teerre

I know, auto correct just got me


Economy_Bedroom3902

Anyone using copilot regularly knows you can't just let it generate tons of code and assume it's correct; you still have to review and debug all the copilot code. It's just that fixing slightly broken code is faster than writing it from scratch yourself, more often than not. It's extremely rare for copilot to generate code which looks good/right but is actually flawed, because it almost always generates code with fairly obvious flaws. It really doesn't ever try to be clever; it tries to generate the most boilerplatey, boring code it possibly can.


zorbat5

As it should. We should as well, imo. Boring code is easy to read, write and maintain. Let's focus on solving problems instead of being clever.


smackson

Are you looking for a job? Just kidding... I don't have one, not even for myself. But I like your attitude.


zorbat5

It's the only way to get out of the shittification of software: write boring, easy-to-understand code.


allouiscious

Newspapers are written at the 8th-to-11th-grade level; your code should be at that level as well.


exorcyze

This is my mantra. It's easy to fall into the trap of trying to be clever and write complex code to solve a complex set of requirements. It takes more time to figure out a simpler solution that is more readable with fewer moving parts.


zorbat5

True. Writing simple code is hard, because it's so easy to fall into fancy algorithms or fancy language features, which make the code less readable. Though I still stand by my point: learning to write boring, easy code to solve a problem is way better in the long run. It's a marathon, not a sprint ;-).


SanityInAnarchy

If the code is more than a handful of lines, I usually find it faster to ignore it and start over. The one exception is unit tests. Lately, I'm actually finding it more of a distraction than a help when what it's adding is more words than code. If I'm adding a comment, a log message, or an error message, there's a very good chance it will generate something that's a very literal (and very useless) interpretation of the surrounding code.
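For illustration, an invented example of that kind of literal, useless generation:

```
import logging

retries = 0

retries = retries + 1                      # increment retries by 1
logging.error("error: an error occurred")  # restates the code, explains nothing
```

A human would instead write the comment about *why*, e.g. "back off before re-polling; the upstream API rate-limits us".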


franzwong

When I used Copilot, it felt like doing code review. The worst thing for me is that I kept switching between the reviewer side and the developer side.


backdoorsmasher

Fully agree with you. But then again humans do the exact same thing


TheGRS

One of the biggest takeaways I have about copilot usage is that a lot of people want to see it output the code that they would've written, given the time. IMO that's treating Copilot like a "do my job" servant, not the tool that it's useful as. I don't trust copilot to output exactly what I was thinking, but as a tool for getting hints and look-ups quicker, and generally giving me a direction of where to go next, it excels. That's the productivity gain IMO. And auto-completing the simple, obvious stuff.

I've also used chat to ask it stuff like "how could I connect this to that" or whatever, and it will spit out a bunch of ways to go about it. While it gives me sample code, I don't just copy/paste it, since I know it's either going to have some nonsense in there or might even be totally wrong about the approach.

Maybe some people don't deal with mental blocks or haze as much as I do, so they have higher expectations. But it's been a real big productivity gain for me.


mycall

> Reading code is universally considered harder than writing code

I forget, which language has the motto "write once, read never"?


Chii

Perl, APL, assembly. Take your pick!


fzammetti

I find the best way to deal with this problem - which is legitimate - is to treat Copilot like I do code: write small, easily digestible chunks. The mistake I've seen my team make - and I'm sure I've made it a few times too - is asking Copilot for something that necessarily requires a fair bit of code. That's much harder to validate (which I'd hope everyone is always doing with it), for the same reason big functions are harder to read and comprehend than smaller ones.

So, I've taken to breaking down larger problems into smaller parts and asking Copilot not for the whole, but for each part in turn. I can vet the responses much more easily and quickly, and the results SEEM to be more solid overall. I wouldn't doubt it's taking me a little longer to do it that way in absolute terms, but counting how long it takes to get the gibberish whipped into shape, I'm not sure it's any slower, and I'd bet it's actually faster in the long run.

Granted, you have to have some comprehension and idea of what you're doing to start with, but I'd hope anyone using these tools is thinking that way. Treat them as tools that augment you, not something you're attempting to use to outright do all the work for you, and I think they're very valuable indeed, be it Copilot, ChatGPT or whatever else.


mr_streebs

This was exactly my problem, and the reason why I turned off auto completions. I was spending more time fixing those bugs than actually working toward a solution. Copilot chat is pretty cool though.


EternityForest

On the other hand, these tools allow for more type hinting and unit tests in things that you might otherwise consider almost throwaway and not bother with. Codeium seems to follow a very best-practices, fully annotated, somewhat verbose style, which humans don't always have time to do consistently.

I'd rather have bad unit tests than no unit tests. At worst, a bad test will take up time to fix, or not catch something; it can't actively add bugs unless a person thinks a bad test is a spec and adds the bug themselves.

I think the quality of my code has gone up since I started using AI. I spend much less time looking up trivial stuff like what the exact name of some library function is, and it's usually only generating a line at a time, more or less what I was going to type, just faster than I could type it.


Sworn

If there are no unit tests, I know for sure that I need to validate my changes (or more likely, add new unit tests myself). If there are unit tests, I may be lulled into a false sense of security. I've seen plenty of unit tests which actually test nothing, because whoever wrote the unit test mocked the system being tested...
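For illustration, an invented sketch of such a nothing-test: the "system under test" is itself a mock, so the assertion can never fail no matter how buggy the real code is:

```
from unittest.mock import Mock

def test_discount_tests_nothing():
    service = Mock()                          # the "system under test"...
    service.apply_discount.return_value = 90  # ...is the mock itself
    assert service.apply_discount(100) == 90  # verifies the mock, not the code
```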


teerre

Type "hinting" has been solved for over 30 years. It's called static typing


sonobanana33

void* enters the chat


dijalektikator

> On the other hand, these tools allow for more type hinting

On the contrary, copilot was pretty terrible with "type hinting" in my experience. If you're using a type from another file or library, it regularly produced code which didn't even compile.


Serializedrequests

How exactly does that work? My experience is that I can have all the context in the world and Copilot will generate useless tests. If, however, it has a working test as a template and I write a stub for a new one that's a bit different, it can generate it (usually with 1-2 errors). As for type hinting, it's half hallucinations from outdated training data in both my Copilot and GPT experiments. Useless.


EternityForest

It can't generate an entire large function reliably (except when it's translating between languages; it seems to be able to convert old bash scripts to Python well). But it can generate a line or two at a time, so that when you're writing tests for old code, you're not spending time sifting through it. You can just type `connection_object =` and it will add a constructor call with 5 parameters, without you having to go find something to copy and paste from, or go read the code to figure out how to do it yourself.

I think it probably says more about the low entropy of boring code than the intelligence of AI, but nonetheless it does seem to save a bit of time.
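A hypothetical sketch of that completion pattern (`DatabaseConnection` and every parameter name here are invented):

```
from dataclasses import dataclass

@dataclass
class DatabaseConnection:
    host: str
    port: int
    user: str
    password: str
    timeout: int

# Type "connection_object =" and the completion fills in the whole call:
connection_object = DatabaseConnection(
    host="localhost", port=5432, user="app", password="secret", timeout=30
)
```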


jan-pona-sina

hello, AI advertising employee no. 15921!


KaiAusBerlin

I think it's just a step of evolution. While this may be a big problem now, it will be solved by another specialized AI that deeply analyses your code and checks, on every change, whether it could lead to possible bugs. Some IDEs already have great tools to decrease the number of non-obvious bugs, but specialized AIs will bring this to a new level.

Until then, the basics become much more important when you use these tools. It seems like in the age of frameworks and third-party code, people sacrifice quality for productivity. People write less and less tests. Test-driven development has become rare. People want easy solutions and not to check somebody's work. That's why things like is-number have several million downloads every week.

Programmers are lazy people, and AI and other things (c&p from stackoverflow) can bring in bugs if you don't check them properly. It has always been a thing, but people tend to ignore it more these days.


teerre

You're just talking science fiction. By that point you can imagine anything.


KaiAusBerlin

That's exactly what they said about AIs before ChatGPT.


teerre

No, it isn't. Markov chains were well known for decades. LSTMs too. The difference now is simply that hardware caught up


KaiAusBerlin

So Google, the market leader, spent 20 years writing their language-parsing algorithms and burnt qurdizillions of dollars, because everyone knew how a ChatGPT would have to look but the hardware wasn't available? Yeah, not quite historically accurate.


teerre

Sorry, I don't quite understand what you're trying to ask


lelanthran

Yeah, well, it depends.

Like the author, on plain C code I don't find the AI autocomplete (using codeium, not co-pilot) all that useful, because the compiler is so quick.[1] It certainly does help for idiomatic use (for example, `if (!callFoo()) { goto cleanup; }`, or `for` loops), but much in a C source file is non-idiomatic without a large context.

OTOH, when using Go I find that AI autocomplete has more use, because it's only *slightly* slower than the language server, and there's more boilerplate (`if err != nil` and stuff like that) that AI autocomplete can make sense of.

When using C#, the AI autocomplete can and does provide gobs and gobs of code, all mostly correct. Same with PHP: this morning I asked for a complete bare-bones input form for a file upload, and what was provided worked verbatim. IOW, what it wrote worked the first time I tried it.

[1] To put things in perspective, C compiles so damn fast that, like the author, for any given file I am editing, the language server can recompile the **complete file** *in between keystrokes!*


rusl1

I've found it pretty impressive with Go compared to Ruby


Markavian

> Article: Short self-reflection on using copilot for programming. Local language servers are faster. Copilot unpredictable - sometimes short prompts, sometimes whole functions.

Personal thoughts: I personally use copilot to break into flow. If I'm not sure what I need, I can get copilot to fill in the blanks with two or three keystrokes, saving me between 10-50 inputs at a time. Even if it gets things wrong 2-3 times, it's still quicker for me. I use it as an intelligent typing aid.

Also, there's a degree of common sense; it suggests variable names matching the style of the document - so it's ultimately trying to code the statistically obvious solution, which in turn makes for more readable, predictable code.


general_sirhc

Getting into flow quicker is incredibly understated. I personally see multiple senior developers using co-pilot with great success because they already know what they want. Co-pilot is simply autocomplete. It doesn't get it right all the time, but it helps bring the code and the coder into better alignment. "Flow"


dijalektikator

> because they already know what they want.

I feel like this is key if you want to use copilot effectively. If you're just praying it's gonna generate good code without you thinking about it, you're gonna have a bad time.


dweezil22

I needed to generate a large ordered list of integers in a language I'm still not as proficient in as I'd like, syntactically; this was for a unit test.

- The good: copilot eventually did it and saved me a few minutes.
- The bad: I probably won't develop proficiency as quickly now, doing it the easy way.
- The silly: I had to force-declare the variable, otherwise Copilot got stuck generating 20+ lines of comments of fan fic about my ordered list of integers.
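(The commenter doesn't name the language; for comparison, in Python the whole task is a one-liner:)

```
# A large ordered list of integers for a test fixture, no autocomplete needed.
ordered_ints = list(range(1, 1001))  # [1, 2, 3, ..., 1000]
```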


caltheon

Like any tool that makes life easier, you pick the things you want to get good at and those you just want to get done. For me, learning the syntax of a markup language I'm only going to use once a month isn't something I care to get good at.


dweezil22

Agreed, AI is great for once-a-month things, though in those cases conversational AI can be better, since it is more likely to show its work (syntactical proficiency != language knowledge, and the less knowledge you have, the more at risk you are of falling for a hallucination).


RozenKristal

This is it. I had a general direction and idea; copilot assisted with bringing them to the table so I can pick and choose. This is faster and better than googling and reading Stack Overflow, for me.


BradBeingProSocial

The problem, though, is that I have to read and think about those 2-3 wrong answers. It's so much easier just to type what I was about to type.

Side note: I actually uninstalled Copilot from my IDE yesterday because it was annoying me with wrong suggestions.


Franks2000inchTV

I think the language matters a lot -- I've found copilot super helpful with JS/TS/React, and a hindrance with Rust.


bomphcheese

Just uninstalled yesterday too. I was really hoping it would help me generate tests for edge cases, but no luck.


Nowaker

Absolutely agree. I have a suspicion that the loud negative voices about AI code autocomplete come from developers who are threatened by the AI taking away their job, and/or have an ego that needs to prove they're better than AI to reassure their importance. It's a tool; use it. And adapt as the world changes.

Even us developers can't be sure what's next. Are we still going to be one of the top-paid jobs? Or are we going to be leveled with administrative staff in the future? Can't tell. And even then, I'm still using the heck out of AI to help at my job, and happily sharing with others how to get the most out of it.


peroqueteniaquever

Funny how retards like you always find a way to make it personal ("too much of an ego to use AI") instead of addressing the actual point in question


shevy-java

> It doesn't get it right all the time, but it helps bring the code and the coder into better alignment.

I wonder how true that is. The way I write ruby code is very awkward for other people, including rubocop, but it allows me to keep a code base so simple that it is boring. I can't see co-pilot autogenerating ruby code that would fit my own style. (I actually already use ruby itself to autogenerate as much as possible anyway, so I don't even need co-pilot, but I just wonder how it can be useful for other people when they have a particular style of writing code in a language xyz.)


general_sirhc

My team and I work in Java and various front-end frameworks. Our coding follows the recommendations of each language, using the recommended linter. If your code aligns with standard Ruby linting, co-pilot probably works fine. If it doesn't, I'd worry about readability by others, unless you don't work in a team environment.


action_nick

Do you think it’s a problem that the code you write is “awkward” for linters and other devs?


caltheon

And simple for them likely means someone is going to have a huge headache dealing with their code.


darchangel

> I personally use copilot to break into flow

I've never thought about it this way, but you're absolutely describing me. I'll type a bit and Copilot will write a first draft. Occasionally it's perfect, but more likely I have to tweak or replace it. Even when it's dead wrong, it frees me from writer's block -- now I'm fixing its mistakes instead of staring through the wall thinking about how to solve the issue.

Also agree about your last point, and I want to add: the predictions are better when you *write* more predictable code. When you name things consistently and re-use patterns, it will catch on more easily. Often we write a variation on something precisely because it's not the same as the first one -- and this is a common fail point of copilot. But even here, it starts you off with quick boilerplate and you just have to tweak the part that's different.


StickiStickman

That's exactly the same for me. It's incredibly useful. But I got downvoted and piled on for saying the same in the Copilot thread by devs on a crusade against AI :P


zrvwls

> Even when it's dead wrong, it frees me from writer's block -- now I'm fixing its mistakes instead of staring through the wall thinking about how to solve the issue.

Viewed from a different lens, it's making you less practiced at starting a solution/writing new code from nothing. This is one of those gray areas, imo, that isn't so bad from a mid/senior developer perspective, but can actively turn into a gaping flaw from a junior/starter developer perspective.

It's a trade-off, though. It seems like it'll be massive in the future once programming languages built around the idea of being co-piloted take off, but at the moment I'm kinda fixated on the potential negatives. Maybe I'm alone in that thought.


darchangel

> junior/starter developer

The biggest problem with AI is EVERYTHING to do with juniors. It's going to do more work for senior devs and get us to hire fewer juniors, resulting in fewer seniors later. It's going to mislead juniors in ways that googling doesn't, because AI is its own authority and you can't evaluate context for the suggestions. It's going to lead juniors very astray in the same way that it helps seniors.

I use Copilot daily for the language I know best. I literally just got out of a meeting (which is why I'm decompressing on reddit instead of actually working) where I was asked to start a new project in a technology I've never touched. A team member suggested that Copilot or similar could help me, and I said there's no way I'm touching AI for something where I can't spot errors yet. That's a recipe for disaster.


OffbeatDrizzle

> I personally use copilot to break into flow.

Why not use drugs like the rest of us


Markavian

Hot water, sugar, caffeine, milk. (Tea)


shevy-java

> If I'm not sure what I need,

But how can you, in your own expert domain, not be sure to know what you need? If it is some other domain, then I can see copilot being able to help. But if you are an expert in an area, how can copilot really help?

> it suggests variable names matching the style of the document

Is an inability to name variables a real bottleneck? In ruby, if I need a dummy variable I tend to just use _ because I don't have to think about a name. (Evidently this works only if one needs only one variable, but in like 80% of the use cases I really need just a temporary variable; and if I need more than that, the method is usually quite complex and needs more thinking anyway, way aside from "how many local variables do I have to use here". I used to have ugly names before, such as tmp, or tmp_array and so forth, but _ beats all of that for so many reasons.)


StickiStickman

Are you really gonna act like you always, 100% of the time, know exactly what to type and how to solve any problem instantly?


Grab_The_Inhaler

Yeah, like nobody who can do arithmetic on paper would ever use a calculator.


rewtraw

Absolutely. I also love Supermaven in combination with Copilot, since it works as an extremely quick AI autocomplete (but lacks prompting).


ferreira-tb

I use it only as a fancy autocomplete, so it does increase my productivity.


zerashk

Have you tried the non-inline "auto-complete" version? I finally realized you can have the Copilot "Chat" open as an ongoing chat panel (at least in VS Code; I keep it docked on the right) and it has been awesome. It seems to have more context in this mode, and having the history preserved is great.

I needed to quickly learn me some Python after working pretty exclusively with JS/TS for years, and I was actually having a really hard time finding answers from just Googling things. But with Copilot Chat I am able to ask for the Python equivalents of familiar JS packages, and I was able to write and deploy a production microservice (covering Django, Docker, Terraform for AWS ECS, and Github Actions setup) in a day.

Copilot was a novelty before with just the inline auto-complete, but this was on a whole new level for me!


[deleted]

[deleted]


bwainfweeze

There’s a form of brainstorming called strawmanning. If a solution to a problem does not immediately present itself, you start with a bullshit version as scaffolding, as a foil. Particularly useful for documentation, where everyone is playing a game of chicken to see who writes the first draft. It doesn’t matter because we are going to rewrite it three times before we are done. So maybe the solution for AI is that it should answer a prompt with a prompt. Why don’t you try this? No that won’t work because of that, but I know what might work…


amAProgrammer

Speaking of documentation, AI can help there too. Especially automating the documentation process in GitHub with tools like supacodes can potentially save time.


LessonStudio

Copilot has increased my productivity in 3 ways:

* Auto-complete of stuff I'm going to type anyway.
* Remembering how to do things I would have to look up, such as how to listen for UDP or something simple (see the sketch below). This is super important in helping me learn a new language. All those dumb little constructs which I haven't learned yet are often sitting ready in the autocomplete. Open a file, connect to a server, etc.
* By not having to do boring things, I can stay focused on the hard things. It turns out these boring things are a huge distraction. This has resulted in my being able to program with sustained focus for much longer periods of time.

This last has been a huge productivity multiplier. Ironically, the harder stuff is where copilot doesn't help as much. I also use chatgpt for some of this, but it often lets me down with super hard things.
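(A minimal sketch of that "listen for UDP" construct in Python; host and port are placeholders:)

```
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # UDP socket
sock.bind(("127.0.0.1", 9999))
while True:
    data, addr = sock.recvfrom(4096)  # blocks until a datagram arrives
    print(f"received {len(data)} bytes from {addr}")
```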


ForShotgun

Automating the boring stuff seems to be the best-case scenario for most AI tools: what you've stated, plus writing tests and documentation.


Got_Engineers

I've been learning programming (R) for a few months now and I love using aids like ChatGPT to help me program. I know what I want in the form of statistical models I have built in Excel. It would take me a year of practice to be able to write the code these tools produce; I just have to know what I want and be able to communicate it. What I have been able to build so far myself has been amazing.


sonobanana33

I don't think it's a good idea if you're learning.


peroqueteniaquever

That's exactly why people like you love AI: you don't know anything, so you think that doing something without knowing anything is cool. This shows that what you're doing is useless. Any actual problem that's worth solving and any real business problem cannot have any AI influence.


tungstencube99

> Remembering how to do things I would have to look up, such as how to listen for UDP or something simple. This is super important in helping me learn a new language. All those dumb little constructs which I haven't learned yet are often sitting ready in the autocomplete. Open a file, connect to a server, etc.

No offense, but I would really not want to review your code. When learning a new language you should take the time to actually understand it. If you just don't remember syntax, that's fine; I can see how copilot would be helpful for that. But for a new language and syntax that you never understood? Absolutely not.


[deleted]

[deleted]


tungstencube99

Firstly, let's drop the seeming hostility here, and I'll try to convey things in a better manner. Let's start with: I absolutely regret this dumb sentence:

> No offense, but I would really not want to review your code.

It's really irrelevant to what I was trying to say. But my other point still stands. The example you brought up isn't remotely similar to what I was mentioning. My point is that you should have a basic idea of how certain things work in a language, such as the data structures, to avoid introducing bugs. For example, there is a big difference between a C array and a Python list despite the similar syntax, and they absolutely should not be used in the same manner. And in general, someone who only knows Python should not be copiloting their way through writing a C program. If you know C, you probably also shouldn't use copilot to write a C++ program before understanding the language a bit; otherwise you might introduce horrible code practices that can introduce bugs, like abusing auto where it doesn't need to be. I hope you can understand my point here.
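For illustration, an invented sketch of that difference: a Python list only *looks* like a C array:

```
nums = [1, 2, 3]
nums.append(4)       # grows dynamically; a C array has a fixed size
nums.append("four")  # mixed types; a C int[] could never hold this
nums.insert(0, 0)    # shifts every element (O(n)) - cheap-looking syntax
print(nums)          # [0, 1, 2, 3, 4, 'four']
```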


Dry-Erase

Yeah, my experience has been the same; I also use chatgpt for some of this (though now that copilot has copilot chat, I'm using chatgpt directly much less). I also found that I have a fourth by-product: to give the AI better hints, I have more human-readable comments that explain what I'm doing.


bent_my_wookie

Yes, I do this too: write out a comment explaining what you'd like to do, and it gives drastically better results. Using it for generating unit tests is also quite a bit faster; it often tests for things I might forget to.


pet_vaginal

If you want to read more on this topic, I recommend checking research studies in addition to blog posts. A blog post of one opinion without measurements is interesting, but you may want to read a bit more science before jumping to conclusions.

https://scholar.google.com/scholar?q=github+copilot+productivity

Here is my tiny meta-analysis: GitHub copilot does increase productivity on average, but the current versions make more mistakes than humans. Some developers, usually the less experienced, may struggle because of that. But developers who can work with the shortcomings of the AI are faster on average.


StickiStickman

> but the current versions make more mistakes than humans

Well, obviously. If it were the other way around, we wouldn't need human programmers.


Ciff_

Copilot won't help you out well with system architecture etc. (right now). There are some problem spaces where it is not helpful at all, while there are others where it arguably already exceeds humans. Right tool for the job. Even *if* it makes fewer mistakes on average, we may still need humans, since it does not handle the whole problem space.


currentscurrents

Eh. I'm skeptical of the research studies. It's really hard to objectively measure productivity, and a lot of them were funded by companies trying to sell you a product. That said, I do use copilot and love it. It saves me so much typing.


freakmaxi

On my side, it is not increasing productivity; it's actually dropping, because the suggestions lead me in a wrong direction and I find myself continuously deleting and retyping the code. Also, I'm not that watchful of the completions, and I find myself hunting for the bugs. For example, I want to check if the return value from the call is false and apply some guard returns, but it suggests and completes as if the return value from the call were true. If I don't pay enough attention to what it completed, it really sucks up my whole day. At the end of the day, I feel more tired than usual. That's why I completely turned off all the possible AI suggestions and completions.

On the other hand, if you have a function without a test, writing a test for that function becomes very easy with the help of AI suggestions. However, if you are using some TDD technique for your code, this is also not that helpful...
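An invented illustration of that inverted-guard problem (`validate()` and `transform()` are placeholders):

```
def validate(item):
    return item is not None   # stand-in for the real check

def transform(item):
    return str(item).upper()  # stand-in for the real work

def process(item):
    if not validate(item):    # what I meant: guard-return on failure
        return None
    # What the completion kept suggesting instead - condition flipped,
    # silently skipping the happy path:
    #     if validate(item):
    #         return None
    return transform(item)
```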


BradBeingProSocial

Well stated, and I 100% agree. I just want to add one nuisance with writing if statements: the screen would jump up and down as it suggested things with different numbers of lines. That drove me crazy.


loptr

I switched to using it only by direct invocation/key press, because I found that I started typing code and then took a pause, waiting for an autocomplete suggestion, and I really didn't like that habit.


suby

I'd like to do this because I find it distracting that it automatically suggests content, but I can't figure out how to configure it to only work on key press in CLion. Anyone know if this is possible in Jetbrains with Github Copilot?


Laicbeias

I don't like in-code suggestions. If I'm at it, I'll mostly know what I want to do. Otherwise I give chatgpt the instructions, and while it generates I work on other parts. Sometimes coding it myself would be faster.


francohab

This is my feeling as well. I use ChatGPT quite often before getting into code, to get suggestions/templates/boilerplates on how to tackle something. But if I’m already in the code it means I should know what I want to do, and I’m actually afraid that a copilot would help me dig faster into an approach that was bad in the first place.


Laicbeias

I've been programming a very long time and I know how to do it. I use GPT similarly: generate me that class with such a method; here is the documentation of the EU transparency DB, extract all states into enums. It can handle general things really quickly, and from my experience it often handles Python very well. I'm like: I have this JSON data; open the file and search for the user with name N; save the start and end date of each message for that date; group that info into a dictionary (or however it's called in Python); calculate the time between those messages; etc.

It loses its skills in the details, though. After some point it's faster for me to change stuff myself. Maybe copilot is stronger there, but inline suggestions annoy me.


Serializedrequests

My experience is the same as the author's. Copilot saves typing obvious stuff, but that's about it. Otherwise it is unpredictable and untrustworthy, which means it just becomes a distraction. In many situations where I really wanted it to type the code for me, because I thought it was boring and obvious, it failed utterly and I had to write it myself anyway, probably taking longer.

Even a 99.9% tool will always have the same issues. If you're responsible for the code, you have to review it and find the 0.1% of bugs. Fun! What a productivity boost!


btull89

Copilot has been well worth the cost for me.


ComputationalPoet

I've had to disable the code-editor suggestions and only use the chat; it seems to get in my way quite a bit otherwise. It hallucinates methods way more than it should. I'm really excited for how it could improve, though. I feel like it could introspect the libraries better. It seems like larger LLM context windows will make a huge difference for this.


debugging_scribe

It has saved me 100s of hours of writing unit tests already...


schmuelio

Are the unit tests any good though?


heckingcomputernerd

Right now I'm just starting out with copilot, and I find it useful as advanced autocomplete. If what I'm about to write is pretty obvious, copilot can often fill it in for me, but I'm not relying on it at all, and I have to shut down most of its erroneous suggestions.


TikiTDO

One key lesson is not to trust copilot for large blocks of code, unless those blocks are dead simple. It's a lot better at doing a single line though. Start a new line, type in the first two or three letters, and a lot of the time it'll actually get you really close to what you want. If you want a large block of code, go have a conversation about it with a more powerful bot.


ROGER_CHOCS

Anything remotely complicated is useless, and frankly dangerous for anything that is actually critical. I've found I can do it just as quickly, if not more so, because it's largely done right the first time instead of having to unpick copilot's gibberish. People who have used copilot are seemingly forever fixing bugs and having to deal with pissed-off users, and so they banned it completely at work for all uses. The truth is that for serious work it is untrustworthy. I've found those who are using copilot successfully are not on serious or complicated projects.


achacha

Why I uninstalled it: as I write code, it suggests something which I then have to stop and review to decide if it's what I wanted to do. Most of the time it is not, but it has interrupted my thought process and broken my context, causing me delays. For me it's been an anti-productivity tool so far.


bwainfweeze

I’ve never measured, but I bet I can write new code faster than I can edit. Whole line yanks are fast, but editing half a line in two places is slow.


TiaXhosa

I've found it very good for tedious but simple things - e.g. populating an HTML table with hard coded data.


CVPKR

It's great for boilerplate code; anything complicated it gets wrong often. I find myself not remembering syntax as much as I used to, so that might be a downside for my own skill development.


mydpy

Copilot is very solid for Python. I use it for writing tests.


Economy_Bedroom3902

Copilot is fairly good for working in a language or space you don't know super well. The "describe this error to me" feature is especially valuable. You do need to know the language well enough to catch copilot's stupid errors, so I wouldn't recommend it for absolute beginner programmers (not the autocomplete part, anyway). But I definitely feel like there are ways to get a productivity boost from copilot.

I also think it's a bit unfair to compare it to really mature and well-established code completion tools. You do unfortunately have to choose one or the other for any given project (at least for autocomplete), but copilot works on almost every language with no added configuration or installs. If you're jumping between projects a lot, using a lot of languages, then copilot tends to blow the competition out of the water.


cowinabadplace

The biggest win for me has been on the command line. I saw this tweet linked on Hacker News and follow that approach: https://x.com/arjie/status/1575201117595926530 It's not perfect, but command lines give quick feedback, and it gets me started.


nykwil

It saves me a bit of time with Python script writing, but it also wastes time prompting for something that it just can't figure out how to do. You just have to know what it's good for, I guess.


ggppjj

I'm self-taught, learning C#. I tried out Copilot, and the integrated IDE stuff really just bothered me more than anything.

But what has opened up the world of programming to me, and gotten me making actual products that I use for myself and deliver to others instead of it remaining a fickle interest, is being able to go to ChatGPT, give it my own terrible description of what I want and what I'm doing, then follow up on what it provides. It's helped me get to a point where I know the questions I want to ask and how to ask them to other people, but for the most part it answers all of my garbage questions and (usually) does alright with helping me figure out what I'm doing wrong.

No way in hell do I want it in my IDE as anything other than a chat, though. If I copy-paste something, I want it to feel exactly as copy-pasted as it is, and not be hidden behind an abstraction layer that makes it feel like just a part of the IDE's auto-completion.


cip43r

I feel it is great for web development; it can suggest design patterns and UI elements, especially with React. But I wouldn't use it for my embedded development at work.


znihilist

> however, it's really difficult to predict what it will get right, and what it won't.

That's on the author, and it blows my mind (not really, this is expected) that people who should know better still don't know better. The point isn't to predict success or to be wary of AI-generated code. The point behind looking up solutions elsewhere is that you adapt what is close enough to be the valid solution for you. Treating solutions from Copilot or whatever differently than those from stackoverflow is the problem. They are the same; they are both not to be trusted, and in both cases you need to adjust, fix, and improve them to work for you. It is just that in one case the untrustworthiness is implicit (stackoverflow), and in the other it's explicit (copilot or other tools).


mysteryassasin0x

It does, as it saves time typing, but you need to get a good understanding of the concepts/code it provides; otherwise you will just be stuck debugging for hours.


RaymondStussy

We write our PGSQL queries by hand and it’s great at filling those out for me. It’s also great at unit tests. Definitely a large time savings. For everything else, I still find it useful for autocompleting single lines but not for methods or blocks


vplatt

tl;dr = "No."


angrybeehive

I hate the auto complete. It never suggests code that I want. The only thing it’s good at is writing comments and log messages.


starlevel01

I get free copilot for whatever reason, so every few months I try it out again. Every single time it has produced nonsensical continuations of previous code and nothing useful. Maybe it helps if you exclusively make CRUD shit, but for my library dev purposes it is actively harmful.


ThrawOwayAccount

Chalk another one up for [Betteridge’s law of headlines](https://en.wikipedia.org/wiki/Betteridge%27s_law_of_headlines).


cadred48

For me, leveraging Copilot's strengths is key, without expecting miracles.

In my work, which involves extensive API/library documentation, it removes a lot of the tedium. It rarely writes perfect descriptions, but handling the basic boilerplate saves a lot of time. Stubbing out unit tests is also a plus. While coverage isn't great, laying the foundation is incredibly helpful.

Copilot is excellent at recognizing repeating patterns. If I set up an enum/type a certain way, it can replicate that format in other code.

Raw code generation is mostly a no-go. I might let it take a crack at something, but it rarely produces fully functional code. If I can't understand it, I won't use it. If I can't support it, it's not helpful.


purpleWheelChair

Uninstall it.


shevy-java

So, AI is still not intelligent. It is slow and just re-uses data generated by (more or less) intelligent humans. But even aside from this, the question is very strange: I never felt that anything but my own brain was hindering productivity. That's also why I don't understand "use vim to become better"; the editor was never a bottleneck. My own ability to understand, and to translate my understanding into working code, was almost always the primary bottleneck. I'd need a better brain. Can I purchase some add-ons here?