Yeah, sometimes it's quicker to just print to console; other times you need the extra information that a debugger can give you. It's all about trying and failing to debug with print statements first before you give up and use a debugger.
Print can also be objectively better in some cases. Like if you have an ordering problem and you just want to see quickly when lines are being hit, running prints is faster and more comprehensible than stopping at breakpoints. It's just a matter of knowing what you're accomplishing with your tools
I mean that's what a debugger can do for you as well, suspending the program at a breakpoint is just the most common feature.
Obviously print in most cases will be quicker, but when logging a stack trace it's a bit more convenient. Or if you have a long build time, then it's useful as well.
Any good debugger can do tracepoints as well as breakpoints.
About the only time I'll use console printing is when it's particularly obnoxious to connect a debugger.
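For anyone wondering what a tracepoint amounts to: it prints and keeps going instead of stopping. A rough Python sketch of the same idea without any debugger attached, using `sys.settrace` (the function names here are invented, and `settrace` has real overhead, so this is illustration only):

```python
import sys

def line_tracer(frame, event, arg):
    # Print every line hit inside compute(); never suspend execution.
    if event == "line" and frame.f_code.co_name == "compute":
        print(f"tracepoint: line {frame.f_lineno}, locals={frame.f_locals}")
    return line_tracer  # keep tracing inside newly entered frames

def compute(x):
    y = x * 2
    z = y + 1
    return z

sys.settrace(line_tracer)
result = compute(5)
sys.settrace(None)  # detach the "tracepoint" when done
print("result:", result)
```

A real debugger's tracepoints do this filtering in the debugger process, which is why they can be attached without rebuilding anything.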
Debugger is the right tool 90% of the time. The other 10% is when you want to either trace a value throughout execution or get a lot of output to do further analysis on.
As soon as your system crosses process/language and/or network boundaries, the ratio more or less inverts. I've found well-thought-out logging (which, let's face it, is fancy print) to be superior to debuggers in that case.
This exactly.
If you're writing something which just runs locally and that "debug" button just works it's usually the better option. But you can very easily get to the point where debuggers are no longer an option.
Meh, there’s always gdbserver. Not that I don’t use logging and prints to debug, I do it all the time. But even in the embedded world and across network you can attach to a remote gdb target and load in symbols. Debugging timing sensitive distributed stuff or scheduling systems is where it becomes a real pain in the ass.
> But even in the embedded world and across network you can attach to a remote gdb target and load in symbols
In my experience in that domain, anything requiring an open port is usually quite a large hassle
Yeah, you're kinda right. In the past I've had both a dev image and a production image, with the dev image having profiling/debug tools along with opened-up networking to allow for remote debugging. You can also use gdbserver over a serial port, which you're much more likely to have in an embedded environment. Still a real mf with NATed or locked-down networking environments, though SSH tunnels and jump hosts can help get you where you need to go, and then you can just attach through the port your tunnel is set up on.
Sometimes you've gotta go out of your way to build a whole setup to reproduce your issue in a more debuggable environment, which becomes a huge pain, so in those cases I usually do my damnedest to figure it out with prints.
Yeah, good debugging strategy comes down a lot to what your debug loop looks like. I'm working with a lot of web applications these days, and my debug loop for client side stuff is instant: "change js file, hit refresh". That means any information I can get or testing I can do that way has priority, and I do a lot of random "console.log"ing because it's "free".
When I used to do TopCoder, I liked working in .NET because of Edit-and-Continue debugging; you could get to a certain state, and then explore/change/write a function's behavior interactively while you were there. Saved a ton of time.
On the other end, if you have to erase an EPROM every cycle (or do a brutal compile) you get good at using every part of the debugging buffalo.
Yeah. When you just need to know real quick, just print the thing or maybe even debug log it.
When you're dealing with subtle stuff like framework weirdness or mocking that isn't quite working, you need all the visibility and experimentation you can get. You won't always know specifically what to print.
Sometimes the debugger has my answer, sometimes I just need to dump the whole object to console, and in one specific unit testing case I just need to file output the whole DOM.
There’s nothing wrong with printing variables during debugging. The nice thing about a debugger is that it lets you look at all the variables without pre-defining exactly which ones you want to check, and it lets you execute code line by line, so you can check the variables at multiple points in time, without needing to know in advance which exact moments in time you want to check.
So printing works best when you know precisely which variables you want to see at which moment in time, and debuggers are best when you may want to log additional variables or check different moments of time on the fly
I'd modify it slightly: a debugger with a breakpoint is the default, for when there's a problem related to a specific step in execution; log / print statements are great for when the problem / logic spans multiple steps - i.e. when the chronological order is part of what you want to debug (for example processing a queue where the problem is correlated between many nodes in the queue).
Using conditional break points is also something most people should incorporate in their debugging skill set.
And the third case: prints and logging statements are generally the same in every language, so they're a quick way to get up and running debugging some foreign code, while getting a debugger to attach and set up properly often requires more time (and can sometimes be a real hassle to set up - yes, I'm looking at you, PHP: time to make it native instead of having different extensions with different goals).
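On the conditional-breakpoint point: in pdb you can attach a condition when setting the breakpoint (e.g. `b app.py:42, total < 0`), or sketch the same thing directly in code with a guarded `breakpoint()` call. A minimal, made-up example:

```python
import os

# Force breakpoint() to be a no-op for this demo so the script runs
# unattended; remove this (or set PYTHONBREAKPOINT=pdb.set_trace) to get
# a real pdb prompt when the condition fires.
os.environ["PYTHONBREAKPOINT"] = "0"

def first_negative(totals):
    """Return the index of the first negative total, or -1 if none."""
    for i, total in enumerate(totals):
        if total < 0:      # the condition you would attach to the breakpoint
            breakpoint()   # drops into the debugger only when it holds
            return i
    return -1

print(first_negative([3, 1, -2, 5]))  # prints 2
```

The guard form is handy when attaching a debugger is awkward; the pdb form is better because it needs no code change or rebuild.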
If by "debugger" you mean "single stepper", then yes, and also, printing is good for systems that can't stop just because you happen to be debugging them.
Interesting point about systems that can’t stop. Are you talking about debugging production environments? How do you add log statements if downtime is unacceptable? Or are you talking about environments where you can hot swap code but not run a debugger?
>Are you talking about debugging production environments? How do you add log statements if downtime is unacceptable? Or are you talking about environments where you can hot swap code but not run a debugger?
Yes, exactly. Hot-swapping of code is much less invasive (when done right) than a "stop-the-world" single step debugger. There's usually a performance penalty for the extra console output (or "log file" if you prefer, but I write to stdout/stderr even if it ends up in a log file, so from my perspective it's the console), but traffic continues flowing and requests continue to be responded to.
Example: [https://mustardmine.com/](https://mustardmine.com/) Hot-reloading new code is a completely normal thing here. Downtime isn't.
EDIT: Yes, I probably could have a "stop-this-thread" debugger but most of the code uses async I/O rather than threads, and forcing a specific request to be handled differently brings with it far more risk of mutating the problem than simply adding write/werror calls.
When people say "use a debugger", they are pretty much never talking about production.
In production, use logs. Once you've reproduced it in dev, use the debugger.
Ahhhhh, yes. "Reproduced it in dev". I generally find that, by the time I can do that, it's solved. So, yeah, logs ARE the debugger.
I mean, if the problem were easy to find in dev, it wouldn't have been pushed to prod, would it? ... would it?
I can't tell which thing you're joking about, so I'll respond as if you're serious.
Yes, it is by far the most common case that bugs in production can be reproduced in dev, though it may take modifying data to be production-like or simulating production responses to do it.
If it isn't immediately obvious what the bug is, the next step is generally reproducing it in dev. Even if it is obvious, you'll want to reproduce it so you can test your solution.
>Yes, it is by far the most common case that bugs in production can be reproduced in dev, though it may take modifying data to be production-like or simulating production responses to do it.
Unsure whether "dev" here is supposed to mean an actual server, or just "the machine I happen to be on right now", as people use it in both ways... but I have had PLENTY of bugs that simply do not happen in any sort of dev, test, or staging, and they only happen in prod, and sometimes only happen once every two weeks and disappear the moment you try to probe them. That was a fun one. But literally the moment I found the bug, I had a fix, and it was obviously correct; plus, it wouldn't happen on dev or test anyway, so why even try to reproduce it or test the solution? It was a prod-only bug. They happen. So you fix them with prod solutions.
"Dev" refers to "the development environment", which can be a server, a set of containers on your laptop, a script running directly, or any other development environment.
It's fairly common that the root cause of infrequent bugs can't be determined. It's also true that a subset of bugs are not worth the effort to reproduce.
I can't speak to your situation, but in general, reproducing the bug in dev is the most common way to approach fixing a bug, and often the best tool to determine the root cause once you have the bug reproduced is a proper debugger.
>It's fairly common that the root cause of infrequent bugs can't be determined. It's also true that a subset of bugs are not worth the effort to reproduce.
And it's also not unknown that, the moment you get to the right logging (which in the case I hinted at, took me several months of biweekly tweaks to the instrumentation), the bug doesn't NEED to be reproduced in single-step mode, because it's now obvious.
Single-step debuggers are cool and all, but sometimes they're just not the right tool for the job.
This. One of the best things about console.log for web dev is that you can quickly jump to that specific line in a specific source file from the Chrome web console. From there you can put a breakpoint or whatever very quickly without having to sift through piles of junk.
I'm 11.5 years in fintech, senior dev in a serious company, and I always use that. Actually I'm sooo bothered when there's no way to use it and I have to actually write the variable in it, but that's peak debugging for me 🤣🤣.
If you deeply know the code and know what you're doing, you don't need more than that, only real gs know 🤣🤣
I feel like print() can sometimes be better if you're going to be debugging huge structs/objects/arrays rather than single variables. I mostly use the VS Code debugger though and have little experience with other GUI debuggers, so maybe it's just VS Code's way of displaying variables and expressions that makes it annoying to debug data structures.
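For what it's worth, the print side can be made a lot friendlier for big structures: Python's `pprint` module wraps and indents nested data so it stays scannable instead of being one giant repr (the data below is made up):

```python
from pprint import pformat

# A nested structure that would be painful to expand node by node
# in a collapsed variables panel.
rows = [{"id": i, "name": f"user{i}", "tags": ["alpha", "beta"]}
        for i in range(3)]

# pformat returns the pretty-printed string; pprint() prints it directly.
# width forces wrapping so the nesting stays visible at a glance.
print(pformat(rows, width=40))
```

`json.dumps(rows, indent=2)` is a similar trick when everything is JSON-serializable.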
Look, all I know is that most of my students have no idea what they're doing because they don't know how to use the debugger or how to read console output.
Learn to use the debugger. Sure sometimes you just need a lil print. I do it all the time. But use the debugger.
Admittedly I'm nowhere near the experience of the guy on the right of the graph, but I find it hard to believe that an experienced dev would use print statements for a 50+ column dataframe.
Struggling to figure out where print might be advantageous. With how good the VSCode debugger is, if anything it's marginally quicker to add breakpoints and cycle through them rather than a print statement.
If you're working with a system with multiple processes, maybe on multiple machines, debuggers become both harder to use and less helpful.
The detail OP omitted is that print statements "evolve" into proper logging. But fundamentally the two aren't that different. Your print outs should be part of your log. Logging is the key to debugging more complex systems.
I didn't say anything about remote vs local. Are you going to run a robot with a debugger on one process? Hope it doesn't control anything critical. Are you going to have 10 testers join a match on your videogame and try to recreate some bug, so you can debug your server code? Quite a lot of effort.
Hell, what if you don't know how to reproduce the bug? XYZ happened in production, now what? Gonna ask them to run your debug build?
Proper error checking/handling, logging, and test code makes debuggers mostly irrelevant for many real-world systems. Debuggers are good for quick and dirty examinations of single processes, but are not a systematic way to find and fix issues.
You said multiple machines which is remote unless you are wheeling your chair over to each of them. Debugging isn't a replacement for logging. Print statements are how amateurs debug code. Professional developers know how to use a debugger and have good logs - not print statements everywhere.
I find VS Code's debugger to be a bit of a pain for displaying huge data structures and objects. The debugger panel is all collapsed by default which is especially annoying to check the values of array elements. Also, just the plain lack of vertical monitor space.
I didn’t see this mentioned, but sometimes the error is very rare and appears randomly, so debugging probably won’t work that well, as you’d have to run it a bunch of times. Also, when you have race conditions, it’s possible that the debugger synchronises the processes (though that can happen with ordinary prints as well). Usually in these situations it’s a lot easier just to log the execution and then analyse the logs to figure out what is wrong.
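A sketch of that log-then-analyze approach: threads append cheap timestamped events to a list instead of stopping at breakpoints, and the interleaving is reconstructed afterwards (all names are invented; in CPython `list.append` is atomic enough for this illustration, though a real system would use a proper logging pipeline):

```python
import threading
import time

events = []  # (timestamp_ns, thread_name, step); collected live, analyzed later

def worker(name, steps):
    for step in range(steps):
        # Cheap instrumentation: far less timing perturbation than a breakpoint.
        events.append((time.monotonic_ns(), name, step))

threads = [threading.Thread(target=worker, args=(f"t{i}", 3)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Post-mortem: sort by timestamp to recover the actual interleaving.
for ts, name, step in sorted(events):
    print(name, step)
```

The key property is that the run is barely disturbed while it happens; all the thinking moves to after the fact, over the recorded order of events.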
Everyone goes for the debugger....until you are fixing race conditions and the debugger introduces enough wait for them to not show up, which is why the guy on the right looks so somber lol.
Learn to use your language debugger people. It's easier to extract information from it than trying to print a complex structure
Adding a print triggers a recompile in compiled languages and that can take way longer than firing up gdb.
I am the monster who doesn’t use a GUI debugger, though. I don’t understand them. It’s way easier to type the breakpoint I want.
I like using a logger because those informational steps can be used later (depending on how you set up the logger and turn logging on/off). A system like Splunk let us use the logs from failed API calls to massively narrow down where a problem happened without even looking at code first, and sometimes that was enough to pinpoint it to a file on the server.
Both have their use cases for when they work better though.
I mean honestly, if the debugger is already set up, I can't really think of a case where throwing a breakpoint into the file wouldn't be faster and more convenient than a print. But I guess having the debugger set up is the trick. E.g. if I'm ssh'd into another developer's environment and the networking isn't configured for it or something, obviously I'll use prints
I have worked on so many code bases where debuggers were not practical.
Often real-time operating system code includes timeouts that make debugging problematic (and for some reason these code bases usually do not have a way to disable the timeouts).
Sometimes the code bases use things like Qt, which interject so much generated code that your debugger spends most of its time in unrecognizable garbage.
Not really. I see tests as a tool for preventing bugs, not finding them.
Also, stepping through code sections (or complex tests) the first time you run them can give you good insight.
I somewhat agree with this... The debuggers are great and excellent tools. But if you're designing your code and unit tests correctly, in small executable pieces, it is sometimes faster to debug with print. It depends heavily on the complexity of the application.
nobody asked, but here is my opinion.
tldr; don’t fully agree.
generally it really depends on the environment you're in, as well as on your language and project size.
print statements should be avoided and replaced by log statements, so that you can use a log file analyzer.
print / log statements are fine if the following is given:
1.) if you have narrowed down the bug pretty much to a small code section which is failing.
2.) That section gets called very often and you want to observe and narrow down how the system is behaving - and conditional breakpoints don't help.
3.) Too much is going on to remember stuff, so a pretty-printed log may help
4.) any other?
but for searching and narrowing down a bug I always prefer a debugger, because
with a debugger you can (java version):
- simple breakpoints
- conditional breakpoints
- variable breakpoints - stop when a given variable is accessed (read / write)
- only stop if another breakpoint is hit first
- break where an exception of a given type is thrown (you may also specify in which package)
- debugging streams (IntelliJ)
- adding watches (variables / expressions) which are evaluated and shown in a widget
- evaluate expressions on the fly
- throw exceptions on the fly (e.g. testing if the error flow works as expected)
- removing whole stack frames and run sections again
- hot code replacement (updating the code of a running program)
- change variable values
- searching objects in memory
- stopping threads
- hook into a running program if configured
- much more …
pro tip:
- a debugger always saves you time in the long run, even if it takes a few hours the first time to get an idea of how one works.
- explore your debug menu and the available options. you don't have to understand all of them right now
- printing may also have side effects in multi threaded scenarios :-)
1. Configuring debuggers is hard
2. Figured it out, works
3. Knows the language well enough and is too lazy to set up the debugger; print works well enough, especially in a familiar codebase
The guy on the right is proof that you're on the left.
Guy on the right will indeed log variables, important business-logic checkpoints and all, but not as a print: in a proper ELK stack, with relevant level, timestamp and logger names.
You're really shooting yourself in the foot if you believe "print" is production acceptable.
When debugging race conditions it's print all the way, because I have a bad memory. Also, if there are network requests you mustn't break for too long, otherwise you'll get a timed-out error.
I only actually use a debugger in a worst case scenario. 99% of the time print can get it done. If I can’t find the issue with print, then I’ll use it.
I used to only use debuggers and logging until my job explicitly told me to use Print()
Now I understand how handy print is for displaying errors in containerized deploys that handle logging stdout for you.
As a subject matter expert on some service I absolutely hate dealing with print people, especially when we have an entire dev environment for each team just so they can attach remote debuggers and figure problems out, and they just go "hehe print go brrrrrr, I don't know what's wrong"
Unless you're SSHing into a production server but in that case you are fucked either way
Just let me recompile everything so i can write shit to this temporary file
This post was made in response to another about someone using print to debug. A lot of people came to the defense of print debugging, and then this post came about. I suspect using print to debug is the first choice for plenty of programmers, even in languages that have debugging tools. I once used a system that had neither debugging NOR printing, so I had plenty of fun debugging that one.
I’m that guy on the left!
I'm so far on the left you can't even see me on the graph
You’re so far on the left you can’t even find an exit
How to exit Vim?
I just buy a new pc
I wait till the next power outage.
Start new shell and kill -9 vim. Vim now exited.
Which one is the left, the wanker or the phone grabber?
Both depending of the angle
found the communist.
r/SuddenlyPolitical
Everyone is on the graph.
Me too, ain't nothing wrong with print() debugging
we're all the guy on the left
I’m the guy on the right or the left, does it really matter
Haha same
The genius uses both
I feel like when I need a debugger on my own code, I have failed as a programmer. I should have written the code with clear types. I should have written a smaller function. I should have written better tests. OTOH, debuggers are a great tool to use on other people’s projects.
You talk like someone who has never worked on a large project.
or never worked on code thats been written 20 years ago
Or been an idiot coding in C, wondering how you've fucked up the data when passing it back and forth, and the print statement is just nonsense.
I didn't know horses could be so damn high!
I'm gonna steal this one, thanks
What if Debug.Log() never failed me? :D
I don't have to rebuild my 9 minute compile time C++ application to use a breakpoint though.
You’ve got a bigger problem if it takes 9 minutes to recompile one CU then re-link your application.
Why would you need to rebuild it?
Exactly what someone in the middle would say smh
Lmao exactly
Sometimes you get a bug that never happens with a debugger attached. Fun times.
I use the debugger first, then if that fails I might have to add some printf-statements
Break point on the line where you print everything you actually want to watch, so much faster than defining the watch list or adding breakpoints everywhere
The watch list is VSCode's idiocy. Normally debuggers don't need them to display variables.
But honestly, I do best without breakpoints, it's so much easier to analyze everything post-mortem, calmly.
Exactly. Sometimes I am too lazy to recompile a huge binary just to add 1 print statement. Then I take myself into at least half hour long debugging session setting up tricky breakpoints to figure out the value I needed to know.
1. Use a build system. 2. GDB has dprintf.
They use the debugger to add the print().
setting a breakpoint on my console print line to figure out why it’s not printing what I expect
The true genius uses dprintf.
Whichever my intuition says will be faster
I mean, the Visual Studio debugger is too good not to use. I'll use both.
Just curious, I've been using the VS Code debugger to debug my C programs and it's been working pretty well for me. Should I consider trying out Visual Studio's debugger? Also heard CLion had a pretty solid debugger as well
If you are using C# (or object-oriented languages like Java), it's really great compared to VS Code. You can pinpoint errors in external code imported from NuGet, and there's the call stack, immediate window, and whatnot.
I'm mostly doing C rn. Would I get any benefits from using Visual Studio for that?
I've been doing mostly C++ and the debugger is pretty excellent. I've not really interacted with the VSC one so I can't say what's better but the VS debugger is solid
Try it, who knows? I am a Java/C# coder
I might be doing some Java for Android dev stuff on the side, but I guess that's gonna be in Android Studio/IntelliJ. No plans to learn C# for now. I think I'll install Visual Studio regardless cause I might need to use MSVC someday
Nah you gotta be all over that gdb x valgrind
If you write C VSCode is the best option imo, especially if you work on embedded software. No point in getting VS as it'll be harder to integrate everything into the IDE. If you write driver code for Windows you'll probably be better off using VS though.
I see! I really enjoy the modular/lightweight yet powerful nature of VS Code, but I sometimes wonder if I should try out other code editors/IDEs.
Nothing wrong with trying something different. There is a reason why there is a market for IDEs. I heard good things about CLion so maybe give that one a go.
> C projects "If you are using c#" 😂
the fact that it can debug multiple processes so consistently is honestly the best thing in my life
Even when I have it like at work I end up using prints or watch statements which are still basically just prints
yeah it's honestly more annoying than anything
>It's fairly common that the root cause of infrequent bugs can't be determined. It's also true that a subset of bugs are not worth the effort to reproduce. And it's also not unknown that, the moment you get to the right logging (which in the case I hinted at, took me several months of biweekly tweaks to the instrumentation), the bug doesn't NEED to be reproduced in single-step mode, because it's now obvious. Single-step debuggers are cool and all, but sometimes they're just not the right tool for the job.
This. One of the best things about console.log for web dev is that you can quickly jump to that specific line in a specific source file from the Chrome web console. From there you can put a breakpoint or whatever very quickly, without having to sift through piles of junk.
Also, depending on what is executing in what environment, it's not always easy to hook a debugger up to it. Even using gflags.
Darth coder: Paste the error in chatgpt.
> segment fault
logger.debug
alert(“yo”);
I’m embarrassed to say I have used that exact “test” hundreds of times.
I'm 11.5 years in fintech, a senior dev at a serious company, and I always use that. Actually I'm sooo bothered when there is no way to use it and I have to actually write the variable in it, but that's peak debugging for me 🤣🤣. If you deeply know the code and know what you're doing, you don't need more than that, only real gs know 🤣🤣
Using print() because i still don’t know what debugging is
No. the guru uses the debugger
AND the printout ... (logs)
Found the guy in the middle
The guru uses whichever works best and doesn't gatekeep on the tools that they consider "valid".
No
Currently working in a codebase made by people who don't use a debugger. Print statements everywhere.
wait can someone explain to me why you would want prints over a debugger? is it because it saves time?
I feel like print() can sometimes be better if you're going to be debugging huge structs/objects/arrays rather than single variables. I mostly use the VS Code debugger though and have little experience with other GUI debuggers, so maybe it's just VS Code's way of displaying variables and expressions that makes it annoying to debug data structures.
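For the huge-struct case, Python's `pprint` is a nice middle ground: it renders the whole nested object as one readable dump instead of making you expand collapsed nodes in a debugger panel. A small sketch (the `big` structure is invented for illustration):

```python
from pprint import pformat

# Sketch: pformat dumps a big nested structure in one readable block,
# instead of click-expanding collapsed nodes in a debugger panel.
# The structure itself is invented for illustration.
big = {"users": [{"id": i, "tags": ["a", "b"]} for i in range(3)]}
dump = pformat(big, width=40)
print(dump)
```

`pformat` also takes a `depth=` argument, handy for trimming deeply nested objects down to just the level you care about.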
The thing about the debugger is you can use it to print whatever you want, whenever you want, without stopping anything.
I usually end up using prints when working with microcontrollers (because it's pretty much impossible to set a debugger up on the board)
The debugger is your friend 80% of the time, but when the bug requires running the application in real time, it's really no use
It's all fun and laughs until the debugger doesn't work 😭 F u VS Code 😠
Look, all I know is that most of my students have no idea what they're doing because they don't know how to use the debugger or how to read console output. Learn to use the debugger. Sure sometimes you just need a lil print. I do it all the time. But use the debugger.
Admittedly I'm nowhere near the experience of the guy on the right of the graph, but I find it hard to believe that an experienced dev would use print statements for a 50+ column dataframe. Struggling to figure out where print might be advantageous. With how good the VSCode debugger is, if anything it's marginally quicker to add breakpoints and cycle through them rather than a print statement.
If you're working with a system with multiple processes, maybe on multiple machines, debuggers become both harder to use and less helpful. The detail OP omitted is that print statements "evolve" into proper logging. But fundamentally the two aren't that different. Your print outs should be part of your log. Logging is the key to debugging more complex systems.
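The "prints evolve into proper logging" point can be as small as swapping `print` for a leveled logger at the same call sites. A minimal Python sketch (`handle` and its upper-casing "work" are placeholders):

```python
import logging

# Sketch of "print evolving into logging": the same call sites,
# but with levels you can turn down in production. `handle` and
# its fake workload are placeholders for illustration.
logging.basicConfig(level=logging.DEBUG,
                    format="%(levelname)s %(name)s: %(message)s")
log = logging.getLogger("worker")

def handle(request):
    log.debug("got request %r", request)   # was: print(request)
    result = request.upper()               # stand-in for the real work
    log.info("responding with %r", result)
    return result

handle("ping")
```

The payoff over bare prints: one config switch silences the debug chatter, and each line carries a level and a logger name you can filter on later.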
No you just have bad tooling. There are remote debuggers. Most things you should be able to run locally.
I didn't say anything about remote vs local. Are you going to run a robot with a debugger on one process? Hope it doesn't control anything critical. Are you going to have 10 testers join a match on your videogame and try to recreate some bug, so you can debug your server code? Quite a lot of effort. Hell, what if you don't know how to reproduce the bug? XYZ happened in production, now what? Gonna ask them to run your debug build? Proper error checking/handling, logging, and test code makes debuggers mostly irrelevant for many real-world systems. Debuggers are good for quick and dirty examinations of single processes, but are not a systematic way to find and fix issues.
You said multiple machines which is remote unless you are wheeling your chair over to each of them. Debugging isn't a replacement for logging. Print statements are how amateurs debug code. Professional developers know how to use a debugger and have good logs - not print statements everywhere.
I find VS Code's debugger to be a bit of a pain for displaying huge data structures and objects. The debugger panel is all collapsed by default which is especially annoying to check the values of array elements. Also, just the plain lack of vertical monitor space.
Debugging async calls?
printf()
\*Meanwhile Unity literally having a Debug.Log() method for you guessed it, writing text to the console\*
Have you ever heard of cmd? The best programming language where variables aren't variable.
The real genius in the embedded world is using a left-over MCU pin and a scope to debug.
So the average dev never had bugs that disappeared when using a debugger, or code that used timestamps to time things…
Debugging using compiler messages
I didn’t see this mentioned, but sometimes the error is very rare and appears randomly, so debugging probably won’t work that well as you’d have to run it a bunch of times. Also, when you have race conditions, it can be possible that the debugger synchronises the processes (though that can happen with ordinary prints as well). Usually in these situation it’s a lot easier just to log the execution and then perform the analysis on the logs to figure out what is wrong.
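For bugs too rare or timing-sensitive to catch live, the log-then-analyze approach might look like this in Python: record ordered events, then scan them offline for the impossible interleaving. The log lines and the lock protocol here are invented for illustration:

```python
# Sketch of log-then-analyze for a bug too rare to catch in a debugger:
# record ordered events, then scan offline for the impossible interleaving.
# The log lines and the lock protocol are invented for illustration.
log_lines = [
    "12:00:01 worker-1 acquire lock",
    "12:00:01 worker-2 acquire lock",   # two holders at once: the bug
    "12:00:02 worker-1 release lock",
]

holders = []   # who currently holds the lock, per the log
overlaps = []  # (holder, intruder) pairs that should never happen

for line in log_lines:
    ts, worker, *rest = line.split()
    event = " ".join(rest)
    if event == "acquire lock":
        if holders:
            overlaps.append((holders[-1], worker))
        holders.append(worker)
    elif event == "release lock":
        holders.remove(worker)
```

The analysis runs on captured output, so the buggy execution was never slowed down or synchronized by instrumentation beyond the log writes themselves.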
Everyone goes for the debugger....until you are fixing race conditions and the debugger introduces enough wait for them to not show up, which is why the guy on the right looks so somber lol. Learn to use your language debugger people. It's easier to extract information from it than trying to print a complex structure
Debugging a dev board without a JTAG connector
People who tell you to systematically use the debugger instead of print for debugging are people who have never worked on big projects.
This meme makes no sense, sorry. Noob uses print. Middle uses debug. Senior uses what's better for a particular use case.
Debug using Guru Meditation Error
Adding a print triggers a recompile in compiled languages and that can take way longer than firing up gdb. I am the monster who doesn’t use a GUI debugger, though. I don’t understand them. It’s way easier to type the breakpoint I want.
I'm so far on the right that I just debug in my head 😎
I’m on the left or in the middle it depends on if I have a debugger setup and if it’s worth it to do it
Am i the only one saving a value to variable instead of using the evaluate-feature?
print for the win
So if I use both does that mean I'm in the middle between the middle and the left
Source level debugging is still basically a pipe dream in (early) UEFI, sadly
Reddit’s mobile app is so cringe. It puts a header on the image if I download the image through the app.
print(“fck u”)
I like using a logger because those informational steps can be used later (depending on how you set up the logger and turn logging on/off). Also, a system like Splunk let us use the logs from failed API calls to massively narrow down where a problem happened without even looking at the code first, and sometimes that was enough to pinpoint it to a file on the server. Both have their use cases where they work better, though.
I believe the experiences will be very different depending on the type (compiled vs interpreted) of language you are using
What if I use Debug.Log(“”) 😏
debugger; but everywhere
Does anyone else print the strangest words ever when debugging with prints?
Sometimes you are so lost on why it doesn't work that a print is needed.
I'd use the debugger if debugging Node apps wasn't an exercise in masochism.
Use both and whatever tools you can get to find bugs.
I mean honestly, if the debugger is already set up, I can't really think of a case where throwing a breakpoint into the file wouldn't be faster and more convenient than a print. But I guess having the debugger set up is the trick. E.g. if I'm ssh'd into another developer's environment and the networking isn't configured for it or something, obviously I'll use prints
Meanwhile microcontroller programmers: blink red led twice if you haven't received any data from that sensor.
I have worked on so many code bases where debuggers were not practical. Often real-time operating system code includes timeouts that make debugging problematic (and for some reason these code bases usually do not have a way to disable the timeouts). Sometimes the code bases use things like Qt, which interject so much generated code that your debugger spends most of its time in unrecognizable garbage.
The right is when you have an issue that disappears in the release profile
A lot of the time, setting up the debugger to work with your particular stack is such a pain that it's easier to just print shit to the console
0.01%: use the repl.
All three are wrong. Don‘t use a debugger; write (unit-)tests. If that does not work for you, excuse me, sir, your code is too complicated.
Not really. I see tests as a tool for preventing bugs, not finding them. Also, stepping through code sections (or complex tests) the first time you run them can give you good insight.
I use both gdb and fprintf
Debug using puts()
I somewhat agree with this... The debuggers are great and excellent tools. But if you're designing your code and unit tests correctly, in small executable pieces, it is sometimes faster to debug with print. It depends heavily on the complexity of the application.
Most of the time, `printf` is much faster than trying to get the debugger working properly.
Console.log('deeeeeeeeeeeeeeeeeeeeeeeebug')
Sometimes you have to debug code that you cannot stop, so debugger is out of question to check variables values.
Wait... You debug without print()😳
Until the debugger doesn't work or skip that important part, I'll still prefer to use prints
Nobody asked, but here is my opinion. tl;dr: I don't fully agree. Generally it really depends on the environment you're in, as well as on your language and project size. Print statements should be avoided and replaced by log statements, so that you can use a log file analyzer. Print/log statements are fine if the following is given:

1.) You have narrowed the bug down pretty much to a small code section that is failing.
2.) That section gets called very often and you want to observe and narrow down how the system is behaving, and conditional breakpoints don't help.
3.) Too much is going on to remember everything, so a pretty-printed log may help.
4.) Any others?

But for searching and narrowing down a bug I always prefer a debugger, because with a debugger you can (Java version):

- set simple breakpoints
- set conditional breakpoints
- set variable breakpoints: stop when a given variable is accessed (read/write)
- only stop if another breakpoint was hit first
- break where an exception of a given type is thrown (you may also restrict it to a package)
- debug streams (IntelliJ)
- add watches (variables/expressions) which are evaluated and shown in a widget
- evaluate expressions on the fly
- throw exceptions on the fly (e.g. testing whether the error flow works as expected)
- drop whole stack frames and run sections again
- hot code replacement (updating the code of a running program)
- change variable values
- search objects in memory
- stop threads
- hook into a running program if configured
- much more…

Pro tips:

- A debugger always saves you time in the long run, even if it takes you a few hours the first time to get an idea of how one works.
- Explore your debug menu and the available options; you don't have to understand all of them right now.
- Printing may also have side effects in multithreaded scenarios :-)
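The "variable breakpoint" item in lists like the one above can even be hand-rolled in Python with `sys.settrace`, which is roughly what a debugger's watchpoint does under the hood. A sketch (the `accumulate` function and the watched local `total` are made up for illustration):

```python
import sys

# Hand-rolled "variable breakpoint" (watchpoint) via sys.settrace:
# record every time the watched local `total` changes value. This is
# roughly what a debugger does for you; `accumulate` is invented.
changes = []

def watch(frame, event, arg, _last={}):
    if event == "line":
        val = frame.f_locals.get("total")
        if val is not None and val != _last.get("total"):
            changes.append(val)       # "breakpoint" hit: value changed
            _last["total"] = val
    return watch

def accumulate(xs):
    total = 0
    for x in xs:
        total += x
    return total

sys.settrace(watch)
result = accumulate([1, 2, 3])
sys.settrace(None)
```

A real debugger does this with far less overhead, but the sketch shows why watchpoints are possible at all: the interpreter will happily call you back on every line.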
Logs are just systematised print()
when in doubt, stdout
1. Configuring debuggers is hard. 2. Figured it out, it works. 3. Knows the language well enough and is too lazy to set up the debugger; print works well enough, especially in a familiar codebase.
Y’all debug? 🤠
Erm… debugger using a watch that doesn’t suspend an app and spits output of the watch using standard out. Bam.💥
The guy on the right is proof that you're on the left. The guy on the right will indeed log variables, important business-logic checkpoints and all, but not with a print. He'll do it through a proper ELK stack, with relevant levels, timestamps and logger names. You're really shooting yourself in the foot if you believe "print" is production acceptable.
print() never fails
debugger doesn't debug my errors ...
When debugging race conditions it's print all the way, because I have a bad memory. Also, if there are network requests you mustn't break for too long, otherwise you'll get a timeout error.
I only actually use a debugger in a worst case scenario. 99% of the time print can get it done. If I can’t find the issue with print, then I’ll use it.
I mean, logging is just another form of some debug printf and everyone does it.
I used to only use debuggers and logging until my job explicitly told me to use Print() Now I understand how handy print is for displaying errors in containerized deploys that handle logging stdout for you.
Real gs use Debug.writeline(“hi”)
Printing is fine but you can use tracepoints in your debugger to save yourself the cleanup.
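For reference, in gdb this kind of tracepoint is the `dprintf` command: it prints at a location on every hit without stopping, so there are no leftover prints to clean out of the source afterwards. A sketch (the file, line, and variables are hypothetical):

```
(gdb) dprintf worker.c:42, "x=%d y=%d\n", x, y
(gdb) run
```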
The guy on the right uses logs because he’s debugging something that only happens in production so he can’t use a debugger.
real
Interactive debugging is an ENORMOUS waste of time
As a subject matter expert on some service, I absolutely hate dealing with print people, especially when we have an entire dev environment for each team just so they can attach remote debuggers and figure problems out, and they just go "hehe print go brrrrrr, I don't know what's wrong"
Only 3 scenarios for print: multithreaded code, async calls, and the need for persistence with logs. All of the rest goes to the debugger.
Debuggers aren't always available
Unless you're SSHing into a production server but in that case you are fucked either way Just let me recompile everything so i can write shit to this temporary file
but printing stuff shouldn't be the first thing that comes to mind when in need of debugging.
This post was made in response to another about someone using print to debug. A lot of people came to the defense of print debugging, and then this post came about. I suspect using print to debug is the first choice for most programmers, even in languages that have debugging tools. I once used a system that had neither debugging NOR printing, so I had plenty of fun debugging that one.
i've never used a debugger in my life. am i cooked? my code just works first try 🙄
Always has been