I know profilers and debuggers are a boon for productivity, but anecdotally I’ve found they’re seldom used. How often do you use debuggers/profilers in your work? What’s preventing you? Conversely, what enables you to use them?
How do people do stuff without debuggers? :D
Another way to develop is to iterate within a unit test that you don’t plan to keep around.
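For instance, a minimal sketch with pytest, where the module and function names are hypothetical stand-ins for whatever you’re iterating on:

# scratch_test.py - a throwaway test used as a REPL-with-memory; delete before committing.
# `parse_order` is a hypothetical function under development, stubbed here so this runs.
import json

def parse_order(raw):
    data = json.loads(raw)
    return data["id"], data["items"]

def test_scratch_parse_order():
    order_id, items = parse_order('{"id": 42, "items": ["a", "b"]}')
    # Assert whatever you’re currently curious about; tighten as you iterate.
    assert order_id == 42
    assert len(items) == 2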
Uh, I set a breakpoint and run the app?
To add a bit more context, it’s more difficult to configure a debugger when the application is running within something like Docker. How difficult? That depends on the language and tools you’re using.
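In Python, for example, one way around that is a remote-attach debugger such as debugpy: the containerized process opens a debug port and the IDE attaches from the host. A rough sketch - the port is an assumption and has to be published in your Docker config:

# Inside the containerized app; needs `pip install debugpy` in the image,
# and the port published, e.g. `docker run -p 5678:5678 ...`.
import debugpy

debugpy.listen(("0.0.0.0", 5678))  # listen on all interfaces so the host can reach it
debugpy.wait_for_client()          # optional: block until the IDE attaches
debugpy.breakpoint()               # pause here once a client is connected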
I’ve seen the fun of “prints everywhere” in production when a colleague forgot to remove a “Why the fuck do you end up here?” followed by a bunch of variables before committing a hot-fix… Customers weren’t too amused…
Edit: That was a PHP-driven web shop and the message ended up on top of the checkout page.
That must’ve prompted a bit of an existential crisis in some shoppers. I can see going to purchase some useless consumer shit online, getting a message “Why the fuck do you end up here?”, and just closing my browser and rethinking my life decisions.
@Nicktar I usually prefer the prints-everywhere approach too, but of course printing to STDERR, not STDOUT - so it ends up in a log and not in the program output 😅 Won’t make that mistake again!
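For anyone following along, the distinction in Python terms (the logging module gives you the same separation plus levels for free):

import logging
import sys

print("normal program output")                    # STDOUT: the page/pipe
print("why do we end up here?", file=sys.stderr)  # STDERR: the log

# Or skip raw prints entirely: logging writes to STDERR by default.
logging.basicConfig(level=logging.DEBUG)
logging.debug("ends up in the log stream, not in the program output")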
We run almost everything on bare metal during development. The CI/CD pipeline runs containerized and also produces a container with the application inside, which then gets deployed to production. But we don’t debug on production, so that isn’t an issue.
Uh, what? How do people do stuff without debuggers? :D
printf.
I use debuggers all day, every day. If I’m running something in development, there’s a very good chance I have it connected to a debugger. I also use it whenever I encounter unexpected behavior in production (we use our own product for work too).
The profiler is a lot more specific and I haven’t used it in a while.
As a C# programmer I use the debugger every single day, since it’s as natural and easy to use as just running the application. I’ve grown spoiled, actually; when I program in Go or Rust I really miss the “it just works” debugger.
I can’t imagine programming without regularly pausing execution to inspect intermediate variables, run some quick checks in the immediate window, or set conditional breakpoints. I’m always a bit surprised when I remember there are people who don’t work like that.
Same here. The Visual Studio debugger is excellent, and there’s never a day that goes by without me using it.
I seldom use profilers because I seldom need to. It’s only useful to run a profiler if your program has a well-defined performance issue (like “the request should have an average response time of X ms but has one of Y ms” or “90% of the requests should have a response after X ms but only Y% actually do”).
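For the curious, that kind of targeted run is nearly a one-liner with Python’s built-in cProfile; `slow_handler` here is a made-up stand-in for whatever serves the slow request:

import cProfile
import pstats

def slow_handler():
    # Stand-in for the request handler with the measured response-time problem.
    return sum(i * i for i in range(1_000_000))

cProfile.run("slow_handler()", "request.prof")  # profile just the suspect path
pstats.Stats("request.prof").sort_stats("cumulative").print_stats(10)  # top offenders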
On the other hand, I use a debugger all the time. I rarely start any program I work on without a debugger attached. Even if I’m just running a smoke test, if it fails I want to be able to dig right into the issue without having to restart the program in debug mode. The only situation where I routinely run code without a debugger is the red-green-refactor cycle with running unit tests, because I’ll need to rerun those multiple times with a debugger anyway if there are unexpected reds…
What enables me? Well, there’s this prominent bug-shaped icon in my IDE right beside the “play” button, and there are the DevTools in Chrome, which come with a debugger for JS…
Running your code without a debugger is only useful if you want to actually use it, or if you’re so sure that there aren’t any issues that you might as well skip running the code altogether…
I have a tendency to just use console logging, and only use debuggers when things are starting to get hairy.
I’ve used a debugger only a handful of times in the last decade or so. The projects I work on have complex stacks, are distributed, etc. The effort to get all that running in a debugger is simply not worth it; logging and testing will do 99.9% of the time. Profiling on the other hand, now that’s useful, especially on prod or under prod load.
Really often, ever since Turbo Debugger in the ’90s - no other way to trace your code, step over/into, watch variables, etc. For compiled programs it’s necessary. For JavaScript I use print lol
Don’t forget being able to watch the stack in realtime, and run your code backwards to roll back its state!
At my last job, doing firmware for datacenter devices, almost never. JTAG debugging can be useful if you can figure out how to reproduce the problem on the bench, but (a) it’s really only useful if the relevant question is “what is the state of the system” and (b) it often isn’t possible outside of the lab. My experience with firmware is that most bugs end up being solved by poring over the code or datasheets/errata and having a good long think (which is exactly as effective as it sounds – one of the reasons I left that job). The cases I’ve encountered where a debugger would be genuinely useful are almost always more practically served by printf debugging.
Profilers aren’t really a thing when you have kilobytes of RAM. It can be done but you’re building all the infrastructure by hand (the same is true of debugger support for things like threads). Just like printf debugging, it’s generally more practical to instrument the interesting bits manually.
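The shape of that hand-rolled instrumentation, sketched in Python for readability (on a real MCU this would be C reading a cycle counter or toggling a spare GPIO, with the delta pushed out over UART):

import time

def parse(packet):
    # Hypothetical hot path standing in for whatever you’d instrument.
    return sum(packet)

def handle_packet(packet):
    start = time.perf_counter_ns()  # the "profiler": a timestamp...
    result = parse(packet)
    elapsed_us = (time.perf_counter_ns() - start) / 1_000
    print(f"parse took {elapsed_us:.1f} us")  # ...and a delta on your output channel
    return result

handle_packet(bytes(range(256)))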
I find debuggers are used a lot more on confusing legacy code.
Lately, monitoring tools such as OpenTelemetry have replaced a lot of my use of profilers.
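For reference, the span-based version of that looks roughly like this with the OpenTelemetry Python SDK (console exporter instead of a real backend; the span names are made up):

# Needs `pip install opentelemetry-sdk`.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)

# Each span records its wall-clock duration, so slow sections show up in the
# tracing backend much like they would in a profiler’s call tree.
with tracer.start_as_current_span("handle_request"):
    with tracer.start_as_current_span("query_database"):
        pass  # the work you want timed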
My primary languages are Java (for work), JavaScript (for work), and C/C++ (for hobbies). Earlier in my career, I used the debugger a lot to help figure out what was going on while my applications were running, but I really don’t reach for it as a tool anymore. Now I typically gravitate towards either logging things (at a debug level that I can turn on and off at runtime) or writing tests to help me organize my thoughts and expectations.
I don’t remember when, or if ever, I made a deliberate decision to switch my methodology, but I feel like doing things with logging or tests gives me two benefits. Six months later, I can look back and remind myself that I was having trouble in that area, and it can remind me how I fixed it, too. Those things can also serve as a sanity check: if I’m changing things in that area, I don’t end up breaking it again (at least not in the same way).
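The toggle-at-runtime part, in Python terms for illustration (the logger name is arbitrary):

import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("myapp.orders")   # hypothetical per-module logger

log.debug("raw payload: %s", {"id": 42})  # silent while the level is INFO

# Flip the level at runtime (say, from an admin endpoint or a signal handler)
# and the same statements start emitting - no redeploy needed.
log.setLevel(logging.DEBUG)
log.debug("raw payload: %s", {"id": 42})  # now visible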
With that said, I will reach for the debugger as a prototyping tool. IntelliJ IDEA (and probably other Java debuggers) lets you execute statements on the fly. I’ll run my app up to the point where I know what I want to do but don’t know exactly how to do it, pause it, and try different statements to see what comes back; that has sped up my development pretty well. Logging or testing don’t really apply to that kind of exploration, and pausing to run arbitrary statements beats the other options in how quickly and minimally the exploration can be done.
I recently started doing xeyes debugging.
We have so many debug logs that finding your own log line in all that background noise takes a non-zero amount of time.
So just inserting
system("xeyes");
is actually way easier to get instant feedback, and you can just use
system("xmessage msg")
if you need a message. That makes me so happy.
PyCharm’s debugger has been great for me recently; I love the feature where you can drop into an IPython REPL and interact with your program state.
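Plain pdb has a rougher version of the same trick, if anyone wants it outside the IDE:

def build_report(rows):
    breakpoint()  # honours PYTHONBREAKPOINT; drops into pdb by default
    return [r.upper() for r in rows]

build_report(["a", "b"])
# At the (Pdb) prompt, type `interact` to get a full Python REPL seeded with
# the current frame’s locals (`rows` here); Ctrl-D returns you to pdb.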
All the time. I deal with a lot of C# code that makes and responds to HTTP API requests, and being able to check whether requests and responses are properly formed without having to slap print statements everywhere is a godsend.
For microcontrollers, quite often. Mainly because visibility is quite poor, you’re often trying to do stupid things, problems tend to be localized, and JTAG is easier than a firmware upload.
For other applications, rarely. Debuggers help when you don’t understand what’s going on at a micro level, which is more common with less experience or when the code is more complex due to other constraints.
Applications running in full fledged operating systems often have plenty of log output, and it’s trivial to add more, formatted as you need. You can view a broad slice of the application with printouts, and iteratively tune those prints to what you need, vs a debugger which is better suited for observing a small slice of the application.
Always, but I’m a former Googler, so performance was always a huge concern with each and every frontend change we made.
Rendering something to a page without errors should be the starting goal, after which you shift focus to readability, accessibility, maintainability, interoperability - all the other stuff that actually matters more but is opaque to users. In most cases, though, it’s treated as the end goal, and all that other stuff isn’t considered at all.
IMO, the web would be a lot better if frontend devs spent more time learning how to use their tools instead of logging everything to the console.