Yeah. I’m starting to think the misspelling is not deliberate, but ironic - it’s one thing to have guns and a written warning saying you will use them. It’s another to loudly convey that you’re this dumb and also have guns.
Put any distro in front of me and, provided I don’t need to master it, I’m good. Ubuntu is fine. Debian is fine. Red Hat is fine. Fedora is fine. I even have a tiny low-end system that is running Bodhi. Whatever. We’re all using mostly the same kernel anyway.
90% of what I do is in a container anyway so it almost doesn’t matter; half the time that means Alpine, but not really. That includes both consuming products from upstream as well as software development. I also practically live in the terminal, so I couldn’t care less what GUI subsystem is in play, even while I’m using it.
Thank you! I want to ::sniff:: thank my coach, the whole team ::sniff::, and especially my mom for helping make this happen! ::sniff:: Love you mom!
Here’s a version for my fellow gluten-free peeps:
Honestly, this is why I tell developers who work with/for me to build in logging, day one. Not only will you always have clarity in every environment, but you won’t run into cases where adding logging later makes races/deadlocks “go away mysteriously.” A lot of the time, attaching a debugger to stuff in production isn’t going to fly, so “printf debugging” like this is truly your best bet.
To do this right, look into logging modules/libraries that support filtering, lazy evaluation, contexts, and JSON output for perfect SIEM compatibility (enterprise stuff like Splunk or ELK).
Heisenbugs are the worst. My condolences for being tasked with diagnosing one.
Last time I did anything on the job with C++ was about 8 years ago. Here’s what I learned. It may still be relevant.
- const, constexpr, inline, and volatile are all about steering the compiler to generate the code you want. As a consequence, you spend a lot more of your time troubleshooting code generation and compilation errors than with other languages.
- valgrind, or at least a really good IDE that’s dialed in for your process and target platform. Letting the rest of the team get away without these tools will negatively impact the team’s ability to fix serious problems.

1 - I borrowed this idea from working on J2EE apps, of all places, where stack traces get so huge/deep that there are plugins designed to filter out method calls (sometimes, entire libraries) that are just noise. The idea of post-processing errors just kind of stuck after that - it’s just more data, after all.
“this is as far as you can go if you don’t want to get involved in management”
Yes. That exactly. This typically comes with a nice perk: Principals are supposed to have the same clout as lower-level managers. Which is to say they usually report to Directors or even the CTO in some organizations.
Another one is “Independent Contributor,” which is similar but, as the name would suggest, is very self-sufficient and does not work on (or for) a team. They’re basically one-person engineering shops and are expected to perform well everywhere in the company’s tech and talent stacks. As a result, ICs are very rare.
The other pivot point is The Pragmatic Programmer, which is totally understandable.
That book does a good job of grounding the reader through examples and parables from everywhere else but IT. By the end, you realize that good software engineering makes the best of general problem-solving skills, rather than some magical skillset peculiar to computing. You wind up reaching a place where you can begin to solve nearly any problem through use of the same principles. So @codex here, perhaps effortlessly, went on to management instead.
Same. Let’s rock.
Printed on the bomb:
| In case of accidental detonation: have a nice day. Thanks for reading.
I’ve tried to enjoy IPAs, really. I’m not discounting the role of interesting terpenes and flavonoids here, but the raw, in-your-face, excessive bitterness of IPA-level hops pushes all that great stuff so far from the stage of my experience that it’s all left waiting in the lobby to get seated. For me, it’s like someone mixed LaCroix, light beer, and a drop of dish soap in a glass. Every time.
Oh no, I have to press up 200+ times if we’re counting all the detritus and failure in my command history.
There is an advantage to this approach though: fewer errors. You’re plucking a known working command from a list instead of manually typing a (possibly) broken version of it. Worse yet is when it’s a command where typematic mistakes cause unintended side effects like data loss. So, mashing up 100 times can be pretty smart, especially if you’re not a great typist.
Upvoted for the dancing and singing emoticon. Nice art.
I actually tried to use marketplace a few weeks ago. It was an unmitigated disaster. People either didn’t respond, had stale posts for items, or couldn’t get their act together to have a conversation (even with 12 hours between messages) about how to get shit out of their house. I have never yearned for old-fashioned yard sales so much.
This thought exercise has been explored in Dead Boy Detective Agency. You probably don’t want a demon using your body as a hot-rod.
Estimates for CD lifespans were in the 100-year range, but the only way to really put that to the test is to try them in 100 years.
FWIW, I have some CDs that are pushing 40+ years at this point. They work fine, scratches and all.
In my experience, CD-Rs and other recordable media can’t handle a single summer in a hot car. Mistakes were made. If you have your hands on anything like that, I agree: focus there first for your data hoarding activities.
I completely understand the sentiment here, but I have to respectfully disagree with part of your argument.
The internet itself is this fundamentally ephemeral thing. Our relationship to it, as a medium, has persisted for decades at this point and may continue to do so for a long time. At the same time, it lives and dies by the whims of corporations and millions of other users, and so its trajectory is largely beyond the control of any one individual. It’s like this by design: properties like distributed control, flexible routing, and easy duplication/destruction of data give it resilience but also make it temporary. This also makes it a volatile place to keep things permanently, which is a real problem for a lot of different mediums.
With that in mind, there exists a lot of media today that has no non-digital equivalent. So, having a local data cache you control - DVD, Blu-ray, forever moving data between online services, even a personal NAS - is the only hedge you can get for the net’s volatility. And even then, that medium has a service life.
So I don’t think it’s a shame, per se, that things are like this now. Rather, it always has been. It’s never been easier to consume (and pirate) media online, but the underlying rules have not changed.