CrankySec

Fork that.

Right off the bat, let me state for the record that I have, in fact, used LLMs. I still do. They're good for some inconsequential bullshit you don't want to do, like "Write a short bio about yourself!", or a RACI matrix that would otherwise take 40 hours to produce just so it can sit somewhere being used by no one, or your "goals" as an individual contributor (lol), or any OKR.

It's kinda like ordering a couple of Chicago-style hot dogs, a Double Fatso with Cheese, and fries from Fatso's Last Stand—true beauts, by the way—when you can't be bothered to cook something a little healthier. It's not good for you, it's not good for the environment, and it's not good for public health. It is convenient, and it's probably good for pharma companies that charge a trillion dollars for insulin, but that's another matter.

It's a lot of damage for very little gain. And that's not even touching instances where you have to bend over backwards and change everything about the way you work just to make things easier for the fucking thing you created to do the job in the first place.

Don't want to cook? Order some takeout! But it would be even better if you could come over here, chop the onions, flip the burgers, man the fryer, assemble everything, bag it, pay, take the payment, tip, and drive back home to deliver it to yourself.

[Image: Leo pointing meme]

"We got a lot of money and told everyone that this technology would be able to do all sorts of things, doing the heavy lifting, handling time-consuming and boring things humans do not want to do. The only thing we ask is for the whole world to change everything about the way we do anything in order to make it easier for this incredible technology that demands terawatts of energy and cubic kilometers of water to spit out some CliffsNotes. Ain't that a deal?" - Andrew Ng, probably.

If you piece it all together, you'll probably come to the same conclusion as I did: the economy is driven by a handful of dudes in Patagonia vests with a six-figure cocaine budget: Wall Street aNaLySts. Otherwise healthy and profitable company does earnings call, shit's looking good, and some fuck who will need to replace large portions of his skull with platinum because it's been utterly corroded by cocaine asks "But what about... AI?" Stock tanks, CEO who gets paid in shares panics, R&D money that would go to, you know, products, gets diverted to AI, everyone who's not AI is fired, your 401(k) goes to shit, your energy bill goes up because these HyPeRsCaLeRs need more power than it would take to kickstart fusion in a main-sequence star, and, if you manage to keep your job, your boss will say "Either embrace AI or get out of this career." Don't threaten me with a good time, bro.

But this is CrankySec, so let's talk sec. Let's, if you'll indulge me, perform a little bit of speculation. Unlike some popular cybersec influenzas out there who will say "I give AGI being achieved by Q3 2025 a probability of 'maybe?'. And everyone being replaced by LLMs by FY 2026 a 'who knows?' chance.", we don't assign any probability at all because we're not stupid cunts, and we know this speculation is going to come back to bite us in the ass as is.

Let's say your employer hired someone for, let's veer into the absurd here for dramatic effect, 250 million dollars a year. What's that? Companies are actually paying 250 million US dollars for one dude? Motherfucker better be batting .500. What's that? We're not talking about the Los Angeles Dodgers, and that money is not for playing baseball? AI research? Damn. Reality is way more absurd. Let's roll with that. They hire someone for a quarter of a billion dollars, fire everyone in, I don't know, cybersecurity, and replace them with some bot.

People from developing countries, where savvy folks have few job prospects, know what happens. Savvy peeps with plenty of skills, no money, and nothing but time will find a way to make do. Add a dash of "I got replaced by AI, but I know where the skeletons are!", and you'll have an army of people with the skills and the insider information to wreak some havoc. But you won't really know about that, will you? Who's going to prevent, detect, respond to, fix, and report those issues? The same unreliable bot? Shit's going to be on fire, and no one will even know. Not that we know a lot now, but you catch my drift here. Things are going to get even more opaque.

Desperate people with nothing to lose will find a way to subvert systems, and no one will be there to stop them. You might even be tempted to side with these hypothetical post-modern Robin Hoods—I know I am—until your bank account is drained and you have no recourse because there's no one there to help you except some LLM agent that will talk around you, refuse to acknowledge the problem, and end the chat after 5 minutes because the company doesn't want to spend too many tokens on shit like this.

I truly believe this is a fork in the road for our profession, but the path labeled AI is a fucking autobahn with beautiful, pristine tarmac, and the other one is a dirt road that's also a minefield. We all know which path the people who are driving this thing will take. Let's just hope this bubble bursts before we get to that point, and we can all look at this technology and put it where it belongs: in our toolboxes, alongside the other tools that we use to make our lives better.

Did I use any LLM to write this? I did not. All dumb takes, embarrassing typos, and misplaced commas are my own. Certified organic.

Join our Discord, will ya?