Anybody who thinks “quickly churns out high performing code!” and “mass-manufactures vulnerabilities and disaster UX designs” are mutually exclusive has clearly never met a programmer in their entire life.
@baldur makes me think of one dev on my first project as a lead. I was desperately glad to have him because he worked SO VERY FAST, but we eventually had to refactor everything he did.
I keep seeing people respond to observations that these tools output highly vulnerable, flaw-ridden code by claiming that because the output runs faster, it must be high quality. 🤷🏻‍♂️
@baldur I've noticed the "it runs faster!" people don't also say "it runs correctly!" -- it's pretty easy to make a complex function run faster if you actually just do it wrong and skip most of the steps. (Or, in some cases, all of them...)
@baldur in the words of @pluralistic "you can now create technical debt at scale!"
SMH have these people never heard one of the great quotes of software engineering lore: "if it doesn't have to work I can make it as fast as you like"?
Isn't that from The Mythical Man-Month or something?
@petealexharris @baldur
Your comment triggered a somewhat related memory: I had a university mate who made an exxxxtremely efficient compression algorithm; as long as you never had to decompress the payload, it was very impressive. 😅
@baldur Funny how efficiency never seems to be an issue when running LLMs for just about any use-case imaginable, but for justifying the use of LLM output, it suddenly becomes *very* important.