Hahahahahahahaha

"Coding assistants are now generating code that fails to perform as intended, but which on the surface seems to run successfully, avoiding syntax errors or obvious crashes."

mastodon.social/@ieeespectrum/…

#ai #coding #syntaxerror #gigo


In reply to AI6YR Ben

Had my first meeting with a “can y’all tell me why this source code doesn’t work?” a while back.

The gorgeousness of the code, the dearth of that same code, and the evasiveness of the questioner were all clues.

That, and the fact that the code compiled and ran without error, did nothing useful, and was far too little code for the intended task.

“Where’s the rest of the code for [large task]?”

“That’s it.”

[not nearly enough code, even with very abstracted frameworks]

LLMs can double the effort of answering questions, too: once to answer the question, and then again to explain why the slop is outdated, misdirected, confused, or just plain wrong.

There certainly are cases where this stuff is useful (vulnerability scanning of curl and OpenSSL with Aisle included), but I’m not at all certain LLMs as currently applied aren’t a net negative.

And we’re deep in the “throw AI at the wall and see what sticks” part of the hype cycle, unfortunately.