Hahahahahahahaha

"Coding assistants are now generating code that fails to perform as intended, but which on the surface seems to run successfully, avoiding syntax errors or obvious crashes."

mastodon.social/@ieeespectrum/…

#ai #coding #syntaxerror #gigo

Kristian 🌒 shared this.

In reply to AI6YR Ben

@AI6YR Ben I recently had to sit through a management pitch where one of the bosses enthusiastically claimed how easy and fast programming has become these days with AI. Even his eleven-year-old son could produce very impressive software by vibe coding ... it was an online meeting, so he couldn't hear all the dev and ops attendees cry ...
In reply to AI6YR Ben

Had my first meeting with a “can y’all tell me why this source code doesn’t work?” a while back.

The gorgeousness of the code, the dearth of that same code, and the evasiveness of the questioner were all clues.

That, and the fact that the code compiled and ran without error, did nothing useful, and was far too short for the intended task.

“Where’s the rest of the code for [large task]?”

“That’s it.”

[not nearly enough code, even with very abstracted frameworks]

LLMs can double the effort of answering questions, too: once to answer the question, and then again to explain why the slop is outdated, misdirected, confused, or just plain wrong.

There certainly are cases where this stuff is useful (Aisle's vulnerability scanning of curl and OpenSSL included), but I'm not at all certain that LLMs, as currently applied, aren't a net negative.

And we’re deep in the “throw AI at the wall and see what sticks” part of the hype cycle, unfortunately.