Engineering, Coding, and AI
There’s a distinction one of my senior developers used to make when we were reviewing another developer’s work: “I don’t know if we should merge this; it looks a lot like he’s coding, not engineering.” What he meant was that, in isolation, the code probably did what it was supposed to do, but it neglected some important part of the big picture of the overall application. In a CMS application, maybe the end user would have trouble editing the content; in a scraper, maybe it forced some other part of the app into doing more translation work than it really should have to. No enormous harm, but while it accomplished the basic function, it made the codebase just a little worse.
This sort of story is often told as a cautionary tale about using Gen AI in your development practices. Sure – ChatGPT or Copilot can put together perfectly good code, but it’s going to miss important context and introduce subtle bugs. And in general, I think Gen AI optimists have a pretty good answer to this objection: you’re using it wrong. If you give your favorite AI the context, it will successfully, as a rule, take it into account. Even better if you give it a fuller prompt, instructing it to pay close attention to exactly that context.
But it’s important to keep in mind some human limitations when using Gen AI at scale. Yes, in principle, the tool itself can hold the context in view and advise perfectly reasonable architectural directions. But when it is employed by a more junior developer, the problem is exactly that they don’t know what they don’t know. The Gen AI tool can preserve the context, but only if the user instructing it knows the context of the application in the first place.
This is why I always advise teams that use Gen AI in their work to review its output skeptically. Don’t treat ChatGPT like an engineer; treat it like a coder.
Want to talk about the proper use of AI? Book a call with me here!