AI’s Impact on Software Development: Where We Are & What Comes Next

Categories: AI and ML, Technology

The explosive popularity of OpenAI’s ChatGPT and the impending launch of Bard, Google’s long-awaited conversational AI, have set the world ablaze with speculation about just how powerful artificially intelligent systems have become. It’s a surprising development for the general public; however, for many of us this was never a question of “if” but “when.” 

Back in 2009, I published a blog post titled “Software, the Last Handmade Thing.” We recently republished that post, just as it was written more than 13 years ago. 

In that earlier blog, I made two predictions:

  1. “[I]n the future, ‘programming’ will be done at a higher level, with better tools, more reusable frameworks and even perhaps artificially intelligent assistance.”
  2. “At some point, machines will probably become so smart, and the collection of reusable frameworks so deep, that artificially intelligent systems can assemble better software from vague requirements than people can.”

I think we in the software community can agree that prediction #1 has manifestly come true, and continues to come true. 

Programming is indeed being done at a higher level with AI assistance.

I also think we can safely predict that, now that AI is ‘real,’ our ability to create software in partnership with AI systems will greatly expand what an individual engineer can accomplish. 

We can imagine simple improvements such as better and more nuanced recommendations for choices on reusable components, as well as more profound ones, like complete system or subsystem generation. This would be along the lines of what we had once hoped Rational Rose could do, but starting with natural language specifications (“specs”), and using AI assistance. 

All of these activities would still need human developers at their core, however, to develop those natural language specs, and to ensure that the software being produced was actually solving the problem it was meant to. The major challenge, predictably, will be ambiguities — or AI-perceived ambiguities — in the specs.

So is AI a threat to career software developers?

While a ChatGPT-type code generation AI might look like a threat to software development as a career, in general, previous productivity improvements (even significant ones) have not decreased the total number of engineers required or the type of salaries they command. Quite the contrary. The more productive the engineer, and the more complex the problems they can tackle, the more demand for engineering talent there has been and, in my opinion, will continue to be. 

There will certainly be a shakeout as lower-skilled engineers who currently perform more routine, repetitive coding tasks are replaced by better tools, including AIs that can generate entire simple or niche-specific software systems. We already see this happening as improved non-AI tooling, such as robotic process automation systems, IFTTT systems and others, eliminates many previously routine, repetitive coding tasks. 

However, those who can master the new tools and AI-amplified technology will now be enabled to address bigger and tougher engineering challenges. Given the great need for high-quality software that exists in the world, I believe that even with AI assistance, the total number of human engineers will continue to grow for years to come — along with the salaries they command.

Will AI improve on our ability to write software from vague requirements?

I do think that, at some point, prediction #2 will come true: AIs will do a better job at writing software starting from vague requirements than humans can do. 

I think this is especially true given an iterative approach to such software development in which an AI will generate a system, then humans and other AIs evaluate it and refine the ‘specs’ accordingly. Another system will be generated from the new specs, and the cycle repeats until the desired system can be deployed. In fact, at a lower level, such self-annotation and regenerative learning is part of what makes ChatGPT so powerful, especially for text generation. 
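The generate-evaluate-refine cycle described above can be sketched as a simple control loop. This is a toy illustration only: `generate_system`, `evaluate` and `refine_spec` are hypothetical stand-ins, not any real AI API, and the “AI” here merely turns explicit spec items into features so the shape of the loop is visible.

```python
def generate_system(spec):
    # Stand-in for an AI code generator: it only implements
    # requirements that the spec states explicitly, ignoring vague ones.
    return {item for item in spec if not item.startswith("vague:")}

def evaluate(system, desired):
    # Stand-in for the external reviewer (human or competing AI):
    # reports which desired behaviors the generated system lacks.
    return desired - system

def refine_spec(spec, gaps):
    # Disambiguation step: each reported gap becomes an explicit requirement.
    return spec | gaps

def build_until_accepted(spec, desired, max_rounds=10):
    # Repeat generate -> evaluate -> refine until the reviewer
    # finds no gaps, or we give up.
    for round_no in range(1, max_rounds + 1):
        system = generate_system(spec)
        gaps = evaluate(system, desired)
        if not gaps:
            return system, round_no
        spec = refine_spec(spec, gaps)
    raise RuntimeError("spec never converged")

# A vague spec converges once the reviewer's feedback is folded back in.
spec = {"login", "vague:fast search"}
desired = {"login", "indexed search"}
system, rounds = build_until_accepted(spec, desired)
print(system, rounds)  # the loop closes in two rounds
```

The essential point the sketch makes is the same as in the text: the generator alone can never terminate the loop, because only the external `evaluate` step knows what was really desired.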

However, as I point out in my original 2009 blog, a key issue with software in particular is that determining when it’s “finished” and works is an open-loop task. That is, someone (or something) external to the developer of the software needs to make this call. 

This is because, except in the case of a typo, unexpected interaction, or other careless error (which AIs will presumably eliminate), the person or AI producing the software is already implementing its own understanding of the specs. Barring such accidental errors, the coder is therefore fundamentally incapable of determining when the finished system departs from what was actually desired, because the coder already believes they understood the spec and did the right thing. 

It takes an external entity with a different perspective on the specs, such as a product manager or an end user, to find that the specs themselves (or the coder’s understanding of them) were in error, and to fix the specs accordingly.

Recommended reading: Testing in Production: A New Paradigm for Shift-Right

The ‘true’ set of specs for any software system is unknown, and to a large extent unknowable, until a system (or a portion of a system) has actually been built. This is a challenge that, I believe, can eventually be overcome by AIs. 

To take it out of the software world for a minute, could an AI write a novel that is as engaging as one from your favorite author? Would the depth of characters and believable (or suitably unbelievable) situations be present in that work? I would argue not yet, but that it’s possible. By generating enough such novels, getting them critiqued by enough human readers, and trying again, I would argue that an AI system could, in time, give human authors some competition.

Similarly, in software, the key issue is closing the loop on the output. 

Is the generated system what I, as a product manager or end user, really wanted? This is not something that can be answered by the person or system generating the software; it takes an outside entity, whether human or competing AI, to determine. Creating such ‘unambiguous’ specs to be executed by a machine, be it an API or a CPU, is fundamentally an engineering task. At a lower level, it’s what engineers do today when they code.

For simple systems with clear and unambiguous specs, AIs will become powerful coders very soon, I believe. 

But where complex systems are concerned, human engineers, product managers and related disciplines are here to stay for some time to come. Unless, of course, the end user also happens to be an AI…
