
GenAI & the Midas Touch: Overcoming the Ambiguity of Natural Language


With all of its promise, Generative AI calls upon us to be careful what we wish for.


We all know the story of King Midas: Though already extremely wealthy, he was granted his fondest wish – that everything he touched would turn to gold. Initially, he was delighted with this gift, but he soon realized that he could not eat or drink because his food turned to gold as soon as it touched his hands or lips.

Finally, when his precious daughter jumped into his arms and instantly became a golden statue, he realized that what seemed like a great blessing was, in many ways, a curse. He was overjoyed when his wish was revoked, and his daughter (and presumably his food) were restored to their normal state.

The moral of the story: Be careful what you wish for.

In many ways, we face the same situation as King Midas in the age of GenAI. The promise of this technology is that many of our wishes, once expressed in the form of a “prompt,” can be instantly granted. Today, those wishes include unparalleled access to and summarization of information, the creation of custom textual and graphic content, and even the automatic generation of software code.

It’s also not too far-fetched that those same verbal instructions will soon begin to shape the physical world around us, as they did for King Midas. This is not speculative; even today, GenAI is being used to take verbal instructions and generate the code needed to control industrial robots, which, in turn, build products. In the immediate future, this GenAI-driven fabrication will be done using robots, 3D printers, and CNC machines housed in factories. However, it is not much of a stretch to think that manufacturing will eventually be “democratized” and begin to move into our homes and shops as well. We simply describe the clothes or other items we want, and our machines produce them.

Generative AI Systems Will Give Us What We Ask For

While these generative systems are near-miraculous in many ways, the problem is that, like King Midas, they will give us what we ask for. They will generate the code we tell them to produce, the story we tell them to write, the picture we tell them to draw, the goods we ask them to produce, and so on. 

In the longer term, this may not be so. Our systems will surely grow more adept at reading our intent rather than following our literal instructions. (In software design circles, this used to be called “DWIM,” for “Do What I Mean.”) For now, however – and for at least the next few years – our increasingly smart systems will largely give us what we ask for rather than intuiting what we really want.

A key reason disciplines like engineering and law exist in the first place is that experience has shown it is very challenging to describe specifically what you want.


In the case of King Midas, the unforeseen consequences of his ambiguous verbal ‘prompt’ are obvious from the fable. But when we, as laypeople, try to build a complex Excel spreadsheet or draft a legal agreement, we rarely get it right the first time. Indeed, it would be very surprising if a computer – or another user – understood our spreadsheet the way we intended on the first attempt, or if another reader of our contract interpreted every provision the way we meant.

Those variances stem from how challenging it is to unambiguously express what we want and expect, whether to a machine or to another person. AIs, at least in the short run, will face the same challenges that we humans do in figuring out the user’s intent.
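The gap between literal instruction-following and actual intent can be sketched in a few lines of Python (a toy illustration; the function name and the list of items are hypothetical, not from any real system):

```python
# Toy illustration: a machine that executes a literal instruction exactly,
# with no model of the speaker's intent or common-sense exceptions.

def turn_touched_things_to_gold(touched_items):
    """Apply the wish literally: everything touched becomes gold."""
    return {item: "gold" for item in touched_items}

# Midas presumably meant "the treasure I pick up," but the literal
# instruction makes no exceptions:
result = turn_touched_things_to_gold(["coin", "goblet", "bread", "daughter"])
# 'bread' and 'daughter' are transformed along with the treasure,
# because the wish never said otherwise.
```

A human listener would infer the exceptions from common sense; a literal executor applies the rule uniformly to everything in its input.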

Precision, Refinement & Specificity are Needed More Than Ever in a GenAI World

In software, through the years, tools and approaches like IFTTT (“if this, then that”), Low Code / No Code, graphical programming, and others have attempted to make it easier for humans to describe what they want a software program to do. These were successful in small-scale systems and restricted domains but had limited success in the general-purpose, complex engineering space.

Consequently, they were not generally used to build more complicated systems. This is primarily because the simplified languages these paradigms supported were not expressive enough for the person using them to ‘communicate’ complex ideas to the computer as clearly and concisely as a conventional computer language allows. Classic languages like C, C#, and Java are designed for great specificity when communicating with a computing device.

The downside is that these precise programming languages require much study and rigor. Still, with effort, they can be made to communicate the programmer’s intent quite clearly, which is why they are widely used in complex systems.

The Low Code / No Code paradigm and graphical programming tools faced another challenge that, I predict, GenAI will face as well: they were positioned as ‘democratizing’ software development. In other words, these and other paradigms are seen as accessible to a wider pool of ‘developers,’ including those who do not know conventional programming languages. And these claims are, indeed, true. In the areas where they are applicable, it’s easier to author a simple low-code system than one in, say, C#. This means you can use less-skilled – and therefore lower-cost – people to create your system, or you can even do it yourself as a non-programmer.


However, except in certain niche areas, precision of thought and expression is still required. Otherwise, users tend to find that even though programs are easier to create, they do not deliver the expected or desired results without significant additional effort and refinement.

Natural language prompts seem, on the surface, to take the ‘democratization’ of programming to the extreme. Since nearly all of us learn to talk as young children (and to write not long after that), it seems that with GenAI, we can all be programmers. This is certainly true for non-critical questions – we don’t need to be programmers or data scientists to get surprisingly good answers from any of the popular public GenAI systems.

The problem, as King Midas found, is that while natural language is easy to use, it also carries inherent ambiguity. This ambiguity is baked into language itself by many factors, including the fact that natural languages evolve organically over many generations, layering meaning and nuance onto the same word or phrase.

Also, because it is a ‘human’ language, the speaker inherently assumes that the listener is a person who has ‘common sense,’ shares the speaker’s human experiences, and will therefore automatically avoid the perverse literal interpretations of a statement – King Midas’s issue. The social function of language also makes it intentionally imprecise at times, to avoid offending the listener. Natural language evolved for expressiveness and social interaction with other humans, not primarily for rigorous clarity.

While it is certainly possible to be clear and precise when using natural language, doing so takes thoughtful effort. That is why good lawyers can command high fees, and why writing a good blog, report, or even an email that doesn’t require further context or clarification takes time and attention. The rapidly emerging discipline of ‘prompt engineering’ – a human activity, sometimes machine-aided – develops precise, specific prompts that a given GenAI system can answer correctly within its limitations. This activity begins to look more like engineering or programming than the truly democratized natural-language paradigm that excites us so much about Generative AI.
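One way to see what this engineering-like refinement looks like in practice is to compare a casual request with a constrained one. Below is an illustrative Python sketch (the example prompts and constraints are hypothetical, not a prescribed template):

```python
# Illustrative sketch of prompt refinement: the same request, first as a
# casual ask, then with the constraints a prompt engineer makes explicit.

vague_prompt = "Write something about our new product."

engineered_prompt = "\n".join([
    "Write a product announcement.",
    "Audience: existing enterprise customers.",
    "Length: at most 150 words.",
    "Tone: professional; avoid superlatives.",
    "Must include: the June 1 release date and the upgrade path.",
    "Do not mention: pricing.",
])

# The engineered version behaves like a small specification: it pins down
# audience, length, tone, required content, and exclusions that the vague
# version leaves to the model's guesswork.
```

The refined prompt reads less like conversation and more like a requirements document, which is exactly the point: it trades naturalness for precision.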

However, before blaming the paradigm, we should realize that even humans find natural language ambiguous. Consider that the next time you get a cryptic text from a friend, spouse, or significant other! Given the ambiguities inherent in natural language itself, it’s not surprising that a discipline needs to emerge that allows people and AIs to communicate without mistakes on either side – and that this discipline looks a lot like engineering.

I truly believe that GenAI will give us all the “Midas Touch” and lead to unparalleled and democratized access to information – and, eventually, personalized goods and services, as well. We should all keep in mind, though, that like King Midas, we must be careful what we ask for. In the near future, we just might get it!


Dr. Jim Walsh

