Accelerate Your Automotive Software Innovation

GlobalLogic's SDV Cloud Framework and Eclipse Automotive Integration


Download our whitepaper to discover how GlobalLogic's SDV Cloud Framework and Eclipse Leda integration can transform your automotive development processes.


Key Highlights

  • Understand the Shift to Software Defined Vehicles (SDVs): Learn about the profound changes SDVs bring to the automotive industry.
  • Overcome OEM Challenges: Explore the key obstacles faced by OEMs and how to address them effectively.
  • Leverage the SDV Cloud Framework: Discover the benefits of a scalable, flexible cloud framework tailored for automotive development.
  • Maximize Efficiency with Virtual Workbench: See how virtualization can enhance collaboration and reduce costs.
  • Streamline Management with Control Center: Centralize project management and infrastructure control for seamless operations.
  • Enhance Development with Eclipse Leda Integration: Benefit from a pre-configured environment that accelerates development and testing.


Want to learn more about the benefits of the SDV Cloud Framework and Eclipse Leda integration? Download our whitepaper and read what our experts have to say about these key factors that contribute to added business value.

Accelerated Time-to-Market

  • Standardized development processes reduce errors and rework.
  • Early problem identification and rapid response to market changes.

Enhanced Quality and Reduced Costs

  • Feature pipelines ensure timely product quality.
  • Virtualized testing reduces costs and simplifies change management.

Expanded Business Opportunities

  • Modular architecture enables tailored SDV solutions.
  • Scalability and adaptability to changing market conditions.

Increased Developer Agility and Productivity

  • Integration with IBM DOORS or Codebeamer allows developers to work efficiently across multiple platforms and reduces manual data entry.
  • End-to-end transparency ensures developers can easily track and manage their work, identify issues, and collaborate.

Collaboration and Ecosystem Benefits

  • Collaborate with multiple stakeholders within the Eclipse SDV ecosystem to define common standards and integrate tools seamlessly.
  • Provide plug-in flexibility for OEMs to integrate tools from various partners.
  • Develop precise, specialized tools that address OEM challenges, ensuring consistency and acceptance within the ecosystem.


Are you ready to redefine your automotive development processes with Eclipse Leda integration opportunities?

At a recent Hitachi Energy conference, I saw a very interesting presentation by Hitachi partner NVIDIA—the fabless semiconductor company whose GPUs are key drivers of the GenAI revolution. The speaker described NVIDIA not as a GPU company but rather as a “simulation” company. He described a spectrum of simulation technologies NVIDIA supports, ranging from “physics-based” to “data-based.”

As a person who was educated as a physicist, several light bulbs clicked on for me in this description. What the speaker meant, of course, was that simulations or video games can either be based on ‘algorithms’—that is, a set of physical or un-physical laws (for fantasy worlds, for example)—or they can use extrapolations based on data.

When we as developers write code, we establish a set of ‘laws’ or rules for a computer to follow. Learned behavior, on the other hand, abstracts a set of patterns or probabilities from the data encountered. The latter is the nature of large language models—they are not programmed; rather they are trained based on a selection of natural language text, photographs, music, or other sources of information. 

The models essentially ‘draw their own conclusions’ in a learning process. (Or, more strictly speaking, the models are the artifacts embodying the learning that took place when an algorithm processed the training data.)

Recommended reading: Using AI to Maximize Business Potential: A Guide to Artificial Intelligence for Non-Technical Professionals

Again, this stuck with me very forcefully as an analogy of the human learning process and of the way physics and science work. 

There is a famous anecdote about the physicist Galileo, who was born in the 16th century, observing the swaying of a chandelier during a church service in Pisa, Italy (of leaning tower fame). A breeze occasionally set the chandeliers in motion with larger or smaller oscillations. 

Galileo observed that regardless of how high the chandelier was blown by the wind, once it started to fall, a given chandelier always took the same amount of time to complete an oscillation. In other words, the time the chandelier took to swing back and forth depended only on the length of the chain holding it, not on the height when it was released.

This is quite an extraordinary observation, and the fact that this phenomenon apparently was not noticed (or at least recorded and acted on) for the first 300,000 years or so of human history indicates the degree of insight and curiosity Galileo had. 

Note that Galileo did not have a watch he could use to record the time—they had not been invented yet, and could not have been until this ‘pendulum effect’ had been discovered. Galileo timed those initial oscillations using his pulse—though he later refined his observations using, I presume, the water clocks or sand glasses that were known in his time.

Why is this interesting? Because Galileo, like other discoverers, used observations or ‘data’ to infer patterns. From the data, he was able to make a prediction—namely, that the period of a pendulum depends only on the length of the pendulum, and not on its height of oscillation, or (as was later found) its weight.

Why is this important, and how does it relate to GenAI? There are two broad branches of Physics, called “experimental” and “theoretical”. The goal of experimental physics is to make observations and determine what happens. The goal of theoretical physics is to explain why something happens—specifically, to discover the underlying principles that manifest themselves in observations, or that predict what will be observed.

What is interesting to me in the context of GenAI is that there is a middle ground between these two areas of physics that is sometimes called phenomenology. The term phenomenology is used in different contexts, but back when I was a graduate student in high energy particle physics (theoretical physics, by the way) the word ‘phenomenology’ was used to describe predictions that we did not yet have the theory to explain. 

In other words, we knew that something happened or would happen, but we didn’t yet have a satisfactory explanation for “why.”

Galileo, in his pendulum observations in the church and subsequently in his ‘lab’, was doing what today we would call experimental physics. That is, he was making observations about what happened, and describing what he saw. 

In my limited historical research, I didn’t find a record that he did so, but we can imagine that Galileo could have taken his observations one step further and made quantitative predictions about the behavior of pendulums. That is, based on his experimental results, he could have discovered that for small oscillations, the period of a pendulum was proportional to the square root of the pendulum's length. 

However, even if he had produced such a quantitatively accurate predictive model, history does not record that Galileo ever really understood WHY the pendulum rule he discovered was true. A satisfying qualitative explanation had to wait roughly 100 years, until Dutch scientist Christiaan Huygens’ work on harmonic motion in 1673. A full quantitative explanation required Sir Isaac Newton to first invent calculus and lay out his three laws of motion. (For the theoretical basis of simple harmonic motion, such as a pendulum, see the Feynman Lectures on Physics, for example.)
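The square-root law described above is easy to state quantitatively. As a sketch (using the standard small-angle formula from Newtonian mechanics, not anything Galileo himself wrote down):

```python
import math

def pendulum_period(length_m, g=9.81):
    """Small-angle period of a simple pendulum: T = 2*pi*sqrt(L/g).
    Independent of the amplitude and of the bob's weight -- exactly
    Galileo's observation in the cathedral."""
    return 2 * math.pi * math.sqrt(length_m / g)

# Quadrupling the length doubles the period (the square-root law):
print(round(pendulum_period(1.0), 2))  # 2.01 seconds
print(round(pendulum_period(4.0), 2))  # 4.01 seconds
```

Note that the amplitude of the swing appears nowhere in the formula, which is the empirical pattern Galileo extracted from his data, and which the later theory of harmonic motion explains.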

So how does this history relate to GenAI? 

We can readily imagine our current-generation GenAI models acting like Galileo—observing what happens, identifying patterns, and making extrapolations and predictions based on those patterns. We can even imagine them doing the curve fitting and other math required to turn those fresh observations into mathematical models. 

It’s more difficult to imagine a current-generation GenAI model acting like a Huygens or a Newton and inferring from first principles WHY something happens unless the model already contains that information and simply retrieves it. 

I don’t believe reasoning from first principles is impossible for GenAI, and people are working hard on enabling it. Approaches such as “chain of thought” and “tree of thought” come close. But ‘theory’ is not the strong suit of current-generation (2024) GenAI technology. Current LLMs are “phenomenologists”, not “theorists”, which is in no way intended to underrate their value.

Why do we care about the theory? If we can predict “what” will happen, do we really care “why”?

This is a good question, and it rapidly gets metaphysical, hinging on the nature of consciousness. Moreover, what constitutes a “satisfying explanation” and “first principles” gets really philosophical fast. But in a practical sense, we can see that both theory and phenomenology have value, each in a different context.

Phenomenology has ‘rough and ready’ practical value. Astronomers and, earlier, astrologers could predict the phase of the moon and the progression of the seasons long before they understood that the Earth orbits the Sun, and the Moon orbits the Earth. These purely phenomenologically-based predictions had a profound impact on human history, including the invention of agriculture which, in turn, led to the creation of cities and civilization. 

But it is the nature of the human mind to try to discern the reasons behind what it observes. People developed theories—initially what we’d now term religious or mythological—to explain why the Sun and Moon behave as they do. They did this many centuries before the discovery of calculus and the law of gravity by Newton; the increasingly refined observations made by Kepler and, earlier, Galileo; and Copernicus’ hypothesis that the Earth orbits the Sun. It is in the nature of humans to keep asking “why” until a satisfying ‘theory’ is presented to explain the observations.

Watch: Getting GenAI Ready with GlobalLogic

Besides being intellectually satisfying to us humans, the value of theory is that, by reducing observed behavior to an outcome of basic principles, it lets us solve problems and see connections that phenomenology alone does not. 

For example, the theory of simple harmonic motion outlined in the Feynman lecture above not only explains the motion of pendulums (Galileo’s observations), but also the vibration of plucked strings on musical instruments and the movement of weights on springs. When we generalize this slightly, driven harmonic motion (a pendulum pushed by the wind or by the escapement mechanism of a clock) also leads to insights in the area of “resonance”. 

This, in turn, helps us understand diverse phenomena such as the structure of Saturn's rings and the behavior of physical structures like bridges under the influence of an external force, such as the wind. 

By uniting our understanding of multiple observations, a theory helps us discover the underlying connection between phenomena that initially appeared distinct. This process of forming a theory is not confined to physics but is something all of us do in everyday life. We have a theory of the motivations behind our spouse’s or friend’s behavior; as infants, we form the theory that an object continues to exist even when we don’t see it; as students or engineers we form a theory of what it takes to get a good grade or promotion. 

We also form ‘theories’ every day in the software space, when we develop an “architecture” or algorithm that produces a (hopefully) simple system that solves not just one but multiple problems. 

We also abstract out commonalities between diverse systems—for example, logging, observability, and security—and structure them as “cross-cutting concerns” rather than re-inventing them afresh for every system. In general, people consistently synthesize observations and try to discern the underlying cause behind them. It’s our nature.

The human brain functions using a combination of observation, phenomenologically-based prediction, and abstraction or “theory” to understand what it observes and expects. Currently (in 2024), GenAI is strongest in the first two aspects—observation and phenomenologically-based prediction. 

To deliver on the ‘holy’ (or ‘unholy’) grail of artificial general intelligence, AI-based systems need to not only predict but also be able to form abstractions and ‘theories’ based on their observations and predictions. They will need to combine a ‘Galileo brain’ with a ‘Sir Isaac Newton’ brain. 

I expect that we will indeed see such a ‘meeting of minds’ in GenAI, even though we’re not fully there today. We have ourselves as examples that these two modes of thought can co-exist in a single entity. We also know first-hand the power of intelligence that not only predicts “what,” but also understands “why.”

You might also enjoy:

Transforming Telco: 5 GenAI Trends Reshaping Experiences & Driving New Revenue

In the fast-paced realm of telecommunications, where constantly connected customers demand increasingly personalized and seamless experiences, innovation is a necessity. Enter GenAI – the catalyst for a profound shift in how telcos interact with their customers and manage their networks. From the bustling discussions at industry events to the boardrooms of leading companies, the buzz surrounding GenAI use cases is palpable.

Join us in exploring the transformative potential of GenAI within the telecommunications landscape. From redefining customer experiences to revolutionizing network operations, GenAI offers a myriad of opportunities for telcos to thrive in an increasingly competitive market.

1. Reimagining Customer Experiences in Telco

We’ve been having many interesting and productive conversations with clients and at the recent Mobile World Congress about GenAI use cases in telecommunications. One area of focus that’s getting a lot of attention and mindshare is GenAI’s impact on customer experience.

As telcos attempt to reimagine customers' experiences across the telecom journey, it’s become clear that intelligent GenAI applications add a lot of value. Imagine you’re a consumer wanting to buy a new service – what’s that experience like today, and how can we make that seamless and engaging? Well, we can start with intelligent chatbots. 

Chatbots aren’t new; they’ve been around for a while now. But they haven’t lent themselves well to seamless customer experiences; in fact, many customers found them quite frustrating until machine learning and GenAI made them more intuitive and accurate. All the way from discovery and search through order processing to completion, these technologies are making customer interactions with chatbots seamless and frictionless.

2. Autonomous Networks Powered by 5G Advanced & 6G

As 5G deployments have scaled, momentum behind network automation has grown, and we now see increasing development of self-organizing networks. For telecommunications in particular, GenAI plays a pivotal role in these autonomous networks. 

The combination of machine learning and AI can help us predict network outages and detect anomalies in the network. We can also leverage AI to analyze and mitigate cell network interference patterns, providing seamless coverage and reducing operational costs. 

3. Activating & Monetizing the Full Spectrum of Telco Data

GenAI adds a lot of value to service operations, too. For example, addressing Wi-Fi network glitches and outages presents significant challenges. Imagine being at home, confronting a network outage, and urgently seeking assistance by contacting a customer service call center or engaging with a chatbot. The frustration often lies in the prolonged wait for a customer agent to assist. 

Enter AI—a transformative force in this scenario. Envision a future where customer agents comprehensively understand each customer’s data, history, and concerns. With their vast data reservoirs, telcos hold immense potential for leveraging AI to enhance customer service. With AI's capabilities, this wealth of data translates into actionable insights. It enables customer agents to navigate service operations efficiently, guiding customers through technical challenges precisely and efficiently.

This vision represents the future of customer service—a harmonious integration of AI and data, where every interaction leads to greater satisfaction. The key lies not only in troubleshooting but also in the synergy of technology and empathy, paving the way for a more connected and fulfilling tomorrow.

4. Bridging Technical Gaps in the Telco Ecosystem

As we attempt to connect the dots, making sense of and monetizing our data wherever possible, we’ll see more use cases for using GenAI for new revenue-generating services. There are many technical gaps between where we believe these innovations can take us and what we need to wade through to get there. 

For instance, telcos can access location information and other data to indicate when consumers are traveling or planning a trip. They can use that to power data roaming sales or even offer travel insurance. How will they connect those dots and integrate with ad tech or insurance platforms for offerings like these? 

Here’s another example: what are the technical gaps between education platforms and telcos? Consider that a North American telco might have 100 million customers. There's a huge potential upside if you start offering new revenue-generating services in the education sector, but that requires both strategic partnership and technological integration. 

There are countless opportunities for new revenue-generating services in this market with machine learning and GenAI helping us uncover relevant data. Those revenue streams can be realized as we develop new ways to bridge the technology gaps.

5. Evolving from Prototypes to Proof of Value & MVPs

In the context of the AI adoption cycle, we are probably still in the early phases of this journey. There's a lot of hype, and the last year was all about working on prototypes, experimenting, failing fast, and discovering what could be relevant and contextual.

This year, we will see increasing MVPs, real products, and proof of value. As we mature in this journey, as with any other technological disruption we’ve seen before (whether it was the mobile revolution or the desktop revolution before that), there will be an inflection point. It may be a few years down the line, but it’s coming. Then we will see more AI-first products being developed.

From a telecommunications perspective, this will mean a shift from digital telco journeys to fully native AI telco journeys.

GlobalLogic is already putting two accelerators and our collaborative model for co-creating innovative use cases to work for our customers. With our GenAI "platform of platforms" integrating numerous publicly available LLMs, we're crafting GenAI solutions that precisely align with our customers' objectives and requirements.

Want to learn more? Explore our GenAI Strategy & Solutions and get in touch with GlobalLogic’s GenAI experts today.

Special thanks to Allyson Klein at TechArena for the conversation that inspired this article. You can listen to ‘The Future of AI and the Network with GlobalLogic SVP Sameer Tikoo’ with Allyson here.

Evolution of Industrial Innovation: How IIoT Will Impact Manufacturing in the Future


The manufacturing industry is entering a new era thanks to the Industrial Internet of Things, or IIoT. This revolutionary technology is dramatically reinventing manufacturing by integrating digital technology into processes that enhance output quality, reduce costs, and increase productivity. IIoT is a shining example of innovation, pointing to a time when connected ecosystems and smart factories will propel industrial advancement.


Understanding IIoT

What Is IIoT and Why Does It Matter?

IIoT, or the Industrial Internet of Things, combines the physical and digital domains of industrial manufacturing and information technology to build a network that allows machines and devices to communicate, analyze, and act on data to make intelligent decisions. This connectivity is about more than optimization: it is transforming industry operations by increasing process efficiency, predictability, and flexibility.


The Core Components of IIoT Systems

The fundamental elements of the IIoT are its sensors, which gather data, its data processing units, which analyze it, and its user interfaces, which facilitate communication and interaction. Together, these elements provide more operational efficiency and intelligent decision-making by transforming data into actionable insights.
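The three building blocks named above (sensors, data processing, user interfaces) can be sketched as a minimal pipeline. All class and function names here are invented purely for illustration; a real IIoT stack would involve industrial protocols, message brokers, and far more robust processing:

```python
from dataclasses import dataclass

@dataclass
class Reading:
    """One data point gathered by a sensor on the factory floor."""
    sensor_id: str
    value: float  # e.g. bearing temperature in degrees Celsius

def process(readings, limit=80.0):
    """Data-processing unit: turn raw readings into an actionable insight."""
    hot = [r for r in readings if r.value > limit]
    return f"{len(hot)} of {len(readings)} sensors above {limit} C"

def display(insight):
    """User interface: surface the insight to a human operator."""
    print(f"[dashboard] {insight}")

readings = [Reading("press-1", 72.5), Reading("press-2", 85.1)]
display(process(readings))  # [dashboard] 1 of 2 sensors above 80.0 C
```

Even in this toy form, the division of labor matches the section above: sensors produce data, the processing layer converts it into an insight, and the interface delivers that insight to a decision-maker.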




How Is IIoT Impacting the Manufacturing Industry?


Streamlining the Production Process

Using IIoT, manufacturers can easily gather data from different equipment and machines in the factory, which helps them identify areas for improvement. Production lines are changing as a result of the high levels of automation and efficiency IIoT brings: smart sensors and devices enable real-time monitoring and control, reduce waste, and accelerate production time. This change not only improves output but also enables enterprises to respond quickly to market requirements and challenges.


Predictive Maintenance

IIoT-based predictive maintenance helps the manufacturing industry monitor equipment performance, anticipate potential breakdowns, and schedule maintenance and repairs, reducing time spent on reactive maintenance. This method represents a major improvement over conventional, reactive maintenance techniques since it decreases downtime, increases equipment life, and lowers maintenance expenses.
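One common starting point for the monitoring described above is simple statistical anomaly detection on a sensor stream: flag any reading that deviates sharply from its recent baseline as a possible precursor to failure. The sketch below is illustrative only (a rolling z-score, not any specific vendor's algorithm), and the threshold and window values are arbitrary assumptions:

```python
from statistics import mean, stdev

def flag_anomalies(readings, window=20, threshold=3.0):
    """Flag readings more than `threshold` standard deviations away from
    the trailing `window` of values -- a crude precursor signal that a
    predictive-maintenance system could use to schedule an inspection."""
    flagged = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(readings[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# A steady vibration signal with one abnormal spike at index 25:
signal = [1.0 + 0.01 * (i % 3) for i in range(40)]
signal[25] = 5.0
print(flag_anomalies(signal))  # [25]
```

Production systems replace this with trained ML models that learn machine-specific failure signatures, but the principle is the same: compare live readings against an expected baseline and act before the breakdown occurs.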


Enhancing Safety and Quality Control

IIoT raises the bar for quality assurance and safety. Together, sensors and analytics track operational parameters and the environment to make sure manufacturing operations stay within safe bounds and that the quality of the final product doesn't change. By proactively monitoring, accidents and faults are avoided, protecting both workers and customers.


Key Technologies Behind IIoT


The Role of Big Data and Analytics

The IIoT is not possible without big data and analytics, which allow for the analysis of enormous volumes of data produced by sensors and devices. By identifying patterns and insights, this analysis helps organizations make better decisions, optimize workflows, and forecast trends, all of which improve operational effectiveness and strategic planning.


Connectivity Solutions: The Backbone of IIoT

In IIoT, connectivity is pivotal to tying systems and devices together throughout the manufacturing floor and beyond. Technologies that facilitate real-time data exchange include Wi-Fi, Bluetooth, and 5G, and they guarantee the smooth connectivity on which the synchronization of activities and the application of automation and advanced analytics depend.


AI and Machine Learning: The Brains Behind the Operation

Thanks to artificial intelligence (AI) and machine learning, IIoT systems are becoming intelligent entities with the ability to make decisions, forecast results, and learn from processes. These technologies make it possible to automate complex decision-making, which increases productivity and sparks innovation. By studying data patterns, AI can foresee equipment breakdowns, optimize production schedules, and customize maintenance schedules.


Challenges in Implementing IIoT


Integration Complexities

There are several obstacles to overcome when integrating IIoT into current production systems, from organizational reluctance to technological compatibility problems. To manage these challenges effectively, manufacturers need a strategic approach that encompasses gradual deployment, ongoing review, and stakeholder participation.


Cybersecurity: Protecting the Digital Frontier

New cybersecurity threats are introduced by the interconnectedness of IIoT. Ensuring the integrity of industrial processes and safeguarding confidential information are critical. To protect themselves from cyberattacks, manufacturers need to put strong security measures in place, such as encryption, access limits, and frequent security assessments.


Overcoming the Skills Gap

A workforce proficient in both digital technology and conventional manufacturing is necessary given the trend towards IIoT. It is imperative to close this skills gap in order to implement IIoT successfully. Manufacturers can overcome this obstacle by implementing focused training plans, forming alliances with academic institutions, and encouraging an environment that values lifelong learning.


IIoT in Action: Case Studies


Case Study 1: Predictive Maintenance in Brazil's Manufacturing Sector



A leading manufacturing firm in Brazil, specializing in automotive parts, faced challenges with equipment downtime and maintenance costs. Traditional maintenance strategies were reactive or scheduled at fixed intervals, leading to unnecessary maintenance or unexpected equipment failures.


The company embarked on an IIoT project to shift towards predictive maintenance. IoT sensors were installed on critical machinery to monitor various parameters such as temperature, vibration, and noise levels in real-time. This data was transmitted to a cloud-based analytics platform where machine learning algorithms analyzed the data to predict potential failures.


Challenges:

  • Integrating IoT sensors with legacy equipment.
  • Ensuring data accuracy and reliability.
  • Developing predictive models specific to their machinery and failure modes.


Results:

  • Reduced unplanned downtime by 40%, as maintenance could be scheduled before failures occurred.
  • Maintenance costs decreased by 25% due to eliminating unnecessary scheduled maintenance.
  • Extended equipment lifespan and improved overall equipment effectiveness (OEE).

Case Study 2: Production Optimization in Germany's Automotive Industry



A German automotive manufacturer aimed to enhance its production efficiency and product quality. The traditional quality control process was reactive, with defects often identified only after production, leading to waste and rework.


The company implemented an IIoT system to collect data from sensors placed throughout the production line. This system provided a real-time view of the manufacturing process, enabling immediate adjustments to maintain quality standards. Additionally, the company developed digital twins for key components, allowing for virtual testing and optimization before physical production.


Challenges:

  • Achieving seamless integration of IoT data across different stages of production.
  • Ensuring data security and privacy.
  • Training staff to interpret IoT data and make informed decisions.


Results:

  • Product defects were reduced by 30%, significantly improving product quality.
  • Production efficiency increased by 20% through real-time adjustments and optimization.
  • Reduced costs associated with waste and rework.

How Will IIoT Affect Manufacturing in the Future?


Current Shifts and Forecasts

Innovation and constant improvement will characterize IIoT-driven production in the future. Emerging trends include the adoption of 5G for improved connectivity, the creation of digital twins for sophisticated testing and simulation, and the use of AI and machine learning for more complex analytics. These developments should make manufacturing even more flexible, efficient, and customizable.


Artificial Intelligence and Machine Learning's Next Wave

Machine learning (ML) and artificial intelligence (AI) are expected to have a significant impact on the IIoT in the future. These technologies will make industrial processes more autonomous, intelligent, and predictable, helping manufacturers take full advantage of the IIoT, from production processes that optimize themselves without human intervention to real-time supply chain optimization.


Formulating a Sustainable IIoT Plan


Important Steps for a Successful Launch

An effective IIoT strategy should consider several important factors, such as clearly defining objectives, selecting appropriate technology, and ensuring a seamless interface with existing systems. Manufacturers must put cybersecurity, employee training, and stakeholder engagement first to enable the successful deployment of IIoT.


Measuring the Impact: ROI of IIoT Applications

Evaluating IIoT project outcomes is critical to justifying investments and guiding future efforts. Manufacturers should establish specific criteria, such as higher output, reduced downtime, and better product quality, to calculate return on investment. If manufacturers regularly monitor and evaluate these KPIs, they may maximize their IIoT strategy and achieve long-term benefits.


Frequently Asked Questions (FAQs)


  • How does IIoT differ from traditional IoT?

While standard IoT covers a wider spectrum of consumer and corporate applications, IIoT concentrates on industrial applications, highlighting efficiency, dependability, and connectivity in production environments.

  • What immediate benefits does IIoT offer to manufacturers?

Immediate advantages include improved safety and quality control, decreased downtime due to predictive maintenance, and increased operational efficiency.

  • Can SMEs leverage IIoT? 

Yes, SMEs can gain from IIoT by beginning with scalable solutions made to match their unique requirements, which will increase their productivity and competitiveness.

  • How does IIoT contribute to sustainable manufacturing?

IIoT improves sustainability by using resources more efficiently, cutting waste, and using less energy during production thanks to more intelligent manufacturing techniques.

  • What are the best security practices for IIoT systems?

Strong encryption implementation, frequent security audits, access controls, and keeping up with the most recent cybersecurity threats and defenses are examples of best practices.

  • Where should beginners start with IIoT?

Before deploying IIoT technologies widely, beginners should first conduct a thorough assessment of their needs and goals, followed by pilot projects where they can test and learn from the technologies.


Manufacturers have a revolutionary opportunity to reimagine their operations and adopt an efficient, innovative, and sustainable future when they utilize IIoT. By understanding the potential, overcoming the challenges, and leveraging the technology driving IIoT, producers can achieve previously unattainable levels of productivity and competitiveness. Going forward, integrating IIoT into manufacturing processes will be not only possible but imperative for those who want to lead the industrial landscape of the future.

This is probably a well-known fact in sociology or some other such discipline, but it struck me the other day that only the generation that knows how to do something can be the one to make that thing obsolete.

Take driving a car, for example. My generation and the ones preceding me in the U.S. eagerly learned how to drive a car as soon as we were legally allowed. Like most of my contemporaries, I started driver's education as soon as the law allowed, at age 15.5, and had my license in hand as soon as I turned 16. But more recently, the share of Americans receiving their driver's license at age 16 has declined from an already low 46.2% in 1983 to a mere 25.6% in 2018, according to []. While the decrease is not as dramatic, fewer U.S. adults held driver's licenses in 2018 than in previous years.

It’s not inconceivable to me that in a few decades, between the proliferation of ride-sharing services (a technology-driven business model) and self-driving cars (a new technology), relatively few American adults will know how to drive. And this in the U.S.A., the country that introduced ‘car culture’ to the world.

But today, in 2024, about 91% of American adults still have a driver’s license. And I think that’s a necessary condition for self-driving cars to evolve.

Any new technology will be imperfect. This means that people who know how to use the previous generation of technology are the ones who need to be the pioneers that introduce the next generation. Those are the people who can revert to the ‘old’ way when necessary, because the ‘new’ way isn’t quite up to some aspects of the task. While my Tesla does some things very well already, I will still override the self-driving features when I believe it’s not doing the right thing. But if I didn’t know how to drive, I would be at the mercy of the car—instead of seeing it as an ally and a tool. Except in controlled and limited (or remotely supervised) conditions, I don’t think a non-driver would feel completely safe in even the best of today’s self-driving cars in all circumstances. But for those of us who can already drive, self-driving functionality is a great thing; we can turn it on or off according to the situation and our needs.

There is no doubt in my mind that self-driving cars will be perfected, and will some day soon drive better and more safely in all circumstances than I do. In some specific areas, my ‘self-driving’ Tesla already does a better job than I would. Today’s children—or (if you’re a pessimist) their children—will truly have no need to learn to drive once they become adults. Except for recreation, I doubt if many will bother learning to drive. Driving ‘manually’ will become a forgotten skill.

But for the present, the only way to perfect self-driving cars is to put them in the hands of people who already know how to drive. Only those people with driving skills can “rescue” the algorithms when they don’t work quite right, or can act as trainers and perfecters of the new technology. In other words, only the people who are masters of the old technology can become the pioneers of the new.

I see the same thing happening in the software industry, as we adopt GenAI-driven development tools. There is no doubt in my mind that at some point in the future, GenAI will produce better code, tests, architectures and other software artifacts than we can create manually—and certainly faster. But as in the car-driving example, only those people with the skills to develop systems manually can be the ones to make the new technology successful.

GenAI-based development will predictably have gaps. While there are some areas where GenAI-driven development can add tremendous value already, it does not seamlessly cover the entire software development lifecycle, and won’t for some time. Humans with ‘traditional’ skillsets are very much required to realize the advantages of GenAI-based development.

For this new technology to succeed, the people who know how to develop software the ‘old fashioned way’ will need to make it successful. But why would we do that? We all believe—reasonably, I think—that this new technology will change our work fundamentally. Why would we risk working ourselves out of a job or, at the least, risk changing our current jobs beyond recognition?

My work experience has shown me, time and time again, that those people who try to make themselves indispensable by withholding knowledge are often the first to lose their jobs in any major transition. We can all probably think of a few examples of people who did manage to avoid being let go in a work transformation by hiding ‘secret knowledge’. But what a miserable existence they must have had! Hoarding knowledge and constantly worrying that someone else would displace them by learning what they think makes them valuable. Such behavior reminds me a bit of Gollum stroking the One Ring and repeating “My precious!”.

While our generation is the current flag bearer for the accumulated wisdom of software development know-how, its techniques and best practices are far from secret knowledge. Countless books, articles, blogs, training courses, examples and other artifacts exist and can be accessed over the web. When AIs get smart enough, they have ample material from which to learn—as we’re already seeing. Also, there are fortunes to be made from teaching them, and new job opportunities to be created because of GenAI-native development. There’s no way any of us—or all of us—could hold back this tide, even if we wanted to. GenAI will transform the software industry: that is a given. We can argue about ‘when’ and ‘how’, but I don’t think the ‘what’ is in dispute.

Take heart, though. If you love to drive, I think that even in the upcoming era of truly self-driving cars you will have the opportunity. Manually driven cars will still be available, as an option on new or specialized models, for rental to hobbyists, or through the ‘vintage’ market. Similarly, if you love to program, I’m sure you still can. But our generation will indeed be the generation that makes GenAI-native software development a reality. The only question in my mind is: will it be because of some of us? Or all of us?

Early 20th Century motivational speaker and author Dale Carnegie once wrote “Today is the tomorrow you worried about yesterday.” I believe that Mr. Carnegie’s point was that unless today is literally the worst day of your life (and my sincere sympathies if it is), then the energy you spent worrying about it yesterday was largely wasted. I haven’t read much by Mr. Carnegie (who was not related to industrialist and philanthropist Andrew Carnegie, by the way), but perhaps those who are staying up nights thinking about the impact GenAI might have on their jobs should take a look at his book titled “How to Stop Worrying and Start Living”. I’m not familiar with the content, but the title seems very much on target.

To give some context on where I’m coming from: I’ve spent my entire adult life developing software products—as a coder and automated test designer / developer; as an architect and (believe it or not) UI designer; as a tools automation and “DevOps” leader; as a quality and process evangelist and ‘digital transformation’ champion; as an acting “Product Owner”; in multiple Manager, Director and VP of Engineering roles; and, now, for the last 15 years, as a CTO. Also, I took on a good array of supporting roles along the way, including leading product support, IT (briefly) and tech writing. And probably other roles that don’t immediately come to mind. In other words, I’ve held basically every job in the software industry. And GenAI has the potential to change all these activities, everything that I’ve learned, almost beyond recognition: including my current job as a CTO. I also believe that in some of these areas, this transformation will start to happen in the very near future—possibly this year (2024).

Like many in my “generation” of software engineers, I have a very diverse skill set—even above and beyond that litany of roles. The current generation of developers are often surprised at the range of different activities that I and my contemporaries all seem to know about—but for us, this was just routine. People of my generation generally knew how to build a computer from chips—as I did early in my career. We had to know about CPU architectures and peripheral device controllers so that we could program in assembly language when required—which it was, sometimes. While the bulk of my career has focused on developing various sorts of application software, I have also worked with ‘system software’ such as operating systems, device drivers, the network stack, and with compilers and debuggers. In other words, I and my generation knew—and had to know—computer and software systems from the inside out, from ‘chip’ to ‘cloud’ to ‘app’.

This breadth has served me well, especially in a company as diverse as GlobalLogic. Because of the path my career has taken, and the technology changes I have lived through, I have ended up knowing a little about a lot—and that has really come in handy sometimes. I also know a lot about a few things—but in the era of change like we’re headed into, I suspect that will actually be my least valuable attribute. I think it’s my breadth and range of experiences that will serve me best in the future.

Years from now, I suspect that you will find yourself in the same position that I am today. In particular, at that future date, you will find yourself knowing things that the more junior people around you will find mystifying, because of the roles you have played, and the experiences you have had.

I predict that the next generation of software engineers coming up after you will be amazed that you know how to write code—by typing it! You know how to deploy software systems on the cloud, and how to manage a Kubernetes cluster—by yourself! You can write a user story, and even test it—just from your own imagination! You can deploy incremental side-by-side canary or blue/green releases—with scripts you wrote! When the AI based system gets something wrong, you know how to get in there—and fix it! In other words, the skills you have learned and use today will give you tremendous insight into the more automated systems in the future. Your current skills will continue to be a real advantage to you as we transition to a more AI-driven style of software development, no matter how automated it gets.

The trick to thriving in a changing landscape is that you will need to do what I have had to do many times in my career: Be willing to set aside hard-earned knowledge or skills to embrace a new technology, process or role that delivers a better end result, both for yourself and for the products you build. You may think that leaving your current skillset behind makes you less valuable, but my experience is that it’s the opposite. Understanding and embracing the future is what makes you most valuable. Even when you move on to the new, what you have learned before will stay with you. While you may not do things in the same way, the context you’ve gained from your previous experience will remain useful even if, in an AI-driven future, the way you once did it becomes ‘obsolete’. I’ve found time and time again that the effort I put into doing something well has never really been wasted, even when jobs and technologies change. I think with GenAI we’ll all see the same thing.

But your current hard-won skills can be a stumbling block if you confuse the means with the end. Always keep in mind the end goal: to architect, develop, test, document or deploy a system that delivers business and/or social value; to solve a business or technical problem; or to deliver a quality product to market quickly. Then use the best means available at that time to do it, even if it means abandoning the way you’ve done it before. If you keep doing things the way you do them today, when a better way emerges, you will indeed be left behind. On the other hand, the bright side of NOT embracing the future is that maybe if you wait long enough, you could get lucky: Your old skillset may become so “retro” that it turns into a high-demand niche area. We see this when 1970’s and 1980’s mainframe developers are called back from retirement today. But if it’s true that the ‘fashion’ cycle is 40 years, the 2060’s is a long time to wait for early 2020’s development techniques to come back into style. If, indeed, they ever do. My advice is to adapt.

Because our world is changing, I think it’s legitimate for all of us to be concerned about the future. The changes we all see coming do concern you, because if you work in the software industry and related areas, your job will indeed be affected. But concern is different than worry. Concern leaves room for curiosity, excitement and action, while worry can be paralyzing. When the world is changing around us, standing still is usually not a good option. It’s much better to be engaged in the change, and to make it work for you. My advice is to be concerned, but not worried.

In the software field, all of us have begun to realize that we are part of an industry that will be disrupted to its core by GenAI. Your job will change, if not today then in the next few years. But will your job go away? In its current form, your job may indeed change beyond recognition. But I think the need for software in the world is nearly inexhaustible. In particular, I think the demand for new or improved systems will grow faster than our efficiency at bringing those systems to production, even if our productivity increases many times over. In growth there’s opportunity. If you have a good mind and master this new technology, I think there will be a place for you.

To acquire the skills you already have today, you have learned how to learn, how to think, and how to apply technology to solve business and other real-world problems. My advice: Don’t confuse the ‘means’—what you know today—with the ‘ends’—solving a real-world problem using technology. That end goal of software development—making an impact in the real world, and in people’s lives—will not change. The need to apply technology to meet the world’s needs won’t go away. The opportunity to make a difference for yourself, your family and for society, along with the excitement of learning new skills, are probably what attracted you to engineering in the first place. Go back to your ‘roots’ about why you became an engineer—and be willing to rethink how you can now accomplish those goals in a new and even better way.

I had a stimulating conversation with the head of our GenAI practice, Suhail Khaki, a few weeks ago. Suhail made the remark that the more he works with GenAI, the more it strikes him that it’s less like conventional computer software, and more like a person in the way it interacts. He made the remark: “Intelligence is Intelligence”. That got me thinking: a lot of so-called “issues” with GenAI are actually attributable to the fact that it’s modeled on the way people think. It’s really not GenAI that’s to blame—to a large extent, it’s just surprisingly good at behaving the way we humans do.

If someone asks you the same question on two different occasions, how likely is it that you will give exactly the same answer, word-for-word each time? You won’t, unless it’s some memorized speech. If you ask two different developers to implement the same algorithm, how likely is it that each of them will write exactly the same code? It won’t happen. They may both get it ‘right’, but the two programs will be different—slightly, or even radically.

So why does it surprise and frustrate us when GenAI behaves exactly the same way? Humans give different responses to the same question because many variables influence our behavior, including what we ate for breakfast that morning, who our audience is, how the question was phrased (including intonation), and what we’ve learned and thought about between the first iteration of the question and the second. GenAI has different factors that influence it—no need for it to eat breakfast, yet—but it essentially behaves in a ‘human’ way when it gives a different answer to the same question. Similarly for coding. There are many correct answers to the same software development problem. Which one a given developer picks, or which one the same developer picks on different occasions, are determined by a lot of internal and external variables, not least of which is the sum of our previous experiences and training.

What we call “hallucinations” in GenAI are likewise common to us human beings. In the US, politicians on both sides of the aisle give us ample demonstrations of made-up facts to cover lapses of memory or inconvenient truths. We can argue about whether these political misstatements are deliberate or not, but sometimes human hallucinations are done with no bad intent. An elderly woman I knew had vascular dementia, a brain condition that cuts off access to certain memories or faculties. Her intelligence, however, was largely unaffected. If you asked her about her day, she would happily make up a story about activities that, on the surface, sounded very plausible—but in fact never occurred. There’s no way I believe she did this intentionally, with any attempt to deceive. Instead, absent the actual facts available in her memory, I think her brain creatively generated a response that sounded plausible, but was unfiltered by the truth. She was not diagnosed until she was formally interviewed by a psychologist who asked her objectively factual questions, such as the names of her children. It was only then that it became obvious that she had a medical condition, and that her responses in normal conversation were largely made up.

While I’m not a psychologist, I suspect that human intelligence, when denied access to appropriate information but finding itself in circumstances compelling a real-time response, tends to fill in the blanks—or make stuff up. We’d prefer our politicians and my elderly friend with vascular dementia to simply say “I’m sorry, I don’t know”, “I’d rather not say”, or “I don’t remember”. But where the person feels an imperative to give an answer regardless of missing or internally suppressed information, we get “fake news”, false memories or hallucinations. The same is true of GenAI—it defaults to a plausible-sounding but invalid response when it can’t find an accurate one.

My wife is a psychologist, and she tells me that in the human brain there is a concept called “filling in the missing gestalt”. The brain tries different strategies and options to fill in missing data. This presentation of options contributes to human creativity and problem-solving behavior. We’ve all experienced this when we’ve been puzzled trying to solve a problem, and then suddenly the answer comes to us. This happens largely sub-consciously, below our level of awareness. When there is insufficient rejection by our brain of wrong alternatives, then we can get human confabulation to “fill in the blanks”, even though the best option might be to ‘leave it blank’, and say you don’t know. But where our brains make a good choice among the generated alternatives, we get originality, spontaneity and invention.

In an LLM, this is controllable to some degree by setting a parameter called the “temperature” which, essentially, governs the degree of randomness used to generate alternative responses. While lowering the temperature limits hallucinations in an LLM, it also reduces the number of good alternatives that are being considered. The downside of fewer alternatives is that the ‘better’ and ‘best’ alternatives may not be generated at all, limiting the effective ‘intelligence’ of the AI. Rather than suppressing the generation of alternatives, the right answer, in my view, is better filtration of multiple generated alternatives. Indeed, a number of GenAI startups are working on hallucination prevention by intelligently filtering generated responses. But the generation of alternative responses, even wrong ones, is actually a characteristic of human-type intelligence itself—it’s a “feature”, not a “bug”. We’re just at a relatively early state-of-the-art in terms of filtration—though I’m convinced that is coming.
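The effect of the temperature parameter is easy to see in a toy sketch. The hypothetical next-token scores below are made up for illustration; the mechanics, though, are the standard ones: temperature rescales the model’s raw scores (logits) before the softmax, so low values concentrate probability on the single most likely alternative, while higher values let more alternatives survive into the sampling pool.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw model scores (logits) into a probability
    distribution, rescaled by the sampling temperature."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores: one strong candidate, several weaker alternatives.
logits = [4.0, 2.0, 1.0, 0.5]

low_t = softmax_with_temperature(logits, temperature=0.2)
high_t = softmax_with_temperature(logits, temperature=2.0)

# At low temperature the top candidate dominates (few alternatives
# survive); at high temperature probability spreads across alternatives.
print(f"T=0.2 -> top candidate p={low_t[0]:.3f}")
print(f"T=2.0 -> top candidate p={high_t[0]:.3f}")
```

Running this, the low-temperature distribution puts nearly all of its probability on the top candidate, while the high-temperature one gives the weaker alternatives a real chance of being sampled—which is exactly the trade-off between consistency and creativity described above.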

Why do these ‘human’ inconsistencies and confabulations surprise and annoy us when it comes from GenAI? Most of us have grown up with computers. While they can be frustrating or bewildering to deal with at times, computers are also predictable. In particular, when programmed, computers do the same thing in the same way every time, and consistently give you the same answer to the same question. We experience computers as machines or ‘robotic’ (in the narrow sense) in the interactions we have with them.

GenAI is not that way. While it runs on a machine, it acts in important ways more like a person. Compared to a programmatic device, GenAI is relatively unpredictable and inconsistent.

I would argue that the unpredictability and inconsistency of GenAI is an essential feature of any intelligence that tries to emulate, in some respects, the human brain. Perhaps inconsistency is a feature of intelligence in general. It may not be a feature we always like, but if we want the advantages of intelligence in our machines, I think we will also learn to make do with its quirks.

Does that mean we can’t use GenAI for useful work? I would argue that, despite our own foibles and lapses, we have used fallible people to do useful work for many, many generations. We can follow some of these same practices in using GenAI.

When managing people, we often have multiple specialists assigned to different aspects of the same activity. Often work is overseen by a manager, who ensures the consistency and quality of the output. For critical tasks, we have documented procedures that people are required to follow. And in emergency situations, or those requiring real-time body control (like sports), we fall back on training. Trained responses are those where people follow pre-defined or pre-learned guidelines—essentially programming—automatically, and largely without thinking. These same principles of human work can be, and are being, applied to GenAI today.

Consciously or sub-consciously, analogs to human organization are being developed and applied to GenAIs today, with more in the works. “Ensembles” of specialized LLMs are being orchestrated by “agents” and other technologies to leverage the strengths of each model, analogous to a human team with complementary skillsets. Like a human supervisor, GenAI management approaches such as “LLMs for LLMs” and programmatic analysis of model outputs are emerging to filter and evaluate the quality of an AI’s output. These managers can also trap hallucinations, and send the AI—or team of AIs—back to the drawing board to come up with a better answer. For critical or end-user-facing tasks, implementations may combine the best features of programmed approaches and GenAI models. For example, a customer support application might use dialogflow [] for the structured element of dialogs, together with one or more LLMs for call routing, information gathering and results summarization.
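The “supervisor” pattern can be sketched in a few lines. This is a minimal illustration, not any particular framework’s API: every function here is a hypothetical stand-in, with the generator simulated by a fixed stream of candidate answers (one hallucinated, one grounded) in place of real LLM calls. The shape of the loop, though, is the point: draft, judge, and send rejected drafts back to the drawing board.

```python
def draft_answers(question):
    """Stand-in for repeated LLM calls: a stream of candidate answers,
    some of which are hallucinations. In a real system each item would
    be a fresh model completion."""
    yield "Lyon is the capital of France.", False   # plausible but wrong
    yield "Paris is the capital of France.", True   # grounded

def judge(answer_text, grounded):
    """Stand-in for the supervising model or program ('LLMs for LLMs')
    that checks a draft against a knowledge source and accepts or
    rejects it. Here the check is simulated by the 'grounded' flag."""
    return grounded

def answer_with_review(question, max_attempts=5):
    """Generate-then-judge loop: reject hallucinated drafts and try
    again, up to a fixed number of attempts."""
    for attempt, (text, grounded) in enumerate(draft_answers(question)):
        if attempt >= max_attempts:
            break
        if judge(text, grounded):
            return text
    # Prefer admitting ignorance to returning an unvetted guess.
    return "I don't know."

result = answer_with_review("What is the capital of France?")
print(result)  # the first hallucinated draft is trapped and discarded
```

The design choice worth noting is the fallback: when no draft passes review within the attempt budget, the supervisor returns an explicit “I don’t know” rather than the best-sounding rejected answer—the behavior we said we’d prefer from humans, too.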

The final frontier is, perhaps, machine or industrial control systems, or control of life-critical real-time systems. For these systems, we need deterministic outputs. Creativity may be useful in some situations, but even with humans in the loop, we generally use trained responses and documented step-by-step procedures that we expect people to robotically follow. This is because in an emergency there is rarely time or mental energy to improvise—and documented procedures have been researched, vetted and tested. The robotic following of directions is probably the least human thing that we do, but it’s necessary sometimes—for example, in an emergency situation like steering your car out of a skid when it slips on the ice. Improvising from scratch is the wrong approach in that case—we’re better off if we have trained ourselves to turn into the skid to regain control, without having to process the physics in real-time. For activities like sports, piloting an aircraft in an emergency, and other real-time decision-making, learned and trained skills are an important foundation. Creativity is still beneficial at times, but only on top of a firm foundation of learned skills.

Like human trained responses to emergency or real-time sports situations, control systems operated by computer tend to be more automatic, rules-based and deterministic. That’s not to rule out AI entirely. We already have good examples of conventional, non-GenAI models playing an important role in such systems: For example, your car’s advanced driver assistance controls for lane-following, collision avoidance and adaptive cruise control have important aspects that are AI based. My experience is that these AI-based systems add substantially to my safety. However, I think all of us, myself included, would hesitate before putting our lives in the hands of a technology that is subject to hallucinations. On the other hand, I drove myself for many years without any driver assistance at all, and my fallible human brain still enabled me to survive. So maybe—properly supervised—GenAI has a role here too. After all—intelligence is intelligence, even if it’s artificial.

I was one of the early buyers of the first release of the Apple Vision Pro AR headset early this year. I got up at 5am my time to place an order on-line at the first moment when the device became available for pre-order. I then made an appointment at my local brick-and-mortar Apple Store to pick it up as early as possible on the first Saturday after they shipped. Needless to say, I was excited about this technology (and still am).

I got to the store early and waited in line with other eager Vision Pro buyers also picking up their orders. When the store opened and we all filed in, we were each assigned to a store associate for a demo and familiarization session with this new device. While I was overwhelmed by the experience offered by the device itself, what struck me just as forcefully was the nature of the retail experience. The air of excitement in the store was palpable, not only among the group of buyers, but also among the sales associates and even the store manager. I felt like we were all conspiring to share something truly unique, truly special. And that, I think, is the essence of a great retail experience: It’s a conspiracy between the buyer and seller to share a thing of value.

I’ve had this experience a few other times in my life, generally with the owner of a “Mom & Pop” or family enterprise. I particularly remember an experience in India where I bought a leather wing-back chair. I had seen the chair on display at the old Bangalore airport, and because I couldn’t find a retail outlet, I made a visit to the local factory where it was made. The owner of the factory personally showed me around, and actually took my measurements, like I was buying a suit of clothes. They made the chair “to order” and to fit my dimensions (hip-to-knee, length of torso, etc.)! When his workers brought the finished chair to the company apartment where I was staying at the time, you could feel their pride in the finished product. They waited eagerly until I sat in the chair and pronounced it a perfect fit. Years ago, I brought the chair back to the US with me, and it sits proudly in my study to this day, full of very pleasant memories around how it came into my life.

Why are retail experiences like this rare?

Well, when I’ve had them, I was buying something that was at least relatively expensive, and from someone to whom the sale would be significant in some way. In the Apple Vision Pro example, I assume that—in addition to the sales people’s genuine excitement around this new product—the store probably had incentives, a contest, or at least recognition in place for their associates to first learn and then present this new technology. I don’t know that for a fact, but I did get a survey soon afterward, from which I assume (hope) the sales associate was recognized in some way for his great work. In the factory owner example, in addition to their genuine pride in a great product, I think they were hopeful that I’d spread word-of-mouth among my colleagues and fellow ‘ex-pats’ (which I did). In other words, even beyond the money they received, it was worthwhile for someone to deliver an exceptional product and experience.

Notably, none of my best retail experiences were entirely on-line. This being the 21st Century, they all involved the Web: I originally ordered the Apple Vision Pro on-line, and I tracked down the manufacturer of the chair through Google. Other “peak” retail experiences have all likewise involved the Web in some way—education, awareness and so on. But none of my best retail experiences to date have been entirely on-line; all have required a personal touch.

Why is this? Thirty years after the Web came to prominence, why is it not delivering such ‘peak’ retail experiences by itself?

I think it is because of the relative lack of personalization. A critical factor in any ‘conspiracy’—which I believe a great retail experience requires—is collusion between two or more people. Sophisticated websites use machine learning and demographic data to present an experience that is in some ways tailored to the buyer—or at least to a similar type of buyer. However, in my own on-line shopping, I have not yet encountered a retail website that really directly engages me, individually, the way a good human retailer does. My best experiences to date have all required a “human in the loop”.

Back in the early days of the Web I was inspired by a then-recently-published 1993 book called “The One to One Future: Building Relationships One Customer at a Time”, by Don Peppers and Martha Rogers. Peppers’ and Rogers’ thesis was that businesses can prosper by building strong relationships with their best customers, and by growing their share of that customer’s wallet over time. When my team was developing an early e-commerce platform at Apple in the 1990’s, we built in a number of features that enabled automated individualized shopping experiences—such as intelligent cross-selling. But, while we aimed to achieve it, the technology did not exist at that time to truly enable the kind of “mass personalization” and personalized relationships that Peppers and Rogers envisioned. To this day, thirty years later, developing a deep personal relationship with a customer has required a person—a human in the loop. It’s not something the industry has truly been successful at automating.

Earlier attempts at automating ‘personalized’ customer interaction, such as chat-bots and IVR systems, have had limited success. This is largely, I believe, due to the necessarily scripted and programmed nature of these interactions. While some experiences are far better than others, people do not interact naturally on a ‘programmed’ basis—it tends to sound phony unless the system vocabulary is very large, and the options are truly flexible, or even dynamic. That’s exactly what we have in GenAI—interactions that are increasingly open-ended and therefore more human-like.

Could a GenAI-based retail system learn to understand me as an individual—what drives me, what decision criteria I use, what I value in a product? And then map that information to identify the products I would love to buy, and help me understand why I’d love and can afford those products? I believe the answer is an emphatic “yes”, going far beyond traditional ‘propensity to buy’ determinations. The main wildcard here is access to accurate customer behavioral and motivational data and, ultimately, the customer’s willingness to be known deeply by a given system. In general, people are more disclosing to a machine than to other people, because the fear of judgement is reduced. People’s main concern when disclosing information to a machine tends to be the privacy of the information, and concerns about the manner in which that information will be used. These are solvable problems. In particular, I think shopping ‘agents’ will be developed who understand a customer and a customer’s finances and goals intimately—perhaps presenting an anonymized face to the world, for privacy’s sake.

The other wildcard is how disclosing a retailer is willing to be about the true nature of their products. Mapping an individual customer to a product description only creates loyalty if that customer is truly delighted after receiving and using the product itself. Marketing hype may help sell a product once. But alone, it doesn’t create the kind of “lifetime customer” or increasing “wallet share” that Peppers and Rogers describe in the “One to One Future”. Thankfully, we live in an age of increasing transparency, with customer ratings and reviews now a routine part of the buying experience, and suppliers increasingly honest about sourcing, provenance and other criteria of concern to various individuals. While these systems and data can be—and often are—gamed, they point the way toward a more objective and satisfying retail experience in the future.

I remember Steve Jobs once sending a thoughtful email to the employees at NeXT about how retail experiences always involve individual ‘values’—not necessarily in a moral sense, but in terms of the importance people give to certain factors. He gave the example of himself and his wife Laurene considering the purchase of a high-quality European washing machine for their home. He said that the washer produced softer clothes and used less water and laundry soap, and was therefore better for the environment. On the other hand, the washing cycle was longer, so they couldn’t clean as many clothes in the same amount of time. Steve and Laurene opted for the European machine, I believe. But you can readily imagine someone with a large family valuing the higher throughput of a cheaper US-made machine above the quality and environmental advantages of the more expensive European option. With its natural language understanding, GenAI is very well suited to weighing these kinds of value judgements. It’s exciting to think of the opportunities for delightful retail experiences for us all, both as consumers and as sellers, in the one-to-one GenAI future.

Also with GenAI, we have an expanded opportunity to create not only personalized experiences, but personalized products as well. As a simple example, on request, a telco could analyze a specific customer’s historic data and voice usage, media viewing habits and other factors—such as payment history—and dynamically create and price a product tailored to that specific individual. On a slightly more speculative level—but by no means far-fetched—physical goods can be produced to individual order using GenAI-produced programs and models to drive a combination of industrial robots, 3D printing, CNC milling machines and other computer-controlled manufacturing devices. These technologies exist today and, indeed, are already being used to produce products under GenAI control in isolated pockets and on a small scale. The only speculation is that these approaches will become affordable and widespread.

While it has taken thirty years and counting, I think the technical basis to realize a true “One to One” future is now in place. I think the next thirty years will transform retail almost beyond recognition, by allowing people and AIs to conspire together to create value and deliver delightful buying experiences. And, not incidentally, create tremendous customer loyalty to those who deliver this service.

The hype cycle has little to do with the merits of a particular technology. It simply has to do with the amount of publicity the technology has received. In particular, if the publicity jumps ahead of what the technology can immediately deliver, then the technology quickly gets labeled as “overhyped”. This is not the ‘fault’ of the technology—just of the overinflated expectations for immediate benefits that grow up around it.

A case in point is, believe it or not, the world-wide web. Back in 1994, my team set up NeXT Software’s (now Apple’s) first website. At the time, there were only something like 10,000 websites on the entire internet (at this writing there are well over a billion). Even at its beginnings, though, it seemed obvious to me—and to a lot of other people—that Web technology was transformational. However, in the late 1990’s, believe it or not, the Web was considered over-hyped.

With the benefit of 25 years of hindsight it seems almost incredible to us that the world-wide web and the internet could possibly be considered overhyped. If there’s a single technology that truly transformed the world, I think most of us would agree that it’s the Web (plus the internet and the ‘personal’ computer, but those are stories for perhaps another day). The Web and the follow-on technologies it spawned have completely transformed our world, and their impact continues to fill our working and personal lives. Web-related and web-motivated technologies include social media, the cloud, smart handheld devices (phones, tablets, etc.), massive multi-player games, on-line dating, dynamic content creation, shopping and connected cars, and many others. In fact, it’s hard to imagine modern life without the Web, the internet, and their various downstream impacts. We simply take for granted instant access to information, ubiquitous connectivity, pervasive communication, remote device monitoring and control, media when and where we want it, and many others. These are now simply built into the fabric of our lives.

Yet the people who claimed the Web was overhyped in the late 1990’s had a point. At that time, connectivity was limited, and complex graphically rich page renderings were slow. Even when user interactivity was introduced, it was—at first—very simple by today’s standards; essentially form-based. E-commerce emerged very early—within two years of the first static website I mentioned—but issues like payment security were still being worked out and trust was low by today’s standards. And indeed, the naysayers were right in one sense: there was a “dot-com bubble” that burst and struck down many web- and internet-centric companies in the early 2000’s. While this downturn had many causes, one of them was that the “hype” had indeed gotten ahead of the technology.

Why do I bring up this ancient history? I think we’re going to see something similar happen to GenAI, probably this year (2024). Like many people, I am confident that GenAI and the downstream technologies it inspires will utterly transform the world—on the scale that the internet, the world-wide web and their follow-on technologies have done, if not more. Bill Gates is quoted as saying that in the short run GenAI is overhyped, but in the long run it is under-hyped. I don’t know if Mr. Gates was thinking of the history of the Web when he said this, but I’m sure the analogy must have been on his mind. His remark is an excellent description of the Web’s historical adoption curve, and sums up very neatly what I think is likely to come with GenAI.

Today’s tools and technologies make it easy to create a very compelling demo with GenAI. Today in early 2024, eighteen months after ChatGPT went public in the fall of 2022, many of us, myself included, continue to be stunned by what this technology can do. We are even more excited by what it promises for the future. However, as the POCs move into enterprise-scale deployments and business-critical applications, the problems and gaps will predictably start to surface.

People will realize that data is harder to gather, prepare, curate and keep relevant than they suppose. Approaches that only a few months ago defined the state of the art for GenAI development will change as new approaches are invented—obsolescing systems already built. We’ve seen this already: the “RAG” model (“Retrieval Augmented Generation”) that six months ago was so cool is now being termed the “naive RAG model” and has been replaced by the “advanced RAG model”. Probably, in the near future, it will itself be replaced by other approaches that are even better. Lots of work that was done to work around the 4k token window size supported by popular LLMs has become unnecessary because those token windows have expanded to 128k and continue to grow. People are starting to realize that the GPUs needed to power many GenAI systems are expensive and hard to come by, both physically and even on the cloud. New security vulnerabilities and threats will be discovered and invented. And, of course, hallucinations, bias, and inconsistent answers will plague suppliers and applications.
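For readers unfamiliar with the pattern, the “naive” RAG model mentioned above boils down to two steps: retrieve the documents most relevant to a query, then prepend them to the prompt before calling the LLM. The sketch below illustrates the idea using a toy bag-of-words similarity in place of real embeddings; the corpus and the scoring are illustrative assumptions, not any particular vendor’s API.

```python
# Minimal sketch of the "naive RAG" pattern: retrieve relevant documents,
# then prepend them to the prompt. The bag-of-words "embedding" is a toy
# stand-in for learned embeddings and a vector store.
from collections import Counter
import math

def embed(text):
    """Toy 'embedding': a word-count vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=2):
    """Rank documents by similarity to the query; return the top k."""
    q = embed(query)
    return sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query, corpus, k=2):
    """Augment the query with retrieved context before calling an LLM."""
    context = "\n".join(retrieve(query, corpus, k))
    return f"Context:\n{context}\n\nQuestion: {query}"

# Illustrative corpus drawn from the themes discussed in this article.
corpus = [
    "Token windows on popular LLMs have grown from 4k to 128k tokens.",
    "GPUs remain expensive and hard to procure, on-premises and in the cloud.",
    "Hallucinations and bias continue to affect GenAI applications.",
]
prompt = build_prompt("Why are GPUs hard to get?", corpus, k=1)
```

“Advanced” RAG variants refine each of these steps, for example by rewriting the query, re-ranking retrieved chunks, or filtering context before generation.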

I think it’s pretty much inevitable that there will be a media (social and other) backlash against GenAI in the near future, and that the technology will be labeled as “over-hyped”. I sincerely hope it does not cause the Armageddon among startups that the “dot-com bust” of the early 2000’s did, but some companies will certainly fall victim as the hype cycle plummets into what Gartner calls the “Trough of Disillusionment”.

To reframe a famous phrase in a totally different context, though, my experience of the dot-com era tells me that “the end of the peak hype cycle is the beginning of wisdom”. I think it’s a healthy thing for us all to realize that this technology will not, overnight, transform the world. Like all new technologies, GenAI has rough edges that need to be smoothed out, limitations that need to be discovered and overcome, security and other holes that need to be plugged, and infrastructure that has to be built around it before it becomes commonplace. I also believe that this will happen, and that GenAI and its downstream technologies will fulfill the promise that many of us see in it—and probably faster than we think. The important thing, as technologists, is to realize that the “hype cycle” is simply about the hype—it’s not about the technology. Let’s hope our bosses with the money understand the same thing!

Executives, decision-makers, technical experts, and Google Cloud partners converged at Google Cloud Next to explore cutting-edge innovations and industry trends. GlobalLogic was there, speaking about modernization strategy and delivering a Cube talk on Intelligently Engineering the Next GenAI Platform we are building for Hitachi.

Among the buzz at GCN 2024, using GenAI for customer success and process and platform modernization with AI stole the spotlight. Innovative ways companies are evolving from proof of concepts to proof of value were hot topics, too. However, challenges like data integrity and legacy point systems loom large as enterprises shift towards those proof-of-value AI-driven solutions and efficient monetization strategies. Where should you focus now – and what comes next as you develop your innovation roadmap?

Here are five key trends and takeaways from the event that speak to the essential building blocks innovative companies need to lay the groundwork for successful enterprise-grade AI implementations.

1. Applying GenAI for Customer Success

Enterprise-Grade GenAI solutions for customer success are revolutionizing service quality and driving business outcomes. Imagine equipping your frontline staff with GenAI-driven agents, empowering them to ramp up productivity and provide every customer with a personalized, enhanced experience. Built-in multilingual customer support makes GenAI a versatile powerhouse for enterprise teams, catering seamlessly to a global customer base with diverse linguistic preferences. 

This transformative approach to customer success merges advanced technology with human expertise, paving the way for exceptional service delivery and business success in the digital age.

2. Modernizing the Tech Stack & Transforming the SDLC

GenAI is reshaping the software development landscape by empowering developers to drive efficiency and elevate code quality to new heights. This transformative approach extends beyond mere updates—it's about modernizing the entire stack, from infrastructure to user interface. 

Innovative approaches include automated code generation, building RAG-based applications, enhanced testing and QA, predictive maintenance, and continuous integration and deployment (CI/CD). By leveraging natural language processing (NLP) for documentation, behavioral analysis, automated performance optimization, and real-time monitoring and alerting, GenAI streamlines development processes, improves code quality, and enables proactive decision-making. Throughout the SDLC, it automates tasks, optimizes performance, and surfaces actionable insights that help developers improve efficiency, security, and software quality.

Through comprehensive refactoring of applications, GenAI is leading the charge towards a future-proofed ecosystem. However, this ambitious undertaking isn't without its challenges; it demands time, dedication, and a strategic roadmap for success. 

3. Building a Future-Forward Framework for Success

Enterprises face key challenges in unlocking the value of AI, such as ensuring data privacy and security, protecting intellectual property, and managing legal risks. Flexibility is essential to adapt to evolving models and platforms, while effective change management is crucial for successful integration. 

Embracing a 3-tier architecture with composable components over the core platform emerges as the future-forward approach, fostering flexibility and scalability. Having a robust infrastructure and data stack to underpin the GenAI layer is indispensable, forming the bedrock for successful implementation. We refer to this holistic framework as the "platform of platforms," which not only ensures alignment with business objectives but also facilitates the realization of optimal outcomes in the GenAI journey.

4. Monetizing Applications 

Monetization was a hot topic at Google Cloud Next, and enterprise organizations gravitate towards Google’s own Apigee for several reasons. Apigee’s robust API management platform offers versatile monetization models like pay-per-use and subscriptions, streamlined API productization, customizable developer portals, real-time revenue optimization analytics, seamless billing system integration, and strong security and compliance features.

For example, we recently designed and built a solution for monetizing an application that uses APIs to access and leverage industry data stored in a cloud-based data lake. This allowed for a scalable, serverless architecture, providing reliable and updated information for improved decision-making, identification of new opportunities, and early detection of potential problems. Apigee’s reputation as a trusted and reliable API management platform is backed by Google Cloud's expertise and infrastructure, further solidifying its appeal to enterprise customers.
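To make the pay-per-use model concrete, here is a small sketch of how a tiered rate card might be evaluated for a month of API traffic. The tier sizes and prices are hypothetical illustrations for this article, not Apigee’s actual billing configuration or API.

```python
# Hypothetical tiered pay-per-use rate card: cheaper unit prices kick in
# as monthly call volume grows. Tiers and prices are made-up examples.
TIERS = [
    (100_000, 0.0020),       # first 100k calls at $0.002 each
    (900_000, 0.0010),       # next 900k calls at $0.001 each
    (float("inf"), 0.0005),  # everything beyond at $0.0005 each
]

def monthly_charge(calls: int) -> float:
    """Total charge for a month's API call volume under the tiered card."""
    total, remaining = 0.0, calls
    for size, price in TIERS:
        used = min(remaining, size)
        total += used * price
        remaining -= used
        if remaining == 0:
            break
    return round(total, 2)
```

A consumer making 150,000 calls would pay for 100,000 calls at the first-tier price plus 50,000 at the second-tier price; subscription models replace this per-call computation with a flat periodic fee.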

5. Evolving the Intelligent Enterprise from POC to Proof of Value

Transitioning from Proof of Concept (POC) to Proof of Value (POV) marks a critical phase in adopting AI technologies, particularly in light of recent challenges. Many POCs implemented in the past year have faltered, and the pressure is on to demonstrate a return on AI investments.

Maturing your AI program from POCs to POV calls for a holistic approach that encompasses not only the capabilities of GenAI but also your foundational architecture, data integrity, and input sources. Maintaining data integrity throughout the AI lifecycle is paramount, as the quality and reliability of inputs significantly impact the efficacy of AI-driven solutions. Equally important is the evaluation and refinement of input sources, ensuring that they provide relevant and accurate data for training and inference purposes. 

Successful GenAI implementations are those that are reliable, responsible, and reusable, cultivating positive user experiences and deriving meaningful value for the enterprise. 

Responsibility means delivering accurate, lawful, and compliant responses that align with internal and external security and governance standards. Reliability shifts the focus to maintaining model integrity over time, combating drift, hallucinations, and emerging security threats with dynamic corrective measures. Finally, reusability emerges as a cornerstone, fostering the adoption of shared mechanisms for data ingestion, preparation, and model training. This comprehensive approach not only curtails costs but also mitigates risks by averting redundant efforts, laying a robust foundation for sustainable AI innovation.

How will you propel your AI strategy beyond ideas and concepts to enterprise-grade, production-ready AI and GenAI solutions? 

Let’s talk about it – get in touch for a 30-minute conversation with GlobalLogic’s Generative AI experts.
