
Editor’s note: This article was originally published in CXO Today on July 26, 2023.

CXOToday engaged in an exclusive interview with Avnish Singh, SVP – Head of Content Engineering, GlobalLogic.   

Please elaborate on how Content Engineering is revolutionizing the corporate and major enterprise (CME) industry through enhanced collaboration and knowledge management. 

Response: Major enterprises have large amounts of data sitting in silos created by geography, business function, scale, and other factors. Over the last several years, these enterprises have taken conscious and elaborate steps to make this data available to everyone across the organization. The Content Engineering practice plays a pivotal role by bringing in technologies and data experts who understand the data, consolidating it onto a common platform, and enabling enhanced collaboration by making it more searchable and accessible. 

An important aspect is making this data easily searchable so that employees can access relevant information quickly. This can be achieved by applying high-quality data tagging and labeling techniques when setting up the common data platform. Improving the search and accessibility of information across the organization enhances collaboration by ensuring there is always a single source of truth containing structured information on which employees can collaborate. 

This organized approach benefits large enterprises with dispersed systems: it helps break down silos and drives better knowledge management and faster decision-making. Furthermore, the advantages of well-organized data extend to market growth and customer service. Organizations with multiple product or service lines can provide a seamless experience to their customers through properly tagged, centrally accessible data. 

This can help drive customer sentiment and, in turn, retention. With the advent of generative AI, the role of content engineering teams becomes even more important. Data and domain experts will continue to enable organizations to create their own LLMs to power knowledge management and collaboration across the organization. 

How does GlobalLogic distinguish itself from other companies in the Content Engineering sector, and what sets it apart in terms of innovative approaches and implementations? 

Response: Our company DNA is product engineering, a capability that distinguishes us in our industry. This gives us a deep understanding of the complexities of the product lifecycle and its inherent dependencies on accurate and timely data. We recognize that such data is not merely incidental, but a crucial driver in shaping the customer’s experience and the organization’s evolution. The value it imparts is far-reaching, driving strategic decisions, refining product development, and propelling market positioning. 

Our approach to content engineering is profoundly influenced by our understanding of data in the product lifecycle. We go beyond mere data management and strive to unleash its full potential in terms of usability, accessibility, and impact. 

To us, data and content are not mere digits and letters but invaluable assets that can shape the trajectory of the organization and create rich, meaningful experiences for its customers. We ensure the integrity and validity of data at all stages of its lifecycle, from inception and collection to processing, storage, and deployment. Our stringent quality control measures guarantee the accuracy of data and the credibility of the content we present. By doing so, we ensure that our content is not just informative but also reliable, consistent, and geared toward delivering the intended impact. 

Not only that, but our content engineering services also drive digital transformation for clients, covering everything from concept to platform to insights. Data and content are vital throughout the product lifecycle, helping align the client’s journey with product evolution and ensuring true engineering value. Our expertise ranges from content digitization to machine learning, enabling diverse digital platforms. Through partnerships, we’ve built a strong cross-functional lab supporting design, development, and maintenance. 

Additionally, we provide full lifecycle digital product development services to our customers, covering requirement analysis, development, testing, and maintenance for complete customized solutions, as well as deployment and integration. These capabilities are reflected across multiple aspects such as Talent Acquisition understanding, Operations & Process excellence, Competitive Pricing/Volume Discounts/Innovation Fund, Content Localization and Multilingual capabilities, Data Security, and adoption of emerging technologies. 

Could you share examples of notable projects or case studies where GlobalLogic’s expertise in Content Engineering has significantly enhanced customer experience and achieved tangible business outcomes? 

Response: Some notable case studies that resulted in enhanced customer experience and tangible business outcomes for our customers: 

Case Study 1 – Enhancement of Navigation Maps for a leading ride-sharing platform company 

Challenges: Our client was using third-party commercial maps, which posed several business challenges. The third-party maps were not designed for ride-hailing and lacked features the service required, which led to a compromised experience for both drivers and customers due to routing and ETA issues. Additionally, map service downtime directly impacted revenue, and licensing costs skyrocketed as the business grew, adversely impacting the bottom line and margins. 

Business Outcome: Faced with these challenges, the customer engaged GlobalLogic to help create their own maps. We quickly set up a core team that understood the unique requirements of map creation for ride-hailing. The team delivered high-quality maps for 7 countries, processing 659,000 km of road geometry (adding 217,000 km of new roads) with an accuracy of 99.61% for road geometry and 99.70% for navigation. 

This led to an enhanced customer experience and delivered multiple benefits, such as: 

  • Enhanced Customer and Driver Experience through the improvement of overall route planning, excellent accuracy of pick-up/drop-off locations, and reduced navigation errors. 
  • Increased business value for customers and drivers through enhanced routing efficiency, optimized routes, and reduced travel time and costs. 
  • Expanded service coverage through the addition of new roads, giving access to new riders and driving business growth. 
  • Elimination of third-party map license and subscription costs, improving the bottom line and margins.

Case Study 2 – AI-driven remote detection of medical conditions for a leading healthcare provider 

Challenges: The customer, a leading provider of nutrition and therapeutic health products, launched a dermatology product for remote assessment of various skin-related diseases. Given the remote nature of the service, however, diagnosis was not very effective, and doctor-patient sessions ran much longer because the diagnosis process was slow. 

To solve these challenges, the customer wanted to use AI to identify various skin ailments. However, they did not have the training dataset required for this purpose. They tried to build it with their own teams, but the process was taking too long. This is when they engaged GlobalLogic to help train their AI model with an appropriate machine learning training dataset. 

Business Outcome: We deployed a team of experts that included AI content engineering specialists and doctors with MD dermatology expertise. This team developed two machine learning training datasets. The first was produced by AI content engineers, who annotated thousands of images provided by the customer, labeling each region of interest (ROI) for attributes such as image quality, body part, skin type/tone on the Fitzpatrick scale, and lesion detection. The team of doctors then evaluated the ROIs in this labeled dataset to identify the skin disorders. The customer used these two datasets to train their AI model to very good accuracy, making their product a great success in the market. 
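For illustration, a single record in such an annotation dataset might look like the sketch below; the field names and label values are hypothetical assumptions, not the customer’s actual schema.

```python
# Hypothetical shape of one annotated image record (illustrative only).
annotation_record = {
    "image_id": "img_000123",
    "image_quality": "acceptable",          # e.g. blurred / acceptable / good
    "body_part": "forearm",
    "fitzpatrick_skin_type": "III",         # skin type/tone on the Fitzpatrick scale
    "lesions": [
        {
            "roi": {"x": 412, "y": 118, "width": 96, "height": 80},  # region of interest
            "engineer_label": "lesion_detected",
            "dermatologist_diagnosis": "eczema",  # added in the second, doctor-reviewed pass
        }
    ],
}
```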

This led to significant customer experience improvements for both doctors and patients: session times were reduced by more than 50% in many cases, and AI-assisted identification of diseases made remote identification of skin disorders far more accurate. 

How does GlobalLogic maintain the quality and precision of the structured data it delivers through Content Engineering? Are there specific processes or methodologies in place to ensure accuracy? 

Response: GlobalLogic follows stringent quality processes comprising both manual and automated quality workflows to ensure the quality and precision of structured data. These workflows are customized based on the client’s requirements and expected deliverables. Our standard workflow includes: 

Data Validation: We implement comprehensive validation rules to ensure that data entered into the system meets predefined criteria. This includes format checks, range checks, and consistency checks to identify and reject invalid or inconsistent data. 

Data Cleansing: Once the data validation process is completed, we clean and correct data to remove errors, duplicates, and inconsistencies. We also use automated tools and scripts to identify and fix issues such as misspellings, incomplete records, or incorrect formatting. 
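As a minimal sketch of what such validation and cleansing rules can look like in practice (assuming a pandas DataFrame; the column names and rules below are illustrative, not a client deliverable):

```python
# A minimal sketch of rule-based validation and cleansing on a pandas DataFrame.
# Column names, the email pattern, and the age range are illustrative assumptions.
import pandas as pd

RE_EMAIL = r"^[^@\s]+@[^@\s]+\.[^@\s]+$"

def validate(df: pd.DataFrame) -> pd.DataFrame:
    """Flag rows that fail format, range, or consistency checks."""
    checks = pd.DataFrame(index=df.index)
    checks["bad_email"] = ~df["email"].astype(str).str.match(RE_EMAIL)  # format check
    checks["bad_age"] = ~df["age"].between(0, 120)                      # range check
    checks["bad_dates"] = df["end_date"] < df["start_date"]             # consistency check
    df["is_valid"] = ~checks.any(axis=1)
    return df

def cleanse(df: pd.DataFrame) -> pd.DataFrame:
    """Remove duplicates and normalize obvious formatting issues."""
    df = df.drop_duplicates(subset=["customer_id"])
    df["email"] = df["email"].str.strip().str.lower()
    df["name"] = df["name"].str.strip().str.title()
    return df
```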

Recommended reading: Continuous Testing: How to Measure & Improve Code Quality

Documentation and Metadata: We maintain comprehensive documentation and metadata about customer structured data. This includes recording the source, meaning, and context of each data element. Clear documentation helps prevent misinterpretation and ensures accurate usage of the data. 

Regular Auditing: Periodic audits of customer data are conducted to identify and rectify any inconsistencies, inaccuracies, or missing values. This involves comparing data across different sources, verifying data against known benchmarks, or performing statistical analyses to identify outliers or anomalies. 

Quality Assurance System: GlobalLogic has an in-house Quality Assurance solution that is tailored to each customer’s requirements. This system can be used with any type of process workflow. 

Regular Data Backups: Regular data backups are performed to ensure that in case of any data loss or corruption, we can restore the data to its previous state. This minimizes the risk of losing valuable information and allows customers to maintain the integrity of their structured data. 

Continuous Improvement: Our focus remains on continuous monitoring and improvement of customer data management processes. Feedback from users is collected to promptly address any data quality issues, and we regularly review and update customer data quality procedures to adapt to changing requirements and emerging best practices.   

What are the primary technologies and tools utilized by GlobalLogic in its Content Engineering solutions, and how do they contribute to providing comprehensive support to customers? 

Response: We leverage multiple in-house content engineering solutions and third-party solutions to deliver services to our customers. These are divided into the following categories:

Data Extraction and Web Scraping: We have built our Web Scraping tools using Python, BeautifulSoup, and Scrapy for extracting structured data from websites. 
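A minimal sketch of this kind of extraction with requests and BeautifulSoup is shown below; the URL and CSS selectors are placeholder assumptions rather than a real client site.

```python
# A minimal web-scraping sketch using requests + BeautifulSoup.
# The URL and the CSS selectors describe an assumed page structure.
import requests
from bs4 import BeautifulSoup

def scrape_products(url: str) -> list[dict]:
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    records = []
    for card in soup.select("div.product-card"):  # assumed page structure
        records.append({
            "name": card.select_one("h2.title").get_text(strip=True),
            "price": card.select_one("span.price").get_text(strip=True),
        })
    return records

if __name__ == "__main__":
    print(scrape_products("https://example.com/catalog"))  # placeholder URL
```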

Extract, Transform, Load (ETL): Our Inhouse ETL solution provides features for extracting, transforming, and loading structured data from various sources into a target database or data warehouse. 

Optical Character Recognition (OCR): Leveraging third-party OCR tools such as Tesseract and PDFMiner helps to extract structured data from scanned documents or images by recognizing and converting text into machine-readable formats. We also have our in-house tool named Dark Data Solution. 
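The in-house Dark Data Solution is proprietary, but the basic OCR step can be sketched with the open-source Tesseract engine via pytesseract (the file name below is a placeholder, and the Tesseract binary must be installed):

```python
# A minimal OCR sketch: convert a scanned page image into machine-readable text
# using Tesseract via pytesseract. Illustrative only; not the Dark Data Solution.
from PIL import Image
import pytesseract

def extract_text(image_path: str) -> str:
    """Run OCR on a scanned document image and return the recognized text."""
    return pytesseract.image_to_string(Image.open(image_path))

if __name__ == "__main__":
    print(extract_text("scanned_invoice.png"))  # placeholder file name
```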

Data Quality and Precision: We leverage OpenRefine (formerly Google Refine), Google Sheets (with Apps Script), and a few other tools to help clean and standardize structured data. These tools automate tasks like removing duplicates, correcting formatting issues, and reconciling inconsistencies. 

Labeling, Annotation & Classification: GlobalLogic has built its own tool, LabelLogic, which caters to all types of training data requirements for next-generation ML models through labeling, annotation, and classification. 

We leverage multiple accelerators, including Project Management Tool, Data Collection App, SLA Management Tool, and Auto Redaction of PI, while custom developing additional accelerators as needed. Our expertise in various tools and technologies like Python, Scrapy, Selenium, AWS, Google Cloud, Docker, Git, and more, further enhances our capabilities in delivering efficient solutions.


Enterprises envision a cutting-edge new system as their future state: the outdated system is phased out, the new system takes over, and legacy data is managed while new data is seamlessly integrated. In a successful digital transformation, this new system also garners widespread approval from its broad target audience. 

Sounds great, right? Unfortunately, this isn’t always a smooth process, and there’s no guarantee of a successful outcome. According to McKinsey, a staggering 70% of digital transformations end in failure.  This statistic paints a concerning picture, particularly when we consider that a significant portion of these failures can be attributed to unsuccessful migration endeavors. 

It’s no wonder business leaders tend to get the “heebie jeebies” – a slang term meaning a state of nervous fear and anxiety – when it comes to migration. Often, migrations suffer from poor planning or exceed their allotted timeframes. In this article, we explore four different types of migration and share strategies to alleviate these apprehensions and combat the factors that can interfere with a migration’s success. 

(Note: Within the context of this article, migration means more than just data transfer; it encompasses a comprehensive system transition.)

Types of Migration

First, let’s explore four types of migration your organization might consider as part of its digital transformation.

Conventional Data Migration

Conventional data migration involves exporting data from source systems into flat files, followed by the creation of a program to read these files and subsequently load the data into the target system. It represents a more compartmentalized approach, suitable for scenarios where the disparity between the source and target data schema is minimal and the volume of data to be migrated remains relatively modest.
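A minimal sketch of this pattern is shown below, with SQLite standing in for the real target system and an assumed column mapping for the exported flat file:

```python
# A minimal sketch of conventional flat-file migration: read an exported CSV,
# transform each row, and bulk-load it into a target database. SQLite stands in
# for the real target system; the column names are illustrative assumptions.
import csv
import sqlite3

def load_prescriptions(csv_path: str, db_path: str) -> int:
    rows = []
    with open(csv_path, newline="", encoding="utf-8") as f:
        for rec in csv.DictReader(f):
            # transform: rename fields and normalize formats for the target schema
            rows.append((rec["rx_id"], rec["pet_name"].strip().title(), rec["drug_code"]))
    con = sqlite3.connect(db_path)
    with con:
        con.execute(
            "CREATE TABLE IF NOT EXISTS prescriptions "
            "(rx_id TEXT PRIMARY KEY, pet_name TEXT, drug_code TEXT)"
        )
        con.executemany("INSERT OR REPLACE INTO prescriptions VALUES (?, ?, ?)", rows)
    con.close()
    return len(rows)
```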

Here’s a real-life scenario in which an online pet pharmacy enterprise transitioned from an existing pharmacy vendor to a new one. 

The groundwork was meticulously laid out for the new system, complete with a switch poised to be activated once the new system was infused with data. During the new pharmacy vendor’s launch, a migration task involving approximately 90,000 prescriptions from the old vendor’s database to the new one awaited. While not an overwhelming data load, it was substantial enough to warrant a deliberate decision. Consequently, the choice was made to employ the conventional data migration method.

The data was extracted from the previous vendor and handed over to our team. We meticulously refined the information, converting it into a format compatible with the new vendor’s system for seamless import. This comprehensive procedure was practiced and refined over the span of several months. The planning was executed with exact precision, carefully scheduling both full data feeds and incremental data updates. To ensure meticulous execution, we crafted a release checklist that enabled us to monitor and manage every step of the migration journey. Remarkably, the entire process unfolded seamlessly, maintaining uninterrupted service for the online pet pharmacy store’s end users.

Recommended reading: Easing the Journey from Monolith to Microservices Architecture

Custom Migration

In some cases, a migration process can become exceptionally intricate, demanding the establishment of a dedicated system solely for this purpose. This specialized software system, crafted specifically for the migration endeavor, follows its own distinct lifecycle and will eventually be retired once its mission is fulfilled. 

Within the dynamic realm of the online travel industry, one of our clients is gearing up for a monumental migration undertaking. The intricacy of the issue at hand and the sheer volume of data involved necessitated the adoption of a highly customized service. 

This bespoke solution was designed with a singular objective: to stage and subsequently transfer the data to the new system at the precise moment of user activation. 

The sheer scale of this migration project is staggering, with the number of records to be migrated reaching the monumental figure of 250 million.

The existence of diverse source systems stands out as a key driver behind the adoption of this distinctive migration approach. This tailor-made service functions as a robust engine, adeptly assimilating data from various sources and meticulously readying it for integration into the staging database. Subsequently, the shift to the new system becomes a seamless transition, executed during runtime upon activation request. This precision-engineered and finely tuned custom solution sets the stage for the client’s journey toward a more enhanced operational landscape.

Data Migration Aided by Technology

Now, let’s envision taking the conventional data migration process and enhancing it with the power of modern automation through cutting-edge technology stacks. Picture the benefits of having tools seamlessly handle error handling, retries, deployments, and more. The prospect of achieving migration with such automated prowess might appear enticingly straightforward. However, there’s a twist. The success of this approach hinges on meticulous planning and agility, qualities that tools can aid in monitoring but ultimately require the deft touch of a skilled practitioner.

Several cloud services can assist in automating the various steps of migration. While I’m leaning toward an AWS PaaS-first approach here, it’s important to note that other leading cloud providers offer equivalent tools that are equally competitive.

The key components within such a migration system include:

  • AWS Glue: AWS Glue serves as a serverless data integration service, simplifying the process of discovering, preparing, and amalgamating data.
  • AWS S3: AWS Simple Storage Service (S3) proves invaluable for storing all ETL scripts and logs.
  • AWS Secret Manager: AWS Secret Manager ensures secure encryption and management of sensitive credentials, particularly database access.
  • AWS CloudWatch: CloudWatch Events Rule plays a pivotal role in triggering scheduled ETL script execution, while CloudWatch Logs are instrumental in monitoring Glue logs.
  • AWS DMS: AWS Database Migration Service (AWS DMS) emerges as a managed migration and replication service, enabling swift, secure, and low-downtime transfers of database and analytics workloads to AWS, with minimal data loss.

With the utilization of these services, let’s delve into how we can effectively execute the migration process:

This presents a straightforward workflow, leveraging AWS Glue, to facilitate data transfer from source to target systems. A crucial requirement for the successful execution of this workflow is establishing VPC peering between the two AWS accounts. It’s worth noting that there could be instances where client infrastructure constraints hinder such access. In such cases, it’s advisable to collaborate closely with the infrastructure team to navigate this challenge.

The process unfolds as follows: data undergoes transformation and finds its place within the stage database. Once the data is primed for activation, it is then seamlessly transferred to the target system through the utilization of AWS DMS.
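As a rough sketch of how these two steps could be kicked off programmatically with boto3 (the Glue job name and DMS task ARN are placeholders, and a real pipeline would add error handling and status polling):

```python
# A minimal boto3 sketch of the two steps described above: run a Glue job to
# populate the stage database, then start a DMS replication task to move the
# staged data into the target system. Identifiers are placeholder assumptions.
import boto3

glue = boto3.client("glue")
dms = boto3.client("dms")

def run_stage_load(job_name: str) -> str:
    """Kick off the Glue ETL job that populates the stage database."""
    return glue.start_job_run(JobName=job_name)["JobRunId"]

def replicate_to_target(task_arn: str) -> None:
    """Start the DMS task that copies staged data to the target system."""
    dms.start_replication_task(
        ReplicationTaskArn=task_arn,
        StartReplicationTaskType="start-replication",
    )

if __name__ == "__main__":
    run_id = run_stage_load("stage-db-etl")                          # assumed Glue job name
    replicate_to_target("arn:aws:dms:...:task:stage-to-target")      # placeholder ARN
```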

While these tools undoubtedly streamline our development efforts, it’s essential to grasp how to harness their full potential. This aspect represents the simpler facet of the narrative; the true complexity arises when we engage in data validation post-migration.

On-Premises to Cloud Migration

This migration is the epitome of complexity – a quintessential enterprise scenario involving a shift from on-premise servers to cloud servers. The entire process is facilitated by a plethora of readily available solutions offered by cloud vendors. A prime example is the AWS Migration Acceleration Program (MAP), an all-encompassing and battle-tested cloud migration initiative built on experience migrating myriad enterprise clients to the cloud. MAP equips enterprises with cost-reduction tools, streamlined execution automation, and a turbocharged path to results. 

Our collaboration extended to a leading authority in screening and compliance management solutions, embarking on a transformative journey. Among the ventures undertaken for this partner was the formidable Data Migration and 2-Way Sync project. The essence of this endeavor was to engineer a high-performance two-way synchronization strategy capable of supporting both the existing features of the On-Premises solution and those newly migrated to a novel, service-oriented framework on Azure. Furthermore, this solution was compelled to gracefully manage substantial volumes of binary content.

Take a look at the tech stack used for this migration:

Our solution comprised these integral components:

  • ACL: A legacy component tasked with detecting alterations within the on-prem database and subsequently triggering events that are relayed to the cloud.
  • Upstream Components: These cloud-based elements encompass a series of filtering, transforming, and persisting actions applied to changes. They are meticulously designed to anchor the modifications within the entity’s designated domain in the cloud. Moreover, these components generate replication events that can trigger responsive actions as required.
  • Replication Components: Positioned in the cloud, these components specialize in receiving the replication events. They then proceed to either store the data or execute specific actions in response to the received events.
  • MassTransit: In scenarios where cloud-induced changes necessitate synchronization back to the on-prem database, MassTransit steps in. This tool plays a pivotal role in reading all events generated in the cloud, forwarding them to downstream components, thus orchestrating the synchronization of changes.

Collectively, these components form a coherent framework that orchestrates the intricate dance of data synchronization between on-premises and cloud-based systems.
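A framework-agnostic sketch of the upstream flow, stripped of the actual product components, might look like the following; the event shape and the injected store/publish callables are assumptions for illustration only.

```python
# A minimal sketch of an upstream handler: filter a change event coming from the
# on-prem side, transform it, persist it to the cloud domain store, and emit a
# replication event. Not the actual product components; interfaces are assumed.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ChangeEvent:
    entity: str        # e.g. "candidate", "order"
    entity_id: str
    payload: dict
    source: str        # "on-prem" or "cloud"

def handle_change(event: ChangeEvent,
                  save_to_domain: Callable[[ChangeEvent], None],
                  publish_replication: Callable[[ChangeEvent], None]) -> None:
    # filter: ignore echoes of changes that originated in the cloud
    if event.source == "cloud":
        return
    # transform: normalize the payload for the cloud domain model
    event.payload = {k.lower(): v for k, v in event.payload.items()}
    # persist: anchor the change in the entity's designated cloud domain
    save_to_domain(event)
    # emit a replication event so other domains can react if required
    publish_replication(event)
```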

The achievement of two-way synchronization hinged on the utilization of key features within our product. These components included:

  • Table-to-Table Data Synchronization: Our solution facilitated seamless data synchronization between on-premise and cloud databases in either direction. This process was orchestrated via an event-driven architecture, ensuring a fluid exchange of information.
  • Change Capture Service for On-Prem Changes: In cases where alterations occurred on the on-premise side, a change capture service meticulously detected these changes and initiated corresponding events. These events were then synchronized to the designated home domain, simultaneously triggering notifications for other domains to synchronize their respective data, if deemed necessary.
  • Cloud-Initiated Changes and Data Replication: Conversely, when changes manifested in the cloud, our solution orchestrated their transmission to the on-premise data replication service. This was achieved through a streamlined event-driven approach.

While much ground can be explored in the realm of on-premise to cloud migration, ongoing innovation, such as the integration of tools like CodeGPT, is consistently expanding the avenues for executing migrations. However, to stay focused on the core subject matter at hand, let’s get into the tips that can help alleviate the anxieties associated with these migration endeavors.

Tips for Migration Success

How can you ensure your next migration is successful? Don’t miss these crucial opportunities to simplify and combat the complexities of your migration.

1. Plan for Shorter and Early Test Cycles

Just as integrating and commencing testing early is pivotal in microservices architecture, kickstart the migration process early within the testing cycle. Incorporate numerous testing cycles to optimize the migration process; our recommendation is to run five or more. It is of utmost importance that these cycles unfold in near-real-time, production-like environments, with data closely resembling the production setting. Morphing tools can be employed to transplant sanitized production data into a staging environment. 

Recommended reading: Continuous Testing – How to Measure & Improve Code Quality

2. Formulate a Comprehensive Validation Strategy

Leave no stone unturned when validating the migrated data. Thorough validation is essential to prevent financial losses or the risk of alienating customers due to a subpar post-migration experience. Here is an exemplary set of validation steps tailored for the post-migration scenario:
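As one concrete illustration of such a step, the sketch below compares row counts and per-table checksums between source and target databases. SQLite connections and table names are placeholder assumptions, and a full strategy would add field-level and business-rule checks.

```python
# A minimal post-migration validation sketch: compare row counts and content
# checksums per table between source and target. Illustrative only; connection
# setup and table names are placeholder assumptions.
import sqlite3
import hashlib

def table_fingerprint(con: sqlite3.Connection, table: str) -> tuple[int, str]:
    rows = con.execute(f"SELECT * FROM {table} ORDER BY 1").fetchall()
    digest = hashlib.sha256(repr(rows).encode("utf-8")).hexdigest()
    return len(rows), digest

def validate_table(source_db: str, target_db: str, table: str) -> bool:
    with sqlite3.connect(source_db) as src, sqlite3.connect(target_db) as tgt:
        src_count, src_hash = table_fingerprint(src, table)
        tgt_count, tgt_hash = table_fingerprint(tgt, table)
    if src_count != tgt_count:
        print(f"{table}: row count mismatch ({src_count} vs {tgt_count})")
        return False
    if src_hash != tgt_hash:
        print(f"{table}: content checksum mismatch")
        return False
    return True
```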

3. Initiate with Beta Users

Start the migration process by selecting a group of Alpha and Beta users who will serve as pilots for the migrated data. This preliminary phase aids in minimizing risks in the production systems. Handpick Alpha and Beta users carefully to ensure a smooth transition during live data migration. Alpha users constitute a smaller subset, perhaps around a hundred or so, while Beta users encompass a slightly larger group, potentially comprising a few thousand users. Eventually, the transition is made to a complete dataset of live users.

4. Anticipate Poison Pills

From the outset, plan for poison pills – records in Kafka that consistently fail upon consumption due to potential backward compatibility issues with evolved message schemas. Regularly checking for poison pills in production is a proactive measure to avert last-minute obstacles. Here’s a workflow that illustrates how to address poison pills:
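One minimal way to implement such a workflow, assuming the kafka-python client, is to park records that fail deserialization on a dead-letter topic so the consumer keeps moving; the topic names, servers, and downstream handler below are placeholders.

```python
# A minimal poison-pill handling sketch with kafka-python: records that fail
# JSON deserialization are published to a dead-letter topic for later inspection
# instead of blocking the consumer. Topic names and servers are placeholders.
import json
from kafka import KafkaConsumer, KafkaProducer

def process(event: dict) -> None:
    """Placeholder for the real downstream handler."""
    print("processed", event.get("id"))

consumer = KafkaConsumer(
    "migration-events",
    bootstrap_servers="localhost:9092",
    group_id="migration-workers",
    value_deserializer=lambda raw: raw,   # defer JSON parsing so failures can be caught
)
producer = KafkaProducer(bootstrap_servers="localhost:9092")

for message in consumer:
    try:
        event = json.loads(message.value)  # may fail for incompatible message schemas
        process(event)
    except (json.JSONDecodeError, KeyError) as err:
        # poison pill: park it on a dead-letter topic and keep consuming
        producer.send("migration-events.dead-letter", message.value)
        print(f"Skipped poison pill at offset {message.offset}: {err}")
```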

5. Craft a Robust Rollback Strategy

Collaborate with clients to establish a comprehensive rollback strategy, ensuring that expectations are aligned. Conduct mock-run tests of the rollback strategy to preemptively address potential emergencies, as this could be the ultimate recourse to salvage the situation.

6. Seek Assistance When Available

If feasible, consider enlisting paid support to bolster your efforts. For instance, our client benefitted from licensed MongoDB support, utilizing allocated hours to enhance system performance and migration scripts. Such support often introduces a fresh perspective and intimate knowledge of potential challenges and solutions, making it invaluable during the migration process.

7. Incorporate Early Reviews

Be proactive in seeking reviews of the migration architecture from both clients and internal review boards. This diligence is vital to identify any potential roadblocks or discrepancies before they pose real-world challenges. By preemptively addressing issues raised during reviews, you can avoid last-minute complications, such as instances when a migration plan contradicts client policies, necessitating adjustments and improvements.

Conclusion

The vision of a seamless transition to a cutting-edge new system is an alluring prospect for enterprises, promising improved efficiency and enhanced capabilities. However, the journey from outdated systems to a technologically advanced future state is often fraught with challenges, and the alarming statistic that 70% of digital transformations end in failure, as highlighted by McKinsey, is a stark reminder of the complexities involved. Among the key contributors to these failures are unsuccessful migration endeavors, which underscore the critical importance of addressing migration apprehensions.

Indeed, the term “heebie jeebies” aptly encapsulates the anxiety that often accompanies migration processes. The anxiety can be attributed to a range of factors, including poor planning, exceeded timeframes, and unexpected roadblocks. Yet, as this article has explored, there are proven strategies to counter these challenges and achieve successful migrations. By embracing approaches such as shorter and early test cycles, comprehensive validation strategies, staged rollouts with Beta users, preparedness for potential obstacles like poison pills, and crafting effective rollback plans, enterprises can greatly mitigate the risks and uncertainties associated with migrations. Seeking expert assistance and incorporating early reviews also play crucial roles in ensuring a smooth migration journey.

The diverse types of migration covered in this article, from conventional data migration to custom solutions and on-premise to cloud transitions, demonstrate the range of scenarios and complexities that organizations may encounter. By diligently adhering to the strategies outlined here, enterprises can navigate the intricate dance of data synchronization and system transitions with confidence. As the digital landscape continues to evolve, embracing these best practices will not only help ease the “heebie jeebies” but also pave the way for successful digital transformations that empower organizations to thrive in the modern era.

Reach out to the GlobalLogic team for digital advisory and assessment services to help craft the right digital transformation strategy for your organization.


In planning a digital transformation, the CTO of an organization has many decisions to make to reach the final state. In order to achieve the overarching goal of sunsetting a legacy monolithic system, one such decision is whether to go with a brownfield or greenfield approach. 

But that is the last stage, and we have a long road to travel to reach that place. There is a long lapse before the successful sunset, and it is perfectly acceptable to have the systems working midway. Typically, this means brownfield systems using much of the legacy monolith system, wrapped with a modern stack. 

In this article, we’ll share guidance on easing the journey from monolith to a modern system, developing a North Star architecture, and how to adapt if needed at different stages of your digital transformation.

Getting Started: Evaluating Your Options

Once you’ve decided to modernize, there are different paths you can take: move to a completely new system, wrap or refactor your existing solution, or run the old and new systems side by side.

Figure: Modernization Choices – GL POV

A lot of the thought leadership on this topic focuses on successfully transforming architecture and new systems. However, there is very little discussion about midway systems. If anything, people tend to talk about the older monolith/legacy systems and the amount of baggage they carry. 

Recommended reading: Digital Transformation 101: Leveraging Technology to Drive Business Growth & Sustainability

But midway systems are complex and come with a lot of baggage. They can be labor-intensive, and there are a lot of unknowns that can blow the budget. We’ve worked with many clients during this phase, when their systems are midway, and have learned important lessons to help make this a smoother process. 

Many of these clients have started creating a greenfield system but modified the goal and decided to stay with the midway system for good. There are many reasons this may happen: 

  • External threats or change. The pandemic is a great example where organizations found themselves having to invest to keep up with an upsurge in orders during or post-pandemic.
  • Lack of adoption. During testing or MVP rollouts, the client may struggle to gain acceptance and adoption from users. 
  • Cost factors. The budget runs out, and rather than having no system, clients stick to the half-prepared system, which still functions. This keeps the cash flowing, and the client decides to lose the battle to win the war at a later point in time.

We can indeed modernize the stack beneath before disrupting the actual user experience. In this case, the monolithic system is not fully strangled: it appears to still be alive, even though the engine underneath has been replaced. The harsh reality of such systems then hits hard – they are more complex than before, with even more urgency to get to the other side.

The North Star Architecture

Moving towards North Star architecture is a Herculean effort and requires great perseverance. What’s more, a mid-way system needs more effort to maintain. So how do we move away from such a faux pas? 

Here are the steps that have worked for us in the past. A visionary enterprise architect with a strong understanding of the old and new systems can help to come up with a blueprint to achieve this. Use these simple steps to chalk out your blueprint.

1. Continue with such mid-way systems with elan.

Since the mid-way systems are not going away too soon, try to make life more bearable with such systems. Invest in transition technology which will make mid-way systems simpler to operate and maintain.

We partnered with a global leader in veterinary practice software, products, and professional services, a $4B-revenue company serving over 10,000 practices in North America and roughly 100K global supply chain customers, to create a future-state, microservice-based Global Prescription Management (GPM) platform for international expansion. This was a greenfield approach to solve the struggles of the existing monolithic system.

Recommended reading: Benefits of Total Experience (TX) Strategy in Modernizing Applications

The transition to the new system takes time, and we cannot stop business as usual; that is what funds the new system. So we consciously decided to gear up for the new system with pipelines in place to migrate the data two ways. By “two ways,” we mean the data gets synced from the new system to the old and vice versa. 

This essentially means that data is duplicated and maintained in sync for the sake of taking the step towards the transition to a new system. So, it looked something like this:

Figure: “Two-way” data sync

This pipeline will eventually be sunset, but it is worth the effort to keep the new system in sync with the old system and vice versa. The advantage is a breather in the transition: the enterprise can assess the remaining tasks needed to move to the new system, adjust budgets and timelines, and generally make life a bit easier because things are still running, albeit at higher cost. The good news is the shop is still making money.

2. Build the orchestration layer.

The next step is to build an orchestration layer that routes traffic from existing front-end applications to the new backend systems. This layer ensures users of the system continue to have a seamless experience: in the background the old system has been replaced, but users are not impacted. This step can be executed with different deployment strategies, such as blue-green deployment. And since the data is always synced in the background, you can switch back in case of any issues with the new system. 

Figure: The Orchestrator Layer

While creating the set of APIs in the orchestration layer, architect it with the future in mind: the orchestration layer remains in the new system as well and is not throwaway code. If that requires a wrapper layer or an adapter for the legacy front end, let it be. The wrapper will be throwaway code, but it will still live its short life to glory, because it keeps the design of your orchestration clean and pristine.
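A minimal sketch of what one such orchestration endpoint could look like, here using Flask with an environment-variable feature flag; the backend URLs, flag source, and route are illustrative assumptions, not the actual GPM design.

```python
# A minimal orchestration-layer sketch: route a call to either the legacy
# monolith or the new microservice backend behind one API. With data synced in
# the background, the flag can be flipped back if the new system misbehaves.
import os
import requests
from flask import Flask, jsonify

app = Flask(__name__)

LEGACY_BASE = "http://legacy-monolith.internal/api"    # placeholder URL
MODERN_BASE = "http://prescriptions-svc.internal/api"  # placeholder URL

def use_modern_backend() -> bool:
    # In practice this would come from a feature-flag service or per-cohort config.
    return os.getenv("ROUTE_TO_MODERN", "false").lower() == "true"

@app.route("/prescriptions/<rx_id>")
def get_prescription(rx_id: str):
    base = MODERN_BASE if use_modern_backend() else LEGACY_BASE
    upstream = requests.get(f"{base}/prescriptions/{rx_id}", timeout=10)
    return jsonify(upstream.json()), upstream.status_code

if __name__ == "__main__":
    app.run(port=8080)
```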

3. Bite the silver bullet and migrate to a new system.

There is no easier way to do this than bite the silver bullet now. Create the new modern UI with your technology of choice and make the modern UI talk the same language as the orchestrator service. Phase out the old system in batches. Do an alpha test run, then select a cohort of beta users to migrate and finally the complete set of users can be migrated. This gives enough runway for the final launch.

Sometimes, migration to a new system requires more planning and effort than building greenfield applications. Such a big problem has to be broken down into smaller problem statements that are fixed one at a time. It may be as simple as an on-premise to cloud migration, or it may involve complex business rules applied during the transformation. 

For instance, in our client’s case, we built a custom microservice whose sole responsibility was to ingest data from all the disparate sources and place it in the final destination. This service was single-use but easy to adapt to the business rules of the transformation.

Figure: The final go live using custom Migration Engine

Conclusion

The legacy monolith has lived its life, and strangling the entire monolith is a humongous task. There are solutions to help ease the journey towards the North Star architecture. It is wise to invest in them early and use your budget to make lives simpler. 

These mid-way system patterns are there to simplify your journey, and making use of them will help you avoid missteps while strangling the monolithic system. GlobalLogic’s Digital Advisory & Assessment can help you in your journey from the monolith to a digitally transformed architecture: choosing the right path, simplifying steps along the way, and reaching your end goal with a successful, sustainable solution to take your business forward. 


Together, IoT, AI, and cloud computing have the synergistic potential to increase efficiency, optimize performance, and reduce energy consumption. How can organizations use these technologies to address the persistent issue of the CO2 footprint created by large-scale IT infrastructure? 

In this whitepaper, see how companies are leveraging sensors and IoT devices to collect data on energy usage and environmental factors, then having AI algorithms analyze the data and provide insights for making more informed decisions about energy usage. You’ll explore energy-saving services from major cloud providers and learn about: 

  • Harnessing sensor data and AI algorithms for informed energy decisions.
  • Optimizing workloads for energy efficiency.
  • Leveraging AI platforms for energy-saving solutions.
  • Utilizing IoT to improve energy production and consumption.
  • Embracing sustainability principles and guidelines in software architecture.
  • How companies are supporting sustainable solutions in diverse industries.

The paper highlights sustainable practices in cloud computing and discusses the impact of IoT in enhancing energy efficiency, from smart grid sensors to device connectivity in smart cities. It’s time to think about how the convergence of IoT, AI, and cloud computing can help you achieve energy efficiency and sustainability. You’ll find practical lessons and guidelines to help guide informed decisions, optimize energy consumption, and contribute to a greener future.

Want to learn more? Get in touch with GlobalLogic’s Digital Assessment & Advisory team to begin mapping your path to a more sustainable future.

Unlock the potential of digital transformation with hyperautomation. Discover how integrating digital technology across your organization can help you enhance efficiency, reduce costs, and adapt to future challenges.

In this whitepaper, we explore the pivotal role of hyperautomation and how it supports the trend towards digitization. You’ll learn about:

  • Essential tools like BPM systems, RPA software, process templating platforms, process mining tools, and decision management suites. 
  • The importance of digital transformation in integrating technology across all organizational functions.
  • How integration tools like APIs, ESBs, and iPaaS enable seamless connectivity and enhance the effectiveness of hyperautomated processes. 
  • Process Mining and Task Mining technologies, and where they’re used in hyperautomation to improve business processes and increase efficiency.
  • How various interfaces, integrations, and tools influence the success of digital transformation.
  • The role that conversational AI platforms can play in a hyperautomation strategy.

You’ll also find a use case example from the financial services industry demonstrating how specific tools can be applied to achieve hyperautomation.

In the rapidly evolving manufacturing and industrial landscape, digital transformation is crucial for survival. Discover the top challenges in tool and equipment management and explore the Smart Toolbox system, a groundbreaking solution researched and developed by GlobalLogic Ukraine. 

In this whitepaper, explore its high-level features and architecture, hardware and software components, and how the Smart Toolbox solves common challenges in industrial tool management.

You’ll learn about:

  • The impact of digital transformation on the manufacturing and industrial sectors.
  • Key attributes of a solid tool management system.
  • How tools and equipment management helps ensure product and service quality for industrial organizations.
  • New business opportunities that can be unlocked by implementing the Smart Toolbox system.
  • Next steps and future developments for the Smart Toolbox research and development.

Want to learn more? Get in touch with GlobalLogic’s manufacturing and industrial digital product engineering experts and let’s see what we can do for you.

While ideating any software, functionality and its implications on the business and revenue are typically major focus areas. Functionalities are further broken down into requirements, then features, user stories, and integrations. But when it comes to actually developing that software, another mindset takes over. The key focus on the architect’s mind is more often, “What are the non-functional requirements here?” 

Non-functional requirements (NFR) are the criteria or parameters that ensure the product delivers on the business requirements – speed, compatibility, localization, and capacity, for example. While functional requirements define what the app should do, NFRs define how well it should perform and meet user expectations. 

The Importance of NFRs 

NFRs are an essential aspect of software development and act as base requirements around which the system architecture is designed. System architecture designed around a well-established NFR provides a road map for designing software architecture, implementation, deployment, and post-production maintenance and updates. 

Many known NFRs were defined before the first mobile application was developed, making it essential that you contextualize these NFRs from a mobile development point of view. But which of these non-functional requirements are applicable to mobile application development, and what considerations must you keep in mind when planning your own mobile app project?

In this post, we’ll explore how NFR impacts mobile application design, development, and support, looking at each requirement and what it involves in turn.

NFRs Through the Lens of Mobile App Development

These are the non-functional requirements to consider when designing mobile applications. Some are applicable only to mobile, while others vary only slightly from web app development NFRs.

Accessibility 

Accessibility as an NFR refers to how the app supports users with special needs or is used under specific circumstances, for example by a user with low vision. While there are many accessibility requirements to meet in mobile application design, using voice commands to control and navigate the application is a particularly important one. Accessibility can also be increased by adding special gestures, such as double tap and long press, to perform essential functions. 

Adaptability 

In the context of mobile application development, if an application meets all its functional requirements under the following conditions, it meets the adaptability NFR: 

  • Support for a wide range of screen resolutions. 
  • Support for a wide range of manufacturers (on Android). 
  • Support for the widest practical range of backward-compatible OS versions. 

Adaptability can also be an NFR for ensuring the application runs smoothly under low bandwidth conditions. 

Recommended reading: Selecting a Cross-Platform Solution for Mobile Application Development

Availability 

If a mobile application is directly dependent on backend APIs and services to execute its functions, its availability depends on the availability of those backend services. However, in a mobile context, availability as an NFR pertains to executing whatever functions are possible even when the backend API is not available. For example, can the user perform an operation offline that is synchronized later once services are back online? 

Compliance 

Compliance in mobile applications largely revolves around the protection and privacy of user data, with requirements set out and enforced by regulations such as HIPAA and GDPR. If the privacy and security NFRs are achieved on the backend and in the mobile application, in most cases compliance is also achieved (unless there are specific compliance requirements). 

Data Integrity 

In mobile apps, data integrity involves the recovery of data for the smooth execution of the application, with the expectation that the app will recover and retain data as intended when users change the device, a new version of the application is installed, or the user performs operations in offline mode. 

Data Retention 

In mobile applications, it is expected that data is synchronized with backend services, and for that reason, it’s generally not advised to keep large-size persistent data locally. “No data retention” as an NFR applies to mobile applications. However, when there is a requirement to keep extensive data in local persistent storage, the volume of data – not the duration – should be the driving factor for the data retention NFR.

Deployment 

Mobile application deployment occurs mostly in stores provided by Android and Apple, which follow their own process to make applications available. Updates are not available to end users immediately as a result. Deployment as an NFR in the mobility context (apart from its basic specifications) is focused on informing users about the availability of new versions and stopping application usage if mandatory updates are not installed. Both the App Store and Play Store provide configurations to prioritize mandatory updates. Still, the system can be designed to enforce mandatory updates for a smooth application experience to the end user. 

Efficiency 

Unlike web or backend applications, mobile applications run on mobile devices with limited resources such as memory. Given that they are also battery-powered, efficiency is an important NFR. It is a must for the mobile application to run efficiently, with a low memory footprint and battery consumption.

Privacy 

Privacy is an important aspect of mobile applications. In terms of privacy NFRs, the following are important considerations: 

  • Media files containing user-specific data should be stored in the application’s private storage and encrypted. 
  • Media captured from the application should not be shared directly. 
  • Copying text from the application should not be allowed. 
  • Screenshots should not be allowed. 

Reporting and Monitoring 

Reporting and monitoring NFRs are crucial from a support and maintenance perspective. Since mobile applications are installed on users’ devices, it’s difficult for the support team to have direct interaction, screen share sessions, or access local log files. Remote logging and analytics solutions such as Firebase or Countly are needed for that reason. These solutions can capture events, user actions, and exceptions, and can help to analyze application usage patterns. 

Security 

Privacy and security are interlinked and in terms of security NFRs, the following are important considerations: 

  • The application should be signed with appropriate private certificates, with a policy guiding certificate storage and usage. 
  • The application should not install or run on unauthorized or tampered versions of operating systems (e.g., rooted or jailbroken devices).
  • Data should be encrypted both at rest and in transit. 
  • Application access from other applications should be disabled by default. 
  • All other platform-specific security guidelines should be followed.

Usability 

Due to the small form factor, usability is an important NFR. In general, users should be able to navigate through the application and access important functions with ease, most often with one-handed operation. UX design should also minimize scrolling, provide search functionality for scrollable content, and offer quick navigation to important functions. 

Key Takeaways

Addressing NFRs requires a proactive and comprehensive approach from mobile app developers. It begins with thorough planning and analysis to identify the specific NFRs relevant to the project. Setting clear and measurable targets for each requirement is essential to ensure that the app meets user expectations.

Throughout the development process, consider NFRs at every stage. Developers should continuously evaluate the app’s performance, security measures, and usability, making necessary adjustments and optimizations to meet the desired requirements. Close collaboration between developers, designers, testers, and stakeholders is crucial to effectively address NFRs and ensure a high-quality mobile app.

Rigorous testing methodologies, such as performance testing, security testing, and compatibility testing, will help validate the app’s adherence to the defined NFRs. Automated testing tools and frameworks can help streamline the testing process and identify any potential performance bottlenecks, security vulnerabilities, or compatibility issues.

Keep in mind that NFRs are not a one-time consideration. As technology evolves, user expectations change, and new challenges arise. Mobile app developers must continuously monitor and adapt to emerging trends and technologies to ensure their apps meet evolving NFRs.

Prioritizing NFRs and integrating them into your development process will help your team deliver mobile apps that not only meet functional requirements but also excel in performance, security, usability, compatibility, and scalability. Such apps have a higher chance of success in the highly competitive mobile app market, delighting users and establishing a strong reputation for the development team.


As with “Conversation Design” over the past 5 years, “Prompt Engineering” has produced a great deal of confusion in the context of interacting with ChatGPT, New Bing, Google Bard and other interfaces to Large Language Models (LLMs).

This is evident from this Harvard Business Review article entitled “AI Prompt Engineering Isn’t the Future.” 

Prompt engineering is not just putting words together; first, because the words are chosen depending on the intended meaning and goals. In Linguistics and Computational Linguistics, this is not just syntax (word order), but also semantics (word meaning), pragmatics (intention, assumptions, goals, context), sociolinguistics (audience profile) and even psycholinguistics (audience-author relationship).

I absolutely agree with the author that you need to identify, define, delineate, break down, reframe and then constrain the problem and goal. However, you cannot define, delineate and formulate a problem clearly without using language or outside of language (our language defines our world and multilingual people are the most open-minded of all, as you will see from our GlobalLogic colleagues!). Prompt engineering does exactly that, finding a way to define the problem in as few steps as possible: efficiently, effectively, consistently, predictably and in a reusable/reproducible way.

That is why prompt engineering is also tightly coupled with domain ontology mapping, i.e.: the delineation of the problem space in a semantic and often visual way.

There is no “linguistics” without meaning. What the author (as a non-linguist) sees as two separate things are, in fact, one and the same.

This is why I think the traditional (for the past 40 years) term “language engineering” is the more appropriate and perennial form and most possibly the one that will outlive both myself and the HBR author! 


Welcome to the next frontier of the digital era, where virtual reality transcends boundaries and the metaverse emerges as an immersive and interconnected virtual world. Those of us involved in digital product engineering find ourselves at the precipice of a transformative moment. The metaverse has the potential to revolutionize the way we conduct financial transactions, interact with customers, and establish trust in an increasingly virtual world.

However, venturing into the metaverse comes with its own unique set of challenges, particularly for the banking, financial services and insurance sector. We learned a great deal about how those challenges are impacting executives at some of the world’s leading financial institutions in a recent digital boardroom event hosted by the Global CIO Institute and GlobalLogic.

‘The Wild West: Regulation in the Metaverse’ was moderated by Dr. Jim Walsh, our CTO here at GlobalLogic. It was the first of three thought-provoking digital boardrooms we’re hosting to explore the issues driving – and impeding – finance product innovation in the metaverse. He was joined by nine executives spanning enterprise architecture, information security, technology risk, IT integration, interactive media, and more, from some of the world’s largest financial institutions. 

In this article, we delve into the main obstacles these companies are facing as they prepare to do business in this new realm: regulation, identity verification and management, creating an ecosystem of trust, and governance structures that will support law and order in the metaverse.

1. Regulating the Next Wild, Wild West for Finance

Experts have raised concerns over the lack of regulatory oversight within the metaverse, citing that users are at risk of becoming victims of real-world harms such as fraud, especially given its heavy reliance on decentralized cryptocurrencies. The EU Commission is working on a new set of standards for virtual worlds, for which it received public feedback in May 2023. The World Economic Forum is calling for the rest of the world to follow suit and regulate digital identities within the metaverse. 

This is the backdrop against which we kicked off our roundtable discussion on regulation in the metaverse. 

And of course, we cannot talk about regulation in the metaverse without first discussing whether it’s even needed at all, and to what extent.

Recommended reading: Fintech in the Metaverse: Exploring the Possibilities

The metaverse is not new, as one participant pointed out; what’s happening now is that technologies are colliding to create new business opportunities. We’re seeing more and more examples of the Internet being regulated, and now must turn our attention to what impact those regulations may have on the emerging metaverse. Will it slow adoption or change how people interact? 

“People have been waking up to why it’s been important to have some limitations around the complete freeness of the internet of the ‘90s,” a panelist noted. “Regulations must evolve in a way that the value of the metaverse is not compromised.” 

Another noted that anywhere commerce and the movement of currency can impact people’s lives in potentially negative ways, the space must be regulated. In order to maintain law and order in the metaverse, we’ll need a way of connecting metaverse identities to real people. And so another major theme emerged.

2. Identity Verification and Management in the Metaverse 

Panelists across the board agreed that identity verification and management is a prerequisite to mainstream finance consumer adoption of the metaverse as a place to do business. Banking, insurance, and investment companies will therefore be looking for these solutions to emerge before entering the metaverse as a market for their products and services.

Look at cryptocurrency as an example, one participant recommended. “Crypto was anonymous, decentralized and self-regulated – but those days are over. Look at the token scams that have happened in crypto. That’s not a community capable of self-regulation.”

If the metaverse is going to scale, they said, we need regulation – and anonymity cannot persist.

Another attendee suggested we look to Roblox and Second Life as early examples of closed worlds with identity verification solutions. Second Life has long required that users from specific countries or states verify their real identity in order to use some areas of the platform, and it had to go state by state to obtain the regulatory approvals to allow users to withdraw currency. For its part, Roblox introduced age and identity verification in 2021. These were closed worlds where you could be whatever you wanted, but identity was non-transferable.

The metaverse, on the other hand, is a place where you can move through worlds, transfer assets and money from virtual to real worlds, etc. Anti-money laundering and identity management will need to catch up before it’s a space consumers and the companies that serve them can safely do business.
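To make that distinction concrete, the sketch below contrasts the two models in TypeScript. It is purely illustrative: every type and function name is hypothetical and does not correspond to any real platform or identity standard, though the portable model loosely echoes the idea of signed, verifiable credentials.

```typescript
// Hypothetical sketch only – no real platform or identity API is implied.

// Closed-world model: verification is performed by, and locked inside, one platform.
interface ClosedWorldAccount {
  platform: string;            // e.g. a Roblox- or Second Life-style world
  userId: string;
  ageVerified: boolean;        // checked by the platform itself
  withdrawalApproved: boolean; // e.g. per-jurisdiction regulatory approval
}

// Open-metaverse model: a signed credential from a trusted verifier that any
// participating world could re-check before allowing financial activity.
interface PortableIdentityCredential {
  subjectId: string;                       // identifier the user carries between worlds
  issuer: string;                          // trusted identity / KYC provider
  ageVerified: boolean;
  kycLevel: "none" | "basic" | "enhanced";
  expiresAt: Date;
  signature: string;                       // issuer's signature over the claims
}

// A world-agnostic gate: may this identity transact here right now?
function canTransact(cred: PortableIdentityCredential, now: Date = new Date()): boolean {
  const notExpired = cred.expiresAt.getTime() > now.getTime();
  const amlSatisfied = cred.kycLevel !== "none"; // placeholder AML rule for the sketch
  // Signature verification is elided; a real system would check `signature`
  // against the issuer's published keys before trusting any claim.
  return notExpired && cred.ageVerified && amlSatisfied;
}
```

In the closed model, checks like these only ever answer questions inside one platform; in the portable model, the same credential can be re-evaluated by every world the user enters, which is what anti-money laundering and identity management would need to catch up to.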

3. Trust & Safety in the Metaverse

Closely related to identity is the issue of trust in the metaverse, and it’s an impactful one for finance brands and the customers they serve. There must be value and reasons for people to show up and interact, and the metaverse cannot be a hostile, openly manipulated environment if we’re going to see financial transactions happening at scale. 

Already, one participant noted, societal rules are being brought into the metaverse. You don’t need physical contact to have altercations and conflict; tweets and Facebook comments can cause harm in real ways, and we need to consider the impacts of damaging behaviors in the highly immersive metaverse. Platforms create codes of conduct, but those expectations don’t persist across the breadth of a user’s experience in the metaverse.

Another pointed out that we don’t even have customer identity or online safety solutions that work perfectly in Web 2.0, yet we are carrying those known flaws into Web 3.0. Credit card hacking and data breaches involving online credit card purchases have plagued e-commerce since its inception.

Even so, the level of concern over privacy and safety issues varies wildly among consumers. Some will be more comfortable with a level of risk than others.

4. Metaverse Governance and Mapping Virtual Behavior to Real-World Consequence 

Dr. Walsh asked the group: will we have government in the metaverse, or will it be self-governing?

On this, one participant believes that regulating blockchain will sort out much of what needs to happen for the metaverse. The principles of blockchain are self-preservation of the community and consensus, they said, but it will take a while for those to take hold in the metaverse.

Recommended reading: Guide to Blockchain Technology Business Benefits & Use Cases

Another kicked off a fascinating discussion around the extent to which AI might “police” the metaverse. Artificial intelligence is already at work on Web 2.0 platforms in centralized content moderation and enforcing rules against harassment. Imagine metaverse police bots out in full force, patrolling for noncompliance. We’ll need this for the self-preservation of the metaverse, the attendee said. 
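As a rough illustration of what such automated “policing” might look like at its simplest, here is a hypothetical TypeScript sketch of a rule-based check a platform could run on in-world events. The event shapes and rules are invented for the example and are not drawn from any real moderation system.

```typescript
// Hypothetical sketch only – not a real moderation API.

type WorldEvent =
  | { kind: "chat"; actorId: string; text: string }
  | { kind: "assetTransfer"; actorId: string; counterpartyId: string; amount: number };

interface Verdict {
  allowed: boolean;
  reason?: string;
}

// A very simple "police bot": screen each event against a platform's rules.
function moderate(event: WorldEvent, bannedTerms: string[], transferLimit: number): Verdict {
  if (event.kind === "chat") {
    const hit = bannedTerms.find((term) => event.text.toLowerCase().includes(term));
    if (hit) return { allowed: false, reason: `breaches code of conduct (${hit})` };
  }
  if (event.kind === "assetTransfer" && event.amount > transferLimit) {
    // A real system would escalate to AML / sanctions screening here.
    return { allowed: false, reason: "transfer exceeds review threshold" };
  }
  return { allowed: true };
}
```

Real deployments would rely on trained models rather than keyword lists and fixed thresholds, but the point stands: someone has to define the rules the bots enforce, which is exactly the governance question the group turned to next.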

Participants seemed to agree that when what’s happening in the metaverse has real-life consequences, regulation must reflect that. Legitimate business cannot happen in a space where financial crimes are committed with impunity.

However, who will be responsible for creating and enforcing those regulations remains to be seen. In a space with no geographical boundaries, which real-world governments or organizations will define what bad behavior is? 

“If I’m in the European metaverse, maybe I have a smoking room and people drink at 15,” one participant noted with a wry smile. “That’s okay in some parts of the world, but it’s very bad behavior in others.”

In the metaverse as a siloed group of worlds with individual governance and regulation, financial institutions may have to account for varying currency rates and conversion, digital asset ownership and portability, and other issues. Or, we may see the consolidation of spaces and more streamlined regulations than in the real world and Web 2.0. The jury is out.

Reflecting Back & Looking Ahead

For finance brands, the sheer volume of work to be done before entering the metaverse in a transactional way seems overwhelming. “The amount of things we have to build on the very basic stack we have is staggering,” one participant said.

However, we will bring a number of things from the real, physical world into the metaverse because we need those as humans. These range from our creature comforts – a comfortable sofa, a beautiful view – to ideals such as trust, and law and order, the nuts and bolts of a functioning society. How those real-world ideas and guiding principles adapt to the metaverse remains to be seen.

We’re currently in the first phase of the metaverse, where individual worlds define good and bad behavior and regulate the use of their platforms. The second stage will be interoperability by choice: Facebook and Microsoft, for example, could agree to let an identity move between their platforms, and in that case those entities will dictate which behaviors are acceptable in their shared space.

Eventually, people should be able to seamlessly live their life in the digital metaverse. That’s the far future state, where you can go to a mall in the metaverse, wander and explore, and make choices about which stores you want to visit. By the time we get there, we’ll need fully implemented ethics, regulations, and laws to foster an ecosystem of trust – one in which customers feel comfortable executing financial transactions en masse. Large organizations will need to see these regulations and governance in place before they can move beyond experimentation to new lines of business.

The technology is new, but the concepts are not. Past experience tells us there are things we need to get into place before we’ll see mass adoption and financial transactions happening at scale in the metaverse. 

Regardless of how one might feel about having centralized controls imposed on them, the vast majority of consumers will not do financial business in an ecosystem without trust. Regulation is one of the key signals that financial institutions, banks, insurance providers, and others in their space need to monitor to determine when the metaverse can move from the future planning horizon to an exciting opportunity for near-term business growth.

In the meantime, business leaders can work on establishing the internal structure and support for working cross-functionally with legal and governance functions to stay abreast of regulatory changes and ensure compliance. This is also a good time to explore opportunities where the metaverse could help organizations overcome compliance obstacles, and imagine future possibilities for working with regulators to combat financial crime within the metaverse. 

There’s much groundwork to be laid, and it will take a collaborative effort to build the ecosystem of trust financial organizations and customers need to conduct transactions safely and responsibly in the metaverse. 

Want to learn more?

See how a UK bank improved CX for its 14 million customers with AIOps

