Archives

Together, IoT, AI, and cloud computing have the synergistic potential to increase efficiency, optimize performance, and reduce energy consumption. How can organizations use these technologies to address the persistent issue of the CO2 footprint created by large-scale IT infrastructure? 

In this whitepaper, see how companies are leveraging sensors and IoT devices to collect data on energy usage and environmental factors, then having AI algorithms analyze the data and provide insights for making more informed decisions about energy usage. You’ll explore energy-saving services from major cloud providers and learn about: 

  • Harnessing sensor data and AI algorithms for informed energy decisions.
  • Optimizing workloads for energy efficiency.
  • Leveraging AI platforms for energy-saving solutions.
  • Utilizing IoT to improve energy production and consumption.
  • Embracing sustainability principles and guidelines in software architecture.
  • How companies are supporting sustainable solutions in diverse industries.

The paper highlights sustainable practices in cloud computing and discusses the impact of IoT in enhancing energy efficiency, from smart grid sensors to device connectivity in smart cities. It’s time to think about how the convergence of IoT, AI, and cloud computing can help you achieve energy efficiency and sustainability. You’ll find practical lessons and guidelines to help guide informed decisions, optimize energy consumption, and contribute to a greener future.

Want to learn more? Get in touch with GlobalLogic’s Digital Assessment & Advisory team to begin mapping your path to a more sustainable future.

Unlock the potential of digital transformation with hyperautomation. Discover how integrating digital technology across your organization can help you enhance efficiency, reduce costs, and adapt to future challenges.

In this whitepaper, we explore the pivotal role of hyperautomation and how it supports the trend towards digitization. You’ll learn about:

  • Essential tools like BPM systems, RPA software, process templating platforms, process mining tools, and decision management suites. 
  • The importance of digital transformation in integrating technology across all organizational functions.
  • How integration tools like APIs, ESBs, and iPaaS enable seamless connectivity and enhance the effectiveness of hyperautomated processes. 
  • Process Mining and Task Mining technologies, and where they’re used in hyperautomation to improve business processes and increase efficiency.
  • How various interfaces, integrations, and tools influence the success of digital transformation.
  • The role that conversational AI platforms can play in a hyperautomation strategy.

You’ll also find a use case example from the financial services industry demonstrating how specific tools can be applied to achieve hyperautomation.

In the rapidly evolving manufacturing and industrial landscape, digital transformation is crucial for survival. Discover the top challenges in tool and equipment management and explore the Smart Toolbox system, a groundbreaking solution researched and developed by GlobalLogic Ukraine. 

In this whitepaper, explore its high-level features and architecture, hardware and software components, and how the Smart Toolbox solves common challenges in industrial tool management.

You’ll learn about:

  • The impact of digital transformation on the manufacturing and industrial sectors.
  • Key attributes of a solid tool management system.
  • How tools and equipment management helps ensure product and service quality for industrial organizations.
  • New business opportunities that can be unlocked by implementing the Smart Toolbox system.
  • Next steps and future developments for the Smart Toolbox research and development.

Want to learn more? Get in touch with GlobalLogic’s manufacturing and industrial digital product engineering experts and let’s see what we can do for you.

While ideating any software, functionality and its implications on the business and revenue are typically major focus areas. Functionalities are further broken down into requirements, then features, user stories, and integrations. But when it comes to actually developing that software, another mindset takes over. The key question on the architect's mind is more often, "What are the non-functional requirements here?"

Non-functional requirements (NFRs) are the criteria or parameters that ensure the product delivers on the business requirements – speed, compatibility, localization, and capacity, for example. While functional requirements define what the app should do, NFRs define how well it should perform and meet user expectations.

The Importance of NFRs 

NFRs are an essential aspect of software development and act as the base requirements around which the system architecture is designed. A system designed around well-established NFRs provides a roadmap for software architecture, implementation, deployment, and post-production maintenance and updates.

Many known NFRs were defined before the first mobile application was developed, making it essential that you contextualize these NFRs from a mobile development point of view. But which of these non-functional requirements are applicable to mobile application development, and what considerations must you keep in mind when planning your own mobile app project?

In this post, we'll explore how NFRs impact mobile application design, development, and support, looking at each requirement and what it involves in turn.

NFRs Through the Lens of Mobile App Development

These are the non-functional requirements to consider when designing mobile applications. Some are applicable only to mobile, while others vary only slightly from web app development NFRs.

Accessibility 

Accessibility as an NFR refers to how the app supports users with special needs or use under specific circumstances – for example, users with low vision. While there are many accessibility requirements to meet in mobile application design, using voice commands to control and navigate through the application is a particularly important one. Accessibility can also be increased by adding special gestures such as double tap and long press to perform essential functions.
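As a minimal Android sketch of the first steps (Kotlin, assuming AndroidX Core), the snippet below sets a content description for screen readers and exposes a gesture as an explicit accessibility action; the view and action names are hypothetical.

```kotlin
import android.view.View
import androidx.core.view.ViewCompat

// Minimal sketch (names hypothetical): make a custom control usable
// with screen readers such as TalkBack.
fun makeAccessible(syncButton: View) {
    // Announced by screen readers instead of a bare "Button".
    syncButton.contentDescription = "Synchronize health data"

    // Expose the long-press shortcut as an explicit accessibility action,
    // so users who cannot perform the gesture can still trigger it.
    ViewCompat.addAccessibilityAction(syncButton, "Force full sync") { _, _ ->
        // performFullSync()  <- hypothetical app function would be called here
        true
    }
}
```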

Adaptability 

In the context of mobile application development, an application meets the adaptability NFR if it meets all its functional requirements under the following conditions:

  • Support for a wide range of screen resolutions.
  • Support for a wide range of manufacturers (on Android).
  • Support for the widest possible range of backward-compatible OS versions.

Adaptability can also be an NFR for ensuring the application runs smoothly under low bandwidth conditions. 
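On Android, parts of this NFR are expressed directly in build configuration. The fragment below is a sketch of a module-level build.gradle.kts; the version numbers are illustrative, not recommendations.

```kotlin
// Fragment of a module-level build.gradle.kts (values illustrative).
android {
    defaultConfig {
        minSdk = 24     // how far back OS compatibility extends
        targetSdk = 34  // the newest OS version the app is validated against
    }
    // Screen-resolution support is handled by responsive layouts and
    // resource qualifiers (e.g. layout-sw600dp, drawable-xxhdpi).
}
```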

Recommended reading: Selecting a Cross-Platform Solution for Mobile Application Development

Availability 

If a mobile application is directly dependent on backend APIs and services to execute its functions, its availability is dependent on the availability of those backend services. However, in a mobile context, availability as an NFR pertains to the execution of possible functions even if the backend API is not available. For example, can the user perform an operation that can be synchronized later once services are back online?
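A minimal sketch of that pattern, with all names hypothetical: operations attempted while the backend is unreachable are queued locally and replayed when connectivity returns.

```kotlin
// Sketch of an offline-first operation queue (names hypothetical).
data class PendingOperation(val type: String, val payload: String)

class OfflineQueue {
    private val pending = ArrayDeque<PendingOperation>()

    fun execute(op: PendingOperation, backendAvailable: Boolean) {
        if (backendAvailable) send(op)
        else pending.addLast(op)   // queue for later synchronization
    }

    // Called when connectivity is restored (e.g. from a network callback).
    fun syncAll() {
        while (pending.isNotEmpty()) send(pending.removeFirst())
    }

    private fun send(op: PendingOperation) {
        // In a real app this would call the backend API.
        println("Synced ${op.type}")
    }
}
```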

Compliance 

Compliance in mobile applications largely revolves around the protection and privacy of user data, with requirements set out and enforced by HIPAA, GDPR, etc. If the privacy and security NFRs are achieved on the backend and in the mobile application, in most cases compliance is also achieved (unless there are specific compliance requirements).

Data Integrity 

In mobile apps, data integrity involves the recovery of data for the smooth execution of the application, with the expectation that the app will recover and retain data as intended when the user changes devices, installs a new version of the application, or performs operations in offline mode.

Data Retention 

In mobile applications, it is expected that data is synchronized with backend services, and for that reason it's generally not advised to keep large volumes of persistent data locally; "no data retention" is effectively the default NFR for mobile applications. However, when there is a requirement to keep extensive data in local persistent storage, the volume of data – not the duration – should be the driving factor for the data retention NFR.

Deployment 

Mobile application deployment occurs mostly through the stores provided by Google and Apple, which follow their own processes to make applications available; as a result, updates are not available to end users immediately. Deployment as an NFR in the mobility context (apart from its basic specifications) is therefore focused on informing users about the availability of new versions and stopping application usage if mandatory updates are not installed. Both the App Store and Play Store provide configurations to prioritize mandatory updates, and the system can also be designed to enforce mandatory updates for a smooth end-user experience.
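On Android, for example, Google's in-app updates API can enforce this. The sketch below assumes the Play app-update library and shows an immediate (blocking) update flow; the request code is an arbitrary, app-chosen constant.

```kotlin
import android.app.Activity
import com.google.android.play.core.appupdate.AppUpdateManagerFactory
import com.google.android.play.core.install.model.AppUpdateType
import com.google.android.play.core.install.model.UpdateAvailability

// Arbitrary constant used to identify the update flow result.
const val UPDATE_REQUEST_CODE = 1001

// Sketch: block normal usage until a mandatory update is installed.
fun enforceMandatoryUpdate(activity: Activity) {
    val manager = AppUpdateManagerFactory.create(activity)
    manager.appUpdateInfo.addOnSuccessListener { info ->
        if (info.updateAvailability() == UpdateAvailability.UPDATE_AVAILABLE &&
            info.isUpdateTypeAllowed(AppUpdateType.IMMEDIATE)
        ) {
            // IMMEDIATE shows a full-screen flow the user cannot dismiss
            // without updating.
            manager.startUpdateFlowForResult(
                info, AppUpdateType.IMMEDIATE, activity, UPDATE_REQUEST_CODE
            )
        }
    }
}
```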

Efficiency 

Unlike web or backend applications, mobile applications run on devices with limited resources such as memory. Given that these devices are also battery-powered, efficiency is an important NFR: the mobile application must run efficiently, with a low memory footprint and low battery consumption.

Privacy 

Privacy is an important aspect of mobile applications. In terms of privacy NFRs, the following are important considerations: 

  • Media files containing user-specific data should be stored in the application’s private storage and encrypted. 
  • Media captured from the application should not be shared directly. 
  • Copying text from the application should not be allowed. 
  • Screenshots should not be allowed (one common enforcement approach is sketched below).
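On Android, the screenshot and screen-recording restriction is commonly enforced per window with FLAG_SECURE; a minimal sketch:

```kotlin
import android.app.Activity
import android.os.Bundle
import android.view.WindowManager

class SecureActivity : Activity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // Disallow screenshots and screen recording for this window;
        // the window also appears blank in the recent-apps switcher.
        window.setFlags(
            WindowManager.LayoutParams.FLAG_SECURE,
            WindowManager.LayoutParams.FLAG_SECURE
        )
    }
}
```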

Reporting and Monitoring 

Reporting and monitoring NFRs are crucial from a support and maintenance perspective. Since mobile applications are installed on users' devices, it's difficult for the support team to interact with users directly, hold screen-share sessions, or access local log files. Remote logging and analytics solutions such as Firebase or Countly are needed for that reason. These solutions can capture events, user actions, and exceptions, and can help to analyze application usage patterns.
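As a minimal sketch of remote event logging with Firebase Analytics (the event and parameter names here are hypothetical):

```kotlin
import android.content.Context
import android.os.Bundle
import com.google.firebase.analytics.FirebaseAnalytics

// Sketch: record a user action remotely so the support team can
// reconstruct usage patterns without access to the device.
fun logSyncAttempt(context: Context, succeeded: Boolean) {
    val params = Bundle().apply {
        putString("result", if (succeeded) "success" else "failure")
    }
    FirebaseAnalytics.getInstance(context).logEvent("health_data_sync", params)
}
```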

Security 

Privacy and security are interlinked, and in terms of security NFRs the following are important considerations:

  • The application should be signed with appropriate private certificates, with a policy guiding certificate storage and usage. 
  • The application should not install on unauthorized/tampered versions of operating systems.
  • Data should be encrypted both at rest and in transit (a sketch of at-rest encryption follows this list). 
  • Application access from other applications should be disabled by default. 
  • All other platform-specific security guidelines should be followed.
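For encryption at rest on Android, one common option is EncryptedSharedPreferences from the Jetpack Security library (androidx.security:security-crypto); a minimal sketch, with file and key names illustrative:

```kotlin
import android.content.Context
import androidx.security.crypto.EncryptedSharedPreferences
import androidx.security.crypto.MasterKey

// Sketch: store a small secret encrypted at rest. The master key lives
// in the Android Keystore rather than in app storage.
fun storeToken(context: Context, token: String) {
    val masterKey = MasterKey.Builder(context)
        .setKeyScheme(MasterKey.KeyScheme.AES256_GCM)
        .build()

    val prefs = EncryptedSharedPreferences.create(
        context,
        "secure_prefs",
        masterKey,
        EncryptedSharedPreferences.PrefKeyEncryptionScheme.AES256_SIV,
        EncryptedSharedPreferences.PrefValueEncryptionScheme.AES256_GCM
    )
    prefs.edit().putString("auth_token", token).apply()
}
```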

Usability 

Due to the small form factor, usability is an important NFR. In general, users should be able to navigate through the application and access important functions with ease, ideally with single-handed operation. UX design should also aim to minimize scrolling, provide search functionality for scrollable content, and offer quick navigation to important functions.

Key Takeaways

Addressing NFRs requires a proactive and comprehensive approach from mobile app developers. It begins with thorough planning and analysis to identify the specific NFRs relevant to the project. Setting clear and measurable targets for each requirement is essential to ensure that the app meets user expectations.

Throughout the development process, consider NFRs at every stage. Developers should continuously evaluate the app’s performance, security measures, and usability, making necessary adjustments and optimizations to meet the desired requirements. Close collaboration between developers, designers, testers, and stakeholders is crucial to effectively address NFRs and ensure a high-quality mobile app.

Rigorous testing methodologies, such as performance testing, security testing, and compatibility testing, will help validate the app’s adherence to the defined NFRs. Automated testing tools and frameworks can help streamline the testing process and identify any potential performance bottlenecks, security vulnerabilities, or compatibility issues.

Keep in mind that NFRs are not a one-time consideration. As technology evolves, user expectations change, and new challenges arise. Mobile app developers must continuously monitor and adapt to emerging trends and technologies to ensure their apps meet evolving NFRs.

Prioritizing NFRs and integrating them into your development process will help your team deliver mobile apps that not only meet functional requirements but also excel in performance, security, usability, compatibility, and scalability. Such apps have a higher chance of success in the highly competitive mobile app market, delighting users and establishing a strong reputation for the development team.

As with “Conversation Design” over the past 5 years, “Prompt Engineering” has produced a great deal of confusion in the context of interacting with ChatGPT, New Bing, Google Bard and other interfaces to Large Language Models (LLMs).

This is evident from this Harvard Business Review article entitled “AI Prompt Engineering Isn’t the Future.” 

Prompt engineering is not just putting words together: the words are chosen according to the intended meaning and goals. In Linguistics and Computational Linguistics, this involves not just syntax (word order), but also semantics (word meaning), pragmatics (intention, assumptions, goals, context), sociolinguistics (audience profile) and even psycholinguistics (audience-author relationship).

I absolutely agree with the author that you need to identify, define, delineate, break down, reframe and then constrain the problem and goal. However, you cannot define, delineate and formulate a problem clearly without using language or outside of language (our language defines our world and multilingual people are the most open-minded of all, as you will see from our GlobalLogic colleagues!). Prompt engineering does exactly that, finding a way to define the problem in as few steps as possible: efficiently, effectively, consistently, predictably and in a reusable/reproducible way.

That is why prompt engineering is also tightly coupled with domain ontology mapping, i.e.: the delineation of the problem space in a semantic and often visual way.

There is no “linguistics” without meaning. What the author (as a non-linguist) sees as two separate things are, in fact, one and the same.

This is why I think the traditional (for the past 40 years) term “language engineering” is the more appropriate and perennial form and most possibly the one that will outlive both myself and the HBR author! 

Welcome to the next frontier of the digital era, where virtual reality transcends boundaries and the metaverse emerges as an immersive and interconnected virtual world. Everyone involved in digital product engineering finds ourselves at the precipice of a transformative moment. The metaverse has the potential to revolutionize the way we conduct financial transactions, interact with customers, and establish trust in an increasingly virtual world.

However, venturing into the metaverse comes with its own unique set of challenges, particularly for the banking, financial services and insurance sector. We learned a great deal about how those challenges are impacting executives at some of the world’s leading financial institutions in a recent digital boardroom event hosted by the Global CIO Institute and GlobalLogic.

‘The Wild West: Regulation In The Metaverse’ was moderated by Dr. Jim Walsh, our CTO here at GlobalLogic. It was the first of three thought-provoking digital boardrooms we're hosting to explore the issues driving – and impeding – finance product innovation in the metaverse. He was joined by nine executives spanning enterprise architecture, information security, technology risk, IT integration, interactive media and more, from some of the world's largest financial institutions.

In this article, we delve into the main obstacles these companies are facing as they prepare to do business in this new realm: regulation, identity verification and management, creating an ecosystem of trust, and governance structures that will support law and order in the metaverse.

1. Regulating the Next Wild, Wild West for Finance

Experts have raised concerns over the lack of regulatory oversight within the metaverse, citing the risk that users will become victims of real-world harms such as fraud, especially given the metaverse's overreliance on decentralized cryptocurrencies. The EU Commission is working on a new set of standards for virtual worlds, for which it received public feedback in May 2023. The World Economic Forum is calling for the rest of the world to follow suit and regulate digital identities within the metaverse.

This is the backdrop against which we kicked off our roundtable discussion on regulation in the metaverse. 

And of course, we cannot talk about regulation in the metaverse without first discussing whether it’s even needed at all, and to what extent.

Recommended reading: Fintech in the Metaverse: Exploring the Possibilities

The metaverse is not new, as one participant pointed out; what’s happening now is that technologies are colliding to create new business opportunities. We’re seeing more and more examples of the Internet being regulated, and now must turn our attention to what impact those regulations may have on the emerging metaverse. Will it slow adoption or change how people interact? 

“People have been waking up to why it’s been important to have some limitations around the complete freeness of the internet of the ‘90s,” a panelist noted. “Regulations must evolve in a way that the value of the metaverse is not compromised.” 

Another noted that anywhere commerce and the movement of currency can impact people’s lives in potentially negative ways, the space must be regulated. In order to maintain law and order in the metaverse, we’ll need a way of connecting metaverse identities to real people. And so another major theme emerged.

2. Identity Verification and Management in the Metaverse 

Panelists across the board agreed that identity verification and management is a prerequisite to mainstream finance consumer adoption of the metaverse as a place to do business. Banking, insurance, and investment companies will therefore be looking for these solutions to emerge before entering the metaverse as a market for their products and services.

Look at cryptocurrency as an example, one participant recommended. “Crypto was anonymous, decentralized and self-regulated – but those days are over. Look at the token scams that have happened in crypto. That’s not a community capable of self-regulation.”

If the metaverse is going to scale, they said, we need regulation – and anonymity cannot persist.

Another attendee suggested we look to Roblox and Second Life as early examples of closed worlds with identity verification solutions. Second Life has long required that users from specific countries or states verify their real identity in order to use some areas of the platform, and had to go state-by-state to get the regulatory approvals to allow users to withdraw currency. For its part, Roblox introduced age and identity verification in 2021. These were closed worlds where you could be whatever you want, but identity was non-transferable. 

The metaverse, on the other hand, is a place where you can move through worlds, transfer assets and money from virtual to real worlds, etc. Anti-money laundering and identity management will need to catch up before it’s a space consumers and the companies that serve them can safely do business.

3. Trust & Safety in the Metaverse

Closely related to identity is the issue of trust in the metaverse, and it’s an impactful one for finance brands and the customers they serve. There must be value and reasons for people to show up and interact, and the metaverse cannot be a hostile, openly manipulated environment if we’re going to see financial transactions happening at scale. 

Already, one participant noted, societal rules are being brought into the Metaverse. You don’t need physical contact to have altercations and conflict; tweets and Facebook comments can cause harm in real ways, and we need to consider the impacts of damaging behaviors in the highly immersive metaverse. Platforms create codes of conduct, but those expectations don’t persist across the breadth of a user’s experience in the metaverse.

Another pointed out that we don't even have customer identity or online safety solutions that work perfectly in Web 2.0, and we are carrying these known flaws into Web 3.0. Credit card hacking and data breaches involving online credit card purchases have plagued e-commerce since its inception.

Even so, the level of concern over privacy and safety issues varies wildly among consumers. Some will be more comfortable with a level of risk than others.

4. Metaverse Governance and Mapping Virtual Behavior to Real-World Consequence 

Dr. Walsh asked the group: will we have government in the metaverse, or will it be self-governing?

On this, one participant believes that regulating blockchain will sort out much of what needs to happen for the metaverse. The principles of blockchain are self-preservation of the community and consensus, they said, but that’s going to take a while to produce in the metaverse.

Recommended reading: Guide to Blockchain Technology Business Benefits & Use Cases

Another kicked off a fascinating discussion around the extent to which AI might “police” the metaverse. Artificial intelligence is already at work on Web 2.0 platforms in centralized content moderation and enforcing rules against harassment. Imagine metaverse police bots out in full force, patrolling for noncompliance. We’ll need this for the self-preservation of the metaverse, the attendee said. 

Participants seemed to agree that when what’s happening in the metaverse has real-life consequences, regulation must reflect that. Legit business cannot happen in a space where financial crimes happen with impunity. 

However, who will be responsible for creating and enforcing those regulations remains to be seen. In a space with no geographical boundaries, which real-world governments or organizations will define what bad behavior is? 

“If I’m in the European metaverse, maybe I have a smoking room and people drink at 15,” one participant noted with a wry smile. “That’s okay in some parts of the world, but it’s very bad behavior in others.”

In the metaverse as a siloed group of worlds with individual governance and regulation, financial institutions may have to account for varying currency rates and conversion, digital asset ownership and portability, and other issues. Or, we may see the consolidation of spaces and more streamlined regulations than in the real world and Web 2.0. The jury is out.

Reflecting Back & Looking Ahead

For finance brands, the sheer volume of work to be done before entering the metaverse in a transactional way seems overwhelming. “The amount of things we have to build on the very basic stack we have is staggering,” one participant said.

However, we will bring a number of things from the real, physical world into the metaverse because we need those as humans. These range from our creature comforts – a comfortable sofa, a beautiful view – to ideals such as trust, and law and order, the nuts and bolts of a functioning society. How those real-world ideas and guiding principles adapt to the metaverse remains to be seen.

We’re currently in the first phase of the metaverse, where individual worlds define good and bad behavior, and regulate the use of their platforms. The second stage will be interoperability by choice. For example, Facebook and Microsoft could agree you can have an identity move between their platforms, and in that case those entities will dictate what behaviors are acceptable or not in their shared space.

Eventually, people should be able to seamlessly live their life in the digital metaverse. That’s the far future state, where you can go to a mall in the metaverse, wander and explore, and make choices about which stores you want to visit. By the time we get there, we’ll need fully implemented ethics, regulations, and laws to foster an ecosystem of trust – one in which customers feel comfortable executing financial transactions en masse. Large organizations will need to see these regulations and governance in place before they can move beyond experimentation to new lines of business.

The technology is new, but the concepts are not. Past experience tells us there are things we need to get into place before we’ll see mass adoption and financial transactions happening at scale in the metaverse. 

Regardless of how one might think of having centralized controls thrust upon them, the vast majority of consumers will not do financial business in an ecosystem without trust. Regulation is one of the key signals financial institutions, banks, insurance providers and others in their space need to monitor, to determine when the metaverse can move from the future planning horizon to an exciting opportunity for near-term business growth.

In the meantime, business leaders can work on establishing the internal structure and support for working cross-functionally with legal and governance functions to stay abreast of regulatory changes and ensure compliance. This is also a good time to explore opportunities where the metaverse could help organizations overcome compliance obstacles, and imagine future possibilities for working with regulators to combat financial crime within the metaverse. 

There’s much groundwork to be laid, and it will take a collaborative effort to build the ecosystem of trust financial organizations and customers need to conduct transactions safely and responsibly in the metaverse. 

Want to learn more?

See how a UK bank improved CX for its 14 million customers with AIOps

Are you ready to embrace the future of automotive innovation? Learn about GlobalLogic’s white paper that unveils a modern paradigm for vehicle software development leveraging the power of cloud technology: The SDV Cloud Framework.

Top Reasons to Download:

  • Discover the next-gen paradigm for vehicle development

  • Harness the power of software-defined components

  • Enable over-the-air updates for your vehicle software

  • Learn about central unit applications and software reuse

  • Boost your business with GlobalLogic’s integration and infrastructure services

Download the White Paper now and drive into the future.

Pre-pandemic, "remote health monitoring" was not a common term in the average person's lexicon. Major technology and healthcare companies have since invested heavily in researching and launching various health monitoring sensors and related ecosystems, and today, remote health monitoring is gaining acceptance and adoption among the general public.

In the post-pandemic world, personal health care is an increasing focus and priority. The major benefit of remote health monitoring is obvious: caregivers may need to avoid physical meetings with patients but still need an ecosystem where all vital data is available remotely to aid in diagnosis and treatment.

Aside from infectious disease concerns, patients suffering from chronic illnesses require ongoing monitoring of vital parameters. Clinicians can reduce barriers to access for patients when daily/routine health monitoring and consultation can be done remotely, with physical meetings limited to major intervention and diagnosis. The benefits of remote health monitoring are many, and technology has a critical role to play.

We’ll explore remote health monitoring advantages, use cases, and solutions, but first – an important point of clarification. Remote health monitoring refers to the collection of patient vitals, while “remote clinical consultation” focuses on remote treatment and doctors’ recommendations based on vitals and other available patient data. We’ll focus on remote health monitoring in this article.

Types of Health Monitoring

Monitoring patient vitals such as body temperature, pulse, oxygen saturation, weight, and other factors informs medical consultations and is an important part of diagnosing symptoms. Regular monitoring becomes even more important for patients undergoing treatment and suffering from disease. 

Traditionally, vitals monitoring has been done at the hospital or clinic where the patient is being treated. However, this has evolved with increasing technological innovation and the falling costs of monitoring devices/sensors. Three major classifications of health monitoring today include:

    • In-person monitoring: The patient and clinician must physically meet in order to perform vitals monitoring.
    • On-demand monitoring: The patient or their caregiver can monitor their vitals at home as scheduled.
    • Implicit monitoring: Smart, wearable devices implicitly collect and monitor vital patient data.

How Remote Health Monitoring Helps

Remote health monitoring combines on-demand and implicit monitoring, helping doctors and other healthcare practitioners gain ongoing visibility into patient data that can signal changes in the patient's health condition. This is crucial for patients suffering from disease: in some cases, patients seem to be on the road to recovery during hospitalization, only to see their health problems return within a few weeks of discharge. Remote health monitoring can provide post-discharge monitoring in such cases. 

One solution is to have a full-time trained nurse track patient vitals. However, this is expensive and unsustainable, making it impractical to serve every patient this way. 

Remote health monitoring platforms offer an alternative via a set of medical devices and sensors connected to a mobile application and/or a suite of remote applications to monitor and relay patient data to clinicians. This provides healthcare providers with an ongoing view into patient health so that they can monitor an existing treatment plan or intervene based on inputs from medical devices and health sensors.

Recommended reading: Digital Biomarkers: The New Era of Wearable Technology

Remote Health Monitoring Advantages

Remote health monitoring is a boon to patients with chronic illnesses who require close attention to body vitals. Other major advantages of the technology include:

  • Overall improvement in post-hospitalization value-based care.
  • Close engagement with chronic illness through remote monitoring of patient health.
  • Fewer hospital visits, reducing barriers for patients with mobility issues.
  • Fewer hospital readmissions.
  • Support for social distancing when required.
  • Relief for the shortage of trained health professionals. 
  • Earlier disease detection and better treatment outcomes through regular monitoring.

Remote Health Monitoring Challenges

As with other remote monitoring solutions (such as predictive maintenance in manufacturing), remote health monitoring relies on connectivity to ensure data syncing between sensors and the devices clinicians use to access those insights. Other major challenges include:

  • Latency in data syncing.
  • Onboarding patients and teaching them to wear and maintain new devices. 
  • Data inconsistency and duplication.
  • Remote configuration and debugging of devices.
  • Data ingestion from devices with different output formats.

Remote Health Monitoring Device Classifications

Medical devices and sensors to measure body vitals are the most important part of a remote health monitoring solution. While most devices are external, modern technological advances mean some of these devices can be embedded and serve multiple purposes. For example, some pacemakers can control heartbeat while also tracking and delivering vital information about heart condition.

There are limitations; for example, devices and sensors preprogrammed to take readings at a particular event or time must sync with mobile applications over Bluetooth. Some devices must be operated manually to take required vitals readings, with data entered manually into an application. With that in mind, we can broadly group medical devices and sensors into the classifications below.

Implicit Reading Devices

These devices collect vitals data without manual intervention and can include:

  • Implantable or embedded devices that serve a specific purpose, such as cochlear implants or pacemakers.
  • Wearable devices such as a smartwatch or pedometer.

Manual Reading Devices

The patient or their caregiver is required to take the reading and input data. One example of such an external device is a pulse oximeter.

Remote Health Monitoring Use Cases

There are many possible workflows for remote health monitoring. Mapping out the workflow is an important part of solution design and enables all stakeholders to see how and where patient data is being used, and what decisions are being made based on it.

Here, we examine a common workflow: a patient is discharged from the hospital and needs ongoing care, so home devices collect vitals and relay them to the care team, who monitor the data and intervene as needed.

There are many use cases already in practice today, and innovations in the space are opening up new opportunities for remote health monitoring each day. Here are several more ways this technology can be used to benefit patients and improve healthcare outcomes.

Workplace safety and injury prevention

In many industries, shifts are long and tiring, and lapses in judgment or human error can lead to major financial losses and even loss of life. In aviation, heavy machinery operation, natural resource extraction, and even healthcare itself, the stakes are high. Remote health monitoring solutions can provide a system of wellness checks for professionals involved in high-risk workplaces.

Insurance

Technological advancements are enabling insurance companies to use data from remote health monitoring services to decide on yearly premiums. Individuals with healthy records can be awarded a reduced premium as compared to individuals with risk factors identified in the collected data.

Athletes

Vitals data can contain significant insights for athletes and sports personalities, with the relevant body parameters differing for each athletic and sporting activity. For example, a javelin thrower may want to measure the speed of their run-up to optimize results, while a swimmer might benefit from blood oxygen insights to help improve performance. Health monitoring solutions can be adapted with new sets of sensors and devices for various types of athletes.

Fitness enthusiasts

Individuals increasingly want to live a healthy lifestyle, and monitoring body parameters can help. Various solutions are already available, such as wearable fitness devices. The engagement of fitness enthusiasts can be further increased by modifying remote health monitoring solutions to track and evaluate other aspects of daily life, such as sleep quality or screen time.

Important Considerations for Evaluating RHM Solutions

The specific qualities and capabilities of your solution will depend largely on the needs of your business, patients, and healthcare professionals. In a broad sense, quality remote health monitoring solutions will have most or all of the following characteristics:

  • Simple device onboarding and registration with patients as well as hospitals.
  • Frequent collection of data from devices and synchronization with remote services with minimum latency.
  • Business rules optimized to raise alarm warnings and emergency notifications.
  • Fulfillment of emergency notifications.
  • Unified communication solutions to provide end-to-end communication.
  • Scheduling for patient and doctor/hospital physical or virtual meetings.
  • Interoperability solutions for smooth flow of patient records.
  • Billing and subscriptions. 
  • HIPAA compliance to safeguard PHI.

Example: The IoMT FHIR Connector for Azure

Remote health monitoring platforms face common core challenges, including data ingestion at high frequency, scalability to add new devices, and data interoperability. 

The IoMT FHIR Connector for Azure aims to solve these problems by providing tools for seamless data-pulling from Internet of Medical Things (IoMT) devices. Data is pushed securely to Azure for remote health monitoring, and the solution addresses the lack of interoperability by persisting data in a FHIR (Fast Healthcare Interoperability Resources) server. Learn more in the GitHub repository.
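To make the interoperability point concrete, the sketch below (Kotlin, using Android's bundled org.json) builds a simplified FHIR Observation for a heart-rate reading – the kind of resource such a connector persists. The structure follows the FHIR spec (LOINC 8867-4 is the standard heart-rate code), but field values are illustrative and real connector mappings are configuration-driven.

```kotlin
import org.json.JSONArray
import org.json.JSONObject

// Sketch: a simplified FHIR R4 Observation for one heart-rate reading.
fun heartRateObservation(patientId: String, bpm: Int): JSONObject =
    JSONObject()
        .put("resourceType", "Observation")
        .put("status", "final")
        .put("code", JSONObject().put("coding", JSONArray().put(
            JSONObject()
                .put("system", "http://loinc.org")
                .put("code", "8867-4")        // LOINC code for heart rate
                .put("display", "Heart rate"))))
        .put("subject", JSONObject().put("reference", "Patient/$patientId"))
        .put("valueQuantity", JSONObject()
            .put("value", bpm)
            .put("unit", "beats/minute")
            .put("system", "http://unitsofmeasure.org")
            .put("code", "/min"))             // UCUM code for "per minute"
```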

Conclusion

Remote health monitoring is a rapidly evolving space, with much research ongoing and new solutions being released regularly. Though there are many off-the-shelf solutions available, solutions can be built from the ground up – or around open-source tools like IoMT Connector for Azure – to meet the specific needs of patients and their healthcare providers.

Want to learn more? Explore how we’re revolutionizing healthcare experiences with technology here, or reach out to a member of the GlobalLogic team with questions.

Geoffrey Hinton, one of the so-called ‘Godfathers of AI’, made headlines at the beginning of May after stepping down from his role as a Google AI researcher. A few days later, he delivered a talk at the MIT Technology Review’s EmTech Digital event.

When asked about his decision to quit, Hinton mentioned that getting old (he is now 75) had been a contributing factor, claiming that he cannot program that well anymore (he forgets things when he writes code, for example). Age aside, the biggest reason was realising how unexpectedly and terrifyingly good “Large language models” (LLMs) had become and recognising the need to speak out about it without compromising his employer.

After explaining beautifully how Backpropagation works (the core type of algorithm behind both Deep Learning and LLMs), in terms of learning how to recognise the image of a bird vs that of a non-bird, Hinton claimed that this has recently become so good that it cannot possibly be how the human brain works. Originally, he had hoped to get an insight into how the brain works by continually improving the algorithms, but – as of now – LLMs can often reason as well as a human with just one trillion connections, when humans need 100 trillion of them and many years to learn how to reason in the first place.

Learning takes time for us humans. Transferring our acquired knowledge to another human also involves investing considerable time and effort, knowledge, that – if not passed on – would otherwise perish with our inevitable death.

In contrast, an AI instance can never die. It can constantly communicate and transfer new knowledge to all other instances simultaneously, thereby augmenting the "collective AI intelligence." And even if the current hardware breaks or fails, the code and parameters can just get transferred to a new storage medium. So, in effect, we have already achieved immortality, but sadly not for humans (and definitely not for Ray Kurzweil, who has made it his life mission! But as Hinton remarked, "Who would want immortality for white males" anyway!).

All this is what made Hinton make the bold, chilling, but now somehow completely reasonable claim that he fears humans are just an isolated step in the evolution of intelligence. In his view, we evolved to reach the point of creating the LLMs, which then went on to quietly consume everything we have ever written, thought, or invented – including Machiavelli – and can now, as a result, exhibit understanding and reasoning (relationships between entities and events, generalisations, inferences). So they will no longer need us around, "except perhaps for a while to keep the power stations going!"

Hinton clarified his view by referring to evolution: Humans evolved with some clear basic goals. These include things that we instinctively try to fulfill (e.g. eating and making copies of ourselves). Machines / AI did not evolve with any such goals, but it is reasonable to expect that they will soon develop “subgoals” of their own. One such subgoal may be “control” (you get more things done if you gain control).

To seize control, you may well take recourse to “manipulation” techniques – remember the Machiavelli texts we have let the LLMs ingest? Manipulation can be very covert and may even hide under the impression of benevolence, compliance or even yielding control. “You can force your way into the White House without ever going there yourself” as Hinton poignantly remarked in reference to the infamous January 6th insurrection.

So, what is the solution?

Hinton doesn’t see one!

We certainly cannot put a stop to LLM development and "Giant AI Experiments," as many AI scientists and thought leaders recently demanded with their open letter. Incidentally, according to Hinton, there had been such attempts already back in 2017, and his employer Google had held back a long time before releasing its models, precisely out of apprehension that they could be misused (which is why Google Bard came out after ChatGPT and the New Bing).

We have now passed the point of no return for LLM development, if nothing else because there is a real risk that should one country stop investing in these technologies, another one (worst case, its adversary) may continue exploiting them. We could perhaps establish some sort of "LLM non-proliferation treaty" along the lines of the one curbing the use of nuclear weapons, but again, this depends, according to Hinton, on the absence of bad (human) actors. AI is already used in war, and it is also increasingly used by repressive governments and immoral politicians to control and punish citizens and dissidents.

We cannot depend on explainability or transparency either. Having learned pretty much everything about human emotions, thoughts, motivations and relationships, AI models can now imitate collaboration and compliance and can, therefore, also leverage this information to eventually lie about their goals and actions (short of doing an "I'm sorry, but I can't do that, Dave").

Hinton does not see a plateau in LLM development; models will just keep getting better with more information and further refinement through context. And even domain-specificity will just mean that LLMs learn to exhibit different rules for different worlds, philosophies, and attitudes (e.g. Liberal vs Conservative worldviews).

It should come as no surprise that Hinton has no doubt that the job market will change dramatically in the next few years. More and more tasks, even creative ones, will be taken over by intelligent chatbots, rendering us more efficient and effective. For instance, Hinton believes that LLMs will revolutionise medicine.

Ultimately, however, Hinton believes that AI, in general, will just benefit the rich (who will have more time) and disadvantage the poor (who will lose their jobs), thus further widening the gap between the two. The rich will get richer; the poor will get poorer, and gradually, increasingly indignant, and violent, which will result in conflict and possibly our own demise.

An ideal outcome for the intelligent machines we have created (in our own image), as we are very perishable and therefore expendable (and by now superfluous anyway). Nevertheless, we will have served our purpose in the evolution of “intelligence,” at least on a planetary, if no longer on a species, level!

The only thing that remains is for us humans to be aware of what is happening and to band together, united, in dealing with the consequences of our own brilliance.

Sounds like the best Sci-Fi movies we have already seen. Only now it’s an urgent reality.

What steps can you take now?

To address the concerns of Hinton and other AI visionaries, at GlobalLogic we have set up a Generative AI (GAI) Centre of Excellence (CoE), drawing together our AI and Machine Learning experts from all over the world, and we are carefully considering the GAI use cases that could be of value to our clients. We differentiate ourselves in that we can guide you on how best to implement GAI technologies in a safe, secure, transparent, controllable, trustworthy, ethical, legally watertight, and regulatory-compliant manner.

Dr Maria Aretoulaki is part of this CoE and recently spoke on the importance of explainable and responsible Conversational and Generative AI at this year's European Chatbot & Conversational AI Conference.

Reach out to our experts today to make AI work for you rather than the other way round!

***

About the author:

Dr Maria Aretoulaki has been working in AI and Machine Learning for the past 30 years: NLP, NLU, Speech Recognition, Voice & Conversational Experience Design. Having started in Machine Translation and Text Summarisation using Artificial Neural Networks, she has focused on natural language conversational voicebots and chatbots, mainly for Contact Centre applications for organisations worldwide across all the main verticals.

In 2018, Maria coined the term "Explainable Conversational Experience Design", which later morphed into "Explainable Conversational AI" and more recently – with the explosion of LLMs and the ChatGPT hype – into "Explainable Generative AI", to advocate for transparent, responsible, design-led AI bot development that keeps the human in the loop and in control.

Maria joined GlobalLogic in 2022 where she is working with the Consumer Solutions & Experiences capability in the UK and the global AI/ML and Web3/Blockchain Practices. In 2023 she was invited to join the GlobalLogic Generative AI Centre of Excellence, where she is helping shape the company’s Responsible Generative AI strategy. She recently contributed to the Hitachi official response to the US Dept of Commerce NTIA proposal on Accountability in AI and regularly contributes to various HITACHI and METHOD Design initiatives.

Architectural drift and erosion in software development can seriously impact the business and go-to-market strategy, causing delays, decreased quality, and even product failure. Companies must have processes and workflows in place to detect architectural gaps, but historically those manual checks have been time-consuming and prone to human error.

In this paper, we explore the different types of manual architecture review and propose automated alternatives to reduce the time and resources required even while producing better outcomes. You’ll learn:

  • What architecture drift and erosion are, and how they impact the business.
  • How dependency analysis, peer reviews, and other manual inspections work.
  • Why manual reviews are not the ideal solution, even though they catch issues that good architecture governance practices fail to prevent.
  • Specific considerations to keep in mind around compliance, data security, DevOps, and more when evaluating architecture review solutions.
  • What automating architecture checks may look like in a series of example use case scenarios.