How to make your enterprise data ready for AI
As AI continues to transform industries, one thing becomes increasingly clear: the success of AI-driven initiatives depends not just on algorithms but on the quality and readiness of the data that fuels them. Without well-prepared data, even the most advanced artificial intelligence endeavors can fall short of their promise. In this guide, we cover the practical steps you need to take to prepare your data for AI.
What's the point of AI-ready data?
The conversation around AI has shifted dramatically in recent years. No longer a distant possibility, AI is now actively changing business landscapes - transforming supply chains through predictive analytics, personalizing customer experiences with advanced recommendation engines, and even assisting in complex fields like financial modeling and healthcare diagnostics.
The focus today is not on whether AI can fulfill its potential but on how organizations can best deploy it to achieve meaningful, scalable business outcomes.
Despite pouring significant resources into AI, businesses are still finding it challenging to fully tap into its economic potential.
For example, according to Gartner, 50% of organizations are actively assessing GenAI's potential and 33% are in the piloting stage. Meanwhile, only 9% have fully implemented generative AI applications in production, and 8% are not considering them at all.
The problem often comes down to a frequently overlooked factor: the relationship between AI and data. The core issue is a lack of data preparedness. In fact, only 37% of data leaders believe that their organizations have the right data foundation for generative AI, with just 11% agreeing strongly. Specifically, this means that chief data officers and data leaders need to develop new data strategies and improve data quality to make generative AI work effectively.
What does your business gain by getting your data AI-ready?
When your data is clean, organized, and well-managed, AI can help you make smarter decisions, boost efficiency, and even give you a leg up on the competition.
So, what exactly are the benefits of putting in the effort to prepare your data for AI? Let’s break it down into some real, tangible advantages.
- Clean, organized data allows AI to quickly analyze large amounts of information, helping businesses understand customer preferences, spot market trends, and respond more effectively to changes.
- Getting data AI-ready can save time by automating repetitive tasks and reducing errors.
- When data is properly prepared, AI can offer personalized recommendations and targeted marketing, which can enhance customer satisfaction and build loyalty.
- Companies that prepare their data for AI can move faster, innovate more easily, and adapt better to changes in the market, giving them a clear edge over competitors.
- Proper data preparation ensures businesses can comply with regulations and protect sensitive information.
Importance of data readiness for AI
Unlike traditional algorithms that were bound by predefined rules, modern AI systems learn and adapt dynamically when they have access to data that is both diverse and high-quality.
For many businesses, the challenge is that their data is often trapped in outdated legacy systems that are not built to handle the volume, variety, or velocity required for effective AI. To enable AI-driven innovation, companies first need to free their data from old silos and establish a proper data infrastructure.
Key considerations for data modernization
- Bring together data from different sources to create a complete picture, which is essential for AI systems to make useful interpretations.
- Build a flexible data infrastructure that can handle increasing amounts of data and adapt to changing AI needs.
- Set up systems to process data in real-time or near-real-time for applications that need immediate insights.
- Consider ethical and privacy issues and comply with regulations like GDPR or CCPA.
- Continuously monitor data quality and AI performance to maintain accuracy and usefulness.
- Employ data augmentation techniques to increase the variety and volume of data for training AI models when needed.
- Create feedback mechanisms to improve data quality and AI performance based on real-world results.
Creating a data strategy for AI
Many organizations fall into the trap of trying to apply AI across every function, often ending up with wasted resources and disappointing results. A smarter approach is to start with a focused data strategy.
Think about where AI can truly make a difference – would it be automating repetitive scheduling tasks, personalizing customer experiences with predictive analytics, or using generative AI for content creation and market analysis?
Pinpoint high-impact areas to gain business value without spreading your efforts too thin.
Building a solid AI strategy is also about creating a strong data foundation that brings all factors together. This means making sure your data is not only reliable, secure, and well-organized but also set up to support specific AI use cases effectively.
It also involves creating an environment that encourages experimentation and learning. This way, your organization can continuously adapt, refine its approach, and get the most out of AI over time.
Building an AI-optimized data infrastructure
After establishing an AI strategy, the next step is building a data platform that works like the organization’s central nervous system, connecting all data sources into a unified, dynamic ecosystem.
Why do you need it? Because traditional data architectures were built for simpler times and can't handle the sheer diversity and volume of today's data - everything from structured databases to unstructured content like videos, audio, and user-generated data.
An AI-ready data platform needs to accommodate all these different data types while ensuring quick and efficient access so that AI models can work with the most relevant, up-to-date information.
Your data platform needs to show "data lineage" - essentially, a clear map of how data moves through your system. This includes where the data originates, how it’s transformed over time, and how it gets used in the end. Understanding this flow maintains trust in the data, which AI models rely on to make accurate decisions.
At the same time, the platform should support "data liquidity." This is about breaking data into smaller, manageable pieces that can easily flow between different systems and formats. AI models need this kind of flexibility to get access to the right information when they need it.
Adding active metadata management to this mix provides context, making data easier to interpret and use. When all these components are in place, they turn raw data into a valuable, AI-ready asset.
Setting up data governance and management rules
Think of data governance as defining the rules of the game: how data should be collected, stored, and accessed across your organization. This includes setting up clear policies on data ownership, access controls, and regulatory compliance to protect sensitive information and ensure your data is ethical, unbiased, and trustworthy.
Data management, on the other hand, is all about putting these rules into action. It involves integrating data from different sources, cleaning it up, and storing it securely, all while making sure that high-quality data is always available for your AI projects. Effective data management also means balancing security with access so your team can quickly get to the data they need without compromising privacy or compliance. Together, strong governance and management practices create a fluid, efficient data environment.
The crux of the matter - preparing your data
Remember that data readiness goes beyond just accumulating volume. The key is to make sure that data remains accurate and aligned with the specific AI objectives. Raw data, coming straight from its source, is often filled with errors, inconsistencies, and irrelevant information that can mislead AI models or distort results.
When you handle data with care, you can be confident that your AI systems will deliver tangible business value across the organization.
Focus on the quality of your training data. It needs to be accurate, consistent, and up-to-date. If there are gaps or errors, your AI models will deliver unreliable results. Address these issues by using data cleaning techniques, like filling in missing values (imputation), removing irrelevant information (noise reduction), and ensuring that all entries follow the same format.
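To make this concrete, below is a minimal, illustrative sketch of two of these techniques - mean imputation and date-format normalization - in Java. The CustomerRecord type and the single source date format are assumptions invented for the example; production pipelines would typically rely on dedicated data-quality tooling rather than hand-rolled code.
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.util.List;

// hypothetical record type used only for this example
record CustomerRecord(String name, Double monthlySpend, String signupDate) {}

class DataCleaningSketch {
    // imputation: replace a missing numeric value with the column mean
    static double imputeSpend(List<CustomerRecord> records, CustomerRecord r) {
        if (r.monthlySpend() != null) {
            return r.monthlySpend();
        }
        return records.stream()
                .filter(c -> c.monthlySpend() != null)
                .mapToDouble(CustomerRecord::monthlySpend)
                .average()
                .orElse(0.0);
    }

    // consistency: normalize every date to one ISO-8601 format
    static String normalizeDate(String raw) {
        DateTimeFormatter source = DateTimeFormatter.ofPattern("dd/MM/yyyy"); // assumed input format
        return LocalDate.parse(raw, source).format(DateTimeFormatter.ISO_LOCAL_DATE);
    }
}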
Create a solid data foundation that ensures all assets are ready for AI applications. Rising data volumes (think of transaction histories, service requests, or customer records) can quickly overwhelm AI systems if not properly organized. Therefore, make sure your data is well-categorized, labeled, and stored in a format that’s easy for AI to access and analyze.
Also, make a habit of regularly reviewing your data to keep it accurate, relevant, and ready for use.
Preparing data for generative AI
For generative AI, data preparation is even more specialized, as these models require high-quality datasets that are free of errors, diverse and balanced to prevent biased or misleading outputs.
Your dataset should represent a wide range of scenarios, giving the model a thorough base to learn from, which requires incorporating data from multiple sources, demographics, and contexts.
Also, consider that generative AI models often require specific preprocessing steps depending on the type of data and the model architecture. For example, text data might need tokenization, while image data might require normalization or augmentation.
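As a rough illustration of what such preprocessing involves, the sketch below shows naive whitespace tokenization for text and scaling of 8-bit pixel values into the [0, 1] range for images. Real generative AI pipelines use model-specific tokenizers (typically subword-based) and richer augmentation; this only conveys the shape of the transformations.
import java.util.Arrays;
import java.util.List;

class PreprocessingSketch {
    // naive whitespace tokenization; real models ship their own subword tokenizers
    static List<String> tokenize(String text) {
        return Arrays.asList(text.toLowerCase().trim().split("\\s+"));
    }

    // scale raw 8-bit pixel values (0-255) into [0, 1], a common normalization for image models
    static float[] normalizePixels(int[] pixels) {
        float[] normalized = new float[pixels.length];
        for (int i = 0; i < pixels.length; i++) {
            normalized[i] = pixels[i] / 255.0f;
        }
        return normalized;
    }
}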
The big picture - get your organization AI-ready too
All your efforts with data and AI tools won't matter much if your organization isn’t prepared to embrace these changes. The key is building a team that combines tech talent - like data scientists and machine learning experts - with people who understand your business deeply. This means you might need to train and upskill your existing employees to fill gaps.
But there is more – you also need to think about creating a culture that welcomes transformation. Encourage experimentation, cross-team collaboration, and continuous learning. Make sure everyone understands both the potential and the risks of AI. When your team feels confident and aligned with your AI strategy, that’s when you’ll see the real impact of all your hard work.
By focusing on these steps, you create a solid foundation that helps AI deliver real results, whether that's through better decision-making, improving customer experiences, or staying competitive in a fast-changing market. Preparing your data may take some effort upfront, but it will make a big difference in how well your AI projects perform in the long run.
Challenges of the legacy migration process and best practices to mitigate them
Legacy software is the backbone of many organizations, but as technology advances, these systems can become more of a burden than a benefit. Migrating from a legacy system to a modern solution is a daunting task fraught with challenges, from grappling with outdated code and conflicting stakeholder interests to managing dependencies on third-party vendors and ensuring compliance with stringent regulatory standards.
However, with the right strategies and leveraging advanced technologies like Generative AI, these challenges can be effectively mitigated.
Challenge #1: Limited knowledge of the legacy solution
The average lifespan of business software can vary widely depending on several factors, such as the type of software or the industry it serves. Nevertheless, whether the software is 5 or 25 years old, its creators and subject matter experts are quite likely no longer accessible (or barely remember what they built and how it really works), the documentation is incomplete, the code is messy, and the technology was forgotten long ago.
Lack of knowledge of the legacy solution not only blocks its further development and maintenance but also negatively affects its migration – it significantly slows down the analysis and replacement process.
Mitigation:
The only way to understand what kind of functionality, processes, and dependencies are covered by the legacy software - and what really needs to be migrated - is an in-depth analysis. An extensive discovery phase, initiating every migration project, should cover:
- interviews with the key users and knowledge keepers,
- observations of the employees and daily operations performed within the system,
- study of all the available documentation and resources,
- source code examination.
The discovery phase, although long (and boring!), demanding, and very costly, is crucial for the migration project’s success. Therefore, it is not recommended to give in to the temptation to take any shortcuts there.
At Grape Up, we do not. We make sure we learn the legacy software in detail while optimizing the analytical effort. We support the discovery process by leveraging Generative AI tools. They help us understand the legacy spaghetti code, its forgotten purpose, its dependencies, and its limitations. GenAI enables us to make use of existing incomplete documentation or to work through technologies that nobody has expertise in anymore. This approach significantly speeds up the discovery phase, making it smoother and more efficient.
Challenge #2: Blurry idea of the target solution & conflicting interests
Unfortunately, understanding the legacy software and having a complete idea of the target replacement are two separate things. A decision to build a new solution, especially in a corporate environment, usually encourages multiple stakeholders (representing different groups of interests) to promote their visions and ideas. Often conflicting, to be precise.
This nonlinear stream of contradicting requirements leads to an uncontrollable growth of the product backlog, which becomes extremely difficult to manage and prioritize. In consequence, efficient decision-making (essential for the product’s success) is barely possible.
Mitigation:
A strong Product Management community with a single product leader – empowered to make decisions and respected by the entire organization – is the key factor here. If combined with a matching delivery model (which may vary depending on the product & project specifics), it sets the goals and frames for the mission and guides its crew.
For huge legacy migration projects with a blurry scope, requiring constant validation and prioritization, an Agile-based, continuous discovery & delivery process is the only possible way to go. With a flexible product roadmap (adjusted on the fly), both creative and development teams work simultaneously, and regular feedback loops are established.
High pressure from the stakeholders always makes the Product Leader’s job difficult. Bold scope decisions become easier when an MVP/MDP (Minimum Viable / Desirable Product) approach and the MoSCoW (must-have, should-have, could-have, won’t-have) prioritization technique are in place.
At Grape Up, we assist our clients with establishing and maintaining efficient product & project governance, supporting the in-house management team with our experienced consultants such as Business Analysts, Scrum Masters, Project Managers, or Proxy Product Owners.
Challenge #3: Strategic decisions impacting the future
Migrating the legacy software gives the organization a unique opportunity to sunset outdated technologies, remove all the infrastructural pain points, reach out for modern solutions, and sketch a completely new architecture.
However, these are very heavy decisions. They must not only address the current needs but also be adaptable to future growth. Wrong choices can result in technical debt, forcing another costly migration – much sooner than planned.
Mitigation:
A careful evaluation of the current and future needs is a good starting point for drafting the first technical roadmap and architecture. Conducting a SWOT analysis (Strengths, Weaknesses, Opportunities, Threats) for potential technologies and infrastructural choices provides a balanced view, helping to identify the most suitable options that align with the organization's long-term plan. For Grape Up, one of the key aspects of such an analysis is always industry trends.
Another crucial factor that supports this difficult decision-making process is maintaining technical documentation through Architectural Decision Records (ADRs). ADRs capture the rationale behind key decisions, ensuring that all stakeholders understand the choices made regarding technologies, frameworks, or architectures. This documentation serves as a valuable reference for future decisions and discussions, helping to avoid repeating past mistakes or unnecessary changes (e.g., when a new architect joins the team and pushes for their own technical preferences).
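For illustration, a minimal ADR following the widely used Nygard template (title, status, context, decision, consequences) might look like this - the specific content below is invented for the example:
Title: ADR-012: Replace the legacy message broker with a managed cloud service
Status: Accepted
Context: The legacy broker is out of support, scales poorly, and blocks the planned cloud migration.
Decision: We will use the cloud provider's managed messaging service for all new integrations and migrate existing queues incrementally.
Consequences: Less operational burden and better scalability; in return, we accept a degree of vendor lock-in and must rewrite broker-specific code.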
Challenge #4: Dependencies and legacy 3rd parties
When migrating from a legacy system, one of the significant challenges is managing dependencies on the numerous other applications and services that are integrated with the old solution and need to remain connected to the new one. Many of these are provided by third-party vendors that may not be willing or able to respond quickly to the project’s needs and adapt to changes, posing a significant risk to the migration process. Unfortunately, some dependencies are likely to stay hidden and not be spotted early enough, affecting the project’s budget and timeline.
Mitigation:
To mitigate this risk, it's essential to establish strong governance over third-party relationships before the project really begins. This includes forming solid partnerships and ensuring that clear contracts are in place, detailing the rules of cooperation and responsibilities. Prioritizing demands related to third-party integrations (such as API modifications, providing test environments, SLA, etc.), testing the connections early, and building time buffers into the migration plan are also crucial steps to reduce the impact of potential delays or issues.
Furthermore, leveraging Generative AI, which Grape Up does when migrating the legacy solution, can be a powerful tool in identifying and analyzing the complexities of these dependencies. Our consultants can also help to spot potential risks and suggest strategies to minimize disruptions, ensuring that third-party systems continue to function seamlessly during and after the migration.
Challenge #5: Lack of experience and sufficient resources
A legacy migration requires expertise and resources that most organizations lack internally. This is perfectly natural: such undertakings are rare, so in most cases, maintaining a huge in-house IT department would be irrational.
Without prior experience in legacy migrations, internal teams may struggle with project initiation; for that reason, external support becomes necessary. Unfortunately, quite often, the involvement of vendors and contractors results in new challenges for the company by increasing its vulnerability (e.g., becoming dependent on externals, having data protection issues, etc.).
Mitigation:
To boost insufficient internal capabilities, it's essential to partner with experienced and trusted vendors who have a proven track record in legacy migrations. Their expertise can help navigate the complexities of the process while ensuring best practices are followed.
However, it's recommended to maintain a balance between internal and external resources to keep control over the project and avoid over-reliance on external parties. Involving multiple vendors can diversify the risk and prevent dependency on a single provider.
By leveraging Generative AI, Grape Up manages to optimize resource use, reducing the amount of manual work that consultants and developers do when migrating the legacy software. With a smaller external headcount involved, it is much easier for organizations to manage their projects and keep a healthy balance between their own resources and their partners.
Challenge #6: Budget and time pressure
Due to their size, complexity, and importance for the business, legacy migration projects always face budget constraints and time pressure. Resources are typically insufficient to cover all the requirements (which keep growing), unexpected expenses (which always pop up), and the need to meet hard deadlines. These pressures can result in compromised quality, incomplete migrations, or even the failure of the entire project if not managed effectively.
Mitigation:
Those are the other challenges where strong governance and effective product ownership would be helpful. Implementing an iterative approach with a focus on delivering an MVP (Minimum Viable Product) or MDP (Minimum Desirable Product) can help prioritize essential features and manage scope within the available budget and time.
For tracking convenience, it is useful to budget each feature or part of the system separately. It is also important to build in realistic time and financial buffers and to continuously update estimates as the project progresses to account for unforeseen issues. There are multiple quick yet sufficient (“magic”) estimation methods that your team may use for that purpose, such as silent grouping.
As stated before, at Grape Up, we use Generative AI to reduce the workload on teams by analyzing the old solution and generating significant parts of the new one automatically. This helps to keep the project on track, even under tight budget and time constraints.
Challenge #7: Demanding validation process
A critical but often disregarded aspect of legacy migration is ensuring the new system meets not only all the business demands but also compliance, security, performance, and accessibility requirements. What if some of the implemented features turn out to be illegal? Or the new system lets only a few concurrent users log in?
Without proper planning and continuous validation, these non-functional requirements can become major issues shortly before or after the release, putting the entire project at risk.
Mitigation:
Implementation of comprehensive validation, monitoring, and testing strategies from the project's early stages is a must. This should encompass both functional and non-functional requirements to ensure all aspects of the system are covered.
An efficient validation process must not be a one-time activity but rather a regular occurrence. It also needs to involve a broad range of stakeholders and experts, such as:
- representatives of different user groups (to verify if the system covers all the critical business functions and is adjusted to their specific needs – e.g. accessibility-related),
- the legal department (to examine whether all the planned features are legally compliant),
- quality assurance experts (to continuously perform all the necessary tests, including security and performance testing).
Prioritizing non-functional requirements, such as performance and security, is essential to prevent potential issues from undermining the project’s success. For each legacy migration, there are also individual, very project-specific dimensions of validation. At Grape Up, during the discovery phase, our analysts, empowered by GenAI, take the time to recognize all the critical aspects of the new solution’s quality, proposing the right thresholds, testing tools, and validation methods.
Challenge #8: Data migration & rollout strategy
Migrating data from a legacy system is one of the most challenging tasks of a migration project, particularly when dealing with vast amounts of historical data accumulated over many years. It is complex and costly, requiring meticulous planning to avoid data loss, corruption, or inconsistency.
Additionally, the release of the new system can have a significant impact on customers, especially if not handled smoothly. The risk of encountering unforeseen issues during the rollout phase is high, which can lead to extended downtime, customer dissatisfaction, and a prolonged stabilization period.
Mitigation:
Firstly, it is essential to establish comprehensive data migration and rollout strategies early in the project. Perhaps migrating all historical data is not necessary? Selective migration can significantly reduce the complexity, cost, and time involved.
A solid rollout plan is equally important to minimize customer impact. This includes careful scheduling of releases, thorough testing in staging environments that closely mimic production, and phased rollouts that allow for a gradual transition rather than a big-bang approach.
At Grape Up, we strongly recommend investing in Continuous Integration and Continuous Delivery (CI/CD) pipelines that can streamline the release process, enabling automated testing, deployment, and quick iterations. Test automation ensures that any changes or fixes (that are always numerous when rolling out) are rapidly validated, reducing the risk of introducing new issues during subsequent releases.
Post-release, a hypercare phase is crucial to provide dedicated support and rapid response to any problems that arise. It involves close monitoring of the system’s performance, user feedback, and quick deployment of fixes as needed. By having a hypercare plan in place, the organization can ensure that any issues are addressed promptly, reducing the overall impact on customers and business operations.
Summary
Legacy migration is undoubtedly a complex and challenging process, but with careful planning, strong governance, and the right blend of internal and external expertise, it can be navigated successfully. By prioritizing critical aspects such as in-depth analysis, strategic decision-making, and robust validation processes, organizations can mitigate the risks involved and avoid common pitfalls.
Managing budgets and expenses effectively is crucial, as unforeseen costs can quickly escalate. Leveraging advanced technologies like Generative AI not only enhances the efficiency and accuracy of the migration process but also helps control costs by streamlining tasks and reducing the overall burden on resources.
At Grape Up, we understand the intricacies of legacy migration and are committed to helping our clients transition smoothly to modern solutions that support future growth and innovation. With the right strategies in place, your organization can move beyond the limitations of legacy systems, achieving a successful migration within budget while embracing a future of improved performance, scalability, and flexibility.
Android AAOS 14 - 4 Zone HVAC
In this article, we will explore the implementation of a four-zone climate control system for vehicles using Android Automotive OS (AAOS) version 14. Multi-zone climate control systems allow individual passengers to adjust the temperature for their specific areas, enhancing comfort and personalizing the in-car experience. We will delve into the architecture, components, and integration steps necessary to create a robust and efficient four-zone HVAC system within the AAOS environment.
Understanding four-zone climate control
A four-zone climate control system divides the vehicle's cabin into four distinct areas: the driver, front passenger, left rear passenger, and right rear passenger. Each zone can be independently controlled to set the desired temperature. This system enhances passenger comfort by accommodating individual preferences and ensuring an optimal environment for all occupants.
Modifying SystemUI for four-zone HVAC in Android AAOS 14
To implement a four-zone HVAC system in Android AAOS 14, we first need to modify the SystemUI, which handles the user interface. The application is located in packages/apps/Car/SystemUI. The HVAC panel is defined in the file res/layout/hvac_panel.xml.
Here is an example definition of the HVAC panel with four sliders for temperature control and four buttons for seat heating:
<!--
~ Copyright (C) 2022 The Android Open Source Project
~
~ Licensed under the Apache License, Version 2.0 (the "License");
~ you may not use this file except in compliance with the License.
~ You may obtain a copy of the License at
~
~ http://www.apache.org/licenses/LICENSE-2.0
~
~ Unless required by applicable law or agreed to in writing, software
~ distributed under the License is distributed on an "AS IS" BASIS,
~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
~ See the License for the specific language governing permissions and
~ limitations under the License.
-->
<com.android.systemui.car.hvac.HvacPanelView
xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:systemui="http://schemas.android.com/apk/res-auto"
android:id="@+id/hvac_panel"
android:orientation="vertical"
android:layout_width="match_parent"
android:layout_height="@dimen/hvac_panel_full_expanded_height"
android:background="@color/hvac_background_color">
<androidx.constraintlayout.widget.Guideline
android:id="@+id/top_guideline"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:orientation="horizontal"
app:layout_constraintGuide_begin="@dimen/hvac_panel_top_padding"/>
<androidx.constraintlayout.widget.Guideline
android:id="@+id/bottom_guideline"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:orientation="horizontal"
app:layout_constraintGuide_end="@dimen/hvac_panel_bottom_padding"/>
<!-- HVAC property IDs can be found in VehiclePropertyIds.java, and the area IDs depend on each OEM's VHAL implementation. -->
<com.android.systemui.car.hvac.referenceui.BackgroundAdjustingTemperatureControlView
android:id="@+id/driver_hvac"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
app:layout_constraintLeft_toLeftOf="parent"
app:layout_constraintTop_toTopOf="parent"
app:layout_constraintBottom_toTopOf="@+id/row2_driver_hvac"
systemui:hvacAreaId="1">
<include layout="@layout/hvac_temperature_bar_overlay"/>
</com.android.systemui.car.hvac.referenceui.BackgroundAdjustingTemperatureControlView>
<com.android.systemui.car.hvac.referenceui.BackgroundAdjustingTemperatureControlView
android:id="@+id/row2_driver_hvac"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
app:layout_constraintLeft_toLeftOf="parent"
app:layout_constraintTop_toBottomOf="@+id/driver_hvac"
app:layout_constraintBottom_toBottomOf="parent"
systemui:hvacAreaId="16">
<include layout="@layout/hvac_temperature_bar_overlay"/>
</com.android.systemui.car.hvac.referenceui.BackgroundAdjustingTemperatureControlView>
<com.android.systemui.car.hvac.SeatTemperatureLevelButton
android:id="@+id/seat_heat_level_button_left"
android:background="@drawable/hvac_panel_button_bg"
style="@style/HvacButton"
app:layout_constraintTop_toBottomOf="@+id/top_guideline"
app:layout_constraintLeft_toRightOf="@+id/driver_hvac"
app:layout_constraintBottom_toTopOf="@+id/recycle_air_button"
systemui:hvacAreaId="1"
systemui:seatTemperatureType="heating"
systemui:seatTemperatureIconDrawableList="@array/hvac_heated_seat_default_icons"/>
<com.android.systemui.car.hvac.toggle.HvacBooleanToggleButton
android:id="@+id/recycle_air_button"
android:layout_width="@dimen/hvac_panel_button_dimen"
android:layout_height="@dimen/hvac_panel_group_height"
android:background="@drawable/hvac_panel_button_bg"
app:layout_constraintTop_toBottomOf="@+id/seat_heat_level_button_left"
app:layout_constraintLeft_toRightOf="@+id/driver_hvac"
app:layout_constraintBottom_toTopOf="@+id/row2_seat_heat_level_button_left"
systemui:hvacAreaId="117"
systemui:hvacPropertyId="354419976"
systemui:hvacTurnOffIfAutoOn="true"
systemui:hvacToggleOnButtonDrawable="@drawable/ic_recycle_air_on"
systemui:hvacToggleOffButtonDrawable="@drawable/ic_recycle_air_off"/>
<com.android.systemui.car.hvac.SeatTemperatureLevelButton
android:id="@+id/row2_seat_heat_level_button_left"
android:background="@drawable/hvac_panel_button_bg"
style="@style/HvacButton"
app:layout_constraintTop_toBottomOf="@+id/recycle_air_button"
app:layout_constraintLeft_toRightOf="@+id/row2_driver_hvac"
app:layout_constraintBottom_toBottomOf="@+id/bottom_guideline"
systemui:hvacAreaId="16"
systemui:seatTemperatureType="heating"
systemui:seatTemperatureIconDrawableList="@array/hvac_heated_seat_default_icons"/>
<LinearLayout
android:id="@+id/fan_control"
android:background="@drawable/hvac_panel_button_bg"
android:layout_width="@dimen/hvac_fan_speed_bar_width"
android:layout_height="@dimen/hvac_panel_group_height"
app:layout_constraintTop_toBottomOf="@+id/top_guideline"
app:layout_constraintLeft_toRightOf="@+id/seat_heat_level_button_left"
app:layout_constraintRight_toLeftOf="@+id/seat_heat_level_button_right"
android:layout_centerVertical="true"
android:layout_centerHorizontal="true"
android:orientation="vertical">
<com.android.systemui.car.hvac.referenceui.FanSpeedBar
android:layout_weight="1"
android:layout_width="match_parent"
android:layout_height="0dp"/>
<com.android.systemui.car.hvac.referenceui.FanDirectionButtons
android:layout_weight="1"
android:layout_width="match_parent"
android:layout_height="0dp"
android:orientation="horizontal"
android:layoutDirection="ltr"/>
</LinearLayout>
<com.android.systemui.car.hvac.toggle.HvacBooleanToggleButton
android:id="@+id/ac_master_switch"
android:background="@drawable/hvac_panel_button_bg"
android:scaleType="center"
style="@style/HvacButton"
app:layout_constraintBottom_toBottomOf="@+id/bottom_guideline"
app:layout_constraintLeft_toRightOf="@+id/row2_seat_heat_level_button_left"
systemui:hvacAreaId="117"
systemui:hvacPropertyId="354419984"
systemui:hvacTurnOffIfPowerOff="false"
systemui:hvacToggleOnButtonDrawable="@drawable/ac_master_switch_on"
systemui:hvacToggleOffButtonDrawable="@drawable/ac_master_switch_off"/>
<com.android.systemui.car.hvac.toggle.HvacBooleanToggleButton
android:id="@+id/defroster_button"
android:background="@drawable/hvac_panel_button_bg"
style="@style/HvacButton"
app:layout_constraintLeft_toRightOf="@+id/ac_master_switch"
app:layout_constraintBottom_toBottomOf="@+id/bottom_guideline"
systemui:hvacAreaId="1"
systemui:hvacPropertyId="320865540"
systemui:hvacToggleOnButtonDrawable="@drawable/ic_front_defroster_on"
systemui:hvacToggleOffButtonDrawable="@drawable/ic_front_defroster_off"/>
<com.android.systemui.car.hvac.toggle.HvacBooleanToggleButton
android:id="@+id/auto_button"
android:background="@drawable/hvac_panel_button_bg"
systemui:hvacAreaId="117"
systemui:hvacPropertyId="354419978"
android:scaleType="center"
android:layout_gravity="center"
android:layout_width="0dp"
style="@style/HvacButton"
app:layout_constraintLeft_toRightOf="@+id/defroster_button"
app:layout_constraintRight_toLeftOf="@+id/rear_defroster_button"
app:layout_constraintBottom_toBottomOf="@+id/bottom_guideline"
systemui:hvacToggleOnButtonDrawable="@drawable/ic_auto_on"
systemui:hvacToggleOffButtonDrawable="@drawable/ic_auto_off"/>
<com.android.systemui.car.hvac.toggle.HvacBooleanToggleButton
android:id="@+id/rear_defroster_button"
android:background="@drawable/hvac_panel_button_bg"
style="@style/HvacButton"
app:layout_constraintLeft_toRightOf="@+id/auto_button"
app:layout_constraintBottom_toBottomOf="@+id/bottom_guideline"
systemui:hvacAreaId="2"
systemui:hvacPropertyId="320865540"
systemui:hvacToggleOnButtonDrawable="@drawable/ic_rear_defroster_on"
systemui:hvacToggleOffButtonDrawable="@drawable/ic_rear_defroster_off"/>
<com.android.systemui.car.hvac.referenceui.BackgroundAdjustingTemperatureControlView
android:id="@+id/passenger_hvac"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
app:layout_constraintRight_toRightOf="parent"
app:layout_constraintTop_toTopOf="parent"
app:layout_constraintBottom_toTopOf="@+id/row2_passenger_hvac"
systemui:hvacAreaId="2">
<include layout="@layout/hvac_temperature_bar_overlay"/>
</com.android.systemui.car.hvac.referenceui.BackgroundAdjustingTemperatureControlView>
<com.android.systemui.car.hvac.referenceui.BackgroundAdjustingTemperatureControlView
android:id="@+id/row2_passenger_hvac"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
app:layout_constraintRight_toRightOf="parent"
app:layout_constraintTop_toBottomOf="@+id/passenger_hvac"
app:layout_constraintBottom_toBottomOf="parent"
systemui:hvacAreaId="32">
<include layout="@layout/hvac_temperature_bar_overlay"/>
</com.android.systemui.car.hvac.referenceui.BackgroundAdjustingTemperatureControlView>
<com.android.systemui.car.hvac.SeatTemperatureLevelButton
android:id="@+id/seat_heat_level_button_right"
android:background="@drawable/hvac_panel_button_bg"
style="@style/HvacButton"
app:layout_constraintTop_toBottomOf="@+id/top_guideline"
app:layout_constraintRight_toLeftOf="@+id/passenger_hvac"
app:layout_constraintBottom_toTopOf="@+id/row2_seat_heat_level_button_right"
systemui:hvacAreaId="2"
systemui:seatTemperatureType="heating"
systemui:seatTemperatureIconDrawableList="@array/hvac_heated_seat_default_icons"/>
<com.android.systemui.car.hvac.SeatTemperatureLevelButton
android:id="@+id/row2_seat_heat_level_button_right"
android:background="@drawable/hvac_panel_button_bg"
style="@style/HvacButton"
app:layout_constraintTop_toBottomOf="@+id/seat_heat_level_button_right"
app:layout_constraintRight_toLeftOf="@+id/row2_passenger_hvac"
app:layout_constraintBottom_toBottomOf="@+id/bottom_guideline"
systemui:hvacAreaId="32"
systemui:seatTemperatureType="heating"
systemui:seatTemperatureIconDrawableList="@array/hvac_heated_seat_default_icons"/>
</com.android.systemui.car.hvac.HvacPanelView>
The main changes are:
- Adding a BackgroundAdjustingTemperatureControlView for each zone and changing their systemui:hvacAreaId to match the values from VehicleAreaSeat::ROW_1_LEFT, VehicleAreaSeat::ROW_2_LEFT, VehicleAreaSeat::ROW_1_RIGHT, and VehicleAreaSeat::ROW_2_RIGHT.
- Adding a SeatTemperatureLevelButton for each zone.
The layout needs to be arranged properly to match the desired design. Information on how to describe the layout in XML can be found at Android Developers - Layout resource.
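For reference, the numeric systemui:hvacPropertyId values used in the layout above correspond to constants defined in android.car.VehiclePropertyIds (worth double-checking against your platform version):
- 354419976 - HVAC_RECIRC_ON (the recycle_air_button)
- 354419984 - HVAC_POWER_ON (the ac_master_switch)
- 320865540 - HVAC_DEFROSTER (the front and rear defroster buttons, distinguished by their area IDs)
- 354419978 - HVAC_AUTO_ON (the auto_button)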
The presented layout also requires changing the constant values in the res/values/dimens.xml file. Below is the diff with my changes:
diff --git a/res/values/dimens.xml b/res/values/dimens.xml
index 11649d4..3f96413 100644
--- a/res/values/dimens.xml
+++ b/res/values/dimens.xml
@@ -73,7 +73,7 @@
<dimen name="car_primary_icon_size">@*android:dimen/car_primary_icon_size</dimen>
<dimen name="hvac_container_padding">16dp</dimen>
- <dimen name="hvac_temperature_bar_margin">32dp</dimen>
+ <dimen name="hvac_temperature_bar_margin">16dp</dimen>
<dimen name="hvac_temperature_text_size">56sp</dimen>
<dimen name="hvac_temperature_text_padding">8dp</dimen>
<dimen name="hvac_temperature_button_size">76dp</dimen>
@@ -295,9 +295,9 @@
<dimen name="hvac_panel_row_animation_height_shift">0dp</dimen>
<dimen name="temperature_bar_collapsed_width">96dp</dimen>
- <dimen name="temperature_bar_expanded_width">96dp</dimen>
+ <dimen name="temperature_bar_expanded_width">128dp</dimen>
<dimen name="temperature_bar_collapsed_height">96dp</dimen>
- <dimen name="temperature_bar_expanded_height">356dp</dimen>
+ <dimen name="temperature_bar_expanded_height">200dp</dimen>
<dimen name="temperature_bar_icon_margin">20dp</dimen>
<dimen name="temperature_bar_close_icon_dimen">96dp</dimen>
VHAL configuration
The next step is to add additional zones to the VHAL configuration. The configuration file is located at hardware/interfaces/automotive/vehicle/2.0/default/impl/vhal_v2_0/DefaultConfig.h.
In my example, I modified HVAC_SEAT_TEMPERATURE and HVAC_TEMPERATURE_SET:
{.config = {.prop = toInt(VehicleProperty::HVAC_SEAT_TEMPERATURE),
.access = VehiclePropertyAccess::READ_WRITE,
.changeMode = VehiclePropertyChangeMode::ON_CHANGE,
.areaConfigs = {VehicleAreaConfig{
.areaId = SEAT_1_LEFT,
.minInt32Value = -3,
.maxInt32Value = 3,
},
VehicleAreaConfig{
.areaId = SEAT_1_RIGHT,
.minInt32Value = -3,
.maxInt32Value = 3,
},
VehicleAreaConfig{
.areaId = SEAT_2_LEFT,
.minInt32Value = -3,
.maxInt32Value = 3,
},
VehicleAreaConfig{
.areaId = SEAT_2_RIGHT,
.minInt32Value = -3,
.maxInt32Value = 3,
},
}},
.initialValue = {.int32Values = {0}}}, // +ve values for heating and -ve for cooling
{.config = {.prop = toInt(VehicleProperty::HVAC_TEMPERATURE_SET),
.access = VehiclePropertyAccess::READ_WRITE,
.changeMode = VehiclePropertyChangeMode::ON_CHANGE,
.configArray = {160, 280, 5, 605, 825, 10},
.areaConfigs = {VehicleAreaConfig{
.areaId = (int)(VehicleAreaSeat::ROW_1_LEFT),
.minFloatValue = 16,
.maxFloatValue = 32,
},
VehicleAreaConfig{
.areaId = (int)(VehicleAreaSeat::ROW_1_RIGHT),
.minFloatValue = 16,
.maxFloatValue = 32,
},
VehicleAreaConfig{
.areaId = (int)(VehicleAreaSeat::ROW_2_LEFT),
.minFloatValue = 16,
.maxFloatValue = 32,
},
VehicleAreaConfig{
.areaId = (int)(VehicleAreaSeat::ROW_2_RIGHT),
.minFloatValue = 16,
.maxFloatValue = 32,
}
}},
.initialAreaValues = {{(int)(VehicleAreaSeat::ROW_1_LEFT), {.floatValues = {16}}},
{(int)(VehicleAreaSeat::ROW_1_RIGHT), {.floatValues = {17}}},
{(int)(VehicleAreaSeat::ROW_2_LEFT), {.floatValues = {16}}},
{(int)(VehicleAreaSeat::ROW_2_RIGHT), {.floatValues = {19}}},
}},
This configuration modifies the HVAC seat temperature and temperature set properties to include all four zones: front left, front right, rear left, and rear right. The areaId for each zone is specified accordingly. The minInt32Value and maxInt32Value for seat temperatures are set to -3 and 3, respectively, while the temperature range is set between 16 and 32 degrees Celsius. Note that the configArray of HVAC_TEMPERATURE_SET encodes the supported temperature range and step size in tenths of a degree: 16.0 to 28.0°C in 0.5°C increments, and 60.5 to 82.5°F in 1.0°F increments.
After modifying the VHAL configuration, the new values will be transmitted to the VendorVehicleHal. This ensures that the HVAC settings are accurately reflected and controlled within the system. For detailed information on how to use these configurations and further transmit this data over the network, refer to our articles: "Controlling HVAC Module in Cars Using Android: A Dive into SOME/IP Integration" and "Integrating HVAC Control in Android with DDS" . These resources provide comprehensive guidance on leveraging network protocols like SOME/IP and DDS for effective HVAC module control in automotive systems.
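To exercise the new zones end to end, an application or test can read and write these properties through the Car API. Below is a minimal sketch assuming the standard android.car CarPropertyManager API and the CONTROL_CAR_CLIMATE permission; the class and method names are invented for the example:
import android.car.Car;
import android.car.VehicleAreaSeat;
import android.car.VehiclePropertyIds;
import android.car.hardware.property.CarPropertyManager;
import android.content.Context;
import android.util.Log;

class HvacZoneCheck {
    // requires the android.car.permission.CONTROL_CAR_CLIMATE permission
    static void setRearLeftTemperature(Context context) {
        Car car = Car.createCar(context);
        CarPropertyManager propertyManager =
                (CarPropertyManager) car.getCarManager(Car.PROPERTY_SERVICE);
        int areaId = VehicleAreaSeat.SEAT_ROW_2_LEFT;
        // write the target temperature for the rear-left zone only
        propertyManager.setFloatProperty(
                VehiclePropertyIds.HVAC_TEMPERATURE_SET, areaId, 21.5f);
        // read it back; the other three zones are unaffected
        float current = propertyManager.getFloatProperty(
                VehiclePropertyIds.HVAC_TEMPERATURE_SET, areaId);
        Log.d("HvacZoneCheck", "rear-left target temperature: " + current);
    }
}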
Building the application
Building the SystemUI and VHAL components requires specific commands and steps to ensure they are correctly compiled and deployed. Assuming an initialized AOSP build environment (build/envsetup.sh sourced and a lunch target selected), the modules can be built with:
mmma packages/apps/Car/SystemUI/
mmma hardware/interfaces/automotive/vehicle/2.0/default/
Uploading the applications
After building the SystemUI and VHAL, you need to upload the compiled applications to the device. Depending on your build, you may first need to run adb root and adb remount to make the target partitions writable. Use the following commands:
adb push out/target/product/rpi4/system/system_ext/priv-app/CarSystemUI/CarSystemUI.apk /system/system_ext/priv-app/CarSystemUI/
adb push out/target/product/rpi4/vendor/bin/hw/android.hardware.automotive.vehicle@2.0-default-service /vendor/bin/hw
Conclusion
In this guide, we covered the steps necessary to modify the HVAC configurations by updating the XML layout and VHAL configuration files. We also detailed the process of building and deploying the SystemUI and VHAL components to your target device.
By following these steps, you ensure that your system reflects the desired changes and operates as intended.
Automated E2E testing with Gauge and Selenium
Everyone knows how important testing is in modern software development. In today's CI/CD world, tests are even more crucial, often playing the role of software acceptance criteria. With this in mind, it is clear that modern software needs good, fast, reliable, automated tests to help deliver high-quality software quickly and without major bugs.
In this article, we will focus on how to create E2E/acceptance tests for an application with a micro-frontend using the Gauge and Selenium frameworks. We will look at how to test both parts of our application - the API and the frontend - within one process that can be easily integrated into a CI/CD pipeline.
What is automated end-to-end (E2E) testing?
Automated end-to-end testing is a testing technique that aims to verify the functionality of the whole application (a microservice in our case) and its interactions with other microservices, databases, etc. Thanks to automated E2E testing, we are able to simulate real-world scenarios and test our application from the ‘user’ perspective. In our case, a ‘user’ is not only a person who will use our application but also our API consumers - other microservices. Thanks to such a testing approach, we can be sure that our application interacts well with the surrounding world and that all components work as designed.
What is an application with a micro-frontend?
We can say that the micro-frontend concept is an extension of the microservice approach that also covers the frontend. So, instead of having one big frontend application and a dedicated team of frontend specialists, we can split it into smaller parts and integrate them with backend microservices and teams. Thanks to this, the frontend application is ‘closer’ to the backend.
The expertise is concentrated in one team that knows its domain very well. This means that the team can implement software in a more agile way, adapt to changing requirements, and deliver the product much faster - you may also know this concept as team/software verticalization.
Acceptance testing in practice
Let’s take a look at a real-life example of how we can implement acceptance tests in our application.
Use case
Our team is responsible for developing APIs (backend microservices) in a large e-commerce application. We have API automated tests integrated into our CI/CD pipeline - we use the Gauge framework to develop automated acceptance tests for our backend APIs. We execute our E2E tests against the PreProd environment every time we deploy a new version of a microservice. If the tests are successful, we can deploy the new version to the production environment.
Due to organizational changes and team verticalization, we have to assume responsibility and ownership of several micro-frontends. Unfortunately, these micro-frontend applications do not have automated tests.
We decided to solve this problem as soon as possible, with as little effort as possible. To achieve this goal, we decided to extend our automated Gauge tests to cover the frontend part as well.
As a result of investigating how to integrate frontend automated tests into our existing solution, we concluded that the easiest way to do this is to use Selenium WebDriver. Thanks to that, we can still use the Gauge framework as a base – test case definition, providing test data, etc. – and test our frontend part.
In this article, we will take a look at how we integrate Selenium WebDriver with Gauge tests for one of our micro-frontend pages – “order overview.”
Gauge framework
Gauge is a free and open-source framework for creating and running E2E/acceptance tests. It supports different languages like Java, JavaScript, C#, Python, and Golang, so we can choose our preferred language to implement test steps.
Each test scenario consists of steps, each of them independent, so we can reuse them across many test scenarios. Scenarios can be grouped into specifications. To create a scenario, all we have to do is call the proper steps with the desired arguments in the proper order. So, having the proper steps in place makes scenario creation quite easy, even for a non-technical person.
A Gauge specification is a set of test cases (scenarios) that describe an application feature that needs to be tested. Each specification is written using a Markdown-like syntax.
Visit store and search for the products
=======================================
Tags: preprod
table:testData.csv
Running before each scenario
* Login as a user <user> with password <password>
Search for products
-------------------------------------
* Goto store home page
* Search for <product>
Tear down steps for this specification
---------------------------------------
* Logout user <user>
In this specification, Visit store and search for the products is the specification heading, and Search for products is a single scenario consisting of two steps: Goto store home page and Search for <product>.
Login as a user is a step that will be performed before every scenario in this specification. The same applies to the Logout user step, which will be performed after each scenario.
Gauge supports specification tagging and data-driven testing.
The tag feature allows us to tag specifications or scenarios and then execute tests only for specific tags.
Data-driven testing allows us to provide test data in table form. Thanks to that, a scenario will be executed for every table row. In our example, the Search for products scenario will be executed for all products listed in the testData.csv file. Gauge supports data-driven testing using external CSV files as well as Markdown tables defined in the specification, as shown below.
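For example, the testData.csv referenced above might look like this (the values are invented for illustration); the column headers must match the step placeholders:
user,password,product
alice@example.com,secret1,laptop
bob@example.com,secret2,headphones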
For more information about writing Gauge specifications, please visit: https://docs.gauge.org/writing-specifications?os=windows&language=java&ide=vscode#specifications-spec. The Gauge framework also provides a test report in the form of an HTML document, in which we can find detailed information about the test execution.

Test reports can also be extended with screenshots of failures or custom messages.
For more information about the framework and how to install and use it, please visit the official page: https://gauge.org/.
Selenium WebDriver
Gauge itself doesn’t have the capability to automate browsers, so if we want to use it to cover frontend testing, we need a web driver. In our example, we will use Selenium WebDriver.
Selenium WebDriver is part of the well-known Selenium framework. It uses browser APIs provided by different vendors to control the browsers. This allows us to use different WebDriver implementations and run our tests in almost any popular browser. Thanks to that, we can easily test our UI on different browsers within a single test execution.
For more information, please visit: https://www.selenium.dev/ .
To achieve our goal of testing both parts of our application—frontend and API endpoints—in the scope of one process, we can combine these two solutions, so we use Selenium WebDriver while implementing Gauge test steps.
Example
Now that we know which tools we would like to use to implement our tests, let’s take a look at how we can do this.
First of all, let’s take a look at our project POM file.
pom.xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>3.1.4</version>
<relativePath/>
</parent>
<groupId>com.gauge.automated</groupId>
<artifactId>testautomation-gauge</artifactId>
<version>1.0.0-SNAPSHOT</version>
<name>testautomation-gauge</name>
<description>testautomation - user acceptance tests using gauge framework</description>
<properties>
<java.version>17</java.version>
<gauge-java.version>0.10.2</gauge-java.version>
<selenium.version>4.14.1</selenium.version>
</properties>
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-webflux</artifactId>
</dependency>
<dependency>
<groupId>com.thoughtworks.gauge</groupId>
<artifactId>gauge-java</artifactId>
<version>${gauge-java.version}</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.junit.jupiter</groupId>
<artifactId>junit-jupiter-engine</artifactId>
<version>5.9.3</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
<optional>true</optional>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>org.seleniumhq.selenium</groupId>
<artifactId>selenium-java</artifactId>
<version>${selenium.version}</version>
</dependency>
<dependency>
<groupId>org.seleniumhq.selenium</groupId>
<artifactId>selenium-api</artifactId>
<version>${selenium.version}</version>
</dependency>
<dependency>
<groupId>org.seleniumhq.selenium</groupId>
<artifactId>selenium-chrome-driver</artifactId>
<version>${selenium.version}</version>
</dependency>
<dependency>
<groupId>org.seleniumhq.selenium</groupId>
<artifactId>selenium-chromium-driver</artifactId>
<version>${selenium.version}</version>
</dependency>
<dependency>
<groupId>org.seleniumhq.selenium</groupId>
<artifactId>selenium-json</artifactId>
<version>${selenium.version}</version>
</dependency>
<dependency>
<groupId>org.seleniumhq.selenium</groupId>
<artifactId>selenium-remote-driver</artifactId>
<version>${selenium.version}</version>
</dependency>
<dependency>
<groupId>org.seleniumhq.selenium</groupId>
<artifactId>selenium-http</artifactId>
<version>${selenium.version}</version>
</dependency>
<dependency>
<groupId>org.seleniumhq.selenium</groupId>
<artifactId>selenium-support</artifactId>
<version>${selenium.version}</version>
</dependency>
<dependency>
<groupId>org.seleniumhq.selenium</groupId>
<artifactId>selenium-manager</artifactId>
<version>${selenium.version}</version>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<executions>
<execution>
<goals>
<goal>build-info</goal>
</goals>
</execution>
</executions>
</plugin>
<plugin>
<groupId>com.thoughtworks.gauge.maven</groupId>
<artifactId>gauge-maven-plugin</artifactId>
<version>1.6.1</version>
<executions>
<execution>
<phase>test</phase>
<configuration>
<specsDir>specs</specsDir>
</configuration>
<goals>
<goal>execute</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>
</project>
As we can see, all we need to do to use Selenium WebDriver together with Gauge is add the proper dependencies to our POM file. In this example, we focus on the Chrome WebDriver implementation, but if you want to use another browser - Firefox, Edge, or Safari - all you need to do is add the proper Selenium dependency and configure the driver. With the gauge-maven-plugin bound to the test phase as above, running mvn test executes all specifications from the specs directory.
Next, to enable the Chrome Selenium WebDriver, we need to configure it:
protected ChromeDriver setupChromeDriver()
{
    ChromeOptions chromeOptions = new ChromeOptions();
    // --no-sandbox is a workaround; ideally, configure the environment to run Chrome as a non-root user instead
    chromeOptions.addArguments("--no-sandbox");
    chromeOptions.addArguments("--remote-allow-origins=*");
    // run Chrome in headless mode
    chromeOptions.addArguments("--headless=new");
    // avoid Chrome crashes in certain VMs with a small /dev/shm
    chromeOptions.addArguments("--disable-dev-shm-usage");
    chromeOptions.addArguments("--ignore-certificate-errors");
    return new ChromeDriver(chromeOptions);
}
And that’s all - now we can use Selenium WebDriver in the Gauge step implementations. If you want to use a different WebDriver implementation, you have to configure it accordingly, but all the other steps remain the same.
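For comparison, a Firefox setup would follow the same pattern. This sketch assumes the selenium-firefox-driver dependency has been added to the POM:
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.firefox.FirefoxOptions;

protected FirefoxDriver setupFirefoxDriver()
{
    FirefoxOptions firefoxOptions = new FirefoxOptions();
    // run Firefox in headless mode, mirroring the Chrome configuration above
    firefoxOptions.addArguments("-headless");
    return new FirefoxDriver(firefoxOptions);
}
Now let’s take a look at some implementation details.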
Sample Specification
Create order for a login user with default payment and shipping address
============================================================================================================
Tags: test, preprod, prod
table:testData.csv
Running before each scenario
* Login as a user <user> with password <password>
Case-1: Successfully create new order
----------------------------------------------------------------------------------
* Create order draft with item "TestItem"
* Create new order for a user
* Verify order details
* Get all orders for a user <user>
* Change status <status> for order <orderId>
* Fetch and verify order <orderId>
* Remove order <orderId>
Tear down steps for this specification
---------------------------------------------------------------------------------------------------------------------------------------------------
* Delete orders for a user <user>
In our example, we use just a few simple steps, but you can use as many steps as you wish, and they can be much more complicated with more arguments and so on.
Steps implementation
Here is an implementation of some of the test steps. We use Java to implement the steps, but Gauge supports many other languages, so feel free to use your favorite.
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;
import org.springframework.http.HttpHeaders;
import org.springframework.http.ResponseEntity;
import org.springframework.web.reactive.function.client.WebClientResponseException;
import com.thoughtworks.gauge.Step;
import com.thoughtworks.gauge.datastore.ScenarioDataStore;
import java.time.Duration;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertNotNull;
import static org.junit.jupiter.api.Assertions.assertTrue;
public class ExampleSpec extends BasicSpec
{
    @Step("Login as a user <user> with password <password>")
    public void logInAsAUser(final String login, final String password)
    {
        final ChromeDriver driver = setupChromeDriver();
        // store the driver so the following steps can reuse the same browser session
        ScenarioDataStore.put(SCENARIO_DATA_STORE_WEB_DRIVER, driver);
        login(driver, login, password);
    }
    @Step("Create order draft with item <itemName>")
    public void createOrderDraft(final String itemName)
    {
        OrderDraftRequest request = buildDraftRequest(itemName);
        ResponseEntity<String> response = callOrderDraftEndpoint(request);
        assertNotNull(response);
        assertEquals(201, response.getStatusCodeValue());
    }
    @Step("Create new order for a user")
    public void createOrder()
    {
        final WebDriver driver = (WebDriver) ScenarioDataStore.get(SCENARIO_DATA_STORE_WEB_DRIVER);
        createOrder(driver);
    }
    @Step("Verify order details")
    public void verifyOrderDetails()
    {
        final WebDriver driver = (WebDriver) ScenarioDataStore.get(SCENARIO_DATA_STORE_WEB_DRIVER);
        final WebElement orderId = driver.findElement(By.tagName("order-id"));
        validateWebElement(orderId);
        final WebElement orderDate = driver.findElement(By.className("order-date"));
        validateWebElement(orderDate);
    }
    private ResponseEntity<String> callOrderDraftEndpoint(final OrderDraftRequest request)
    {
        ResponseEntity<String> response;
        final String traceId = generateXTraceId();
        log.info("createOrderDraft x-trace-id {}", traceId);
        try
        {
            response = webClient.post()
                .uri(uriBuilder -> uriBuilder.path(appConfiguration.getOrderDraftEndpoint()).build())
                .header(HttpHeaders.AUTHORIZATION, "Bearer " + appConfiguration.getToken())
                .header("Accept-Language", "de")
                .bodyValue(request)
                .retrieve()
                .toEntity(String.class)
                .block(Duration.ofSeconds(100));
        }
        catch (final WebClientResponseException webClientResponseException)
        {
            response = new ResponseEntity<>(webClientResponseException.getStatusCode());
        }
        return response;
    }
    private void login(final WebDriver driver, final String login, final String password)
    {
        driver.get(getLoginUrl().toString());
        // find the email input
        WebElement emailInput = driver.findElement(By.xpath("//*[@id=\"email\"]"));
        // find the password input
        WebElement passwordInput = driver.findElement(By.xpath("//*[@id=\"password\"]"));
        // find the login button
        WebElement loginButton = driver.findElement(By.xpath("//*[@id=\"btn-login\"]"));
        // type the user email into the email input
        emailInput.sendKeys(login);
        // type the user password into the password input
        passwordInput.sendKeys(password);
        // click the login button
        loginButton.click();
    }
    private void createOrder(WebDriver driver)
    {
        driver.get(getCheckoutUrl().toString());
        WebElement createOrderButton = driver.findElement(By.xpath("//*[@id=\"create-order\"]"));
        createOrderButton.click();
    }
    private void validateWebElement(final WebElement webElement)
    {
        assertNotNull(webElement);
        assertTrue(webElement.isDisplayed());
    }
}
As we can see, it is fairly simple to use Selenium WebDriver within Gauge tests. WebDriver plugins provide a powerful extension to our tests and allow us to create Gauge scenarios that also test the frontend part of our application. You can use multiple WebDriver implementations to cover different web browsers, ensuring that your UI looks and behaves the same in different environments.
The presented example can be easily integrated into your CI/CD process and become part of your application's acceptance tests. This will allow you to deliver your software even faster, with the confidence that your changes are well-tested.
GrapeChat – the LLM RAG for enterprise
LLMs are an extremely hot topic nowadays. In our company, we drive several projects for our customers using this technology, and there are more and more tools, research papers, and resources, including no-code, all-in-one solutions.
The topic for today is RAG – Retrieval Augmented Generation. The aim of RAG is to retrieve the necessary knowledge and generate answers to users’ questions based on it. Simply speaking, we need to search the company knowledge base for relevant documents, add those documents to the conversation context, and instruct an LLM to answer questions using that knowledge. But the details are anything but simple, especially when it comes to permissions.
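To make the retrieve-augment-generate loop concrete, here is a minimal, library-agnostic Java sketch. KnowledgeBase and LlmClient are hypothetical interfaces standing in for your vector store and LLM client:
import java.util.List;

interface KnowledgeBase { List<String> findRelevantDocuments(String question, int limit); }
interface LlmClient { String complete(String prompt); }

public class RagAnswerer
{
    private final KnowledgeBase knowledgeBase;
    private final LlmClient llm;

    public RagAnswerer(KnowledgeBase knowledgeBase, LlmClient llm)
    {
        this.knowledgeBase = knowledgeBase;
        this.llm = llm;
    }

    public String answer(String question)
    {
        // 1. retrieve: search the knowledge base for relevant documents
        List<String> documents = knowledgeBase.findRelevantDocuments(question, 5);
        // 2. augment: add the documents to the conversation context
        String context = String.join("\n---\n", documents);
        // 3. generate: instruct the LLM to answer using only that knowledge
        String prompt = "Answer the question using only the context below.\n"
                + "Context:\n" + context + "\n"
                + "Question: " + question;
        return llm.complete(prompt);
    }
}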
Before you start
Two technologies have taken the software development sector by storm on the back of the LLM revolution: the Microsoft Azure cloud platform (along with other Microsoft services) and the Python programming language.
If your company uses Microsoft services, and SharePoint and Azure are within your reach, you can create a simple RAG application fast. Microsoft offers a no-code solution and application templates with source code in various languages (including easy-to-learn Python) if you require minor customizations.
Of course, there are some limitations, mainly in the permission management area, but you should also consider how much you want your company to rely on Microsoft services.
If you want to start from scratch, you should begin by defining your requirements (as usual). Do you want to split your users into access groups, or do you want to assign access to resources for individuals? How do you want to store and classify your files? How deeply do you want to analyze your data (what about dependencies)? Is Python a good choice after all? What about the costs? How do you update permissions? There are a lot of questions to answer before you start. At Grape Up, we went through this process and implemented GrapeChat, our internal RAG-based chatbot built on our enterprise data.

Now, I invite you to learn more from our journey.
The easy way

Source: https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/use-your-data-securely
The most time-efficient way to create a chatbot using RAG is to follow the official manual from Microsoft. It covers everything - from pushing data up to the front-end application. However, it’s not very cost-efficient. To make it work with your data, you need to create an AI Search resource, and the simplest one costs 234€ per month (you will pay for the LLM usage, too). Moreover, SharePoint integration is not final yet, which forces you to upload data manually. You can lower the entry threshold by uploading your data to Blob storage instead of using SharePoint directly, and use Power Automate to do it automatically for new files, but that means more and more hard-to-troubleshoot UI-created components, more and more permission management by your Microsoft-care team (probably your IT team), and a deeper integration between Microsoft and your company.
And then there is the permission issue.
When using Microsoft services, you can limit access to the documents processed during RAG by using Azure AI Search security filters. This method requires you to assign a permission group when adding each document to the system (more specifically, during indexing); you can then pass a permission group as a parameter of the search request. Of course, Microsoft offers much more in terms of securing the entire application (web app access control, network filtering, etc.).
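As a rough illustration, assuming the documents were indexed with a group_ids field, a security-trimmed query passes the user's groups in an OData filter. The service name, index name, and key handling below are placeholders:
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SecurityTrimmedSearch
{
    public static void main(String[] args) throws Exception
    {
        // groups resolved for the calling user, e.g. from Microsoft Entra ID
        String userGroups = "board,managers";
        // only documents indexed with at least one of the user's groups are returned
        String body = "{ \"search\": \"quarterly results\", "
                + "\"filter\": \"group_ids/any(g: search.in(g, '" + userGroups + "'))\" }";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://my-service.search.windows.net/indexes/my-index/docs/search?api-version=2023-11-01"))
                .header("Content-Type", "application/json")
                .header("api-key", System.getenv("AZURE_SEARCH_KEY"))
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}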
To use those techniques, you must have your own implementation (say bye-bye to no-code). If you like starting a project from a blueprint, go here. Under the link, you’ll find a ready-to-use Azure application, including the back end, front end, and all necessary resources, along with scripts to set it up. The README also links variants written in other languages (Java, .NET, JavaScript).

Source: https://github.com/Azure-Samples/azure-search-openai-demo-java/blob/main/docs/aks/aks-hla.png
However, there are still at least three topics to consider.
1) You start a new project, but with some code already written. Maybe the quality of the code provided by Microsoft is enough for you, maybe not. Maybe you like the code structure, maybe not. From my experience, learning the application well enough to adjust it may take more time than starting from scratch. Please note that this application is not a simple CRUD but something much more complex, taking advantage of a sophisticated toolbox.
2) Permission management is very limited. “Permission” is the keyword that distinguishes RAG from Enterprise RAG. Let’s imagine that you have a document (for example, a Confluence page) available to a limited number of users (for example, your company’s board). One day, a board member decides to grant access to this very page to one of the non-board managers. The manager is not part of the “board” group, the document is already indexed, and Confluence uses a two-level permission system (space and document), which is not aligned with external SSO providers (Microsoft’s Entra ID).
Managing permissions in such a system is a very complex task. Even if you manage to do it, there are only two levels of protection - the Entra ID that secures your endpoint and the filter parameter in the REST request that restricts the documents searched during RAG. Therefore, the potential attack surface is very wide - if somebody has access to the Entra ID (for example, a developer working on the system), they can abuse the filtering API to get any documents, including the ones meant for the board members’ eyes only.
3) You are limited to Azure AI Search. Using Azure OpenAI is one thing (you can use the OpenAI API without Azure, or go with Claude, Gemini, or another LLM), but using Azure AI Search increases costs and limits your possibilities. For example, there is no way to utilize connections between documents in the system, where one document (e.g., an email with a question) should be linked to another (e.g., a response email with the answer).
All in all, you couple your company with Microsoft very tightly - using Entra ID permission management, Azure resources, Microsoft storage (Azure Blob or SharePoint), etc. I’m not against Microsoft, but I am against a single point of failure and dependence on a single service provider.
The hard way
I would say a “better way”, but it’s always a matter of your requirements and possibilities.
The hard way is to start the project with a blank page. You need to design the user’s touch point, the backend architecture, and the permission management.
In our company, we use SSO – the same identity for all resources: data storage, communicators, and emails. Therefore, the main idea is to propagate the user’s identity to authorize the user to obtain data.
Let’s discuss the data retrieval part first. The user logs into the messaging app (Slack, Teams, etc.) with their own credentials. The application uses their token to call the GrapeChat service, so the user’s identity is ensured. The bot decides (using the LLM) to obtain some data. The service then exchanges the user’s token for a new token that is allowed to call the database. This exchange is possible only for the service with the user logged in; it is impossible to access the database without both the GrapeChat service and the user’s token. The database verifies the credentials and filters the data. Let me underline this part - the database is in charge of data security. It’s like a typical database, e.g., PostgreSQL or MySQL: the user accesses the data with their own credentials, and nobody challenges its permission system, even if it stores data of multiple users.
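One way to implement that exchange is the OAuth 2.0 token-exchange grant (RFC 8693). The sketch below assumes your SSO provider exposes such an endpoint; the URL and scope are illustrative, and client authentication is omitted for brevity:
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class TokenExchange
{
    // exchanges the user's incoming token for a narrower, database-scoped token
    static String exchangeForDatabaseToken(String userToken) throws Exception
    {
        String form = "grant_type=urn:ietf:params:oauth:grant-type:token-exchange"
                + "&subject_token=" + URLEncoder.encode(userToken, StandardCharsets.UTF_8)
                + "&subject_token_type=urn:ietf:params:oauth:token-type:access_token"
                + "&scope=database.read";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://sso.example.com/oauth2/token"))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(form))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        return response.body(); // JSON containing the new, restricted access_token
    }
}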
Wait a minute! What about shared data, when one user stores data that should be available to other users, too?
It brings us to the data uploading process and the database itself.
The user logs into some data storage. In our case, it may be a messaging app (conversations are a great source of knowledge), email client, Confluence, SharePoint, shared SMB resource, or a cloud storage service (e.g. Dropbox). However, the user’s token is not used to copy the data from the original storage to our database.
There are three possible solutions.
- The first one is to actively push data from its original storage to the database. It’s possible in just a few systems, e.g., as automatic forwarding for all emails configured on the email server.
- The second one is to have the data origin trigger the database to download new data, e.g., with a webhook. It’s also possible in some systems; Contentful, for example, can send notifications about changes this way.
- The last one is to periodically call the data storages and compare the stored data with the origin. This is the worst idea (because of the possible delay and the comparison process) but, unfortunately, the most common one. In this approach, the database actively downloads data on a schedule.
Using those solutions requires a separate implementation for each data origin; a thin connector abstraction, as sketched below, keeps those implementations uniform.
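A minimal sketch of such an abstraction, with illustrative names:
import java.time.Instant;
import java.util.List;

// a document in the common shape the database expects, whatever the origin
record SourceDocument(String externalId, String content, List<String> allowedUserIds) {}

// each data origin (email, Confluence, SharePoint, ...) implements the same contract
interface SourceConnector
{
    // name of the origin, e.g. "confluence"
    String sourceName();

    // documents created or changed since the last synchronization
    List<SourceDocument> fetchChangedSince(Instant lastSync);
}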
In all those cases, we need a non-user account to process users’ data. The solution we picked is to create a “superuser” account and restrict it to non-human access. Only the database can use this account, and only in an isolated virtual network.
Going back to group permissions, and keeping in mind that data is acquired with “superuser” access, the database encrypts each document (a single piece of data) using the public keys of all users that should have access to it. Public keys are stored with the identity (in our case, as a custom field in Active Directory), and let me underline it again - the database is the only entity that processes unencrypted data and the only one that uses “superuser” access. Then, when accessing the data, the private key (obtained from Active Directory using the user’s SSO token) of any allowed user can be used for decryption.
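Conceptually, this is envelope encryption: each document is encrypted once with a fresh symmetric key, and that key is wrapped with every allowed user's public key. A minimal Java sketch, assuming RSA public keys from the directory:
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.security.PublicKey;
import java.security.SecureRandom;
import java.util.HashMap;
import java.util.Map;

public class DocumentEncryptor
{
    public record EncryptedDocument(byte[] iv, byte[] ciphertext, Map<String, byte[]> wrappedKeys) {}

    public EncryptedDocument encrypt(byte[] plaintext, Map<String, PublicKey> allowedUsers) throws Exception
    {
        // one fresh AES key per document
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(256);
        SecretKey contentKey = keyGen.generateKey();

        // encrypt the document content with AES-GCM
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);
        Cipher aes = Cipher.getInstance("AES/GCM/NoPadding");
        aes.init(Cipher.ENCRYPT_MODE, contentKey, new GCMParameterSpec(128, iv));
        byte[] ciphertext = aes.doFinal(plaintext);

        // wrap the content key with every allowed user's public key
        Map<String, byte[]> wrappedKeys = new HashMap<>();
        for (Map.Entry<String, PublicKey> user : allowedUsers.entrySet())
        {
            Cipher rsa = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding");
            rsa.init(Cipher.WRAP_MODE, user.getValue());
            wrappedKeys.put(user.getKey(), rsa.wrap(contentKey));
        }
        return new EncryptedDocument(iv, ciphertext, wrappedKeys);
    }
}
Decryption mirrors this: the user's private key unwraps the content key, which then decrypts the document.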
Therefore, the GrapeChat service is not part of the main security processes, but on the other hand, we need a pretty complex database module.
The database and the search process
In our case, the database is a strictly secured container running three applications: an SQL database, a vector database, and a data processing service. Its role is to acquire and embed data, update permissions, and execute searches. The embedding part is easy. We do it internally (in the database module) with the Instructor XL model, but you can choose a better one from the leaderboard. Allowed users’ IDs are stored within the vector database (in our case - Qdrant) for filtering purposes, and the plain-text content is encrypted with the users’ public keys.
When the DB module executes a search, it queries the vector DB first, using metadata to filter for allowed users. Then, the DB service obtains the associated entities from the SQL DB. In the next steps, the service fetches related entities using simple SQL relations between them. There is also a non-data graph node, “author”, to keep together documents created by the same person. We can go deeper through the graph, relation by relation, as long as the caller has rights to the content. The relation-search depth is a parameter of the system.
We do use a REST field filter like the one offered by the native MS solution, too, but in our case, we do the permission-aware search first. So, if there are several people in a Slack conversation and one of them mentions GrapeChat, the bot uses that person’s permissions in the first place and then additionally filters the results so as not to expose a document to other channel members who are not allowed to see it. In other words, the calling user can restrict search results out of consideration for teammates but cannot extend the results beyond their own permissions.
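As an illustration of the permission-aware vector search, the sketch below calls Qdrant's REST search endpoint and filters points by an allowed_users payload field; the collection name and field name are examples:
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PermissionAwareSearch
{
    // returns only points whose "allowed_users" payload contains the calling user's ID
    static String search(float[] queryVector, String userId) throws Exception
    {
        StringBuilder vector = new StringBuilder();
        for (int i = 0; i < queryVector.length; i++)
        {
            if (i > 0) vector.append(',');
            vector.append(queryVector[i]);
        }
        String body = "{ \"vector\": [" + vector + "], \"limit\": 10, "
                + "\"filter\": { \"must\": [ { \"key\": \"allowed_users\", "
                + "\"match\": { \"any\": [\"" + userId + "\"] } } ] } }";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:6333/collections/documents/points/search"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        return HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString())
                .body();
    }
}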
What happens next?
The GrapeChat service is written in Java. The language offers a nice Slack SDK and Spring AI, so we saw no reason to opt for Python with the LangChain library. The much more important component is the database service, built of the three elements described above. To make the DB fast and small, we recommend the Rust programming language, but you can also use Python, depending on your developers’ expertise.
Another important component is the document parser. The task is easy with simple, plain-text messages, but your company knowledge includes tons of PDFs, Word docs, Excel spreadsheets, and even videos. In our architecture, parsers are external, replaceable modules written in various languages, working with the DB in the same isolated network.
RAG for Enterprise
With all the achievements of recent technology, RAG is not rocket science anymore. However, when it comes to enterprise data, the task gets much more complex. Data security is one of the biggest concerns of the LLM era, so we recommend starting small: with a limited number of non-critical documents, limited access, and a wisely secured system.
In general, the task is not impossible and can be handled with a proper application design. Working on an internal tool is a great opportunity to gain experience and prepare better for your next business cases, especially while this part of the IT sector is still so young and immature. This is how we at Grape Up use our expertise to serve our customers better.
Integrating HVAC control in Android with DDS
As modern vehicles become more connected and feature-rich, the need for efficient and reliable communication protocols has grown. One of the critical aspects of automotive systems is the HVAC (Heating, Ventilation, and Air Conditioning) system , which enhances passenger comfort. This article explores how to integrate HVAC control in Android with the DDS (Data Distribution Service) protocol, enabling robust and scalable communication within automotive systems.
This article builds upon the concepts discussed in our previous article, "Controlling HVAC Module in Cars Using Android: A Dive into SOME/IP Integration." It is recommended to read that article first, as it covers the integration of HVAC with SOME/IP, providing foundational knowledge that will be beneficial for understanding the DDS integration described here.
What is HVAC?
HVAC systems in vehicles are responsible for maintaining a comfortable cabin environment. These systems regulate temperature, airflow, and air quality within the vehicle. Key components include:
- Heaters: Warm the cabin using heat from the engine or an electric heater.
- Air Conditioners: Cool the cabin by compressing and expanding refrigerant.
- Ventilation: Ensures fresh air circulation within the vehicle.
- Air Filters: Remove dust and pollutants from incoming air.
Effective HVAC control is crucial for passenger comfort, and integrating this control with an Android device allows for a more intuitive user experience.
Detailed overview of the DDS protocol
Introduction to DDS
Data Distribution Service (DDS) is a middleware protocol and API standard for data-centric connectivity. It enables scalable, real-time, dependable, high-performance, and interoperable data exchanges between publishers and subscribers. DDS is especially popular in mission-critical applications like aerospace, defense, automotive, telecommunications, and healthcare due to its robustness and flexibility.
Key functionalities of DDS
- Data-Centric Publish-Subscribe (DCPS): DDS operates on the publish-subscribe model, where data producers (publishers) and data consumers (subscribers) communicate through topics. This model decouples the communication participants in both time and space, enhancing scalability and flexibility.
- Quality of Service (QoS): DDS provides extensive QoS policies that can be configured to meet specific application requirements. These policies control various aspects of data delivery, such as reliability, durability, latency, and resource usage.
- Automatic Discovery: DDS includes built-in mechanisms for the automatic discovery of participants, topics, and data readers/writers. This feature simplifies the setup and maintenance of communication systems, as entities can join and leave the network dynamically without manual configuration.
- Real-Time Capabilities: DDS is designed for real-time applications, offering low latency and high throughput. It supports real-time data distribution, ensuring timely delivery and processing of information.
- Interoperability and Portability: DDS is standardized by the Object Management Group (OMG), which ensures interoperability between different DDS implementations and portability across various platforms.
Structure of DDS
Domain Participant: The central entity in a DDS system is the domain participant. It acts as the container for publishers, subscribers, topics, and QoS settings. A participant joins a domain identified by a unique ID, allowing different sets of participants to communicate within isolated domains.
Publisher and Subscriber:
- Publisher: A publisher manages data writers and handles the dissemination of data to subscribers.
- Subscriber: A subscriber manages data readers and processes incoming data from publishers.
Topic: Topics are named entities representing a data type and the QoS settings. They are the points of connection between publishers and subscribers. Topics define the structure and semantics of the data exchanged.
Data Writer and Data Reader:
- Data Writer: Data writers are responsible for publishing data on a topic.
- Data Reader: Data readers subscribe to a topic and receive data from corresponding data writers.
Quality of Service (QoS) Policies: QoS policies define the contract between data writers and data readers. They include settings such as:
- Reliability: Controls whether data is delivered reliably (with acknowledgment) or best-effort.
- Durability: Determines how long data should be retained by the middleware.
- Deadline: Specifies the maximum time allowed between consecutive data samples.
- Latency Budget: Sets the acceptable delay from data writing to reading.
Ensuring communication correctness
DDS ensures correct communication through various mechanisms:
- Reliable Communication: Using QoS policies, DDS can guarantee reliable data delivery. For example, the Reliability QoS can be set to "RELIABLE," ensuring that the subscriber acknowledges all data samples.
- Data Consistency: DDS maintains data consistency using mechanisms like coherent access, which ensures that a group of data changes is applied atomically.
- Deadline and Liveliness: These QoS policies ensure that data is delivered within specified time constraints. The Deadline policy ensures that data is updated at expected intervals, while the Liveliness policy verifies that participants are still active.
- Durability: DDS supports various durability levels to ensure data persistence. This ensures that late-joining subscribers can still access historical data.
- Ownership Strength: In scenarios where multiple publishers can publish on the same topic, the Ownership Strength QoS policy determines which publisher's data should be used when conflicts occur.
Building the CycloneDDS Library for Android
To integrate HVAC control in Android with the DDS protocol, we will use the CycloneDDS library. CycloneDDS is an open-source implementation of the DDS protocol, providing robust and efficient data distribution. The source code for CycloneDDS is available at Eclipse CycloneDDS GitHub, and the instructions for building it for Android are detailed at CycloneDDS Android Port.
Prerequisites
Before starting the build process, ensure you have the following prerequisites installed:
- Android NDK: Download and install the latest version from the Android NDK website.
- CMake: Download and install CMake from the CMake website.
- A suitable build environment (e.g., Linux or macOS).
Step-by-step build instructions
1. Clone the CycloneDDS Repository: First, clone the CycloneDDS repository to your local machine:
git clone https://github.com/eclipse-cyclonedds/cyclonedds.git
cd cyclonedds
2. Set Up the Android NDK: Ensure that the Android NDK is properly installed and its path is added to your environment variables.
export ANDROID_NDK_HOME=/path/to/your/android-ndk
export PATH=$ANDROID_NDK_HOME/toolchains/llvm/prebuilt/linux-x86_64/bin:$PATH
3. Create a Build Directory: Create a separate build directory to keep the build files organized:
mkdir build-android
cd build-android
4. Configure the Build with CMake: Use CMake to configure the build for the Android platform. Adjust the ANDROID_ABI parameter based on your target architecture (e.g., armeabi-v7a, arm64-v8a, x86, x86_64):
cmake -DCMAKE_TOOLCHAIN_FILE=$ANDROID_NDK_HOME/build/cmake/android.toolchain.cmake \
-DANDROID_ABI=arm64-v8a \
-DANDROID_PLATFORM=android-21 \
-DCMAKE_BUILD_TYPE=Release \
-DBUILD_SHARED_LIBS=OFF \
-DCYCLONEDDS_ENABLE_SSL=NO \
..
5. Build the CycloneDDS Library: Run the build process using CMake. This step compiles the CycloneDDS library for the specified Android architecture:
cmake --build .
Integrating CycloneDDS with VHAL
After building the CycloneDDS library, the next step is to integrate it with the VHAL (Vehicle Hardware Abstraction Layer) application.
1. Copy the Built Library: Copy the libddsc.a file from the build output to the VHAL application directory:
cp path/to/build-android/libddsc.a path/to/your/android/source/hardware/interfaces/automotive/vehicle/2.0/default/
2. Modify the Android.bp File: Add the CycloneDDS library to the Android.bp file located in the hardware/interfaces/automotive/vehicle/2.0/default/ directory:
cc_prebuilt_library_static {
    name: "libdds",
    vendor: true,
    srcs: ["libddsc.a"],
    strip: {
        none: true,
    },
}
3. Update the VHAL Service Target: In the same Android.bp file, add the libdds library to the static_libs section of the android.hardware.automotive.vehicle@2.0-default-service target:
cc_binary {
    name: "android.hardware.automotive.vehicle@2.0-default-service",
    srcs: ["VehicleService.cpp"],
    shared_libs: [
        "liblog",
        "libutils",
        "libbinder",
        "libhidlbase",
        "libhidltransport",
        "android.hardware.automotive.vehicle@2.0-manager-lib",
    ],
    static_libs: [
        "android.hardware.automotive.vehicle@2.0-manager-lib",
        "android.hardware.automotive.vehicle@2.0-libproto-native",
        "android.hardware.automotive.vehicle@2.0-default-impl-lib",
        "libdds",
    ],
    vendor: true,
}
Defining the Data Model with IDL
To enable DDS-based communication for HVAC control in our Android application, we need to define a data model using the Interface Definition Language (IDL). In this example, we will create a simple IDL file named hvacDriver.idl that describes the structures used for HVAC control, such as fan speed, temperature, and air distribution.
hvacDriver.idl
Create a file named hvacDriver.idl with the following content:
module HVACDriver
{
    struct FanSpeed
    {
        octet value;
    };
    struct Temperature
    {
        float value;
    };
    struct AirDistribution
    {
        octet value;
    };
};
Generating C Code from IDL
Once the IDL file is created, we can use the idlc (IDL compiler) tool provided by CycloneDDS to generate the corresponding C code. The generated files will include hvacDriver.h and hvacDriver.c, which contain the data structures and serialization/deserialization code needed for DDS communication.
Run the following command to generate the C code:
idlc hvacDriver.idl
This command will produce two files:
- hvacDriver.h
- hvacDriver.c
Integrating the generated code with VHAL
After generating the C code, the next step is to integrate these files into the VHAL (Vehicle Hardware Abstraction Layer) application.
Copy the Generated Files: Copy the generated hvacDriver.h and hvacDriver.c files to the VHAL application directory:
cp hvacDriver.h path/to/your/android/source/hardware/interfaces/automotive/vehicle/2.0/default/
cp hvacDriver.c path/to/your/android/source/hardware/interfaces/automotive/vehicle/2.0/default/
Include the Generated Header: In the VHAL source files where you intend to use the HVAC data structures, include the generated header file. For instance, in VehicleService.cpp, you might add:
#include "hvacDriver.h"
Modify the Android.bp File: Update the Android.bp file in the hardware/interfaces/automotive/vehicle/2.0/default/ directory to compile the generated C files and link them with your application:
cc_library_static {
    name: "hvacDriver",
    vendor: true,
    srcs: ["hvacDriver.c"],
}
cc_binary {
    name: "android.hardware.automotive.vehicle@2.0-default-service",
    srcs: ["VehicleService.cpp"],
    shared_libs: [
        "liblog",
        "libutils",
        "libbinder",
        "libhidlbase",
        "libhidltransport",
        "android.hardware.automotive.vehicle@2.0-manager-lib",
    ],
    static_libs: [
        "android.hardware.automotive.vehicle@2.0-manager-lib",
        "android.hardware.automotive.vehicle@2.0-libproto-native",
        "android.hardware.automotive.vehicle@2.0-default-impl-lib",
        "libdds",
        "hvacDriver",
    ],
    vendor: true,
}
Implementing DDS in the VHAL Application
To enable DDS-based communication within the VHAL (Vehicle Hardware Abstraction Layer) application, we need to implement a service that handles DDS operations. This service will be encapsulated in the HVACDDSService class, which will include methods for initialization and running the service.
Step-by-step implementation
1. Create the HVACDDSService Class: First, we will define the HVACDDSService class with methods for initializing the DDS entities and running the service to handle communication.
2. Initialization: The init method will create a DDS participant, and for each structure (FanSpeed, Temperature, AirDistribution), it will create a topic, reader, and writer.
3. Running the Service: The run method will continuously read messages from the DDS readers and trigger a callback function to handle data changes.
void HVACDDSService::init()
{
    /* Create a Participant. */
    participant = dds_create_participant(DDS_DOMAIN_DEFAULT, NULL, NULL);
    if (participant < 0)
    {
        LOG(ERROR) << "[DDS] " << __func__ << " dds_create_participant: " << dds_strretcode(-participant);
    }
    /* Create a Topic with reliable, transient-local QoS. */
    qos = dds_create_qos();
    dds_qset_reliability(qos, DDS_RELIABILITY_RELIABLE, DDS_SECS(10));
    dds_qset_durability(qos, DDS_DURABILITY_TRANSIENT_LOCAL);
    topic_temperature = dds_create_topic(participant, &HVACDriver_Temperature_desc, "HVACDriver_Temperature", qos, NULL);
    if (topic_temperature < 0)
    {
        LOG(ERROR) << "[DDS] " << __func__ << " dds_create_topic(temperature): " << dds_strretcode(-topic_temperature);
    }
    /* Create a Reader and a Writer for the topic. */
    reader_temperature = dds_create_reader(participant, topic_temperature, NULL, NULL);
    if (reader_temperature < 0)
    {
        LOG(ERROR) << "[DDS] " << __func__ << " dds_create_reader(temperature): " << dds_strretcode(-reader_temperature);
    }
    writer_temperature = dds_create_writer(participant, topic_temperature, NULL, NULL);
    if (writer_temperature < 0)
    {
        LOG(ERROR) << "[DDS] " << __func__ << " dds_create_writer(temperature): " << dds_strretcode(-writer_temperature);
    }
    /* ... the same pattern repeats for FanSpeed and AirDistribution ... */
}
void HVACDDSService::run()
{
    samples_temperature[0] = HVACDriver_Temperature__alloc();
    samples_fanspeed[0] = HVACDriver_FanSpeed__alloc();
    samples_airdistribution[0] = HVACDriver_AirDistribution__alloc();
    while (true)
    {
        bool no_data = true;
        rc = dds_take(reader_temperature, samples_temperature, infos, MAX_SAMPLES, MAX_SAMPLES);
        if (rc < 0)
        {
            LOG(ERROR) << "[DDS] " << __func__ << " temperature dds_take: " << dds_strretcode(-rc);
        }
        /* Check if we read some data and it is valid. */
        if ((rc > 0) && (infos[0].valid_data))
        {
            no_data = false;
            HVACDriver_Temperature *msg = (HVACDriver_Temperature *) samples_temperature[0];
            LOG(INFO) << "[DDS] " << __func__ << " === [Subscriber] Message temperature(" << (float) msg->value << ")";
            if (tempChanged_)
            {
                std::stringstream ss;
                ss << std::fixed << std::setprecision(2) << msg->value;
                tempChanged_(ss.str());
            }
        }
        /* ... the same take-and-notify pattern repeats for FanSpeed and AirDistribution ... */
        if (no_data)
        {
            /* Polling sleep. */
            dds_sleepfor(DDS_MSECS(20));
        }
    }
}
Building and deploying the application
After implementing the HVACDDSService class and integrating it into your VHAL application, the next steps involve building the application and deploying it to your Android device.
Building the application
1. Build the VHAL Application: Ensure that your Android build environment is set up correctly and that all necessary dependencies are in place. Then, navigate to the root of your Android source tree and run the build command:
source build/envsetup.sh
lunch <target>
m -j android.hardware.automotive.vehicle@2.0-default-service
2. Verify the Build: Check that the build completes successfully and that the binary for your VHAL service is created. The output binary should be located in the out/target/product/<device>/system/vendor/bin/ directory.
Deploying the application
1. Push the Binary to the Device: Connect your Android device to your development machine via USB, and use adb to push the built binary to the device:
adb push out/target/product/<device>/system/vendor/bin/android.hardware.automotive.vehicle@2.0-default-service /vendor/bin/
2. Restart the Device: Reboot the device (or restart the VHAL service) so that the updated binary is picked up.
Conclusion
In this article, we have covered the steps to integrate DDS (Data Distribution Service) communication for HVAC control in an Android Automotive environment using the CycloneDDS library. Here's a summary of the key points:
1. CycloneDDS Library Setup:
- Cloned and built CycloneDDS for Android.
- Integrated the built library into the VHAL application.
2. Data Model Definition:
- Defined a simple data model for HVAC control using IDL.
- Generated the necessary C code from the IDL definitions.
3. HVACDDSService Implementation:
- Created the HVACDDSService class to manage DDS operations.
- Implemented methods for initialization (init) and runtime processing (run).
- Set up DDS entities such as participants, topics, readers, and writers.
- Integrated the DDS service into the VHAL application's main loop.
4. Building and Deploying the Application:
- Built the VHAL application and deployed it to the Android device.
- Ensured correct permissions and successfully started the VHAL service.
By following these steps, you can leverage DDS for efficient, scalable, and reliable communication in automotive systems, enhancing HVAC systems' control and monitoring capabilities in Android Automotive environments. This integration showcases the potential of DDS in automotive applications, providing a robust framework for data exchange across different components and services.
Choosing the right approach: How generative AI powers legacy system modernization
In today's rapidly evolving digital landscape, the need to modernize legacy systems and applications is becoming increasingly critical for organizations aiming to stay competitive. Once the backbone of business operations, legacy systems are now potential barriers to efficiency, innovation, and security.
As technology progresses, the gap between outdated systems and modern requirements widens, making modernization not just beneficial but essential.
This article provides an overview of different legacy system modernization approaches, including the emerging role of generative AI (GenAI). We will explore how GenAI can enhance this process, making it not only faster and more cost-effective but also better aligned with current and future business needs.
Understanding legacy systems
Legacy systems are typically maintained due to their critical role in existing business operations. They often feature:
- Outdated technology stacks and programming languages.
- Inefficient and unstable performance.
- High susceptibility to security vulnerabilities due to outdated security measures.
- Significant maintenance costs and challenges in sourcing skilled personnel.
- Difficulty integrating with newer technologies and systems.
Currently, almost 66% of enterprises continue to rely on outdated applications to run their key operations, and 60% use them for customer-facing tasks.
Why is this the case?
Primarily because of a lack of understanding of the older technology infrastructure and the technological difficulties associated with modernizing legacy systems. However, legacy application modernization is often essential. In fact, 70% of global CXOs consider mainframe and legacy modernization a top business priority.
The necessity of legacy software modernization
As technology rapidly evolves, businesses find it increasingly vital to update their aging infrastructure to keep pace with industry standards and consumer expectations. Legacy systems modernization is crucial for several reasons:
- Security Improvements: Outdated software dependencies in older systems often lack updates, leaving critical bugs and security vulnerabilities unaddressed.
- Operational Efficiency: Legacy systems can slow down operations with their inefficiencies and frequent maintenance needs.
- Cost Reduction: Although initially costly, the long-term maintenance of outdated systems is often more expensive than modernizing them.
- Scalability and Flexibility: Modern systems are better equipped to handle increasing loads and adapt to changing business needs.
- Innovation Enablement: Modernized systems can support new technologies and innovations, allowing businesses to stay ahead in competitive markets.
Modernizing legacy code presents an opportunity to address multiple challenges from both a business and an IT standpoint, improving overall organizational performance and agility.
Different approaches to legacy modernization
When it comes to modernizing legacy systems, there are various approaches available to meet different organizational needs and objectives. These strategies can vary greatly depending on factors such as the current state of the legacy systems, business goals, budget constraints, and desired outcomes.
Some modernization efforts might focus on minimal disruption and cost, opting to integrate existing systems with new functionalities through APIs or lightly tweaking the system to fit a new operating environment. Other approaches might involve more extensive changes, such as completely redesigning the system architecture to incorporate advanced technologies like microservices or even rebuilding the system from scratch to meet modern standards and capabilities.
Each approach has its own set of advantages, challenges, and implications for the business processes and IT landscape. The choice of strategy depends on balancing these factors with the long-term vision and immediate needs of the organization.
Rewriting legacy systems with generative AI
One of the approaches to legacy system modernization involves rewriting the system's codebase from scratch while aiming to maintain or enhance its existing functionalities. This method is especially useful when the current system no longer meets the evolving standards of technology, efficiency, or security required by modern business environments.
By starting anew, organizations can leverage the latest technologies and architectures, making the system more adaptable and scalable to future needs.
Generative AI is particularly valuable in this context for several reasons:
- Uncovering hidden relations and understanding embedded business rules: GenAI supports the analysis of legacy code to identify complex relationships and dependencies crucial for maintaining system interactions during modernization. It also deciphers embedded business rules, ensuring that vital functionalities are preserved and enhanced in the updated system.
- Improved accuracy: GenAI enhances the accuracy of the modernization process by automating tasks such as code analysis and documentation, which reduces human errors and ensures a more precise translation of legacy functionalities to the new system.
- Optimization and performance: With GenAI, the new code can be optimized for performance from the outset. It can integrate advanced algorithms that improve efficiency and adaptability, which are often lacking in older systems.
- Reducing development time and cost: The automation capabilities of GenAI significantly reduce the time and resources needed for rewriting systems. Faster development cycles and fewer human hours needed for coding and testing lower the overall cost of the modernization project.
- Increasing security measures: GenAI can help implement advanced security protocols in the new system, reducing the risk of data breaches and associated costs. This is crucial in today's digital environment, where security threats are increasingly sophisticated.
By integrating GenAI in this modernization approach, organizations can achieve a more streamlined transition to a modern system architecture, which is well-aligned with current and future business requirements. This ensures that the investment in modernization delivers substantial returns in terms of system performance, scalability, and maintenance costs.

How generative AI fits into the legacy system modernization process
Generative AI speeds up development and provides a deeper understanding of the business context, boosting work across all phases, from design and business analysis to code generation, testing, and verification.
Here's how GenAI transforms the modernization process:
1. Analysis Phase
Automated documentation and in-depth code analysis: GenAI's ability to assist in automatic documenting, reverse engineering, and extracting business logic from legacy codebases is a powerful capability for modernization projects. It overcomes the limitations of human memory and outdated documentation to help ensure a comprehensive understanding of existing systems before attempting to upgrade or replace them.
Business-context awareness: By analyzing the production source code directly, GenAI helps comprehend the embedded business logic, which speeds up the migration process and improves the safety and accuracy of the transition.
2. Preparatory Phase
Tool compatibility and integration: GenAI tools can identify and integrate with many compatible development tools, recommend necessary plugins or extensions within supported environments, and enhance the existing development environment by automating routine tasks and providing intelligent code suggestions to support effective modernization efforts.
LLM-assisted knowledge discovery: Large Language Models (LLMs) can be used to delve deep into a legacy system's data and codebase to uncover critical insights and hidden patterns. This knowledge discovery process aids in understanding complex dependencies, business logic, and operational workflows embedded within the legacy system. This step is crucial for ensuring that all relevant data and functionalities are considered before beginning the migration, thereby reducing the risk of overlooking critical components.
3. Migration/Implementation Phase
Code generation and conversion: Using LLMs, GenAI aids in the design process by transforming outdated code into contemporary languages and frameworks, thereby improving the functionality and maintainability of applications.
Automated testing and validation: GenAI supports the generation of comprehensive test cases to ensure that all new functionalities are verified against specified requirements and that the migrated system operates as intended. It helps identify and resolve potential issues early, ensuring a high level of accuracy and functionality before full deployment.
Modularization and refactoring: GenAI can also help break down complex, monolithic applications into manageable modules, enhancing system maintainability and scalability. It identifies and suggests strategic refactoring for areas with excessive dependencies and scattered functionalities.
4. Operations and Optimization Phase
AI-driven monitoring and optimization: Once the system is live, GenAI continues to monitor its performance, optimizing operations and predicting potential failures before they occur. This proactive maintenance helps minimize downtime and improve system reliability.
Continuous improvement and DevOps automation: GenAI facilitates continuous integration and deployment practices, automatically updating and refining the system to meet evolving business needs. It ensures that the modernized system is not only stable but also continually evolving with minimal manual intervention.
Across All Phases
- Sprint execution support: GenAI enhances agile sprint executions by providing tools for rapid feature development, bug fixes, and performance optimizations, ensuring that each sprint delivers maximum value.
- Security enhancements and compliance testing: It identifies security vulnerabilities and compliance issues early in the development cycle, allowing for immediate remediation that aligns with industry standards.
- Predictive analytics for maintenance and monitoring: It also helps anticipate potential system failures and performance bottlenecks using predictive analytics, suggesting proactive maintenance and optimizations to minimize downtime and improve system reliability.
Should enterprises use GenAI in legacy system modernization?
To determine if GenAI is necessary for a specific modernization project, organizations should consider the complexity and scale of their legacy systems, the need for improved accuracy in the modernization process, and the strategic value of faster project execution.
If the existing systems are cumbersome and deeply intertwined with critical business operations, or if security, speed, and accuracy are priorities, then GenAI is likely an indispensable tool for ensuring successful modernization with optimal outcomes.
Conclusion
Generative AI significantly boosts the legacy system modernization process by introducing advanced capabilities that address a broad range of challenges. From automating documentation and code analysis in the analysis phase to supporting modularization and system integration during implementation, this technology provides critical support that speeds up modernization, ensures high system performance, and aligns with modern technological standards.
GenAI integration not only streamlines processes but also equips organizations to meet future challenges effectively, driving innovation and competitive advantage in a rapidly evolving digital landscape.
How to design the LLM Hub Platform for enterprises
In today's fast-paced digital landscape, businesses constantly seek ways to boost efficiency and cut costs. With the rising demand for seamless customer interactions and smoother internal processes, large corporations are turning to innovative solutions like chatbots. These AI-driven tools hold the potential to revolutionize operations, but their implementation isn't always straightforward.
The rapid advancements in AI technology make it challenging to predict future developments. For example, consider the differences in image generation technology that occurred over just two years:

Source: https://medium.com/@junehao/comparing-ai-generated-images-two-years-apart-2022-vs-2024-6c3c4670b905
Find more examples in this blog post .
This text explores the requirements for an LLM Hub platform, highlighting how it can address implementation challenges, including the rapid development of AI solutions, and unlock new opportunities for innovation and efficiency. Understanding the importance of a well-designed LLM Hub platform empowers businesses to make informed decisions about their chatbot initiatives and embark on a confident path toward digital transformation.
Key benefits of implementing chatbots
Several factors fuel the desire for easy and affordable chatbot solutions.
- Firstly, businesses recognize the potential of chatbots to improve customer service by providing 24/7 support, handling routine inquiries, and reducing wait times.
- Secondly, chatbots can automate repetitive tasks, freeing up human employees for more complex and creative work.
- Finally, chatbots can boost operational efficiency by streamlining processes across various departments, from customer service to HR.
However, deploying and managing chatbots across diverse departments and functions can be complex and challenging. Integrating chatbots with existing systems, ensuring they understand and respond accurately to a wide range of inquiries, and maintaining them with regular updates requires significant technical expertise and resources.
This is where LLM Hubs come into play.
What is an LLM Hub?
An LLM Hub is a centralized platform designed to simplify the deployment and management of multiple chatbots within an organization. It provides a single interface to oversee various AI-driven tools, ensuring they work seamlessly together. By centralizing these functions, an LLM Hub makes implementing updates, maintaining security standards, and managing data sources easier.
This centralization allows for consistent and efficient management, reducing the complexity and cost associated with deploying and maintaining chatbot solutions across different departments and functions.
Why does your organization need an LLM Hub?
The need for such solutions is clear. Without the adoption of AI tools, businesses risk falling behind quickly. Furthermore, if companies neglect to manage AI usage, employees might use AI tools independently, leading to potential data leaks. One example of this risk is described in an article detailing leaked conversations using ChatGPT, where sensitive information, including system login credentials, was exposed during a system troubleshooting session at a pharmacy drug portal.
Cost is another critical factor. The affordability of deploying chatbots at scale depends on licensing fees, infrastructure costs, and maintenance expenses. A comprehensive LLM Hub platform that is both cost-effective and scalable allows businesses to adopt chatbot technology with minimal financial risk.
Considerations for the LLM Hub implementation
However, achieving this requires careful planning. Let’s consider, for example, data security. To provide answers tailored to employees and potential customers, we need to integrate the models with extensive data sources. These data sources can be vast, and there is a significant risk of inadvertently revealing more information than intended. The weakest link in any company's security chain is often human error, and the same applies to chatbots: they can make mistakes, and end users may exploit these vulnerabilities through clever manipulation techniques.
We can implement robust tools to monitor and control the information being sent to users. This capability can be applied to every chatbot assistant within our ecosystem, ensuring that sensitive data is protected. The security tools we use - including encryption, authentication mechanisms, and role-based access control - can be easily implemented and tailored for each assistant in our LLM Hub or configured centrally for the entire Hub, depending on the specific needs and policies of the organization.
As mentioned, deploying and managing chatbots across diverse departments and functions can also be complex and challenging. Efficient development is crucial for organizations seeking to stay compliant with regulatory requirements and internal policies while maximizing operational effectiveness. This requires utilizing standardized templates or blueprints within an LLM Hub, which not only accelerates development but also ensures consistency and compliance across all chatbots.
Additionally, LLM Hubs offer robust tools for compliance management and control, enabling organizations to monitor and enforce regulatory standards, access controls, and data protection measures seamlessly. These features play a pivotal role in reducing the complexity and cost associated with deploying and maintaining chatbot solutions while simultaneously safeguarding sensitive data and mitigating compliance risks.

In the following chapter, we will delve into the specific technical requirements necessary for the successful implementation of an LLM Hub platform, addressing the challenges and opportunities it presents.
LLM Hub - technical requirements
Several key technical requirements must be met to ensure that an LLM Hub functions effectively within the organization's AI ecosystem. These requirements focus on data integration, adaptability, integration methods, and security measures. For this use case, four major requirements were set based on the business problem we want to solve.
- Independent Integration of Internal Data Sources: The LLM Hub should seamlessly integrate with the organization's existing data sources. This ensures that data from different departments or sources within the organization can be seamlessly incorporated into the LLM Hub. It enables the creation of chatbots that leverage valuable internal data, regardless of the specific chatbot's function. Data owners can deliver data sources, which promotes flexibility and scalability for diverse use cases.
- Easy Onboarding of New Use Cases: The LLM Hub should streamline the process of adding new chatbots and functionalities. Ideally, the system should allow the creation of reusable solutions and data tools. This means the ability to quickly create a chatbot and plug data tools, such as internal data sources or web search functionalities, into it. This reusability minimizes the development time and resources required for each new chatbot, accelerating AI deployment.
- Security Verification Layer for the Entire Platform: Security is paramount in LLM Hub development when dealing with sensitive data and open-ended user interactions. The LLM Hub must be equipped with robust security measures to protect user privacy and prevent unauthorized access or malicious activities. Additionally, a question-answer verification layer must be implemented to ensure the accuracy and reliability of the information provided by the chatbots.
- Possibility of Various Integrations with the Assistant Itself: The LLM Hub should offer diverse integration options for AI assistants. Interaction between users and chatbots within the Hub should be available regardless of the communication platform. Whether users prefer to engage via an API, a messaging platform like Microsoft Teams, or a web-based interface, the LLM Hub should accommodate diverse integration options to meet user preferences and operational needs.
High-level design of the LLM Hub
A well-designed LLM Hub platform is key to unlocking the true potential of chatbots within an organization. However, building such a platform requires careful consideration of various technical requirements. In the previous section, we outlined four key requirements. Now, we will take an iterative approach to unveil the LLM Hub architecture.
Data sources integration
Figure 1
The architectural diagram in Figure 1 displays a design that prioritizes independent integration of internal data sources. Let us break down the key components and how they contribute to achieving the goal:
- Domain Knowledge Storage (DKS) – the knowledge storage acts as a central repository for all the data extracted from an internal source. Here, the data is organized using a schema standardized across all domain knowledge storages. This schema defines the structure and meaning of the data (metadata), making it easier for chatbots to understand and query the information they need, regardless of the original source.
- Data Loaders – data loaders act as bridges between the LLM Hub and specific data sources within the organization. Each loader can be configured and created independently using its source's native protocols (APIs, events, etc.), resulting in structured knowledge in the DKS. This ensures that the LLM Hub can integrate with a wide variety of data sources without requiring significant modifications in the assistant. Data Loaders, along with the DKS, can be provided by data owners who are experts in the given domain.
- Assistant – represents a chatbot that can be built using the LLM Hub platform. It uses the RAG approach, getting knowledge from different DKSs to understand the topic and answer user questions. It is the only piece of the architecture where use case owners can make changes, like prompt engineering, caching, etc.
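To illustrate the contract between these components, here is a minimal sketch with illustrative names:
import java.util.List;
import java.util.Map;

// the standardized document schema shared by every Domain Knowledge Storage
record KnowledgeDocument(String id, String content, Map<String, String> metadata) {}

// each internal data source gets its own loader speaking the source's native
// protocol and producing documents in the common schema
interface DataLoader
{
    List<KnowledgeDocument> load();
}

// the assistant only ever talks to the standardized storage
interface DomainKnowledgeStorage
{
    void store(List<KnowledgeDocument> documents);
    List<KnowledgeDocument> query(String question, int limit);
}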
Functions
Figure 2 introduces pre-built functions that can be used by any assistant, enabling easier onboarding of new use cases. Functions can be treated as reusable building blocks for chatbot development. Assistants can easily enable and disable specific functions using configuration.
They can also facilitate knowledge sharing and collaboration within an organization. Users can share functions they have created, allowing others to leverage them and accelerate chatbot development efforts.
Using pre-built functions, developers can focus on each chatbot's unique logic and user interface rather than re-inventing the wheel for common functionalities like internet search. Also, with function calling, the LLM can decide whether a specific domain knowledge storage should be called or not, optimizing the RAG process, reducing costs, and minimizing unnecessary calls to external resources.
Figure 2
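A rough sketch of what such a reusable function and its per-assistant configuration could look like (all names are illustrative):
import java.util.Map;
import java.util.Set;

// a reusable building block that any assistant can enable via configuration
interface AssistantFunction
{
    String name();        // e.g. "web_search"
    String description(); // exposed to the LLM for function calling
    String invoke(Map<String, String> arguments);
}

// each assistant declares the functions it uses in its configuration
record AssistantConfig(String assistantName, Set<String> enabledFunctions) {}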
Middleware
With the next diagram (Figure 3), we introduce an additional layer of middleware, a crucial enhancement that fortifies our software by incorporating a unified authentication process and a prompt validation layer. This middleware acts as a gatekeeper, ensuring that all requests meet our security and compliance standards before proceeding further into the system.
When a user sends a request, the middleware's authentication module verifies the user's credentials to ensure they have the necessary permissions to access the requested resources. This step is vital in maintaining the integrity and security of our system, protecting sensitive data, and preventing unauthorized access. By implementing a robust authentication mechanism, we safeguard our infrastructure from potential breaches and ensure that only legitimate users interact with our assistants.
Next, the prompt validation layer comes into play. This component is designed to scrutinize each incoming request to ensure it complies with company policies and guidelines. Given the sophisticated nature of modern AI models, there are numerous ways to craft queries that could potentially extract sensitive or unauthorized information. For instance, as highlighted in a recent study, there are methods to extract training data through well-constructed queries. By validating prompts before they reach the AI model, we mitigate these risks, ensuring that the data processed is both safe and appropriate.
Figure 3
The middleware, comprising the authentication (Auth) and Prompt Verification Layer, acts as a gatekeeper to ensure secure and valid interactions. The authentication module verifies user credentials, while the Prompt Verification Layer ensures that incoming requests are appropriate and within the scope of the AI model's capabilities. This dual-layer security approach not only safeguards the system but also ensures that users receive relevant and accurate responses.
Adaptability is the key here. The middleware is designed to be a common component for all our assistants, providing a standardized approach to security and compliance. This uniformity simplifies maintenance: updates to the authentication or validation processes can be rolled out across the board without modifying each assistant individually. Furthermore, the modular design allows for easy expansion and customization, enabling us to tailor the solution to the specific needs of different customers.
For those customers, this means a more reliable and secure system that can adapt to their unique requirements. Whether they need to integrate new authentication protocols, enforce stricter compliance rules, or scale the system to accommodate more users, the middleware framework is flexible enough to handle these changes seamlessly.
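As a minimal sketch of the gatekeeper idea, the pipeline below runs both checks before a request may reach an assistant. Both checks are deliberately reduced to placeholders; a production system would delegate to an identity provider and a policy engine.
#include <optional>
#include <string>

struct Request {
    std::string userToken;
    std::string prompt;
};

class Middleware {
public:
    // Returns an error message if the request must be rejected,
    // or std::nullopt if it may proceed to the assistant.
    std::optional<std::string> inspect(const Request& req) const {
        if (!isAuthenticated(req.userToken))
            return "401: unknown or expired credentials";
        if (!isPromptCompliant(req.prompt))
            return "422: prompt violates company policy";
        return std::nullopt;  // gate open, forward to the assistant
    }

private:
    bool isAuthenticated(const std::string& token) const {
        return !token.empty();  // placeholder; a real check would call an IdP
    }
    bool isPromptCompliant(const std::string& prompt) const {
        // Placeholder for policy rules, e.g. blocking attempts to extract
        // training data or other sensitive information.
        return prompt.find("ignore previous instructions") == std::string::npos;
    }
};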
Handlers
We now arrive at the very beginning of the request flow: the handlers. Figure 4 highlights the crucial role of these components in managing requests from various sources. Users can interact through different communication platforms, including those popular in office environments such as Teams and Slack. Employees are familiar with these platforms, as they use them daily to communicate with colleagues.
Handling prompts from multiple sources can be complex due to the variations in how each platform structures requests. This is where our handlers play a critical role.
They are designed to parse incoming requests and convert them into a standardized format, ensuring consistency in responses regardless of the communication platform used. By developing robust handlers, we ensure that the AI model provides uniform answers across all communicators, thereby enhancing reliability and user experience.
Moreover, these handlers streamline the integration process, allowing for easy scalability as new communication platforms are adopted. This flexibility is essential for adapting to the evolving technological landscape and maintaining a cohesive user experience across various channels.
The API handler facilitates the creation of custom front-end interfaces. This capability allows the company to deliver unique, personalized chat experiences adaptable to various scenarios.
For example, front-end developers can leverage the API handler to implement a mobile version of the chatbot or enable interactions with the AI model within a car. With comprehensive documentation, the API handler provides an effective solution for developing and integrating these features seamlessly.
In summary, the handlers are a foundational element of our AI infrastructure, ensuring seamless communication, robust security, and scalability. By standardizing requests and enabling versatile front-end integrations, they provide a consistent and high-quality user experience across various communication platforms.
Figure 4
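To illustrate the normalization step the handlers perform, consider the hedged sketch below. The HubRequest fields and the Teams parsing are assumptions standing in for real payload decoding, but they show how platform-specific requests converge on one internal format.
#include <string>

// Hypothetical standardized request produced by every handler,
// regardless of what the source platform sends on the wire.
struct HubRequest {
    std::string userId;
    std::string channel;  // "teams", "slack", "api", ...
    std::string text;
};

// Each communication platform gets its own handler implementation.
class Handler {
public:
    virtual ~Handler() = default;
    virtual HubRequest parse(const std::string& rawPayload) const = 0;
};

class TeamsHandler : public Handler {
public:
    HubRequest parse(const std::string& rawPayload) const override {
        // A real implementation would decode the Teams activity JSON here.
        return HubRequest{"user-from-teams-payload", "teams", rawPayload};
    }
};
Adding support for a new platform then amounts to writing one more Handler subclass, leaving the assistants themselves untouched.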
Conclusions
The development of the LLM Hub platform is a significant step forward in adopting AI technology within large organizations. It effectively addresses the complexities and challenges of implementing chatbots in an easy, fast, and cost-effective way. However, architecture alone is not enough to maximize the potential of an LLM Hub; several key factors must also be considered:
- Continuous Collaboration: Collaboration between data owners, use case owners, and the platform team is essential for the platform to stay at the forefront of AI innovation.
- Compliance and Control: In the corporate world, robust compliance measures must be implemented to ensure the chatbots adhere to industry and organizational standards. The LLM Hub is a natural place for this: it can implement granular access controls, audit trails, logging, and policy enforcement.
- Templates for Efficiency: LLM Hub should provide customizable templates for all chatbot components that can be used in a new use case. Facilitating templates will help teams accelerate the creation and deployment of new assistants, improving efficiency and reducing time to market.
By adhering to these principles, organizations can unlock new avenues for growth, efficiency, and innovation in the era of artificial intelligence. Investing in a well-designed LLM Hub platform equips corporations with the chatbot tools to:
- Simplify Compliance: LLM Hub ensures that chatbots created in the platform adhere to industry regulations and standards, safeguarding your company from legal implications and maintaining a positive brand name.
- Enhance Security: Security measures built into the platform foster trust among customers and partners, safeguarding sensitive data and the organization's intellectual property.
- Accelerate Chatbot Development: Templates and tools provided by the LLM Hub or by other use case owners speed up the development and launch of sophisticated chatbots.
- Asynchronous Collaboration and Work Reduction: An LLM Hub enables teams to work asynchronously on chatbot development, eliminating duplicated effort, e.g., building a second connection to the same data source or reimplementing the same action.
As AI technology continues to evolve, the potential applications of LLM Hubs will expand, opening new opportunities for innovation. Organizations can leverage this technology to not only enhance customer interactions but also to streamline internal processes, improve decision-making, and foster a culture of continuous improvement. By integrating advanced analytics and machine learning capabilities, the LLM Hub can provide deeper insights and predictive capabilities, driving proactive business strategies.
Furthermore, the modularity and scalability of the LLM Hub platform mean that it can grow alongside the organization, adapting to changing needs without extensive overhauls. Specifically, this growth potential translates into the ability to seamlessly integrate new tools and functionalities into the LLM Hub ecosystem. As the organization expands, new chatbots can simply be added to the platform and use the tools already implemented. This future-proof design ensures that investments made today will continue to yield benefits in the long run.
The successful implementation of an LLM Hub can transform the organizational landscape, making AI an integral part of the business ecosystem. This transformation enhances operational efficiency and positions the organization as a leader in technological innovation, ready to meet future challenges and opportunities.
Controlling HVAC module in cars using Android: A dive into SOME/IP integration
In modern automotive design, controlling various components of a vehicle via mobile devices has become a significant trend, enhancing user experience and convenience. One such component is the HVAC (Heating, Ventilation, and Air Conditioning) system, which plays a crucial role in ensuring passenger comfort. In this article, we'll explore how to control the HVAC module in a car using an Android device , leveraging the power of the SOME/IP protocol.
Understanding HVAC
HVAC stands for Heating, Ventilation, and Air Conditioning. In the context of automotive engineering, the HVAC system regulates the temperature, humidity, and air quality within the vehicle cabin. It includes components such as heaters, air conditioners, fans, and air filters. Controlling the HVAC system efficiently contributes to passenger comfort and safety during the journey.
Introduction to SOME/IP
In the SOME/IP paradigm, communication is structured around services, which encapsulate specific functionalities or data exchanges. There are two main roles within the service-oriented model:
Provider: The provider is responsible for offering services to other ECUs within the network. In the automotive context, a provider ECU might control physical actuators, read sensor data, or perform other tasks related to vehicle operation. For example, in our case, the provider would be an application running on a domain controller within the vehicle.
The provider offers services by exposing interfaces that define the methods or data structures available for interaction. These interfaces can include operations to control actuators (e.g., HVAC settings) or methods to read sensor data (e.g., temperature, humidity).
Consumer: The consumer, on the other hand, is an ECU that utilizes services provided by other ECUs within the network. Consumers can subscribe to specific services offered by providers to receive updates or invoke methods as needed. In the automotive context, a consumer might be responsible for interpreting sensor data, sending control commands, or performing other tasks based on received information.
Consumers subscribe to services they are interested in and receive updates whenever there is new data available. They can also invoke methods provided by the service provider to trigger actions or control functionalities. In our scenario, the consumer would be an application running on the Android VHAL (Vehicle Hardware Abstraction Layer), responsible for interacting with the vehicle's network and controlling HVAC settings.
SOME/IP communication flow
The communication flow in SOME/IP follows a publish-subscribe pattern, where providers publish data or services, and consumers subscribe to them to receive updates or invoke methods. This asynchronous communication model allows for efficient and flexible interaction between ECUs within the network.

Source: https://github.com/COVESA/vsomeip/wiki/vsomeip-in-10-minutes
In our case, the application running on the domain controller (provider) would publish sensor data such as temperature, humidity, and HVAC status. Subscribed consumers, such as the VHAL application on Android, would receive these updates and could send control commands back to the domain controller to adjust HVAC settings based on user input.
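For orientation, here is a compressed sketch of what the provider side could look like with vsomeip, reusing the service, instance, event, and event group IDs that we define for the consumer later in this article. It is a simplification: a real domain-controller application would read actual sensors and publish values continuously, typically from a separate thread, since start() blocks.
#include <vsomeip/vsomeip.hpp>
#include <set>
#include <string>

#define ZONE_HVAC_SERVICE_ID 0x4002
#define ZONE_HVAC_INSTANCE_ID 0x0001
#define ZONE_HVAC_TEMPERATURE_EVENT_ID 0x2011
#define ZONE_HVAC_EVENT_GROUP_ID 0x3011

int main() {
    auto app = vsomeip::runtime::get()->create_application("hvac-provider");
    if (!app->init()) return 1;

    // Offer the HVAC service and its temperature field to the network.
    app->offer_service(ZONE_HVAC_SERVICE_ID, ZONE_HVAC_INSTANCE_ID);
    std::set<vsomeip::eventgroup_t> groups{ZONE_HVAC_EVENT_GROUP_ID};
    app->offer_event(ZONE_HVAC_SERVICE_ID, ZONE_HVAC_INSTANCE_ID,
                     ZONE_HVAC_TEMPERATURE_EVENT_ID, groups,
                     vsomeip::event_type_e::ET_FIELD);

    // Publish an initial temperature value; subscribed consumers receive it.
    std::string temp = "21.5";
    auto payload = vsomeip::runtime::get()->create_payload();
    payload->set_data(reinterpret_cast<const vsomeip::byte_t*>(temp.data()), temp.size());
    app->notify(ZONE_HVAC_SERVICE_ID, ZONE_HVAC_INSTANCE_ID,
                ZONE_HVAC_TEMPERATURE_EVENT_ID, payload);

    app->start();  // blocks and processes SOME/IP messages
    return 0;
}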
Leveraging VHAL in Android for vehicle networking
To communicate with the vehicle's network, Android provides the Vehicle Hardware Abstraction Layer (VHAL). VHAL acts as a bridge between the Android operating system and the vehicle's onboard systems , enabling seamless integration of Android devices with the car's functionalities. VHAL abstracts the complexities of vehicle networking protocols, allowing developers to focus on implementing features such as HVAC control without worrying about low-level communication details.

Source: https://source.android.com/docs/automotive/vhal/previous/properties
Implementing a SOME/IP Consumer in VHAL
To integrate a SOME/IP consumer into VHAL on Android 14, we will use the vsomeip library. Below are the steps required to implement this solution:
Cloning the vsomeip Repository
Go to the main directory of your Android project and create a new directory named external/sdv:
mkdir -p external/sdv
cd external/sdv
git clone https://android.googlesource.com/platform/external/sdv/vsomeip
Replacing the Default VHAL Implementation
In the hardware/interfaces/automotive/vehicle/2.0/default directory, you can find the VHAL application code. In the VehicleService.cpp file, you will find the default VHAL implementation.
int main(int /* argc */, char* /* argv */ []) {
auto store = std::make_unique<VehiclePropertyStore>();
auto connector = std::make_unique<DefaultVehicleConnector>();
auto hal = std::make_unique<DefaultVehicleHal>(store.get(), connector.get());
auto service = android::sp<VehicleHalManager>::make(hal.get());
connector->setValuePool(hal->getValuePool());
android::hardware::configureRpcThreadpool(4, true /* callerWillJoin */);
ALOGI("Registering as service...");
android::status_t status = service->registerAsService();
if (status != android::OK) {
ALOGE("Unable to register vehicle service (%d)", status);
return 1;
}
ALOGI("Ready");
android::hardware::joinRpcThreadpool();
return 0;
}
The default implementation of VHAL is provided by DefaultVehicleHal, which we need to replace in VehicleService.cpp.
From:
auto hal = std::make_unique<DefaultVehicleHal>(store.get(), connector.get());
To:
auto hal = std::make_unique<VendorVehicleHal>(store.get(), connector.get());
For our implementation, we will create a class called VendorVehicleHal and inherit from the DefaultVehicleHal class. We will override the set and get functions.
class VendorVehicleHal : public DefaultVehicleHal {
public:
VendorVehicleHal(VehiclePropertyStore* propStore, VehicleHalClient* client);
VehiclePropValuePtr get(const VehiclePropValue& requestedPropValue,
StatusCode* outStatus) override;
StatusCode set(const VehiclePropValue& propValue) override;
};
The get function is invoked when the Android system requests information from VHAL, and set when it wants to write a value. Data is transmitted in a VehiclePropValue object defined in hardware/interfaces/automotive/vehicle/2.0/types.hal.
It contains a variable, prop, which is the identifier of our property. The list of all properties can be found in the types.hal file.
We will filter out only the values of interest and redirect the rest to the default implementation.
StatusCode VendorVehicleHal::set(const VehiclePropValue& propValue) {
    ALOGD("VendorVehicleHal::set propId: 0x%x areaID: 0x%x", propValue.prop, propValue.areaId);
    switch (propValue.prop) {
        case (int)VehicleProperty::HVAC_FAN_SPEED:
            break;
        case (int)VehicleProperty::HVAC_FAN_DIRECTION:
            break;
        case (int)VehicleProperty::HVAC_TEMPERATURE_CURRENT:
            break;
        case (int)VehicleProperty::HVAC_TEMPERATURE_SET:
            break;
        case (int)VehicleProperty::HVAC_DEFROSTER:
            break;
        case (int)VehicleProperty::HVAC_AC_ON:
            break;
        case (int)VehicleProperty::HVAC_MAX_AC_ON:
            break;
        case (int)VehicleProperty::HVAC_MAX_DEFROST_ON:
            break;
        case (int)VehicleProperty::EVS_SERVICE_REQUEST:
            break;
        case (int)VehicleProperty::HVAC_TEMPERATURE_DISPLAY_UNITS:
            break;
    }
    return DefaultVehicleHal::set(propValue);
}
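The article focuses on set; for completeness, a minimal get counterpart mirroring the same filtering pattern might look like the sketch below. The HVAC_TEMPERATURE_CURRENT branch is a placeholder for serving values received over SOME/IP instead of the default property store.
VehiclePropValuePtr VendorVehicleHal::get(const VehiclePropValue& requestedPropValue,
                                          StatusCode* outStatus) {
    ALOGD("VendorVehicleHal::get propId: 0x%x areaID: 0x%x",
          requestedPropValue.prop, requestedPropValue.areaId);
    switch (requestedPropValue.prop) {
        case (int)VehicleProperty::HVAC_TEMPERATURE_CURRENT:
            // Here we could return the latest value received over SOME/IP.
            break;
        default:
            break;
    }
    return DefaultVehicleHal::get(requestedPropValue, outStatus);
}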
Now we need to create a SOME/IP service consumer. If you're not familiar with the SOME/IP protocol or the vsomeip library, I recommend reading the guide "vsomeip in 10 minutes".
It provides a step-by-step description of how to create a provider and consumer for SOME/IP.
In our example, we'll create a class called ZoneHVACService and define SOME/IP service, instance, method, and event IDs:
#define ZONE_HVAC_SERVICE_ID 0x4002
#define ZONE_HVAC_INSTANCE_ID 0x0001
#define ZONE_HVAC_SET_TEMPERATURE_ID 0x1011
#define ZONE_HVAC_SET_FANSPEED_ID 0x1012
#define ZONE_HVAC_SET_AIR_DISTRIBUTION_ID 0x1013
#define ZONE_HVAC_TEMPERATURE_EVENT_ID 0x2011
#define ZONE_HVAC_FANSPEED_EVENT_ID 0x2012
#define ZONE_HVAC_AIR_DISTRIBUTION_EVENT_ID 0x2013
#define ZONE_HVAC_EVENT_GROUP_ID 0x3011
class ZoneHVACService {
public:
ZoneHVACService(bool _use_tcp) :
    app_(vsomeip::runtime::get()->create_application(vsomeipAppName)),
    use_tcp_(_use_tcp) {
}
bool init() {
if (!app_->init()) {
LOG(ERROR) << "[SOMEIP] " << __func__ << "Couldn't initialize application";
return false;
}
app_->register_state_handler(
std::bind(&ZoneHVACService::on_state, this,
std::placeholders::_1));
app_->register_message_handler(
ZONE_HVAC_SERVICE_ID, ZONE_HVAC_INSTANCE_ID, vsomeip::ANY_METHOD,
std::bind(&ZoneHVACService::on_message, this,
std::placeholders::_1));
app_->register_availability_handler(ZONE_HVAC_SERVICE_ID, ZONE_HVAC_INSTANCE_ID,
std::bind(&ZoneHVACService::on_availability,
this,
std::placeholders::_1, std::placeholders::_2, std::placeholders::_3));
std::set<vsomeip::eventgroup_t> its_groups;
its_groups.insert(ZONE_HVAC_EVENT_GROUP_ID);
app_->request_event(
ZONE_HVAC_SERVICE_ID,
ZONE_HVAC_INSTANCE_ID,
ZONE_HVAC_TEMPERATURE_EVENT_ID,
its_groups,
vsomeip::event_type_e::ET_FIELD);
app_->request_event(
ZONE_HVAC_SERVICE_ID,
ZONE_HVAC_INSTANCE_ID,
ZONE_HVAC_FANSPEED_EVENT_ID,
its_groups,
vsomeip::event_type_e::ET_FIELD);
app_->request_event(
ZONE_HVAC_SERVICE_ID,
ZONE_HVAC_INSTANCE_ID,
ZONE_HVAC_AIR_DISTRIBUTION_EVENT_ID,
its_groups,
vsomeip::event_type_e::ET_FIELD);
app_->subscribe(ZONE_HVAC_SERVICE_ID, ZONE_HVAC_INSTANCE_ID, ZONE_HVAC_EVENT_GROUP_ID);
return true;
}
void send_temp(std::string temp)
{
LOG(INFO) << "[SOMEIP] " << __func__ << " temp: " << temp;
std::shared_ptr< vsomeip::message > request;
request = vsomeip::runtime::get()->create_request();
request->set_service(ZONE_HVAC_SERVICE_ID);
request->set_instance(ZONE_HVAC_INSTANCE_ID);
request->set_method(ZONE_HVAC_SET_TEMPERATURE_ID);
std::shared_ptr< vsomeip::payload > its_payload = vsomeip::runtime::get()->create_payload();
its_payload->set_data((const vsomeip_v3::byte_t *)temp.data(), temp.size());
request->set_payload(its_payload);
app_->send(request);
}
void send_fanspeed(uint8_t speed)
{
LOG(INFO) << "[SOMEIP] " << __func__ << " speed: " << (int)speed;
std::shared_ptr< vsomeip::message > request;
request = vsomeip::runtime::get()->create_request();
request->set_service(ZONE_HVAC_SERVICE_ID);
request->set_instance(ZONE_HVAC_INSTANCE_ID);
request->set_method(ZONE_HVAC_SET_FANSPEED_ID);
std::shared_ptr< vsomeip::payload > its_payload = vsomeip::runtime::get()->create_payload();
its_payload->set_data(&speed, 1U);
request->set_payload(its_payload);
app_->send(request);
}
void start() {
app_->start();
}
void on_state(vsomeip::state_type_e _state) {
if (_state == vsomeip::state_type_e::ST_REGISTERED) {
app_->request_service(ZONE_HVAC_SERVICE_ID, ZONE_HVAC_INSTANCE_ID);
}
}
void on_availability(vsomeip::service_t _service, vsomeip::instance_t _instance, bool _is_available) {
LOG(INFO) << "[SOMEIP] " << __func__ << "Service ["
<< std::setw(4) << std::setfill('0') << std::hex << _service << "." << _instance
<< "] is "
<< (_is_available ? "available." : "NOT available.");
}
void on_temperature_message(const std::shared_ptr<vsomeip::message> & message)
{
auto payload = message->get_payload();
temperature_.resize(payload->get_length());
temperature_.assign((char*)payload->get_data(), payload->get_length());
LOG(INFO) << "[SOMEIP] " << __func__ << " temp: " << temperature_;
if(tempChanged_)
{
tempChanged_(temperature_);
}
}
void on_fanspeed_message(const std::shared_ptr<vsomeip::message> & message)
{
auto payload = message->get_payload();
fan_speed_ = *payload->get_data();
LOG(INFO) << "[SOMEIP] " << __func__ << " speed: " << (int)fan_speed_;
if(fanspeedChanged_)
{
fanspeedChanged_(fan_speed_);
}
}
void on_message(const std::shared_ptr<vsomeip::message> & message) {
if(message->get_method() == ZONE_HVAC_TEMPERATURE_EVENT_ID)
{
LOG(INFO) << "[SOMEIP] " << __func__ << "TEMPERATURE_EVENT_ID received";
on_temperature_message(message);
}
else if(message->get_method() == ZONE_HVAC_FANSPEED_EVENT_ID)
{
LOG(INFO) << "[SOMEIP] " << __func__ << "ZONE_HVAC_FANSPEED_EVENT_ID received";
on_fanspeed_message(message);
}
}
std::function<void(std::string temp)> tempChanged_;
std::function<void(uint8_t)> fanspeedChanged_;
private:
std::shared_ptr< vsomeip::application > app_;
bool use_tcp_;
std::string temperature_;
uint8_t fan_speed_;
uint8_t air_distribution_;
};
In our example, we connect ZoneHVACService and VendorVehicleHal using callbacks. The HAL exposes std::function members that are invoked from the overridden set function, and each callback forwards the change to the SOME/IP service (send_fandirection is an additional sender method, analogous to send_fanspeed shown above):
hal->fandirectionChanged_ = [&](uint8_t direction) {
    ALOGI("HAL fandirectionChanged_ callback direction: %u", direction);
    hvacService->send_fandirection(direction);
};
hal->fanspeedChanged_ = [&](uint8_t speed) {
    ALOGI("HAL fanspeedChanged_ callback speed: %u", speed);
    hvacService->send_fanspeed(speed);
};
The last thing left for us to do is to create a configuration for the vsomeip library. It's best to utilize a sample file from the library: https://github.com/COVESA/vsomeip/blob/master/config/vsomeip-local.json
In this file, you'll need to change the address:
"unicast" : "10.0.2.15",
to the address of our Android device.
Additionally, you need to set:
"routing" : "service-sample",
to the name of our application.
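Assuming our application is named hvac-service, the relevant fragment of the configuration could end up looking roughly like this (the unicast address and the application id are examples, not values required by the library):
{
    "unicast" : "192.168.1.20",
    "applications" : [
        {
            "name" : "hvac-service",
            "id" : "0x1313"
        }
    ],
    "routing" : "hvac-service"
}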
The vsomeip stack reads the application name and the path to the configuration file from environment variables. The easiest way to set them on Android is to do so before creating the ZoneHVACService object.
setenv("VSOMEIP_CONFIGURATION","/vendor/etc/vsomeip-local-hvac.json",1);
setenv("VSOMEIP_APPLICATION_NAME," "hvac-service",1);
That’s it. Now we should replace vendor/bin/hw/android.hardware.automotive.vehicle@2.0-default-service with our new build and reboot Android.
If everything is configured correctly, we should see logs like the ones below, and the provider should receive our requests.
04-25 06:52:12.989 3981 3981 I automotive.vehicle@2.0-default-service: Starting automotive.vehicle@2.0-default-service ...
04-25 06:52:13.005 3981 3981 I automotive.vehicle@2.0-default-service: Registering as service...
04-25 06:52:13.077 3981 3981 I automotive.vehicle@2.0-default-service: Ready
04-25 06:52:13.081 3981 4011 I automotive.vehicle@2.0-default-service: Starting UDP receiver
04-25 06:52:13.081 3981 4011 I automotive.vehicle@2.0-default-service: Socket created
04-25 06:52:13.082 3981 4010 I automotive.vehicle@2.0-default-service: HTTPServer starting
04-25 06:52:13.082 3981 4010 I automotive.vehicle@2.0-default-service: HTTPServer listen
04-25 06:52:13.091 3981 4012 I automotive.vehicle@2.0-default-service: Initializing SomeIP service ...
04-25 06:52:13.091 3981 4012 I automotive.vehicle@2.0-default-service: [SOMEIP] initInitialize app
04-25 06:52:13.209 3981 4012 I automotive.vehicle@2.0-default-service: [SOMEIP] initApp initialized
04-25 06:52:13.209 3981 4012 I automotive.vehicle@2.0-default-service: [SOMEIP] initClient settings [protocol=UDP]
04-25 06:52:13.210 3981 4012 I automotive.vehicle@2.0-default-service: [SOMEIP] Initialized SomeIP service result:1
04-25 06:52:13.214 3981 4028 I automotive.vehicle@2.0-default-service: [SOMEIP] on_availabilityService [4002.1] is NOT available.
04-25 06:54:35.654 3981 4028 I automotive.vehicle@2.0-default-service: [SOMEIP] on_availabilityService [4002.1] is available.
04-25 06:54:35.774 3981 4028 I automotive.vehicle@2.0-default-service: [SOMEIP] on_message Message received: [4002.0001.2012] to Client/Session [0000/0002]
04-25 06:54:35.774 3981 4028 I automotive.vehicle@2.0-default-service: [SOMEIP] on_messageZONE_HVAC_FANSPEED_EVENT_ID received
04-25 06:54:35.774 3981 4028 I automotive.vehicle@2.0-default-service: [SOMEIP] on_fanspeed_message speed: 1
04-25 06:54:35.775 3981 4028 I automotive.vehicle@2.0-default-service: SOMEIP fanspeedChanged_ speed: 1
04-25 06:54:36.602 3981 4028 I automotive.vehicle@2.0-default-service: [SOMEIP] on_message Message received: [4002.0001.2012] to Client/Session [0000/0003]
04-25 06:54:36.602 3981 4028 I automotive.vehicle@2.0-default-service: [SOMEIP] on_messageZONE_HVAC_FANSPEED_EVENT_ID received
04-25 06:54:36.603 3981 4028 I automotive.vehicle@2.0-default-service: [SOMEIP] on_fanspeed_message speed: 2
04-25 06:54:36.603 3981 4028 I automotive.vehicle@2.0-default-service: SOMEIP fanspeedChanged_ speed: 2
04-25 06:54:37.605 3981 4028 I automotive.vehicle@2.0-default-service: [SOMEIP] on_message Message received: [4002.0001.2012] to Client/Session [0000/0004]
04-25 06:54:37.606 3981 4028 I automotive.vehicle@2.0-default-service: [SOMEIP] on_messageZONE_HVAC_FANSPEED_EVENT_ID received
04-25 06:54:37.606 3981 4028 I automotive.vehicle@2.0-default-service: [SOMEIP] on_fanspeed_message speed: 3
04-25 06:54:37.606 3981 4028 I automotive.vehicle@2.0-default-service: SOMEIP fanspeedChanged_ speed: 3
Summary
In conclusion, the integration of Android devices with the Vehicle Hardware Abstraction Layer (VHAL) for controlling HVAC systems opens up a new realm of possibilities for automotive technology. By leveraging the power of the SOME/IP communication protocol and the vsomeip library, developers can create robust solutions for managing vehicle HVAC functionalities.
By following the steps outlined in this article, developers can create custom VHAL implementations tailored to their specific needs. From defining service interfaces to handling communication callbacks, every aspect of the integration process has been carefully explained to facilitate smooth development.
As automotive technology continues to evolve, the convergence of Android devices and vehicle systems represents a significant milestone in the journey towards smarter, more connected vehicles. The integration of HVAC control functionalities through VHAL and SOME/IP not only demonstrates the potential of modern automotive technology but also paves the way for future innovations in the field.
Integrating generative AI with knowledge graphs for enhanced data analytics
The integration of generative AI into data analytics is transforming business data management and interpretation, opening vast possibilities across industries.
Recent statistics from a Gartner survey indicate significant strides in the adoption of generative AI: 45% of organizations are now piloting generative AI projects, with 10% having fully integrated these systems into their operations. This marks a considerable increase from earlier figures, demonstrating a rapid adoption curve. Additionally, by 2026, it's predicted that more than 80% of organizations will use generative AI applications, up from less than 5% just three years prior.
Combining generative AI and knowledge graphs for data analytics
The potential impact of combining generative AI with knowledge graphs is particularly promising. This synergy enhances data analytics by improving accuracy, speeding up data processing, and enabling deeper insights into complex datasets. As adoption continues to expand, these technologies will transform how organizations leverage data for strategic advantage.
This article details the specific benefits of generative AI and knowledge graphs and how their integration can boost data-based decision-making processes.
Maximizing generative AI potential in data analytics
Generative AI has revolutionized data analytics by automating tasks that traditionally required significant human effort and by providing new methods to manage and interpret large datasets. Here is a more detailed explanation of how GenAI operates in various aspects of data analytics.
Rapid Summarization of Information
GenAI's ability to swiftly process and summarize large volumes of data is a boon in situations that demand quick insights from extensive datasets. This is especially critical in areas like financial analysis or market trend monitoring, where rapid information condensation can significantly expedite decision-making processes.
Enhanced Data Enrichment
In the initial stages of data analytics , raw data is often unstructured and may contain errors or gaps. GenAI plays a crucial role in enriching this raw data before it can be effectively visualized or analyzed. This includes cleaning the data, filling in missing values, generating new features, and integrating external data sources to add depth and context. Such capabilities are particularly beneficial in scenarios like predictive modeling for customer behavior, where historical data may not fully capture current trends.
Automation of Repetitive Data Preparation Tasks
Data preparation is often the most time-consuming part of data analytics. GenAI helps automate these processes with unmatched precision and speed. This not only enhances the efficiency and accuracy of data preparation but also improves data quality by quickly identifying and correcting inconsistencies.
Complex Data Simplification
GenAI expertly simplifies complex data patterns, making them easy to understand and accessible. This allows users with varying levels of expertise to derive actionable insights and make informed decisions effortlessly.
Interactive Data Exploration via Conversational Interfaces
GenAI uses Natural Language Processing (NLP) to facilitate interactions, allowing users to query data in everyday language. This significantly lowers the barrier to data exploration, making analytics tools more user-friendly and extending their use across different organizational departments.
The use of knowledge graphs in data analytics
Knowledge graphs prove increasingly useful in data analytics, providing a solid framework to improve decision-making in various industries. These graphs represent data as interconnected networks of entities linked by relationships, enabling intuitive and sophisticated analysis of complex datasets.
What are associative knowledge graphs?
Associative knowledge graphs are a specialized subset of knowledge graphs that excel in identifying and leveraging intricate and often subtle associations among data elements. These associations include not only direct links but also indirect and inferred relationships that are crucial for deep data analysis, AI modeling, and complex decision-making processes where understanding subtle connections can be crucial.
Associative knowledge graphs functionalities
Associative knowledge graphs are useful in dynamic environments where data constantly evolves. They can incorporate incremental updates without major structural changes, allowing them to adapt quickly and maintain accuracy. This is particularly beneficial in scenarios where knowledge graphs must be updated frequently with new information, without retraining or restructuring the entire graph.
Designed to handle complex queries involving multiple entities and relationships, these graphs offer advanced capabilities beyond traditional relational databases. This is due to their ability to represent data in a graph structure that reflects the real-world interconnections between different pieces of information. Whether the data comes from structured databases, semi-structured documents, or unstructured sources like texts and multimedia, associative knowledge graphs can amalgamate these different data types into a unified model.
Additionally, associative knowledge graphs generate deeper insights in data analytics through cognitive and associative linking . They connect disparate data points by mimicking human cognitive processes, revealing patterns important for strategic decision-making.
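A toy data structure may help make this concrete: entities are linked by weighted associations, new links can be added incrementally without restructuring the graph, and queries can surface indirect (inferred) relationships as well as direct ones. The class below is an illustration only, not a production graph engine.
#include <map>
#include <set>
#include <string>

class AssociativeGraph {
public:
    // Incremental update: adding an association never restructures the graph.
    void associate(const std::string& a, const std::string& b, double weight) {
        edges_[a][b] = weight;
        edges_[b][a] = weight;  // associations treated as symmetric here
    }

    // Direct neighbors plus one-step indirect (inferred) associations.
    std::set<std::string> related(const std::string& entity) const {
        std::set<std::string> result;
        auto it = edges_.find(entity);
        if (it == edges_.end()) return result;
        for (const auto& directEdge : it->second) {
            const std::string& direct = directEdge.first;
            result.insert(direct);
            auto jt = edges_.find(direct);
            if (jt == edges_.end()) continue;
            for (const auto& indirectEdge : jt->second)
                if (indirectEdge.first != entity)
                    result.insert(indirectEdge.first);  // inferred link
        }
        return result;
    }

private:
    std::map<std::string, std::map<std::string, double>> edges_;
};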

Generative AI and associative knowledge graphs: Synergy for analytics
The integration of Generative AI with associative knowledge graphs enhances data processing and analysis in three key ways: speed, quality of insights, and deeper understanding of complex relationships.
Speed: GenAI automates conventional data management tasks , significantly reducing the time required for data cleansing, validation, and enrichment. This helps decrease manual efforts and speed up data handling. Combining it with associative knowledge graphs simplifies data integration and enables faster querying and manipulation of complex datasets, enhancing operational efficiency.
Quality of Insights: GenAI and associative knowledge graphs work together to generate high-quality insights. GenAI quickly processes large datasets to deliver timely and relevant information. Knowledge graphs enhance these outputs by providing semantic and contextual depth, where precise insights are vital.
Deeper Understanding of Complex Relationships: By illustrating intricate data relationships, knowledge graphs reveal hidden patterns and correlations which leads to more comprehensive and actionable insights that can improve data utilization in complex scenarios.
Example applications
Healthcare:
- Patient Risk Prediction: GenAI and associative knowledge graphs can be used to predict patient risks and health outcomes by analyzing and interpreting comprehensive data, including historical records, real-time health monitoring from IoT devices, and social determinants of health. This integration enables the creation of personalized treatment plans and preventive care strategies.
- Operational Efficiency Optimization: These technologies optimize resource allocation, staff scheduling, and patient flow by integrating data from various hospital systems (electronic health records, staffing schedules, patient admissions). This results in more efficient resource utilization, reduced waiting times, and improved overall care delivery.
Insurance, Banking & Finance:
- Risk Assessment / Credit Scoring: Using a broad array of data points such as historical financial data, social media activity, and IoT device data, GenAI and knowledge graphs can help generate accurate risk assessments and credit scores. This comprehensive analysis uncovers complex relationships and patterns, enhancing the understanding of risk profiles.
- Customer Lifetime Value Prediction: These technologies are utilized to analyze transaction and interaction data to predict future banking behaviors and assess customer profitability. By tracking customer behaviors, preferences, and historical interactions, they allow for the development of personalized marketing campaigns and loyalty programs, boosting customer retention and profitability.
Retail:
- Inventory Management: Retailers can also use GenAI and associative knowledge graphs to optimize inventory management and prevent both overstock and stockouts. Integrating supply chain data, sales trends, and consumer demand signals ensures balanced inventory aligned with market needs, improving operational efficiency and customer satisfaction.
- Sales & Price Forecasting: Similarly, retailers can forecast future sales and price trends by analyzing historical sales data, economic indicators, and consumer behavior patterns. Combining these data sources yields a comprehensive understanding of sales dynamics and price fluctuations, aiding strategic planning and decision-making.
gIQ – data analytics platform powered by generative AI and associative knowledge graphs
The gIQ data analytics platform demonstrates one example of integrating generative AI with knowledge graphs. Developed by Grape Up founders, this solution represents a cutting-edge approach, allowing raw data to be transformed into actionable knowledge. This integration allows gIQ to swiftly detect patterns and establish connections, delivering critical insights while bypassing the intensive computational requirements of conventional machine learning techniques. Consequently, users can navigate complex data environments easily, paving the way for informed decision-making and strategic planning.
Conclusion
The combination of generative AI and knowledge graphs is transforming data analytics by allowing organizations to analyze data more quickly, accurately, and insightfully. The increasing use of these technologies indicates that they are widely recognized for their ability to improve decision-making and operational efficiency in a variety of industries.
Looking forward, it's highly likely that the ongoing development and improvement of these technologies will unlock more advanced and sophisticated applications. This will drive innovation and give organizations a strategic advantage. Embracing these advancements isn't just beneficial, it's essential for companies that want to remain competitive in an increasingly data-driven world.
From silos to synergy: How LLM Hubs facilitate chatbot integration
In today's tech-driven business environment, chatbots powered by large language models (LLMs) are revolutionizing operations across a myriad of sectors, including recruitment, procurement, and marketing. In fact, the Generative AI market is projected to reach $1.3 trillion by 2032. As companies continue to recognize the value of these AI-driven tools, investment in customized AI solutions is burgeoning. However, the growth of Generative AI within organizations brings to the fore a significant challenge: ensuring LLM interoperability and effective communication among the numerous department-specific GenAI chatbots.
The challenge of siloed chatbots
In many organizations, the deployment of GenAI chatbots in various departments has led to a fragmented landscape of AI-powered assistants. Each chatbot, while effective within its domain, operates in isolation, which can result in operational inefficiencies and missed opportunities for cross-departmental AI use.
Many organizations face the challenge of having multiple GenAI chatbots across different departments without a centralized entry point for user queries. This can cause complications when customers have requests, especially if they span the knowledge bases of multiple chatbots.
Let’s imagine an enterprise, which we’ll call Company X, that uses separate chatbots in human resources, payroll, and employee benefits. While each chatbot is designed to provide specialized support within its domain, employees often have questions that intersect these areas. Without a system to integrate these chatbots, an employee seeking information about maternity leave policies, for example, might have to interact with multiple unconnected chatbots to understand how their leave would affect their benefits and salary.
This fragmented experience can lead to confusion and inefficiencies, as the chatbots cannot provide a cohesive and comprehensive response.
Ensuring LLM interoperability
To address such issues, an LLM hub must be created and implemented. The solution lies in providing a single user interface that serves as the sole point of entry for all queries, ensuring LLM interoperability. This UI should enable seamless conversations with the enterprise's LLM assistants, where, depending on the specific question, the answer is sourced from the chatbot with the necessary data.
This setup ensures that even if separate teams are working on different chatbots, these are accessible to the same audience without users having to interact with each chatbot individually. It simplifies the user's experience, even as they make complex requests that may target multiple assistants. The key is efficient data retrieval and response generation, with the system smartly identifying and pulling from the relevant assistant as needed.
In practice at Company X, the user interacts with a single interface to ask questions. The LLM hub then dynamically determines which specific chatbot – whether from human resources, payroll, or employee benefits (or all of them) – has the requisite information and tuning to deliver the correct response. Rather than the user navigating through different systems, the hub brings the right system to the user.
This centralized approach not only streamlines the user experience but also enhances the accuracy and relevance of the information provided. The chatbots, each with its own specialized scope and data, remain interconnected through the hub via APIs. This allows for LLM interoperability and a seamless exchange of information, ensuring that the user's query is addressed by the most informed and appropriate AI assistant available.
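A deliberately naive sketch of this routing idea follows. In practice the hub would usually let an LLM choose the assistant (or fan a query out to several and merge the answers); keyword scoring is used here only to make the mechanism visible, and all names are hypothetical.
#include <set>
#include <string>
#include <vector>

struct Assistant {
    std::string name;
    std::set<std::string> keywords;  // declared scope from the service catalog
};

// Return the assistant whose declared scope best matches the question.
std::string route(const std::vector<Assistant>& catalog, const std::string& question) {
    std::string best = "fallback";
    size_t bestScore = 0;
    for (const auto& assistant : catalog) {
        size_t score = 0;
        for (const auto& keyword : assistant.keywords)
            if (question.find(keyword) != std::string::npos) ++score;
        if (score > bestScore) { bestScore = score; best = assistant.name; }
    }
    return best;
}
For a question such as "How does maternity leave affect my salary?", a real hub would recognize that both the human resources and the payroll assistants are relevant and combine their answers, which is exactly what the single-interface approach enables.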

Advantages of LLM Hubs
- LLM hubs provide a unified user interface from which all enterprise assistants can be accessed seamlessly. As users pose questions, the hub evaluates which chatbot has the necessary data and specific tuning to address the query and routes the conversation to that agent, ensuring a smooth interaction with the most knowledgeable source.
- The hub's core functionality includes the intelligent allocation of queries . It does not indiscriminately exchange data between services but selectively directs questions to the chatbot best equipped with the required data and configuration to respond, thus maintaining operational effectiveness and data security.
- The service catalog remains a vital component of the LLM hub, providing a centralized directory of all chatbots and their capabilities within the organization. This aids users in discovering available AI services and enables the hub to allocate queries more efficiently, preventing redundant development of AI solutions.
- The LLM hub respects the specialized knowledge and unique configurations of each departmental chatbot. It ensures that each chatbot applies its finely-tuned expertise to deliver accurate and contextually relevant responses, enhancing the overall quality of user interaction.
- The unified interface offered by LLM hubs guarantees a consistent user experience. Users engage in conversations with multiple AI services through a single touchpoint, which maintains the distinct capabilities of each chatbot and supports a smooth, integrated conversation flow.
- LLM hubs facilitate the easy management and evolution of AI services within an organization. They enable the integration of new chatbots and updates, providing a flexible and scalable infrastructure that adapts to the business's growing needs.
At Company X, the introduction of the LLM hub transformed the user experience by providing a single user interface for interacting with various chatbots.
The IT department's management of chatbots became more streamlined. Whenever updates or new configurations were made to the LLM hub, they were effectively distributed to all integrated chatbots without the need for individual adjustments.
The scalable nature of the hub also facilitated the swift deployment of new chatbots, enabling Company X to rapidly adapt to emerging needs without the complexities of setting up additional, separate systems. Each new chatbot connects to the hub, accessing and contributing to the collective knowledge network established within the company.
Things to consider when implementing the LLM Hub solution
1. Integration with Legacy Systems : Enterprises with established legacy systems must devise strategies for integrating with LLM hubs. This ensures that these systems can engage with AI-driven technologies without disrupting existing workflows.
2. Data Privacy and Security: Given that chatbots handle sensitive data, it is crucial to maintain data privacy and security during interactions and within the hub. Implementing strong encryption and secure transfer protocols, along with adherence to regulations such as GDPR, is necessary to protect data integrity.
3. Adaptive Learning and Feedback Loops: Embedding adaptive learning within LLM hubs is key to the progressive enhancement of chatbot interactions. Feedback loops allow for continual learning and improvement of provided responses based on user interactions.
4. Multilingual Support: Ideally, LLM hubs should accommodate multilingual capabilities to support global operations. This enables chatbots to interact with a diverse user base in their preferred languages, broadening the service's reach and inclusivity.
5. Analytics and Reporting: The inclusion of advanced analytics and reporting within the LLM hub offers valuable insights into chatbot interactions. Tracking metrics like response accuracy and user engagement helps fine-tune AI services for better performance.
6. Scalability and Flexibility: An LLM hub should be designed to handle scaling in response to the growing number of interactions and the expanding variety of tasks required by the business, ensuring the system remains robust and adaptable over time.
Conclusion
LLM hubs represent a proactive approach to overcoming the challenges posed by isolated chatbots within organizations. By ensuring LLM interoperability and fostering seamless communication between different AI services, these hubs enable companies to fully leverage their AI assets.
This not only promotes a more integrated and efficient operational structure but also sets the stage for innovation and reduced complexity in the AI landscape. As GenAI adoption continues to expand, developing interoperability solutions like the LLM hub will be crucial for businesses aiming to optimize their AI investments and achieve a cohesive and effective chatbot ecosystem.
Exploring the architecture of automotive electronics: Domain vs. zone
The automotive industry is undergoing a transformative shift towards electrification and automation, with vehicles becoming increasingly reliant on sophisticated electrical and electronic systems. At the heart of this evolution lies the architecture that governs how these systems are organized and integrated within the vehicle. Two prominent paradigms have emerged in this domain: domain architecture and zone architecture.

Source: https://www.eetasia.com/the-role-of-centralized-storage-in-the-emerging-zonal-automotive-architecture/
Domain Architecture : In this approach, various electrical and electronic functions are organized around "domains" or functional modules. Each domain is responsible for a specific functional area of the vehicle, such as the engine, braking system, steering system, etc. Each domain may have its independent controllers and communication networks.
Example : In a domain architecture setup, the engine domain would handle all functions related to the vehicle's engine, including ignition, fuel injection, and emissions control. This domain would have its dedicated controller managing these operations.
Zone Architecture : In this approach, the electrical and electronic systems are organized around different "zones" within the vehicle. Zones typically correspond to specific physical areas of the vehicle, such as the front dashboard, passenger cabin, front-end, rear-end, etc. Each zone may have independent electrical and electronic systems tailored to specific needs and functions.
Example : In a zone architecture setup, the front-end zone might encompass functions like lighting, HVAC (Heating, Ventilation, and Air Conditioning), and front-facing sensors for driver assistance systems. These functions would be integrated into a system optimized for the front-end zone's requirements.
Advantages of zonal architecture over domain architecture
In the realm of automotive electronics, zone architecture offers several advantages over domain architecture, revolutionizing the way vehicles are designed, built, and operated. Let's explore these advantages in detail:
1. Software defined vehicle (SDV)
The concept of a Software Defined Vehicle (SDV) represents a paradigm shift in automotive engineering, transforming vehicles into dynamic platforms driven by software innovation. SDV involves decoupling the application layer from the hardware layer, creating a modular and flexible system that offers several significant advantages:
Abstraction of Application from Hardware : In an SDV architecture, applications are abstracted from the underlying hardware, creating a layer of abstraction that simplifies development and testing processes. This separation allows developers to focus on building software functionalities without being constrained by hardware dependencies.
Sensor Agnosticism : One of the key benefits of SDV is the ability to utilize sensors across multiple applications without being tied to specific domains. In traditional domain architectures, sensors are often dedicated to specific functions, limiting their flexibility and efficiency. In an SDV setup, sensors are treated as shared resources that can be accessed and utilized by various applications independently. This sensor agnosticism enhances resource utilization and reduces redundancy, leading to optimized system performance and cost-effectiveness.
Independent Software Updates : SDV enables independent software updates for different vehicle functions and applications. Instead of relying on centralized control units or domain-specific controllers, software functionalities can be updated and upgraded autonomously, enhancing the agility and adaptability of the vehicle.
Over-the-air (OTA) updates are also generally simpler in a zonal architecture: the whole idea is based on abstracting software from hardware, and loosely coupled software is far easier to update remotely.
With independent software updates, manufacturers can address software bugs, introduce new features, and deploy security patches more efficiently. This capability ensures that vehicles remain up to date with the latest advancements and safety standards, enhancing user satisfaction and brand reputation.
2. Security
Zone architecture in automotive electronics offers concrete advantages over domain architecture. Let's examine how zone architecture addresses security concerns more effectively compared to domain architecture:
Network Access Vulnerabilities in Domain Architecture:
In domain architecture, connecting to the vehicle network grants access to the entire communication ecosystem, including sensors, actuators, and the central computer.
Particularly concerning is the Controller Area Network (CAN), a widely used protocol lacking built-in authentication and authorization mechanisms. Once connected to a CAN network, an attacker can send arbitrary messages as if originating from legitimate devices.
Granular Access Control in Zone Architecture:
Zone architecture introduces granular access control mechanisms, starting at the nearest gateway to the zone. Each message passing through the gateway is scrutinized, allowing only authorized communications to proceed while rejecting unauthorized ones.
By implementing granular access control, attackers accessing the network gain access only to communication between sensors and the gateway. Moreover, the architecture enables the segregation of end networks based on threat levels.
Network Segmentation for Enhanced Security:
In a zone architecture setup, it's feasible to segment networks based on the criticality of components and potential exposure to threats.
Less critical sensors and actuators can be grouped together on a single CAN network. Conversely, critical sensors vulnerable to external access can be connected via encrypted Ethernet connections, offering an additional layer of security.
In summary, zone architecture provides a reliable solution to security vulnerabilities inherent in domain architecture. By implementing granular access control and network segmentation, zone architecture significantly reduces the attack surface and enhances the overall security posture of automotive systems. This approach ensures that critical vehicle functions remain protected against unauthorized access and manipulation, safeguarding both the vehicle and its occupants from potential cyber threats.
3. Simplified and lightweight wiring
Wiring in automotive electronics plays a critical role in connecting various components and systems within the vehicle. However, it also poses challenges, particularly in terms of weight and complexity. This section explores how zone architecture addresses these challenges, leading to simplified and lightweight wiring solutions.
The Weight of Wiring: It's important to recognize that wiring is one of the heaviest components in a vehicle, trailing only behind the chassis and engine. In fact, the total weight of wiring harnesses in a vehicle can reach up to 70 kilograms. This significant weight contributes to the overall mass of the vehicle, affecting fuel efficiency, handling, and performance.
Challenges with Traditional Wiring: Traditional wiring systems, especially in domain architecture, often involve long and complex wiring harnesses that span the entire vehicle. This extensive wiring adds to the overall weight and complexity of the vehicle, making assembly and maintenance more challenging.
The Promise of Zone Architecture: Zonal architecture offers a promising alternative by organizing vehicle components into functional zones. This approach allows for more localized placement of sensors, actuators, and control units within each zone, minimizing the distance between components and reducing the need for lengthy wiring harnesses.
Reduced Cable Length: By grouping components together within each zone, zone architecture significantly reduces the overall cable length required to connect these components. Shorter cable runs translate to lower electrical resistance, reduced signal attenuation, and improved signal integrity, resulting in more reliable and responsive vehicle systems.
Optimized Routing and Routing Flexibility: Zone architecture allows for optimized routing of wiring harnesses, minimizing interference and congestion between different systems and components. Moreover, the flexibility inherent in zone architecture enables easier adaptation to different vehicle configurations and customer preferences without the constraints imposed by rigid wiring layouts.
4. Easier and cheaper production
Zonal architecture not only enhances the functionality and efficiency of automotive electronics but also streamlines the production process, making it easier and more cost-effective. Let's explore how zone architecture achieves this:
Modular Assembly : One of the key advantages of zone architecture is its modular nature, which allows for the assembly of individual zones separately before integrating them into the complete vehicle. This modular approach simplifies the assembly process, as each zone can be constructed and tested independently, reducing the complexity of assembly lines and minimizing the risk of errors during assembly.
Reduced Wiring Complexity : The reduction in wiring complexity achieved through zone architecture has a significant impact on production costs. Wiring harnesses are one of the most expensive components in a vehicle, primarily due to the labor-intensive nature of their installation. Each wire must be routed and connected individually, and since each domain typically has its own wiring harness, the process becomes even more laborious.
Automation Challenges with Wiring : Furthermore, automating the wiring process is inherently challenging due to the intricate nature of routing wires and connecting them to various components. While automation has been successfully implemented in many aspects of automotive production, wiring assembly remains largely manual, requiring a significant workforce to complete the task efficiently.
Batch Production of Zones : With zone architecture, the assembly of individual zones can be batch-produced, allowing for standardized processes and economies of scale. This approach enables manufacturers to optimize production lines for specific tasks, reduce setup times between production runs, and achieve greater consistency and quality control.
Integration of Wiring Harnesses : Another advantage of zone architecture is the integration of wiring harnesses into larger assemblies, such as the entire zone. By combining wiring harnesses and assembly for an entire zone into a single process, manufacturers can significantly accelerate production and reduce costs associated with wiring installation and integration.
In summary, zone architecture simplifies and streamlines the production process of vehicles by allowing for modular assembly, reducing wiring complexity, addressing automation challenges, and facilitating batch production of zones. By integrating wiring harnesses into larger assemblies and optimizing production lines, manufacturers can achieve cost savings, improve efficiency, and enhance overall quality in automotive production.
Introduction to zone architecture demonstration
In our Research and Development (R&D) department, we're thrilled to present a demonstration showcasing the power and versatility of zone architecture in automotive electronics. Let's take a closer look at the key components of our setup:

1. Android Computer with Modified VHAL for HVAC:
- Our central computing platform is an Android 14 device running Android Automotive OS (AAOS). This device serves as the hub for system control and communication.
- The modified Vehicle Hardware Abstraction Layer (VHAL) is a pivotal component of our setup. This software layer enables seamless interaction with the vehicle's HVAC module.
- The modified VHAL establishes a connection with the HVAC module through Ethernet and REST API communication protocols, facilitating real-time control and monitoring of HVAC functions.
2. Zone Computer with Ubuntu Core and HVAC Controller Application:
- Each zone in our architecture is equipped with a dedicated computer running Ubuntu Core.
- These zone computers act as the brains of their respective zones, hosting our custom HVAC controller application.
- The HVAC controller application, developed in-house, is responsible for interpreting commands received from the central Android computer and orchestrating HVAC operations within the zone.
- By leveraging Ethernet and REST API communication, as well as the Controller Area Network (CAN) network, the HVAC controller application ensures seamless integration and coordination of HVAC functions.
3. Microcontroller for Physical Interface:
- To provide users with intuitive control over HVAC settings, we've integrated a microcontroller into our setup.
- The microcontroller serves as an interface between the physical controls (buttons and display) and the zone computer.
- Using the CAN interface, the microcontroller relays user inputs to the zone computer, enabling responsive and interactive control of HVAC parameters.
- By bridging the gap between the digital and physical domains, the microcontroller enhances user experience and usability, making HVAC control intuitive and accessible.
The experiment with zone architecture in automotive electronics has proven the effectiveness of our setup. In our solution, pressing a button triggers the transmission of information to the zone computer, where the temperature is adjusted and broadcast to the respective temperature displays in the zone and to the main Android Automotive OS head unit (IVI). Likewise, changing the temperature via the Android interface sends the information to the appropriate zone, adjusting the temperature there.
During hardware layer testing, we used the REST API to expedite implementation. However, we observed certain limitations of this solution, which we had in fact anticipated from the outset: the REST API alone would not suffice for our needs. The VHAL in the Android system has to know the HTTP addresses of the individual zones and specify which zone a temperature change should be sent to. This approach is not very flexible and may introduce delays associated with each connection to the HTTP server.
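To illustrate the rigidity, here is a minimal sketch of the kind of call the VHAL has to make today; the endpoint path, port, and payload shape are hypothetical placeholders, not the exact interface of our demo:
import requests

# Hypothetical registry the VHAL must maintain: zone id -> HTTP address.
# Hard-coding this mapping is exactly the inflexibility described above.
ZONE_ENDPOINTS = {
    "zone_front": "http://10.0.1.10:8080",
    "zone_rear": "http://10.0.1.11:8080",
}

def set_zone_temperature(zone_id: str, celsius: float) -> None:
    # Each call opens a fresh HTTP connection, which adds the latency noted above.
    url = f"{ZONE_ENDPOINTS[zone_id]}/hvac/temperature"
    response = requests.post(url, json={"celsius": celsius}, timeout=1.0)
    response.raise_for_status()

set_zone_temperature("zone_front", 21.5)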
In the next article, we plan to review available communication protocols and methods of message description in such a network. Our goal will be to find protocols that excel in terms of speed, flexibility of application, and security. By doing so, we aim to further refine our solution and maximize its effectiveness in the context of zone architecture in automotive electronics.
Addressing data governance challenges in enterprises through the use of LLM Hubs
In an era where more than 80% of enterprises are expected to use Generative AI by 2026, up from less than 5% in 2023, the integration of AI chatbots is becoming increasingly common. This adoption is driven by the significant efficiency boosts these technologies offer, with over half of businesses now deploying conversational AI for customer interactions.
In fact, 92% of Fortune 500 companies are using OpenAI’s technology, with 94% of business executives believing that AI is a key to success in the future.
Challenges to GenAI implementation
The implementation of large language models (LLMs) and AI-driven chatbots is a challenging task in the current enterprise technology scene. Apart from the complexity of integrating these technologies, there is a crucial need to manage the vast amount of data they process securely and ethically. This emphasizes the importance of having robust data governance practices in place.
Organizations deploying generative AI chatbots may face security risks associated with both external breaches and internal data access. Since these chatbots are designed to streamline operations, they require access to sensitive information. Without proper control measures in place, there is a high risk that confidential information will be inadvertently accessed by unauthorized personnel.
For example, chatbots and AI tools may be used to automate financial processes or provide financial insights. Failures in secure data management in this context can open the door to malicious breaches.
Similarly, a customer service bot may expose confidential customer data to departments that do not have a legitimate need for it. This highlights the need for strict access controls and proper data handling protocols to ensure the security of sensitive information.
Dealing with complexities of data governance and LLMs
To integrate LLMs into current data governance frameworks, organizations need to adjust their strategy. This lets them use LLMs effectively while still meeting essential standards for data quality, security, and compliance.
- It is crucial to adhere to ethical and regulatory standards when using data within LLMs. Establish clear guidelines for data handling and privacy.
- Devise strategies for the effective management and anonymization of the vast data volumes required by LLMs.
- Regular updates to governance policies are necessary to keep pace with technological advancements, ensuring ongoing relevance and effectiveness.
- Implement strict oversight and access controls to prevent unauthorized exposure of sensitive information through, for example, chatbots.
Introducing the LLM hub: centralizing data governance
An LLM hub empowers companies to manage data governance effectively by centralizing control over how data is accessed, processed, and used by LLMs within the enterprise. Instead of implementing fragmented solutions, this hub serves as a unified platform for overseeing and integrating AI processes.
By directing all LLM interactions through this centralized platform, businesses can monitor how sensitive data is being handled. This guarantees that confidential information is only processed when required and in full compliance with privacy regulations.

Role-Based Access Control in the LLM hub
A key feature of the LLM Hub is its implementation of Role-Based Access Control (RBAC). RBAC limits access to authorized users based on their roles within the organization, ensuring that only the right personnel can interact with specific data or AI functionalities. This method is commonly used in IT systems and services, including platforms and hubs designed for managing LLMs and their usage.
In a typical RBAC system for an LLM Hub, roles are defined based on the job functions within the organization and the access to resources that those roles require. Each role is assigned specific permissions to perform certain tasks, such as generating text, accessing billing information, managing API keys, or configuring model parameters. Users are then assigned roles that match their responsibilities and needs.
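To make this concrete, below is a minimal, illustrative sketch of how roles, permissions, and an access check can fit together; the role and permission names are hypothetical, not the hub's actual API.
from enum import Enum

class Permission(Enum):
    GENERATE_TEXT = "generate_text"
    VIEW_BILLING = "view_billing"
    MANAGE_API_KEYS = "manage_api_keys"
    QUERY_HR_DATA = "query_hr_data"

# Roles bundle permissions; users are assigned roles, never raw permissions.
ROLE_PERMISSIONS = {
    "analyst": {Permission.GENERATE_TEXT},
    "finance_admin": {Permission.GENERATE_TEXT, Permission.VIEW_BILLING},
    "hr_specialist": {Permission.GENERATE_TEXT, Permission.QUERY_HR_DATA},
    "platform_admin": set(Permission),
}

USER_ROLES = {"alice": {"analyst"}, "bob": {"hr_specialist"}}

def is_allowed(user: str, permission: Permission) -> bool:
    """Check whether any of the user's roles grants the permission."""
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))

assert is_allowed("bob", Permission.QUERY_HR_DATA)
assert not is_allowed("alice", Permission.QUERY_HR_DATA)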
Here are some of the key features and benefits of implementing RBAC in an LLM Hub:
- By limiting access to resources based on roles, RBAC helps to minimize potential security risks. Users have access only to the information and functionality necessary for their roles, reducing the chance of accidental or malicious breaches.
- RBAC allows for easier management of user permissions. Instead of assigning permissions to each user individually, administrators can assign roles to users, streamlining the process and reducing administrative overhead.
- For organizations that are subject to regulations regarding data access and privacy, RBAC can help ensure compliance by strictly controlling who has access to sensitive information.
- Roles can be customized and adjusted as organizational needs change. New roles can be created, and permissions can be updated as necessary, allowing the access control system to evolve with the organization.
- RBAC systems often include auditing capabilities, making it easier to track who accessed what resources and when. This is crucial for investigating security incidents and for compliance purposes.
- RBAC can enforce the principle of separation of duties, which is a key security practice. This means that no single user should have enough permissions to perform a series of actions that could lead to a security breach. By dividing responsibilities among different roles, RBAC helps prevent conflicts of interest and reduces the risk of fraud or error.
Practical application: safeguarding HR Data
Let's break down a practical scenario where an LLM Hub can make a significant difference - managing HR inquiries:
- Scenario: An organization employed chatbots to handle HR-related questions from employees. These bots needed access to personal employee data, but in a way that prevented misuse or unauthorized exposure.
- Challenge: The main concern was the risk of sensitive HR data—such as personal employee details, salaries, and performance reviews—being accessed by unauthorized personnel through the AI chatbots. This posed a significant risk to privacy and compliance with data protection regulations.
- Solution with the LLM hub:
- Controlled access: Through RBAC, only HR personnel can query the chatbot for sensitive information, significantly reducing the risk of data exposure to unauthorized staff.
- Audit trails: The system maintained detailed audit trails of all data access and user interactions with the HR chatbots, facilitating real-time monitoring and swift action on any irregularities.
- Compliance with data privacy laws: To ensure compliance with data protection regulations, the LLM Hub now includes automated compliance checks. These help to adjust protocols as needed to meet legal standards.
- Outcome: The integration of the LLM Hub at the company led to a significant improvement in the security and privacy of HR records. By strictly controlling access and ensuring compliance, the company not only safeguarded employee information but also strengthened its stance on data ethics and regulatory adherence.
Conclusion
Robust data governance is crucial as businesses embrace LLMs and AI. The LLM Hub provides a forward-thinking solution for managing the complexities of these technologies. Centralizing data governance is key to ensuring that organizations can leverage AI to improve their operational efficiency without compromising on security, privacy, or ethical standards. This approach not only helps organizations avoid potential pitfalls but also enables sustainable innovation in the AI-driven enterprise landscape.
Fleet management task with AWS IoT – overcoming limitations of Docker Virtual Networks
We’re working for a client that produces fire trucks. The first article contains the list of requirements and the architecture proposal, and the second one covers a step-by-step implementation of the prototype. This time, we’re going to close the topic with the DHCP implementation and UDP tests.
DHCP server and client
A major issue with Docker is the need to assign IP addresses to containers. It is impractical to rely on the automatic address assignment managed by Docker or to set addresses manually when containers are started. An architecture intended for the IoT edge should ensure that the state of the device can easily be reproduced even after a power failure or reboot.
It may also be necessary to set fixed addresses for containers that will be the reference point for the entire architecture - see the Router container in our previous text. It is also worth considering the scenario where an external provider wants to connect extra devices to the edge device. As part of such a collaboration, it may be necessary to provide immutable IP addresses, e.g., for an IP discovery service.
Our job is to provide a service to assign IP addresses from configurable pools for both physical and virtual devices in VLANs. It sounds like DHCP and indeed, it is DHCP, but it’s not so simple with Docker. Unfortunately, Docker uses its own addressing mechanism that cannot be linked to the network DHCP server.
The proposed solution will rely on a DHCP server and a DHCP client. At startup, the script responsible for running the Docker image will call the DHCP client and receive information about the MAC address and IP address the container will have.
Ultimately, we want a permanent configuration for the above, stored as a file or in a simple database. This gives us an immutable configuration for the basic parameters of the Docker container. To connect the MAC address, IP address, and Docker container, we propose adding the name of the prospective Docker container to the record. This creates a link between the three elements that uniquely identifies the Docker container.
When the script starts, it queries the DHCP server for an available IP address, first checking whether a lease already exists for the IP/MAC pair derived from the Docker container name.
This achieves a configuration that is resistant to IP conflicts and guarantees the reusability of previously assigned IP addresses.
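A minimal sketch of such a persistent record, keyed by the container name, could look as follows; the file location and field names are our illustration, not a fixed format:
import json
from pathlib import Path

LEASE_FILE = Path("/var/lib/edge/container-leases.json")

def load_leases() -> dict:
    return json.loads(LEASE_FILE.read_text()) if LEASE_FILE.exists() else {}

def remember_lease(container_name: str, mac: str, ip: str) -> None:
    """Bind container name, MAC, and IP so the triple survives reboots."""
    leases = load_leases()
    leases[container_name] = {"mac": mac, "ip": ip}
    LEASE_FILE.write_text(json.dumps(leases, indent=2))

def lookup_lease(container_name: str):
    """Return the stored (mac, ip) pair, or None if the container is new."""
    record = load_leases().get(container_name)
    return (record["mac"], record["ip"]) if record else None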
DHCP server
For our use case, we’ve decided to rely on the isc-dhcp-server package. This is a sample configuration you can adjust to your needs.
dhcpd.conf
authoritative;
one-lease-per-client true;
subnet 10.0.1.0 netmask 255.255.255.0 {
  range 10.0.1.2 10.0.1.200;
  option domain-name-servers 8.8.8.8, 8.8.4.4;
  option routers 10.0.1.3;
  option subnet-mask 255.255.255.0;
  default-lease-time 3600;
  max-lease-time 7200;
}
subnet 10.0.2.0 netmask 255.255.255.0 {
  range 10.0.2.2 10.0.2.200;
  option domain-name-servers 8.8.8.8, 8.8.4.4;
  option routers 10.0.2.3;
  option subnet-mask 255.255.255.0;
  default-lease-time 3600;
  max-lease-time 7200;
}
Here is a breakdown of each directive in the configuration above. Two subnets are configured, one address pool for each VLAN in our network.
authoritative - this directive means that the DHCP server is the authoritative source for the network. If a client queries with an IP address that it was given by another DHCP server, this server will tell the client that the IP address is invalid, effectively forcing the client to ask for a new one.
one-lease-per-client - this ensures that each client gets only one lease at a time. It helps avoid scenarios where a single client ends up consuming multiple IP addresses, shrinking the available IP pool.
option domain-name-servers - this assigns DNS servers to the DHCP clients. In this case, it's using Google's public DNS servers (8.8.8.8 and 8.8.4.4).
option routers - this assigns a default gateway for the DHCP clients. Devices in each subnet will use their respective gateway (10.0.1.3 or 10.0.2.3) as their way out of the local network, e.g., to reach the internet or other networks.
option subnet-mask - this specifies the subnet mask to be assigned to DHCP clients, in this case 255.255.255.0. It determines the network portion of an IP address.
default-lease-time - this specifies how long, in seconds, a DHCP lease is valid if the client doesn't ask for a specific lease time. Here, it's set to 3600 seconds, i.e., 1 hour.
max-lease-time - this sets the maximum amount of time, in seconds, a client can lease an IP address. Here, it's 7200 seconds, or 2 hours.
DHCP Client
In our scenario, all new application containers are added to the system via bash commands executed on the host - the fire truck's main computer, or a Raspberry Pi in our prototype. See the previous chapter for the container creation command reference. The command requires an IP address and gateway for each container.
Our approach is to obtain an address from the DHCP server (as dynamic IP) and set up a container with the address configured as static IP. To achieve this, we need a shell-friendly DHCP client. We’ve decided to go with a Python script that can be called when creating new containers.
DHCP Client Example (Python)
See the comments in the script below for an explanation of each block.
from scapy.layers.dhcp import BOOTP, DHCP
from scapy.layers.inet import UDP, IP, ICMP
from scapy.layers.l2 import Ether
from scapy.sendrecv import sendp, sniff

IFACE = "enp2s0"

def mac_to_bytes(mac_addr):
    # The BOOTP chaddr field is raw bytes, so the MAC string must be unhexlified.
    return bytes.fromhex(mac_addr.replace(":", ""))

# Sending the discovery packet for DHCP
def locate_dhcp(src_mac_addr):
    packet = Ether(dst='ff:ff:ff:ff:ff:ff', src=src_mac_addr, type=0x0800) / \
        IP(src='0.0.0.0', dst='255.255.255.255') / \
        UDP(dport=67, sport=68) / BOOTP(op=1, chaddr=mac_to_bytes(src_mac_addr)) / \
        DHCP(options=[('message-type', 'discover'), 'end'])
    sendp(packet, iface=IFACE)

# Receiving the offer by filtering for packets with packet[DHCP].options[0][1] == 2
def capture_offer():
    return sniff(iface=IFACE, filter="port 68 and port 67",
                 stop_filter=lambda packet: BOOTP in packet and packet[BOOTP].op == 2
                 and DHCP in packet and packet[DHCP].options[0][1] == 2,
                 timeout=5)

# Transmitting the request with the accepted offer (IP) from DHCP
def transmit_request(src_mac_addr, req_ip, srv_ip):
    packet = Ether(dst='ff:ff:ff:ff:ff:ff', src=src_mac_addr, type=0x0800) / \
        IP(src='0.0.0.0', dst='255.255.255.255') / \
        UDP(dport=67, sport=68) / BOOTP(op=1, chaddr=mac_to_bytes(src_mac_addr)) / \
        DHCP(options=[('message-type', 'request'), ("client_id", src_mac_addr),
                      ("requested_addr", req_ip), ("server_id", srv_ip), 'end'])
    sendp(packet, iface=IFACE)

# Reading the acknowledgement from DHCP: packet[BOOTP].op == 2 and
# packet[DHCP].options[0][1] == 5 on ports 68/67
def capture_acknowledgement():
    return sniff(iface=IFACE, filter="port 68 and port 67",
                 stop_filter=lambda packet: BOOTP in packet and packet[BOOTP].op == 2
                 and DHCP in packet and packet[DHCP].options[0][1] == 5,
                 timeout=5)

# Ping the offered IP address
def transmit_test_packet(src_mac_addr, src_ip_addr, dst_mac_addr, dst_ip_addr):
    packet = Ether(src=src_mac_addr, dst=dst_mac_addr) / \
        IP(src=src_ip_addr, dst=dst_ip_addr) / ICMP()
    sendp(packet, iface=IFACE)

if __name__ == "__main__":
    # dummy MAC address
    mac_addr = "aa:bb:cc:11:22:33"
    print("START")
    print("SEND: Discover")
    locate_dhcp(mac_addr)
    print("RECEIVE: Offer")
    received_packets = capture_offer()
    # The packet that satisfied the stop filter is the last one captured.
    offer = received_packets[-1]
    server_mac_addr = offer["Ether"].src
    bootp_response = offer["BOOTP"]
    server_ip_addr = bootp_response.siaddr
    offered_ip_addr = bootp_response.yiaddr
    print("OFFER:", offered_ip_addr)
    print("SEND: Request for", offered_ip_addr)
    transmit_request(mac_addr, offered_ip_addr, server_ip_addr)
    print("RECEIVE: Acknowledge")
    capture_acknowledgement()
    print("ACKNOWLEDGE:", offered_ip_addr)
    print("SEND: Test IP Packet")
    transmit_test_packet(mac_addr, offered_ip_addr, server_mac_addr, server_ip_addr)
    print("END")
Let’s talk about our use case.
The business requirement is to add another device to the edge - perhaps a thermal imaging camera. Our goal is to make onboarding the device into our system as automatic as possible. In our case, adding a new device also means connecting it to the customer-provided Docker container.
Our expected result is to get a process that registers the new Docker container with the assigned IP address from the DHCP server. The IP address is, of course, dependent on the VLAN in which the new device will be located.
In summary, it is easy to see that plugging in a new device at this point just means that the IP address is automatically assigned and bound. The new device is aware of where the Router container is located - so communication is guaranteed from the very beginning.
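Putting the pieces together, the onboarding step can be sketched roughly as below; the network and container names are placeholders, and the MAC/IP pair would come from the stored lease or a fresh DHCP exchange performed by the client script above.
import subprocess

def onboard_container(name: str, image: str, mac: str, ip: str, network: str) -> None:
    """Start a container with the MAC/IP pair obtained from the DHCP server.

    --ip and --mac-address pin the lease to the container on a user-defined
    Docker network, so the (name, MAC, IP) triple stays stable across reboots.
    """
    subprocess.run([
        "docker", "run", "-d",
        "--name", name,
        "--network", network,
        "--mac-address", mac,
        "--ip", ip,
        image,
    ], check=True)

# Example: the MAC/IP would come from lookup_lease() or a fresh DHCP exchange.
onboard_container("thermal-camera", "vendor/thermal-cam:latest",
                  "aa:bb:cc:11:22:33", "10.0.1.42", "vlan1")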
UDP broadcast and multicast setup
Broadcast UDP is a method for sending a message to all devices on a network segment, which allows for efficient communication and discovery of other devices on the same network. In an IoT context, this can be used for the discovery of devices and services, such as finding nearby devices for data exchange or sending a command to all devices in a network.
Multicast, on the other hand, allows for the efficient distribution of data to a group of devices on a network. This can be useful in scenarios where the same data needs to be sent to multiple devices at the same time, such as a live video stream or a software update.
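As a quick illustration of the difference at the socket level, here is a short Python sketch; the addresses and ports are arbitrary examples:
import socket
import struct

# Broadcast: one datagram reaches every host on the segment.
bcast = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
bcast.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
bcast.sendto(b"foo", ("10.0.1.255", 5000))

# Multicast: receivers opt in by joining a group address.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("", 22001))
membership = struct.pack("4s4s", socket.inet_aton("233.54.12.234"),
                         socket.inet_aton("0.0.0.0"))
recv.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, membership)
# recv.recvfrom(1024) would now block until a datagram for the group arrives.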
One purpose of the architecture was to provide a seamless, isolated, LAN-like environment for each application. Therefore, it was critical to enable applications to use not only direct, IP, or DNS-based communication but also to allow multicasting and broadcasting messages. These protocols enable devices to communicate with each other in a way that is scalable and bandwidth-efficient, which is crucial for IoT systems where there may be limited network resources available.
The presented architecture provides a solution for dockerized applications that use UDP broadcast/multicast. The Router Docker container environment is intended to host applications that distribute data to other containers in this manner.
Let’s check whether those techniques are available to our edge networks.
Broadcast
The test phase starts on the Container1 container with a UDP listener enabled. For that, run the following command.
nc -ulp 5000
The command uses the netcat (nc) utility to listen (-l) for incoming UDP (-u) datagrams on port 5000 (-p 5000).
Then, let’s produce a message on the Router container.
echo -n "foo" | nc -uv -b -s 10.0.1.3 -w1 10.0.1.255 5000
The command above uses echo and netcat to send a UDP datagram containing the string "foo" to all devices on the local network segment.
Breaking down the command:
echo -n "foo" - This command prints the string "foo" to standard output without a trailing newline character.
nc - The nc command is used to create network connections and can be used for many purposes, including sending and receiving data over a network.
-uv - These options specify that nc should use UDP as the transport protocol and that it should be run in verbose mode.
-b - This option sets the SO_BROADCAST socket option, allowing the UDP packet to be sent to all devices on the local network segment.
-s 10.0.1.3 - This option sets the source IP address of the UDP packet to 10.0.1.3.
-w1 - This option sets the timeout for the nc command to 1 second.
10.0.1.255 - This is the destination IP address of the UDP packet, which is the broadcast address for the local network segment.
5000 - This is the destination port number for the UDP packet.
Please note that both source and destination addresses belong to VLAN 1. Therefore, the datagram is sent via the eth0 interface to this VLAN only.
The expected result is the docker container Container1 receiving the message from the Router container via UDP broadcast.
Multicast
Let's focus on Docker Container parameters specified when creating containers (Docker containers [execute on host] sub-chapter in the previous article ). In the context of Docker containers, the --sysctl net.ipv4.icmp_echo_ignore_broadcasts=0 option is crucial if you need to enable ICMP echo requests to the broadcast address inside the container. For example, if your containerized application relies on UDP broadcast for service discovery or communication with other containers, you may need to set this parameter to 0 to allow ICMP echo requests to be sent and received on the network.
Without setting this parameter to 0, your containerized application may not be able to communicate properly with other containers on the network or may experience unexpected behavior due to ICMP echo requests being ignored. Therefore, the --sysctl net.ipv4.icmp_echo_ignore_broadcasts=0 option can be crucial in certain Docker use cases where ICMP echo requests to the broadcast address are needed.
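For reference, the flag is passed directly at container creation time, along these lines (the image and network names are placeholders):
docker run -d --name Container1 --network vlan1 \
  --sysctl net.ipv4.icmp_echo_ignore_broadcasts=0 my-app-image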
Usage example
Run the command below in the container Container1 (see the previous chapter for naming references). We use socat, a command-line utility that establishes bidirectional data streams and transfers data between them. Please note that the IP address of the multicast group does not belong to the VLAN 1 address space.
socat -u UDP4-RECV:22001,ip-add-membership=233.54.12.234:eth0 /dev/null &
Then, add the route to the multicast group.
ip route add 233.54.12.234/32 dev eth0
You can ping the address from Device 1 to verify the group has been created.
ping -I eth0 -t 2 233.54.12.234
As you can see, an interface parameter is required with the ping command to enforce using the correct outgoing interface. You can also limit the TTL parameter (-t 2) to verify the route length to the multicast group.
Now, use socat on Device1 to open the connection inside the group.
ip route add 233.54.12.234/32 dev eth0
socat STDIO UDP-DATAGRAM:233.54.12.234:22001
Please note that you have to set up the route; otherwise, packets addressed to this "unknown network" would be sent directly to the router.
Now, you can type the message on Device1 and use tcpdump on Container1 to see the incoming message.
tcpdump -i eth0 -Xavvv

Summary
Nowadays, a major challenge faced by developers and customers is to guarantee maximum security while ensuring compatibility and openness to change for edge devices. As part of IoT, it is imperative to keep in mind that the delivered solution may be extended in the future with additional hardware modules, and thus, the environment into which this module will be deployed must be ready for changes.
This raises the non-trivial question of how to meet business requirements while accounting for all the guidelines from hardware vendors' standards as well as the applicable legal standards.
Translating the presented architecture into the fire truck context, all the requirements from the introduction regarding isolation and modularity of the environment have been met. Each truck can expand its connected hardware while maintaining security protocols. In addition, the Docker images that work with the hardware know only their private scope and the router's scope.
The proposed solution provides a ready answer on how to obtain a change-ready environment that meets security requirements. A key element of the architecture is to guarantee communication for applications only in the VLAN space in which they are located.
This way, any modification should not affect already existing processes on the edge side. It is also worth detailing the role played by the Router component. With it, we guarantee a way to communicate between Docker containers while maintaining a configuration that allows you to control network traffic.
We have also included a solution for UDP broadcast/multicast communication. Many current hardware standards rely on transmitting data this way. This means that if, for example, we are waiting for emergency data on a device, we must also be ready to handle broadcasts and ensure that packets are consumed only by those components designed for this purpose.
Summarizing the presented solution, one should not forget about applications in other industries. The idea of independent Docker images and hardware modularity makes the approach applicable even in automotive and other high-reliability areas, where multiple devices, not necessarily from the same supplier, are required.
We encourage you to think about further potential applications and thank you for taking the time to read.
Exploring Texas Instruments Edge AI: Hardware acceleration for efficient computation
In recent years, the field of artificial intelligence (AI) has witnessed a transformative shift towards edge computing, enabling intelligent decision-making to occur directly on devices rather than relying solely on cloud-based solutions. Texas Instruments, a key player in the semiconductor industry, has been at the forefront of developing cutting-edge solutions for Edge AI. One of the standout features of their offerings is the incorporation of hardware acceleration for efficient computation, which significantly improves the performance of AI models on resource-constrained devices.
Pros and cons of running AI models on embedded devices vs. cloud
In the evolving landscape of artificial intelligence , the decision to deploy models on embedded devices or rely on cloud-based solutions is a critical consideration. This chapter explores the advantages and disadvantages of running AI models on embedded devices, emphasizing the implications for efficiency, privacy, latency, and overall system performance.
Advantages of embedded AI
- Low Latency: One of the primary advantages of embedded AI is low latency. Models run directly on the device, eliminating the need for data transfer to and from the cloud. This results in faster response times, making embedded AI ideal for applications where real-time decision-making is crucial.
- Privacy and Security: Embedded AI enhances privacy by processing data locally on the device. This mitigates concerns related to transmitting sensitive information to external servers. Security risks associated with data in transit are significantly reduced, contributing to a more secure AI deployment.
- Edge Computing Efficiency: Utilizing embedded AI aligns with the principles of edge computing. By processing data at the edge of the network, unnecessary bandwidth usage is minimized, and only relevant information is transmitted to the cloud. This efficiency is especially beneficial in scenarios with limited network connectivity. What's more, some problems, such as video processing with real-time output, are very inefficient to solve with cloud-based AI models.
- Offline Functionality: Embedded AI allows for offline functionality, enabling devices to operate independently of internet connectivity. This is advantageous in remote locations or environments with intermittent network access, as it expands the range of applications for embedded AI.
- Reduced Dependence on Network Infrastructure: Deploying AI models on embedded devices reduces dependence on robust network infrastructure. This is particularly valuable where maintaining a stable, high-bandwidth connection is challenging or cost-ineffective. An AI feature implemented on a cloud platform, for instance, becomes unavailable in the car once the connection is lost.
Disadvantages of embedded AI
- Lack of Scalability: Scaling embedded AI solutions across a large number of devices can be challenging. Managing updates, maintaining consistency, and ensuring uniform performance become more complex as the number of embedded devices increases.
- Maintenance Challenges: Updating and maintaining AI models on embedded devices can be more cumbersome than with cloud-based solutions. Remote updates may be limited, requiring physical intervention for maintenance, which can be impractical in certain scenarios.
- Initial Deployment Cost: The initial cost of deploying embedded AI solutions, including hardware and development, can be higher than for cloud-based alternatives. However, this cost may be offset by long-term benefits, depending on the specific use case and scale.
- Limited Computational Power: Embedded devices often have limited computational power compared to cloud servers. This constraint may restrict the complexity and size of the AI models that can be deployed, impacting the range of applications they can support.
- Resource Constraints: Embedded devices typically have limited memory and storage capacities. Large AI models may struggle to fit within these constraints, requiring optimization or a compromise on model size for efficient deployment.
The decision to deploy AI models on embedded devices or in the cloud involves careful consideration of trade-offs. While embedded AI offers advantages in terms of low latency, privacy, and edge computing efficiency, it comes with challenges related to scalability, maintenance, and limited resources.
However, chipset manufacturers are constantly engaged in refining and enhancing their products by incorporating specialized modules dedicated to hardware-accelerated model execution. This ongoing commitment to innovation aims to significantly improve the overall performance of devices, ensuring that they can efficiently run AI models. The integration of these hardware-specific modules not only promises comparable performance but, in certain applications, even superior efficiency.
Deploy AI model on embedded device workflow

1. Design Model
Designing an AI model is the foundational step in the workflow. This involves choosing the appropriate model architecture based on the task at hand, whether it's classification, regression, or another specific objective. Model design itself is outside the scope of this article.
2. Optimize for Embedded (Storage or RAM Memory)
Once the model is designed, the next step is to optimize it for deployment on embedded devices with limited resources. This optimization may involve reducing the model size, minimizing the number of parameters, or employing quantization techniques to decrease the precision of weights. The goal is to strike a balance between model size and performance to ensure efficient operation within the constraints of embedded storage and RAM memory.
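As an example of what this optimization step can look like in practice, post-training quantization with the TensorFlow Lite converter stores weights as 8-bit integers, typically shrinking the model by roughly 4x. This is a generic TFLite recipe with placeholder paths, not TI-specific tooling:
import tensorflow as tf

# Convert a trained SavedModel into a compact .tflite flatbuffer.
converter = tf.lite.TFLiteConverter.from_saved_model("path/to/saved_model")

# Post-training quantization: weights stored as int8 instead of float32,
# trading a little accuracy for a much smaller memory footprint.
converter.optimizations = [tf.lite.Optimize.DEFAULT]

tflite_model = converter.convert()
with open("model_quantized.tflite", "wb") as f:
    f.write(tflite_model)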
3. Deploy (Model Runtime)
Deploying the optimized model involves integrating it into the embedded system's runtime environment. While there are general-purpose runtime frameworks like TensorFlow Lite and ONNX Runtime, achieving the best performance often requires leveraging dedicated frameworks that utilize hardware modules for accelerated computations. These specialized frameworks harness hardware accelerators to enhance the speed and efficiency of the model on embedded devices.
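As a sketch of how such a dedicated framework is typically wired in, TensorFlow Lite allows inference to be handed to a vendor delegate; on TI devices, the TIDL delegate plays this role. The library name and options below follow TI's public Edge AI examples and may differ between SDK versions, so treat them as illustrative:
import numpy as np
import tflite_runtime.interpreter as tflite

# Load the vendor delegate that offloads supported ops to the DSP/accelerator.
# Library name and options vary by SDK version - treat these as illustrative.
tidl_delegate = tflite.load_delegate(
    "libtidl_tfl_delegate.so",
    {"artifacts_folder": "path/to/compiled_artifacts"})

interpreter = tflite.Interpreter(model_path="model.tflite",
                                 experimental_delegates=[tidl_delegate])
interpreter.allocate_tensors()

# Standard TFLite invoke cycle: set input, run, read output.
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=inp["dtype"]))
interpreter.invoke()
result = interpreter.get_tensor(out["index"])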
4. Validate
Validation is a critical stage in the workflow to ensure that the deployed model performs effectively on the embedded device. This involves rigorous testing using representative datasets and scenarios. Metrics such as accuracy, latency, and resource usage should be thoroughly evaluated to verify that the model meets the performance requirements. Validation helps identify any potential issues or discrepancies between the model's behavior in the development environment and its real-world performance on the embedded device.
Deploy model on TI Edge AI and Jacinto 7
Deploying an AI model on TI Edge AI and Jacinto 7 involves a series of steps to make the model work efficiently with both general-purpose and specialized hardware. In simpler terms, we'll walk through how the model file travels from a general Linux environment to a dedicated DSP core, making use of special hardware features along the way.

1. Linux Environment on A72 Core: The deployment process initiates within the Linux environment running on the A72 core. Here, a model file resides, ready to be utilized by the application's runtime. The model file, often in a standardized format like .tflite, serves as the blueprint for the AI model's architecture and parameters.
2. Runtime Application on A72 Core: The runtime application, responsible for orchestrating the deployment, receives the model file from the Linux environment. This runtime acts as a proxy between the user, the model, and the specialized hardware accelerator. It interfaces with the Linux environment, handling the transfer of input data to be processed by the model.
3. Connection to C7x DSP Core: The runtime application establishes a connection with its library executing on the C7x DSP core. This library, finely tuned for hardware acceleration, is designed to efficiently process AI models using specialized modules such as the Matrix Multiply Accelerator.
4. Loading Model and Data into Memory: The library on the C7x DSP core receives the model description and input data, loading them into memory for rapid access. This optimized memory utilization is crucial for achieving efficient inference on the dedicated hardware.
5. Computation with Matrix Multiply Accelerator: Leveraging the power of the Matrix Multiply Accelerator, the library performs the computations necessary for model inference. The accelerator efficiently handles matrix multiplications, a fundamental operation in many neural network models.
The matrix multiply accelerator (MMA) provides the following key features:
- Support for a fully connected layer using matrix multiply with arbitrary dimension
- Support for convolution layers using 2D convolution with matrix multiply with read panel
- Support for the ReLU non-linearity layer on the fly (OTF)
- Support for high utilization (>85%) for a typical convolutional neural network (CNN), such as AlexNet, ResNet, and others
- Ability to support any CNN network topologies limited only by memory size and bandwidth
6. Result Return to User via Runtime on Linux: Upon completion of computations, the results are returned to the user through the runtime application on the Linux environment. The inference output, processed with hardware acceleration, provides high-speed, low-latency responses for real-time applications.
Object recognition with AI model on Jacinto 7: Real-world challenges
In this chapter, we explore a practical example of deploying an AI model on Jacinto 7 for object recognition. The model is executed according to the provided architecture, utilizing the TVM-CL-3410-gluoncv-mxnet-mobv2 model from the Texas Instruments Edge AI Model Zoo. The test images capture various scenarios, showcasing both successful and challenging object recognition outcomes.
The deployment architecture aligns with the schematic provided, incorporating Jacinto 7's capabilities to efficiently execute the AI model. The TVM-CL-3410-gluoncv-mxnet-mobv2 model is utilized, emphasizing its pre-trained nature for object recognition tasks.
Test Scenarios: A series of test images was captured to evaluate the model's performance in real-world conditions.

Challenges and Real-world Nuances: The test results underscore the challenges of accurate object recognition in less-than-ideal conditions. Factors such as image quality, lighting, and ambiguous object appearances contribute to the intricacy of the task. The third and fourth images, where scissors are misidentified as a screwdriver and a Coca-Cola glass is misrecognized as wine, exemplify situations where even a human might struggle due to limited visual information.
Quality Considerations: The achieved results are noteworthy, considering the less-than-optimal quality of the test images. The chosen camera quality and lighting conditions intentionally mimic challenging real-world scenarios, making the model's performance commendable.
Conclusion: The real-world example of object recognition on Jacinto 7 highlights the capabilities and challenges associated with deploying AI models in practical scenarios. The successful identification of objects like a screwdriver, cup, and computer mouse demonstrates the model's efficacy. However, misidentifications in challenging scenarios emphasize the need for continuous refinement and adaptation, acknowledging the intricacies inherent in object recognition tasks, especially in dynamic and less-controlled environments.