Automotive
Software development

How to manage fire trucks – IoT architecture with isolated applications and centralized management system

Welcome to a short series of articles that shows how to combine networking techniques and AWS services in a mission-critical automotive system.

We’ll show you how to design and implement an IoT system with a complex edge architecture.

The series consists of three articles covering the architecture design, a step-by-step implementation guide, and some pitfalls along with ways to overcome them.

Let’s start!

AWS IoT usage to manage vehicle fleet

Let’s create an application. But this won’t be yet another typical CRUD-based e-commerce system. This time, we’d like to build an IoT-based, fleet-wide system with distributed (on-edge/in-cloud) computing.

Our customer is an automotive company that produces fire trucks. We’re not interested in engine power, mechanical systems, and firefighters' equipment. We’re hired to manage the fleet of vehicles for both the producer and its customers.

Each truck is controlled by a central, “rule-them-all” computer connected to all of the vehicle’s CAN buses and to all the extra firefighting equipment. The computer sends basic vehicle data (fuel level, tire pressure, etc.) to the fire station and to a central emergency service supervisor. It receives new orders, calculates the best route to targets, and controls all the vehicle equipment: pumps, lights, signals, and, of course, the ladder. It also sends telemetry and usage statistics to the producer to help design even better trucks in the future.

However, those trucks are not the same. For instance, in certain regions, the cabin must be airtight, so extra sensors are used. Some cities integrate emergency vehicles with city traffic light systems to clear the route for a running truck. Some stations require specialized equipment like winches, extra lights, power generators, crew management systems, etc.

Moreover, we need to consider that those trucks often operate in unpleasant conditions, with a limited and unreliable Internet connection available.

Of course, the customer would like to have a cloud-based server to manage everything both for the producer and end users - to collect logs and metrics with low latency, to send commands with no delay, and with a colorful, web-based, easy-to-use GUI.

Does it sound challenging? Let's break it down!

Requirements

Based on a half-hour session with the customer, we've collected the following, somewhat chaotic, set of business requirements:

  • a star-like topology, with the cloud in the center and trucks around it,
  • groups of trucks are owned by customers (legal entities) that should have access only to their own trucks,
  • each customer's group may be customized by adding extra components, both hardware- and software-based,
  • each truck is controlled by an identical, custom, Linux-based computer running multiple applications provided by the customer or third parties,
  • the truck-controlling computers are small, ARM-based machines with limited hardware and direct Internet access via GSM,
  • the Internet connection is usually limited, expensive, and unreliable,
  • the main computer should host common services, like GPS or a time service,
  • some applications are built from multiple components (software- and hardware-based); hardware components communicate with the main computer via the in-vehicle IP network,
  • applications must communicate with their servers over the Internet, and we need to control (filter/whitelist) this traffic,
  • each main computer acts as a router for the vehicle network,
  • each application should be isolated to minimize the potential attack surface,
  • trucks may be updated by adding new software or hardware components, even after leaving the production line,
  • the cloud application should be simple: read-only dashboards, truck data dumps, order sending, two-way emergency message broadcasts,
  • new trucks can be added to the system every day,
  • class-leading security is required: user and privilege management, encrypted and signed communication, operation tracking, etc.,
  • provisioning new vehicles should be simple enough for factory workers to do it.

As we’ve learned so far, the basic architecture is as shown in the diagram below.

IoT Architecture with Isolated Applications and Centralized Management System

Our job is to propose a detailed architecture and prove the concept. Then, we’ll need a solid bench of developers to hammer it out.

The proposed architecture

There are two obvious parts of the architecture: the cloud part and the truck part. The cloud part is easy and mostly out of scope for this article. We need some frontend, some backend, and some database (well, as usual). In the trucks, we need to separate applications working on the same machine and then isolate traffic for each application. It sounds like containers and virtual networks. Before diving into each part, we need to solve the main issue: how trucks and the cloud communicate.

Selecting the technology

The star-like architecture seems very typical: a server in the center with multiple clients using its services. However, in this situation we can't cleanly split the roles into a resource/service supplier (the server) and resource/service consumers (the clients). Instead, we need to treat the system as a complex, distributed structure with multiple working nodes, central management, and third-party integrations. Because of the isolation requirement, the trucks’ main computers should containerize the running applications. We could use Kubernetes clusters in the trucks and another one in the cloud, but then we would need to implement everything manually: new truck onboarding, management at scale, resource limits for applications, secured communication channels, and OTA updates. In the cloud, we would also need to manage the cluster and pods, which run even when there is no traffic.

An alternative is an IoT platform. Well, as the title reveals, this is the way we have chosen. An IoT platform provides a lot of services out of the box: the communication channel, permissions management, OTA updates, component management, logs, metrics, and much more. The main argument for using it was speeding up development and deployment.

However, we need to keep in mind that IoT architecture is not designed to be used with complex edge devices. This is our challenge, but fortunately, we are happy to solve it.

Selecting the cloud provider

The customer would like to use a leading provider, which reduces the choice to the top three in the world: AWS, Microsoft Azure, and GCP.

GCP IoT Core is the least advanced solution. It lacks many of the concepts and services available from the competitors, like a digital-twin mechanism, complex permissions management, security evaluation, or a flexible provisioning mechanism.

Azure IoT is much more complex and powerful. However, it suffers from gaps in the documentation, and, most importantly, some features are restricted to Microsoft tooling only (C#, Visual Studio, or PowerShell). It does provide seamless AI tool integration, but that’s not our case for now.

But the last one, AWS IoT, fits all the requirements and provides all the services needed: two MQTT brokers, plenty of useful components (log forwarding, a direct tunnel for SSH access, complex permission management), and almost no limitations for IoT Core client devices. AWS IoT Greengrass offers even more as an extended runtime with higher hardware requirements (vanilla C is not enough), and our ARM-based truck computers can easily fulfill those requirements.

The basic architecture

Going back to the star-like topology, the most important part is the communication between the multiple edge devices and the core. AWS IoT provides MQTT: a TCP-based, failure-resistant communication channel with a buffer that keeps messages when the connection is lost. The concept uses two MQTT brokers (one in the cloud and one on the edge) connected via a secured bridge. This way, we can use MQTT as the main communication mechanism on the edge and decide which topics should be bridged and transferred to the cloud. We can also manage permissions for each topic on both sides as needed.
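Which topics get bridged is decided by standard MQTT topic-filter matching (`+` matches one level, `#` matches everything below). A minimal stdlib Python sketch of the matching rules, with made-up topic names (the real filtering is configured in the bridge component, not hand-rolled):

```python
def topic_matches(filter_: str, topic: str) -> bool:
    """Standard MQTT topic-filter matching: '+' matches one level, '#' the rest."""
    f_parts = filter_.split("/")
    t_parts = topic.split("/")
    for i, f in enumerate(f_parts):
        if f == "#":
            return True  # multi-level wildcard matches everything from here on
        if i >= len(t_parts):
            return False
        if f != "+" and f != t_parts[i]:
            return False
    return len(f_parts) == len(t_parts)

# Hypothetical bridge whitelist: only these topics leave the truck.
BRIDGED_FILTERS = ["telemetry/#", "alerts/+/critical"]

def should_bridge(topic: str) -> bool:
    return any(topic_matches(f, topic) for f in BRIDGED_FILTERS)

print(should_bridge("telemetry/fuel"))        # True
print(should_bridge("alerts/pump/critical"))  # True
print(should_bridge("local/gps/position"))    # False
```

Everything not matching a whitelisted filter stays on the edge broker and never touches the expensive GSM link.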

The cloud part is easy – we can synchronize the IoT MQTT broker with another messaging system (SNS/SQS, Kafka, whatever you like) or read/write it directly from our applications.

The edge part is much more complex. To begin with, let’s assume there are two applications running as executable programs on the edge. Each uses its own certificate to connect to the edge broker, so we can distinguish between them and manage their permissions. This raises some basic questions: how do we provide the certificates, and how do we ensure that one application won’t steal credentials from another? Fortunately, AWS IoT Greengrass provides a way to run components as Docker containers: it creates and provides the certificates and uses IPC (inter-process communication) to let containers use the broker. Docker ensures isolation with low overhead, so each application is unaware of the other. See the official documentation for details: Run a Docker container - AWS IoT Greengrass (amazon.com).

Note the only requirement for the applications (in fact, the requirement we pass on to application providers): we need Docker images with applications that use the AWS IoT SDK for communication.

See the initial architecture in the picture below.

Run a Docker container

As you can see, Application 1 consists of two programs (separate Docker containers), Application 1_1 and Application 1_2, communicating with each other via the broker. Thanks to the privileges management, we can be sure that Application 2 can’t affect or read this communication. If required, we can also configure a common topic accessible to both applications.
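Conceptually, the privileges management behaves like a broker that checks each client’s allowed topics before every subscribe or publish. A toy stdlib Python sketch of that behavior (this is not the Greengrass IPC API; the client ids and topics are invented):

```python
class Broker:
    """Toy in-memory broker enforcing per-client topic permissions."""
    def __init__(self, acl):
        self.acl = acl    # client id -> set of topics it may use
        self.subs = {}    # topic -> list of subscriber callbacks

    def _check(self, client, topic):
        if topic not in self.acl.get(client, set()):
            raise PermissionError(f"{client} may not use topic {topic}")

    def subscribe(self, client, topic, callback):
        self._check(client, topic)
        self.subs.setdefault(topic, []).append(callback)

    def publish(self, client, topic, payload):
        self._check(client, topic)
        for cb in self.subs.get(topic, []):
            cb(payload)

# Invented client ids and topics, mirroring the picture:
acl = {
    "app1_1": {"app1/internal"},
    "app1_2": {"app1/internal"},
    "app2":   {"app2/data"},
}
broker = Broker(acl)

received = []
broker.subscribe("app1_2", "app1/internal", received.append)
broker.publish("app1_1", "app1/internal", "pump status: ok")

# Application 2 is rejected before it can even subscribe:
try:
    broker.subscribe("app2", "app1/internal", received.append)
except PermissionError as err:
    print(err)
```

In the real system, this enforcement happens inside Greengrass based on each component’s IPC authorization policy; the sketch only illustrates the effect.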

Please also note that there is one more component – Nucleus. You can consider it as an orchestrator required by AWS IoT to rule the system.

Of course, we can connect thousands of similar edge devices to the same cloud, but we won’t show that in the diagrams for readability reasons. AWS IoT provides deployment groups with versioning for OTA updates, based on the standard AWS SDK. Therefore, we can expose a user-friendly management system (for our client and the end users) to manage applications running on the edge at scale.

Virtual networks

Now, let’s challenge the architecture with a more complex scenario. Let’s assume that Application 2 communicates with an in-cabin air quality sensor – a separate computer that is in the same IP network. We can assume the sensor is a part of Application 2, and our aim is to enable such communication but also to hide it from Application 1. Let’s add some VLANs and utilize network interfaces.

Virtual networks

Starting from the physical infrastructure, the main computer uses two interfaces: eth0 to connect to the Internet and eth1 connected to a physical, managed switch (the “in-vehicle IP network” mentioned above). The Application 2_2 computer (the air quality sensor) is connected to a switch port tagged as VLAN 102, and the switch is connected to eth1 via a trunk port.

The eth0 interface is used by the main computer (host) to communicate with the Internet, so the main MQTT bridging runs over this interface. There is also a new Greengrass Docker component called router. It’s connected to eth0 and to two virtual bridges, br101 and br102. These bridges are not the same as the MQTT bridge; this time, we use the kernel-level Linux “bridge” feature, a logical, virtual network hub. The bridges are connected to the virtual network interfaces eth1.101 and eth1.102 and to the applications’ containers.
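On the host side, the bridges and VLAN subinterfaces boil down to a handful of ip(8) invocations. Here is a sketch that only generates the command list, using the interface names from the article; it deliberately runs nothing, so treat it as an illustration rather than a provisioning script:

```python
def vlan_setup_commands(trunk: str, vlan_ids: list) -> list:
    """Generate (but do not run) the ip(8) commands that create one Linux
    bridge per VLAN and attach the matching 802.1Q subinterface to it."""
    cmds = []
    for vid in vlan_ids:
        sub = f"{trunk}.{vid}"   # e.g. eth1.101
        br = f"br{vid}"          # e.g. br101
        cmds += [
            ["ip", "link", "add", "link", trunk, "name", sub,
             "type", "vlan", "id", str(vid)],
            ["ip", "link", "add", "name", br, "type", "bridge"],
            ["ip", "link", "set", sub, "master", br],
            ["ip", "link", "set", sub, "up"],
            ["ip", "link", "set", br, "up"],
        ]
    return cmds

for cmd in vlan_setup_commands("eth1", [101, 102]):
    print(" ".join(cmd))
```

Container interfaces (veth pairs) would then be attached to br101 or br102 the same way, which is exactly what gives each application its isolated VLAN.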

This way, Application 1 uses its own VLAN 101 (100% virtual), and Application 2 uses its own VLAN 102 (holding both virtual and physical nodes). Application separation is still ensured, and there is no logical difference between virtual and mixed VLANs. Applications running inside VLANs can’t distinguish between physical and virtual nodes, and all IP network features (like UDP broadcasting and multicasting) work. Note that nodes belonging to the same application can communicate bypassing MQTT entirely (which is fine, because MQTT may be a bottleneck for the system).

Moreover, there is a single security-configuration point for all applications. The router container is the main gateway for all virtual and physical application nodes, so we can configure a firewall on it, or enable restricted routes between specific nodes of different applications if needed. This way, we can let applications communicate with third-party servers over the Internet (see Application 1_1 in the picture), let individual application nodes communicate without restrictions, and control all application-related traffic in a single place. And this place, the router, is just another Greengrass component, ready to be redeployed as part of an OTA update. The router is also a good candidate to serve traffic targeting all networks (and all applications), e.g., to broadcast the GPS position via UDP or to act as the network time server.

One more broker

What if… the application is provided as a physical machine only?

Well, as the main communication channel is MQTT and the direct edge-to-Internet connection is available but limited, we would like to let a physical application use MQTT, too. MQTT is a common standard for embedded systems (small computers with limited purposes), but our edge MQTT broker is AWS-protected, so there are two options. We can force the application supplier to be AWS Greengrass-compatible, or we need another broker. As we’re pacifists and can’t stand forcing anybody to do anything, let’s add one more broker and one more bridge.

This time, there are two new components. The first one, an MQTT broker (Mosquitto or similar), talks to Application 3. As we can’t configure Mosquitto to act as a bridge for the AWS-managed broker, there is one more custom application running on the server for this purpose only: a Greengrass component called “bridge”. It connects to both local MQTT brokers and routes specific messages between them, as configured. Note that Application 3 is connected to its own VLAN even though it has no virtual nodes. The reason: there are no virtual nodes yet, but we’d like to keep the system future-proof and consistent. This way, the virtual router remains the network gateway for Application 3, too. The non-AWS broker can also listen on specific virtual interfaces (eth1.103 in this case), so we can enable it for specific VLANs (applications) if needed.
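Conceptually, the “bridge” component is just a client of both brokers that re-publishes whitelisted topics. A stdlib Python sketch with tiny in-memory stand-ins for the two brokers (a real implementation would hold two MQTT client connections; the topic names are invented):

```python
class TinyBroker:
    """Minimal in-memory stand-in for an MQTT broker (exact-topic only)."""
    def __init__(self):
        self.subs = {}  # topic -> list of callbacks

    def subscribe(self, topic, cb):
        self.subs.setdefault(topic, []).append(cb)

    def publish(self, topic, payload):
        for cb in self.subs.get(topic, []):
            cb(topic, payload)

def bridge(src, dst, topics):
    """Forward messages on the listed topics from src broker to dst broker."""
    for t in topics:
        src.subscribe(t, lambda topic, payload: dst.publish(topic, payload))

mosquitto = TinyBroker()   # stand-in for the local Mosquitto broker
greengrass = TinyBroker()  # stand-in for the AWS-managed edge broker

# Only the air quality topic is configured to cross the bridge:
bridge(mosquitto, greengrass, ["app3/airquality"])

seen = []
greengrass.subscribe("app3/airquality", lambda t, p: seen.append(p))
mosquitto.publish("app3/airquality", "co2=415ppm")
mosquitto.publish("app3/private", "not forwarded")
print(seen)  # ['co2=415ppm']
```

The same pattern works in the opposite direction by calling `bridge(greengrass, mosquitto, ...)` with the topics that should reach Application 3.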

Summary

This article shows how to combine AWS IoT, Docker, and virtual networks to build a future-proof fleet management system with hardware- and software-based applications at scale. We can use AWS tools to deliver new applications to edge devices and manage groups reflecting truck owners or truck models. Each vehicle can be equipped with an ARM computer that uses AWS-native fleet provisioning on OS initialization to join the system. The proposed structure may seem complex, but you only need to configure it once to fulfill all the requirements specified by the client.

However, theory is sometimes easier than practice, so we encourage you to read the following article with implementation details.

written by
Damian Petrecki
Bartłomiej Kuciński
Automotive
Software development

Predictive maintenance in automotive manufacturing

Our initial article on predictive maintenance covered the definition of such a system, its construction, and the key implementation challenges. In this part, we'll delve into how PdM technology is transforming different facets of the automotive industry and its advantages for OEMs, insurers, car rental companies, and vehicle owners.

Best predictive maintenance techniques and where you can use them

In the first part of the article, we discussed the importance of sensors in a PdM system. These sensors collect data from machines and vehicles and can measure variables like temperature, vibration, pressure, or noise. Properly placing these sensors on the machines and connecting them to IoT solutions enables the transfer of data to the system's central repository. After processing the data, we obtain information about specific machines, or their parts, that are prone to damage or downtime.

The automotive industry can benefit greatly from implementing these top predictive maintenance techniques.

Vibration analysis

 How does it work?

Machinery used in the automotive industry and car components have a specific frequency of vibration. Deviations from this standard pattern can indicate "fatigue" of the material or interference from a third-party component that may affect the machine's operation. The PdM system enables you to detect these anomalies and alert the machine user before a failure occurs.
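At its simplest, the anomaly test behind vibration monitoring flags samples that deviate too far from a baseline learned on a healthy machine. A minimal z-score sketch in stdlib Python (the threshold and readings are made up for illustration; production systems use far richer frequency-domain analysis):

```python
import statistics

def find_anomalies(baseline, readings, threshold=3.0):
    """Flag readings more than `threshold` standard deviations
    away from the mean vibration amplitude of the healthy baseline."""
    mean = statistics.fmean(baseline)
    stdev = statistics.stdev(baseline)
    return [r for r in readings if abs(r - mean) > threshold * stdev]

# Baseline amplitudes recorded while the machine is known to be healthy:
baseline = [1.0, 1.1, 0.9, 1.05, 0.95, 1.0, 1.1, 0.9]
# Live readings; the spike could indicate e.g. a worn bearing:
readings = [1.0, 1.05, 4.2, 0.98]
print(find_anomalies(baseline, readings))  # [4.2]
```

An alert fires as soon as the list is non-empty, well before the deviation becomes audible or causes a failure.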

 What can be detected?

The technique is mainly applied to high-speed rotating equipment. Vibration and oscillation analysis can detect issues such as bent shafts, loose mechanical components, engine problems, misalignment, and worn bearings or shafts.

Infrared thermography analysis

 How does it work?

The technique involves using infrared cameras to detect thermal anomalies. This technology can identify malfunctioning electrical circuits, sensors or components that are emitting excessive heat due to overheating or operating at increased speeds. With this advanced technology, it's possible to anticipate and prevent such faults, and even create heat maps that can be used in predictive models and maintenance of heating systems.

 What can be detected?

Infrared analysis is a versatile and non-invasive method that can be used on a wide scale. It is suitable for individual components, parts, and entire industrial facilities, and can detect rust, delamination, wear, or heat loss on various types of equipment.

Acoustic analysis monitoring

 How does it work?

Machines produce sound waves while operating, and these waves can indicate equipment failure or an approaching critical point. The amplitude and character of these waves are specific to each machine. Even if the sound is too quiet for humans to hear in the initial phase of malfunction, sensors can detect abnormalities and predict when a failure is likely to occur.

 What can be detected?

This PdM technology is relatively cheap compared to others, but it has some limitations in terms of usage. It is widely used in the oil and gas industry to detect gas and liquid leaks. In the automotive industry, it is commonly used for detecting vacuum leaks, unwanted friction, and stress on machine parts.

Motor circuit analysis

 How does it work?

The technique works through electrical signature analysis (ESA): measuring the supply voltage and operating current of an electric motor. It allows locating and identifying problems in the operation of electric motor components.

 What can be detected?

Motor circuit analysis is a powerful tool that helps identify issues related to various components, such as bearings, rotor, clutch, stator winding, or system load irregularities. The main advantage of this technique is its short testing time and convenience for the operator, as it can be carried out in just two minutes while the machine is running.

PdM oil analysis

 How does it work?

An effective method for Predictive Maintenance is to analyze oil samples from equipment without causing any damage. By analyzing the viscosity and size of the sample, along with detecting the presence or absence of third substances such as water, metals, acids or bases, we can obtain valuable information about mechanical damage, erosion or overheating of specific parts.

 What can be detected?

Detecting anomalies early is crucial for hydraulic systems that consist of rotating and lubricating parts, such as pistons in a vehicle engine. By identifying issues promptly, effective solutions can be developed and potential damage to the equipment or a failure can be prevented.

Computer vision

 How does it work?

Computer vision is revolutionizing the automotive industry by leveraging AI-based technology to enhance predictive maintenance processes. It achieves this by analyzing vast datasets, including real-time sensor data and historical performance records, to rapidly predict equipment wear and tear. By identifying patterns, detecting anomalies, and issuing early warnings for potential equipment issues, computer vision enables proactive maintenance scheduling.

 What can be detected?

In the automotive industry, computer vision technology plays a crucial role in detecting equipment wear and tear patterns to predict maintenance requirements. It can also identify manufacturing defects such as scratches or flaws, welding defects in automotive components, part dimensions and volumes to ensure quality control, surface defects related to painting, tire patterns to match with wheels, and objects for robotic guidance and automation.

Who can benefit from predictive maintenance, and how

Smart maintenance systems analyze multiple variables and provide a comprehensive overview, which can benefit several stakeholders in the automotive industry. These stakeholders range from vehicle manufacturing factories and the supply chain to service and dealerships, rental companies, insurance companies, and drivers.

Below, we have outlined the primary benefits that these stakeholders can enjoy. In the OEMs section, we have provided examples of specific implementations and case studies from the market.

Car rentals

 Fleet health monitoring and better prediction of the service time

Managing service and repairs for a large number of vehicles can be costly and time-consuming for rental companies. When vehicles break down or are out of service while in the possession of customers, it can negatively impact the company’s revenue. To prevent this, car rental companies need constant insight into the condition of their vehicles and the ability to predict necessary maintenance. This allows them to manage their service plan more efficiently and minimize the risk of vehicle failure while on the road.

Car dealerships

 Reducing breakdown scenarios

Car dealerships use predictive maintenance primarily to anticipate mechanical issues before they develop into serious problems. This approach helps ensure that vehicles sold or serviced by them are in optimal condition, which aids in preventing breakdowns or major faults for the customer down the line. By analyzing data from the vehicle's onboard sensors and historical maintenance records, dealerships can identify patterns that signify potential future failures. Predictive maintenance also benefits dealerships by allowing proactive communication with vehicle owners, reducing breakdown scenarios, and enhancing customer satisfaction.

Vehicle owners

 Peace of mind

Periodic maintenance recommendations for vehicles are traditionally based on analyzing historical data from a large population of vehicle owners. However, each vehicle is used differently and could benefit from a tailored maintenance approach. Vehicles with high mileage or heavy usage should undergo more frequent oil changes than those that are used less frequently. By monitoring the actual vehicle condition and wear, owners can ensure that their vehicles are always at 100% and can better manage and plan for maintenance expenses.

Insurance companies

 Risk & fraud

By using data from smart maintenance systems, insurance companies can enhance their risk modeling. The analysis of this data allows insurers to identify the assets that are at higher risk of requiring maintenance or replacement and adjust their premiums accordingly. In addition, smart maintenance systems can detect any instances of tampering with the equipment or negligence in maintenance. This can aid insurers in recognizing fraudulent claims.

OEMs' successful development of PdM systems

BMW Group case study

The German brand implements various predictive maintenance tools and technologies, such as sensors, data analytics, and artificial intelligence, to prevent production downtime, promote sustainability, and ensure efficient resource utilization in its global manufacturing network. These innovative, cloud-based solutions are playing a vital role in enhancing their manufacturing processes and improving overall productivity.

The BMW Group's approach involves:

  • Forecasting phenomena and anomalies using a cloud-based platform. Individual software modules within the platform can be switched on and off as necessary to instantly adapt to changing requirements. The high degree of standardization between individual components allows the system to be globally accessible; moreover, it is highly scalable and allows new application scenarios to be implemented easily.
  • Optimizing component replacements using advanced real-time data analytics.
  • Carrying out maintenance and service work in line with the actual status of the system.
  • Anomaly detection using advanced AI-based predictive algorithms.
Bear in mind that in BMW's body and paint shop alone, welding guns perform some 15,000 spot welds per day, and at the BMW Group's plant in Regensburg, the conveyor systems' control units run 24/7, so any downtime is a huge loss.

→  SOURCE case study.

FORD case study

Predictive vehicle maintenance is one of the benefits offered to drivers and automotive service providers as part of Ford's partnerships with CARUSO and HIGH MOBILITY. In late 2020, Ford announced two new connected car agreements to potentially enable vehicle owners to benefit from personalized third-party offers.

CARUSO and HIGH MOBILITY will function as online data platforms that are completely independent of Ford and allow third-party service providers secure and compliant access to vehicle-generated data. This access will, in turn, enable third-party providers to create personalized services for Ford vehicle owners, letting drivers benefit from smarter insurance, technical maintenance, and roadside recovery.

Sharing vehicle data (warning codes, GPS location, etc.) via an open platform is expected to be a way to maintain competitiveness in the connected mobility market.

→  SOURCE case study.

Predictive maintenance is the future of the automotive market

An effective PdM system means less time spent on equipment maintenance, savings on spare parts, the elimination of unplanned downtime, and improved management of company resources. With that comes more efficient production and greater customer and employee satisfaction.

As the data shows, organizations that have implemented a PdM system report an average decrease of 55% in unplanned equipment failures. Another upside is that, compared to other connected car systems (such as infotainment systems), PdM is relatively easy to monetize. Data here can remain anonymous, and all parties involved in the production and operation of the vehicle reap the benefits.

Organizations have come to recognize the hefty returns on investment provided by predictive maintenance solutions and have adopted them on a global scale. According to Market Research Future, the global predictive maintenance market is projected to grow to 111.30 billion by 2030, suggesting that further growth is possible in the future.

written by
Adam Kozłowski
Marcin Wiśniewski
Automotive
EU Data Act

Unveiling the EU Data Act: Automotive industry implications

Fasten your seatbelts! The EU Data Act aims to drive a paradigm shift in the digital economy, and the automotive industry is about to experience a high-octane transformation. Get ready to explore the user-centric approach, new data-sharing mechanisms, and the roadmap for OEMs to adapt and thrive in the European data market. Are you prepared for this journey?

Key takeaways

  •  The EU Data Act grants users ownership and control of their data while introducing obligations for automotive OEMs to ensure fair competition.
  •  The Act facilitates data sharing between users, enterprises, and public sector bodies to promote innovation in the European automotive industry.
  •  Automotive OEMs must invest in resources and technologies to comply with the EU Data Act regulations for optimal growth opportunities.

The EU Data Act and its impact on the automotive industry

The EU Data Act applies to manufacturers, suppliers, and users of products or services placed on the market in the EU, as well as data holders and recipients based in the EU.

What is the EU Data Act regulation?

The EU Data Act is a proposed regulation that seeks to harmonize rules on fair access to and use of data in the European Union. The regulation sets out clear guidelines on who is obliged to surrender data, who can access it, how it can be used, and for what specific purposes it can be utilized.

In June 2023, the European Union took a significant step towards finalizing the Data Act, marking a pivotal moment in data governance. While the Act awaits formal adoption by the Council and Parliament following a legal-linguistic revision, the recent informal political agreement suggests its inevitability. This groundbreaking regulation will accelerate the monetization of industrial data while ensuring a harmonized playing field across the European Union.

User-centric approach

The European Data Act is revving up the engines of change in the automotive sector, putting users in the driver’s seat of their data and imposing specific obligations on OEMs. This means that connected products and related services must provide users with direct access to data generated in-vehicle, without any additional costs, and in a secure, structured, and machine-readable format.

Data handling by OEMs

A significant change is about to happen in data practices, particularly for OEMs operating in the automotive industry. Manufacturers and designers of smart products, such as smart cars, will be required to share data with users and authorized third parties. This shared data includes a wide range of information:

Included in the Sharing Obligation: the data collected during the user's interaction with the smart car, including information about the car's operation and environment. This information is gathered from onboard applications such as GPS and sensor images and hardware status indications, as well as data generated during periods of user inaction, such as when the car is on standby or switched off. Both raw and pre-processed data are covered.

Excluded from the Sharing Obligation: insights derived from raw data, any data produced when the user engages in activities like content recording or transmission, and any data from products designed to be non-retrievable.

Sharing mechanisms and interactions

Data holders must make vehicle-generated data available (including associated metadata) promptly, without charge, and in a structured, commonly used, machine-readable format.
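As a purely illustrative sketch of what "structured, commonly used, machine-readable" could look like in practice, here is a hypothetical JSON export of vehicle-generated data with its associated metadata. The field names are our own assumptions, not mandated by the Act.

```python
import json

# Hypothetical vehicle data export; all field names are illustrative,
# not prescribed by the EU Data Act.
record = {
    "vehicle_id": "WVWZZZ1JZXW000001",    # vehicle identifier (e.g., VIN)
    "timestamp": "2024-05-01T12:00:00Z",  # ISO 8601 time of measurement
    "signals": {
        "odometer_km": 48211.7,
        "fuel_level_pct": 62.5,
        "tire_pressure_kpa": [230, 231, 228, 229],
    },
    # The Act requires associated metadata to be made available as well.
    "metadata": {
        "source": "onboard-telemetry",
        "unit_system": "metric",
    },
}

payload = json.dumps(record, indent=2)
print(payload)
```

Any common structured format (JSON, CSV, XML) can satisfy the machine-readability requirement; what matters is that a user or third party can parse it without proprietary tooling.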

The legal basis for sharing personal data with connected vehicle users and legal entities or data recipients other than the user varies depending on the data subject and the sector-specific legislation to be presented.

Data access and third-party services

The Data Act identifies eligible entities for data sharing, encompassing both physical persons, such as individual vehicle owners or lessees, and legal persons, like organizations operating fleets of vehicles.

Requesting data sharing

Users and authorized recipients can access data either directly from the device's storage or from a remote server that captures the data. Where the data cannot be accessed directly, the manufacturer must provide it promptly.

The data must be free, straightforward, secure, and formatted for machine readability, and its quality should be maintained where necessary. There may be contracts that limit or deny access or further distribution of data if it breaches legal security requirements. This is a critical aspect for smart cars where sharing data might pose a risk to personal safety.

If the recipient of the data is a third party, they cannot use the data to create competing products, only for maintenance and related services. They cannot share the data further unless it is for providing a service to the user, and they cannot prevent users who are consumers from sharing it with other parties.

Fair competition and trade secrets

The Data Act mandates that manufacturers share data, even when it is protected by trade secret laws. However, safeguards exist, allowing OEMs to impose confidentiality obligations and withhold data sharing in specific circumstances. These provisions ensure a balance between data access and trade secret protection. During the final negotiations on the Data Act, safeguarding trade secrets was a primary focus.

The Data Act now has provisions to prevent potential abusive behavior by data holders. It also includes an exception to data-sharing that permits manufacturers to reject certain data access requests if they can prove that such access would result in the disclosure of trade secrets, leading to severe and irreversible economic losses.

Connected vehicle data

Connected vehicle data takes the spotlight under the EU Data Act, empowering users with real-time access to their data and enabling data sharing with repair or service providers.

The implementation of the Data Act heavily involves connected cars. As per the Act, users, including companies, have the right to access the data collected by vehicles. However, manufacturers have the option to limit access under exceptional circumstances. This has a significant impact on data collection practices in the automotive sector.

Preparing for the EU Data Act: A guide for automotive OEMs

To stay ahead of the curve, OEMs must understand the business implications of the Data Act, adapt to new regulations, and invest in the necessary resources and technologies to ensure compliance.

As connected vehicles become the norm, OEMs that embrace the Data Act will be well-positioned to capitalize on new opportunities and drive growth in the European automotive sector.

Business implications

The EU Data Act imposes significant business implications on automotive OEMs, necessitating changes in their data handling practices and adherence to new obligations. As the industry embraces the user-centric approach to data handling, OEMs must design connected products and related services that provide users with access to their in-vehicle data.

To ensure a smooth transition and maintain a competitive edge, automotive OEMs must undertake a tailored and strategic preparation process.

Adapting to new regulations

Failure to comply with the Data Act could result in legal and financial repercussions for automotive OEMs. In order to avoid any possible problems, they should invest in the necessary resources and technologies to ensure compliance with the regulations of the Data Act.

They should also engage proactively with the requirements of the Data Act and implement compliance measures strategically.

By taking the following steps, automotive OEMs can navigate the regulatory landscape effectively and seize growth opportunities in the European automotive sector:

 In-Depth Knowledge: Dive deep into the EU Data Act, with a special focus on its impact on the automotive industry. Recognize that the automotive sector is central to this regulation, requiring industry-specific understanding.

 Data Segmentation: Perform a comprehensive analysis of your data, categorizing it into distinct groups. Identify which data types fall within the purview of the EU Data Act.

 Compliance Framework Development:

  • Internal Compliance: Audit and update policies to comply with the EU Data Act. Develop a data governance framework for access, sharing, and privacy.
  • Data Access Protocols: Establish unambiguous protocols for data access and sharing, including procedures for obtaining user consent, data retrieval, and sharing modalities.

 Data Privacy and Security:

  • Data Safeguards: Enhance data privacy and security, including encryption and access controls.

 Data Utilization: Develop plans for leveraging this data to generate new revenue streams while adhering to the EU Data Act's mandates.

 User Engagement and Consent:

  • Transparency: Forge clear and transparent channels of communication with users. Keep users informed about data collection, sharing, and usage practices, and obtain well-informed consent.
  • Consent Management: Implement robust consent management systems to efficiently monitor and administer user consent. Ensure that users maintain control over their data.

 Legal Advisors: Engage legal experts well-versed in data protection and privacy laws, particularly those relevant to the automotive sector. Seek guidance for interpreting and implementing the EU Data Act within your specific industry context.

 Data Access Enhancement: Invest in technology infrastructure to facilitate data access and sharing as per the EU Data Act's stipulations. Ensure that data can be easily and securely provided in the required format.

 Employee Education: Educate your workforce on the intricacies of the EU Data Act and its implications for daily operations. Ensure that employees possess a strong understanding of data protection principles.

 Ongoing Compliance Oversight: Establish mechanisms for continuous compliance monitoring. Regularly assess data practices, consent management systems, and data security protocols to identify and address compliance gaps.

 Collaboration with Peers: Collaborate closely with industry associations, fellow automotive OEMs, and stakeholders to share insights, best practices, and strategies for addressing the specific challenges posed by the EU Data Act in the automotive sector.

 Future-Ready Solutions: Develop adaptable and scalable solutions that accommodate potential regulatory landscape shifts. Remain agile and prepared to adjust strategies as needed.

Boosting innovation capabilities

The Data Act may bring some challenges, but it also creates a favorable environment for innovation. By making industrial data more accessible, the Act offers a huge potential for data-driven businesses to explore innovative business models. Adapting to the Act can improve a company's ability to innovate, allowing it to use data as a strategic asset for growth and differentiation.

Summary

The EU Data Act is driving a paradigm shift in the automotive sector, putting users in control of their data and revolutionizing the way OEMs handle, share, and access vehicle-generated data.

By embracing the user-centric approach, ensuring compliance with data sharing and processing provisions, and investing in innovation capabilities, data holders can unlock new opportunities and drive growth in the European automotive market.

It's time for OEMs to take actionable steps to comply with the new regulation. Read this guide on building EU Data Act-compliant connected car software to learn what they are.

Get prepared to meet the EU Data Act deadlines

Ready to turn compliance into a competitive advantage? We’re here to assist you, whether you need expert guidance on regulatory changes or customized data-sharing solutions.


written by
Adam Kozłowski
written by
Marcin Wiśniewski
Automotive
Software development

Android Automotive OS 11 Camera2 and EVS - Two different camera subsystems up and running

Android Automotive OS, AAOS in short, is a vehicle infotainment operating system that has gained a lot of traction recently, with most of the OEMs around the world openly announcing new versions of their infotainment based on Android. AAOS is based on the AOSP (Android Open Source Project) source code, which makes it fully compatible with Android, with additions that make it more useful in cars – different UI, integration with hardware layer, or vehicle-specific apps.

For OEMs and Tier 1s, who are deeply accustomed to infotainment based on QNX/Autosar/Docker/Linux, and for software developers working on AAOS apps, it’s sometimes difficult to quickly spin up a development board or emulator supporting external hardware that has no out-of-the-box emulation built by Google. A common example is camera access, which is currently missing from the official AAOS emulator even though the hardware itself is quite common in modern vehicles – which makes implementing applications similar to Zoom or MS Teams for AAOS tempting for app developers.

In this article, I will explain how to build a simple test bench based on a cost-effective Raspberry Pi board and AAOS for developers to test their camera application. Examples will be based on AAOS 11 running on Raspberry Pi 4 and our Grape Up repository. Please check our previous article: " Build and Run Android Automotive OS on Raspberry Pi 4B " for a detailed description of how to run AAOS on this board.

Android Automotive OS has two different subsystems for accessing platform cameras: Camera2 and EVS. In this article, I will explain how to use each of them and how to get them running on Android Automotive OS 11.

Exterior View System (EVS)

EVS is a subsystem for displaying parking and maneuvering camera images, with support for accessing and viewing multiple cameras. Its main advantage is fast boot: the subsystem is designed to display the parking view within 2 seconds, which is required by law.

Source https://source.android.com/docs/automotive/camera-hal

As you can see in the attached diagram, the low layers of EVS depend on OEM source code: the OEM needs to deliver the Camera and Display implementations. However, Android delivers a sample application (/hardware/interfaces/automotive/evs/1.0), which uses Linux V4L2 and OpenGL to grab camera frames and display them. You can find more information about EVS at https://source.android.com/docs/automotive/camera-hal

In our example, we will use the samples from Android. Additionally, I assume you have built our Raspberry Pi image (see our article), as it includes multiple changes that allow AAOS to run reliably on the RPi 4 and support its hardware.

You should have a camera connected to your board via USB. Please check whether your camera is detected by V4L2; there should be a device file:

/dev/video0

Then, type on the console:

su

setprop persist.automotive.evs.mode 1

This will start the EVS system.

To display camera views:

evs_app

Type Ctrl-C to exit the app and go back to the normal Android view.

Camera2

Camera2 is the subsystem intended for camera access by “normal” Android applications (smartphones, tablets, etc.). It is common to all Android applications and is now gradually being superseded by CameraX. Android app developers use the Java camera API to gain access to the camera.

Camera2 has three main layers, which are shown in the diagram below:

Source https://source.android.com/docs/core/camera

Low-level camera access is implemented in the CameraProvider. An OEM can implement its own provider, or the V4L2 camera driver can be used.

To enable Camera2, set config.disable_cameraservice to false in the Car product make file, packages/services/Car/car_product/build/car_base.mk:

PRODUCT_PROPERTY_OVERRIDES += config.disable_cameraservice=false

After that, rebuild Android:

make ramdisk systemimage vendorimage

Flash the images to the SD card and boot the RPi with it. You will be able to run the “Camera” application on the AAOS screen, see the output of the connected webcam, and run and debug applications using the Camera API.

Summary

Now you know how to run both AAOS camera APIs on the RPi4 board. You can use both APIs to develop automotive applications leveraging cameras and test them using a simple USB webcam, which you may have somewhere on the shelf. If you found this article useful, you can also look at our previous articles about AAOS – both from the application development perspective and the OS perspective . Happy coding!

written by
Michał Jaskurzyński
Automotive
Software development

How to build software architecture for new mobility services - gathering telemetry data

In the modern world, tech companies strive to collect as much information as possible about the status of owned cars to enable proactive maintenance and rapid responses to any incidents that may occur. These incidents could involve theft, damage, or the cars simply getting lost. The only way to remotely monitor their status is by obtaining telemetry data sent by the vehicles and storing it on a server or in the cloud. There are numerous methods for gathering this data, but is there an optimal approach? Is there a blueprint for designing an architecture for such a system? Let's explore.

What does “telemetry” mean in a car?

This article is about gathering telemetry data, so let's begin with a quick reminder of what it is. Telemetry in cars refers to the technology that enables the remote collection and transmission of real-time data from various components of a vehicle to a central monitoring system. This data encompasses a wide range of parameters, including, for example:

  • engine performance,
  • fuel consumption,
  • tire pressure,
  • vehicle speed,
  • percentage of electric vehicle battery,
  • braking activity,
  • acceleration,
  • GPS position,
  • odometer

Collecting vehicle details is valuable, but what is the real purpose of this information?

Why collect telemetry data from a car?

The primary use of telemetry data is to monitor a car's status from anywhere in the world, and it's especially crucial for companies like car rental firms such as Hertz or Europcar, as well as transportation companies like Uber. Here are some examples:

  • Tracking Stolen Cars: Companies can quickly track a stolen vehicle if they store its GPS position.
  • Accident Analysis: If a car is involved in an accident, the company can verify what likely happened by analyzing data such as a sudden drop in speed to zero combined with a spike in recorded acceleration. This allows companies to provide replacement cars promptly.
  • Fuel or Charging Management: If a rental car is returned without a full tank of fuel or not fully charged, the company can respond quickly to make the car available for the next rental.

These are just a few examples of how telemetry data can be utilized, with many more possibilities. Understanding the value of telemetry data, let's delve into the technical aspects of acquiring and using this data in the next part of the article.

To begin planning the architecture, we need answers to some fundamental questions

How will the telemetry data be used?

Architectural planning should commence with an understanding of the use cases for the collected telemetry data. This includes considering what the end user intends to do with the data and how they will access it. Common uses for this data include:

  1. Sharing data on a dashboard: To enable this, an architecture should be designed to support an API that retrieves data from databases and displays it on a dashboard.
  2. Data analytics: Depending on the specific needs, appropriate analytic tools should be planned. This can vary from real-time analysis (e.g., AWS Kinesis Data Analytics) to near real-time analysis (e.g., Kafka) or historical data analysis (e.g., AWS Athena).
  3. Sharing data with external clients: If external clients require real-time data, it's essential to incorporate a streaming mechanism into your architecture. If real-time access is not needed, a REST API should be part of the plan.

Can we collect the data from cars?

We should not collect any data from cars unless we either own the car or have a specific legal agreement to do so. This requires planning the architecture not only for acquiring access to the car but also for revoking that access and disposing of the collected data. For example, if we collect telemetry or location data from a car through websockets and the company decides to sell the car, we should immediately cease tracking it. Continuing to store data from it, especially location data, might be illegal, as it could potentially allow tracking of the location of a person inside the car.

How do we manage permissions to the car?

If we have legal permission to collect data from the car, we must include correct permission management in our architecture. Some key considerations include:

  • Credential and token encryption,
  • Secure storage of secrets, such as using AWS Secret Manager,
  • Regular rotation of credentials and tokens for security,
  • Implementing minimum access levels for services and vehicles,
  • Good management of certificates,
  • Adhering to basic security best practices.

How do we collect the data?

Now that we have access to the data, it's time to consider how to collect it. There are several known methods to do this:

  • Pull Through REST/GRPC API: In this scenario, you'll need to implement a data poller. This approach may introduce latency in data acquisition and is not the most scalable solution. Additionally, you may encounter request throttling issues due to hitting request limits.
  • External Service Push Through REST/GRPC: Here, you should set up a listener, which is essentially a service exposed with an endpoint, such as an ECS task or a Lambda function on AWS. This method might incur some costs, and it's crucial to consider automatic scaling to ensure no data is lost. Keep in mind that the endpoint will be publicly exposed, so robust permission management is essential.
  • Pulling From a Stream: This approach is often recommended as it's the most scalable and secure option. You can receive data in real-time or near real-time, making it highly efficient. The primary considerations are access to the stream and the service responsible for pulling data from it.
  • Queues: Similar to streams, queues can be used for data collection, and they may offer better data ordering. However, streams are typically faster but might be more expensive. This is another viable option for collecting vehicle data from external services.
  • Websockets: Websockets are a suitable solution when bidirectional data flow is required, and they can be superior to REST/GRPC APIs in such cases. For example, they are an appropriate choice when a client needs confirmation that data has been successfully acquired. Websockets also allow you to specify which telemetry data can be acquired and at what frequency. A notable example is the Tesla Websockets ( https://github.com/teslamotors/fleet-telemetry/blob/main/protos/vehicle_data.proto ).
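As a minimal illustration of the first option, here is a sketch of a pull-based poller with exponential backoff on throttling. `fetch_telemetry` and `handle` are hypothetical stand-ins for a real vehicle API client and a data sink; timings are shortened for demonstration.

```python
import time

def poll_telemetry(fetch_telemetry, handle, max_iterations=3, base_delay=0.01):
    """Poll an external telemetry API, backing off when throttled.

    fetch_telemetry() returns (status_code, payload); handle(payload) stores
    the data. Both are hypothetical stand-ins for real components.
    """
    delay = base_delay
    for _ in range(max_iterations):
        status, payload = fetch_telemetry()
        if status == 429:              # throttled: back off exponentially
            time.sleep(delay)
            delay *= 2
            continue
        if status == 200:
            handle(payload)
            delay = base_delay         # reset backoff after a successful pull
        time.sleep(base_delay)         # fixed polling interval between pulls

# Example with fake responses: one throttled call, then two successful pulls
responses = iter([(429, None), (200, {"speed": 50}), (200, {"speed": 52})])
collected = []
poll_telemetry(lambda: next(responses), collected.append)
print(collected)  # [{'speed': 50}, {'speed': 52}]
```

The backoff handling is exactly what makes polling awkward at fleet scale: every vehicle multiplies the request volume, which is why stream- or queue-based ingestion is usually preferred.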

Where to store the data?

After collecting the data, it's important to decide where to store it. There are various databases available, and the choice depends on your specific data use cases and access patterns. For instance:

  • Commonly Used Data : For data that will be frequently accessed, you can opt for a traditional database like MongoDB or PostgreSQL.
  • Low-Maintenance Database : If you prefer a database that requires minimal maintenance, AWS DynamoDB is a good choice.
  • Infrequently Used Data for Analytics : When data won't be used frequently but will be utilized for occasional data analytics, you can consider using an AWS S3 bucket with the appropriate storage tier, coupled with AWS Athena for data analysis.
  • Complex Data Analysis : If the data will be regularly analyzed with complex queries, AWS Redshift might be a suitable solution.

When planning your databases, don't forget to consider data retention. If historical data is no longer needed, it's advisable to remove it to avoid excessive storage costs.
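With DynamoDB, retention can be delegated to the built-in TTL feature, which deletes items once the epoch-seconds value in a designated attribute is in the past. A minimal sketch, assuming a 90-day policy and an attribute we choose to call `expires_at` (both are our assumptions, not fixed names):

```python
import time

RETENTION_DAYS = 90  # assumed retention policy; adjust to your legal requirements

def with_retention(item, now=None):
    """Attach an expiry attribute so DynamoDB's TTL feature deletes old telemetry.

    DynamoDB removes items once the epoch-seconds value in the table's
    configured TTL attribute has passed; here that attribute is 'expires_at'.
    """
    now = time.time() if now is None else now
    item["expires_at"] = int(now + RETENTION_DAYS * 24 * 3600)
    return item

item = with_retention({"vehicle_id": "demo-1", "speed_kmh": 50}, now=1_700_000_000)
print(item["expires_at"])  # 1_700_000_000 + 90 * 86_400 = 1_707_776_000
```

The TTL attribute still has to be enabled on the table itself (e.g., via the console or `update_time_to_live`); the item-side stamping above is the only application code required.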

Example

Here is an example of such an architecture on AWS in which:

  1. An employee grants permissions to the car to stream the data.
  2. The data is streamed using AWS Kinesis Stream and saved to an S3 bucket by AWS Kinesis Firehose for audit purposes.
  3. The data is also normalized by the AWS Lambda function and stored in AWS DynamoDB.
  4. The stored data is queried by another AWS Lambda function.
  5. The query Lambda is triggered by an AWS API Gateway to enhance security, such as limiting requests per second.
  6. The API is exposed via Route 53 to the end user, which can be, for example, a dashboard or an external API.
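Step 3 above can be sketched as a Lambda handler. The Kinesis event shape (base64-encoded payloads under `Records[].kinesis.data`) is the standard one, while the telemetry field names and the DynamoDB write (omitted here) are hypothetical:

```python
import base64
import json

def handler(event, context=None):
    """Normalize raw telemetry records arriving from a Kinesis stream.

    In the full architecture this function would then write each normalized
    item to DynamoDB (omitted here); the field names are illustrative.
    """
    normalized = []
    for record in event["Records"]:
        raw = json.loads(base64.b64decode(record["kinesis"]["data"]))
        normalized.append({
            "vehicle_id": raw["vin"],                       # unify identifier naming
            "ts": raw["timestamp"],
            "speed_kmh": round(raw["speed_mps"] * 3.6, 1),  # convert m/s to km/h
        })
    return normalized

# Simulate a Kinesis event with one record
payload = {"vin": "demo-1", "timestamp": "2024-05-01T12:00:00Z", "speed_mps": 13.9}
event = {"Records": [{"kinesis": {
    "data": base64.b64encode(json.dumps(payload).encode()).decode()}}]}
print(handler(event))
```

Keeping normalization in its own Lambda, separate from the Firehose audit path, means the raw stream stays untouched for auditing while queries only ever see consistent units and field names.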

Conclusion

In the modern tech landscape, the quest for complete vehicle data is a paramount objective. Tech companies seek to collect critical information about the status of owned cars to enable proactive maintenance and rapid responses to a spectrum of incidents, from theft and damage to simple misplacement. This imperative relies on the remote monitoring of vehicles through the collection and storage of data on servers or in the cloud, offering the capability to monitor a vehicle's status from any corner of the globe. This is especially essential for companies like car rental firms and transportation services, with applications ranging from tracking stolen cars through GPS data to analyzing accident events and managing fuel or charging for rental vehicles.

The core of this mission is to strike a balance between data collection, security, and architectural planning. The process involves careful consideration of data collection methods, adherence to legal and security best practices, and informed choices for data storage solutions. The evolving landscape of vehicle data offers endless possibilities for tech companies to harness the power of telemetry and deliver an enhanced experience for their customers.

written by
Piotr Sidor
Automotive

Driving success in delivering innovation for automotive: Exploring various partnership models

The automotive sector has rapidly evolved, recognizing that cutting-edge software is now the primary factor differentiating car brands and models. Automotive leaders are reshaping the industry by creating innovative software-defined vehicles that seamlessly integrate advanced features, providing drivers with a genuinely captivating experience.

While some automotive manufacturers have begun building in-house expertise, others are turning to third-party software development companies for assistance. However, such partnerships can take on different forms depending on factors like the vendor’s size and specialization, which can sometimes lead to challenges in cooperation.

Still, success is possible if both parties adopt the right approach despite any problems that may arise. To ensure optimal collaboration, we recommend implementing the  Value-Aligned Partnership concept.

By adopting this approach, vendors accept full accountability for delivery, match their contributions to the client's business goals, and use their experience to guarantee success. We use this strategy at Grape Up to ensure our clients receive the best possible results.

This article explores the advantages and challenges of working with service vendors of various sizes and specializations. It also highlights the Value-Aligned Partnership model as a way to overcome these challenges and maximize the benefits of client-vendor cooperation.

Large vendors and system integrators

Working with multi-client software vendors can pose unique challenges for automotive companies, as these companies typically operate on a  body leasing model , providing temporary skilled personnel for development services.

Furthermore, these companies may lack  specialized expertise , which is particularly true for large organizations with high turnover rates in their development teams. Despite having impressive case studies, the team assigned to a specific project doesn't always possess the necessary experience due to  frequent personnel changes within the company.

Moreover, big vendors usually have numerous clients and projects to manage simultaneously. This may lead to limited personalized attention and cause  potential delays in delivery. Outsourcing services to countries with different time zones, quality assurance procedures, and cultural norms can further complicate the situation.

However, despite these challenges, such partnerships have significant advantages, which should be considered in the decision-making process.

Advantages of partnering with large vendors and system integrators

  • Access to diverse expertise: A wide array of specialists is available for various project aspects.
  • Scalability advantage: Project resource requirements are met without bottlenecks.
  • Accelerated time-to-market: Development speed is increased with skilled, available teams.
  • Cost-efficiency: Vendor teams offer a cost-effective alternative to in-house hiring, reducing overhead.
  • Risk management: Vendors handle turnover risks to ensure continuity.
  • Cross-industry insights: Practices from diverse sectors are applied to automotive projects.
  • Agile adaptation: Teams adjust efficiently to changing project needs.
  • Enhanced global collaboration: Creativity is boosted by leveraging time zones and diverse perspectives.

Working with smaller vendors

Your other option is to work with smaller software companies, but you should be aware of the potential challenges of such collaboration, as well. For example, while they are exceptionally well-versed in a particular field or technology, they might lack the breadth of knowledge necessary to meet the needs of the industry. Additionally,  limited resources or difficulties in scaling could hinder their ability to keep pace with the growing demands of the sector.

Furthermore, although these vendors  excel in specific technical areas , they may struggle to provide comprehensive end-to-end solutions, leaving gaps in the development process.

Adding to these challenges, smaller companies often have a more restricted perspective due to their  limited engagement in partnerships , infrequent conference attendance, and a narrower appeal that doesn't span a global audience. This can result in a lack of exposure to diverse ideas and practices, hindering their ability to innovate and adapt.

Benefits of working with smaller vendors

  • Tailored excellence: Small vendors often craft industry-specific, innovative solutions without unnecessary features.
  • Personal priority: They prioritize each client, ensuring dedicated attention and satisfaction.
  • Flexible negotiation: Smaller companies offer negotiable terms and pricing, unlike larger counterparts.
  • Bespoke solutions: They customize offerings based on your unique needs rather than taking a generic approach.
  • Agile responsiveness: Quick release cycles, technical agility, and transparent interactions are their strengths.
  • Meaningful connections: They deeply care about your success, fostering personal relationships.
  • Accountable quality: They take responsibility for their products, as development and support are integrated.

Niche specialization companies

Collaborating with a company specializing in the automotive sector offers distinct advantages, addressing challenges some large and small vendors face. Their in-depth knowledge of the automotive industry ensures tailored solutions that meet specific requirements efficiently.

As opposed to vendors who work in several industries, they are quick to adjust to shifts in the market, allowing software development projects to be successful and meet the ever-changing needs of the automobile sector.

It is important, however, to consider the potential drawbacks when entering partnerships that rely heavily on a narrow area of expertise. Niche solutions may not be versatile, and specialization can lead to higher costs and resistance to innovation. Additionally, overreliance on a single source could lead to dependency concerns and a limited perspective on the market. Weigh these risks against the benefits, and ensure that partnerships are balanced to avoid stagnation and limited options.

When working with software vendors, no matter their size or specialization level, it is essential to adopt the right approach to cooperation to mitigate risks and challenges.

Value-aligned partnership model explained

This cooperation model prioritizes shared values, expertise, and cultural compatibility. It's important to note that a company's size doesn't limit its ability to become a successful partner. Both large and small vendors can be just as driven and invested in professional development.

What matters most is a company's mindset and commitment to continuous improvement. Small and large businesses can excel by prioritizing a robust organizational culture, regular training sessions, knowledge sharing among employees, and partnerships that leverage the strengths of both parties.

Thus, the Value-Aligned Partnership is a model that brings together the benefits of working with different types of companies. It combines the diverse expertise of large vendors with the agility and tailored solutions of small companies while incorporating vast industry-specific knowledge.

In a value-aligned partnership model, the vendor goes beyond simply providing software development services. They actively engage in the journey towards success by fully immersing themselves in the client's vision. By thoroughly comprehending the customer's values and goals, a partner ensures that every contribution they make aligns seamlessly with the overall direction of the business and can even foster innovation within the client's company.

Building a strong partnership based on shared values takes time and effort, but it's worth it for the exceptional outcomes that result. Open communication, collaboration, and mutual understanding are key factors in creating a foundation for long-term cooperation and shared success between the two parties.

In the fast-paced and ever-changing automotive industry, having domain-specific expertise is crucial. A value-aligned partner recognizes the importance of retaining in-house knowledge and skills related to the sector. As a company that prioritizes this approach, Grape Up invests in measures to minimize turnover, provides ongoing training and education, and ensures that our team possesses deep domain expertise. This commitment to automotive know-how strengthens the partnership's reliability and establishes us as a trusted, long-term ally for the automotive company.

Conclusion

The automotive industry is transforming remarkably with the rise of software-defined vehicles. OEMs realize they can't tackle this revolution alone and seek the perfect collaborators to join them on this exciting journey. These partners bring a wealth of expertise in software development, cloud technologies, artificial intelligence, and more. With their finger on the pulse of the automotive industry, they understand the ever-changing trends and challenges. They take the time to comprehend the OEM's vision, objectives, and market positioning, enabling them to provide tailored solutions that address specific needs.


written by
Marcin Wiśniewski
written by
Adam Kozłowski
Automotive
Data platforms
Software development

Unleashing the full potential of MongoDB in automotive applications

Welcome to the second part of our article series about MongoDB in automotive. In the previous installment, we explored the power of MongoDB as a versatile data management solution for the automotive industry, focusing on its flexible data model, scalability, querying capabilities, and optimization techniques.

In this continuation, we will delve into two advanced features of MongoDB that further enhance its capabilities in automotive applications: time-series data management and change streams. By harnessing the power of time series and change streams, MongoDB opens up new possibilities for managing and analyzing real-time data in the automotive domain. Join us as we uncover the exciting potential of MongoDB's advanced features and their impact on driving success in automotive applications.

Time-series data management in MongoDB for automotive

Managing time-series data effectively is a critical aspect of many automotive applications. From sensor data to vehicle telemetry, capturing and analyzing time-stamped data is essential for monitoring performance, detecting anomalies, and making informed decisions. MongoDB offers robust features and capabilities for managing time-series data efficiently. This section will explore the key considerations and best practices for leveraging MongoDB's time-series collections.

Understanding time-series collections

Introduced in MongoDB version 5.0, time-series collections provide a specialized data organization scheme optimized for storing and retrieving time-series data. Time-series data consists of a sequence of data points collected at specific intervals, typically related to time. This could include information such as temperature readings, speed measurements, or fuel consumption over time.

MongoDB employs various optimizations in a time-series collection to enhance performance and storage efficiency. One of the notable features is the organization of data into buckets. Data points within a specific time range are grouped in these buckets. This time range, often referred to as granularity, determines the level of detail or resolution at which data points are stored within each bucket. This bucketing approach offers several advantages.

Firstly, it improves query performance by enabling efficient data retrieval within a particular time interval. With data organized into buckets, MongoDB can quickly identify and retrieve the relevant data points, reducing the time required for queries.

Secondly, bucketing allows for efficient storage and compression of data within each bucket. By grouping data points, MongoDB can apply compression techniques specific to each bucket, optimizing disk space utilization. This helps to minimize storage requirements, especially when dealing with large volumes of time-series data.

Choosing sharded or non-sharded collections

When working with time-series data, you have the option to choose between sharded and non-sharded collections. Sharded collections distribute data across multiple shards, enabling horizontal scalability and accommodating larger data volumes. However, it's essential to consider the trade-offs associated with sharding. While sharded collections offer increased storage capacity, they may introduce additional complexity and potentially impact performance compared to non-sharded collections.

In most cases, non-sharded collections are sufficient for managing time-series data, especially when proper indexing and data organization strategies are employed. Non-sharded collections provide simplicity and optimal performance for most time-series use cases, eliminating the need for managing a sharded environment.
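If sharding does turn out to be necessary, the usual guidance for time-series collections is to build the shard key from the meta field, optionally followed by the time field. A minimal sketch under assumed names (`fleet.vehicleTelemetry` as the namespace, `metadata.vehicleId` as the meta field); the live `sh.shardCollection` call is shown commented out because it requires a running sharded cluster:

```javascript
// Shard key sketch for a time-series collection (illustrative names).
// Leading with the meta field keeps documents from one source together;
// appending the time field further splits each source chronologically.
const shardKey = { "metadata.vehicleId": 1, timestamp: 1 };

// In mongosh against a sharded cluster (not executed here):
// sh.shardCollection("fleet.vehicleTelemetry", shardKey);

console.log(Object.keys(shardKey)); // key fields in precedence order
```

Only reach for this once a single replica set genuinely cannot hold the data; as noted above, non-sharded collections keep operations simpler.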

Effective data compression for time-series collections

Given the potentially large volumes of data generated by time-series measurements, efficient data compression techniques are crucial for optimizing storage and query performance. MongoDB provides built-in compression options that reduce data size, minimizing storage requirements and facilitating faster data transfer. Using compression, MongoDB significantly reduces the disk space consumed by time-series data while maintaining fast query performance.

One of the key compression options available in MongoDB is the WiredTiger storage engine. WiredTiger offers advanced compression algorithms that efficiently compress and decompress data, reducing disk space utilization. This compression option is particularly beneficial for time-series collections where data points are stored over specific time intervals.

By leveraging WiredTiger compression, MongoDB achieves an optimal balance between storage efficiency and query performance for time-series collections. The compressed data takes up less space on disk, resulting in reduced storage costs and improved overall system scalability. Additionally, compressed data can be transferred more quickly across networks, improving data transfer speeds and reducing network bandwidth requirements.

Considerations for granularity and data retention

When designing a time-series data model in MongoDB, granularity and data retention policies are important factors to consider. Granularity refers to the level of detail or resolution at which data points are stored, while data retention policies determine how long the data is retained in the collection.

Choosing the appropriate granularity is crucial for striking a balance between data precision and performance. MongoDB provides different granularity options, such as "seconds," "minutes," and "hours," each covering a specific time span. Selecting the granularity depends on the time interval between consecutive data points that have the same unique value for a specific field, known as the meta field. You can optimize storage and query performance by aligning the granularity with the ingestion rate of data from a unique data source.

For example, if you collect temperature readings from weather sensors every five minutes, setting the granularity to "minutes" would be appropriate. This ensures that data points are grouped in buckets based on the specified time span, enabling efficient storage and retrieval of time-series data.
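In mongosh, granularity is fixed when the collection is created. Here is a sketch of the weather-sensor example under assumed names (`weatherData` collection, `timestamp` time field, `sensorId` meta field); the `db.createCollection` call itself needs a live connection, so it is shown commented:

```javascript
// Build creation options for a time-series collection (illustrative names).
function buildTimeSeriesOptions(granularity) {
  return {
    timeseries: {
      timeField: "timestamp",  // required: field holding the BSON date
      metaField: "sensorId",   // identifies the data source; drives bucketing
      granularity: granularity // "seconds" | "minutes" | "hours"
    }
  };
}

// In mongosh (not executed here):
// db.createCollection("weatherData", buildTimeSeriesOptions("minutes"));

const opts = buildTimeSeriesOptions("minutes");
console.log(opts.timeseries.granularity); // "minutes"
```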

In addition to granularity, defining an effective data retention policy is essential for managing the size and relevance of the time-series collection over time. Consider factors such as the retention period for data points, the frequency of purging outdated data, and the impact on query performance.

MongoDB provides a Time to Live (TTL) mechanism that can automatically remove expired data points from a time-series collection based on a specified time interval. However, it's important to note that there is a known issue related to TTL for very old records in MongoDB at the time of writing this article. The issue is described in detail in the MongoDB Jira ticket SERVER-76560.

The TTL behavior in time series collections differs from regular collections. In a time series collection, TTL expiration occurs at the bucket level rather than on individual documents within the bucket. Once all documents within a bucket have expired, the entire bucket is removed during the next run of the background task that removes expired buckets.

This bucket-level expiration behavior means that TTL may not work in the exact same way as with normal collections, where individual documents are removed as soon as they expire. It's important to be aware of this distinction and consider it when designing your data retention strategy for time series collections.
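As a sketch of the TTL mechanism described above (collection and field names are illustrative), expiration is configured with `expireAfterSeconds` when the time-series collection is created; keep in mind that removal then happens per bucket, not per document:

```javascript
// Creation options for a time-series collection with a retention window
// expressed in days (illustrative names; expiration is bucket-level).
function ttlTimeSeriesOptions(retentionDays) {
  return {
    timeseries: { timeField: "timestamp", metaField: "vehicleId" },
    expireAfterSeconds: retentionDays * 24 * 60 * 60
  };
}

// In mongosh (not executed here):
// db.createCollection("vehicleTelemetry", ttlTimeSeriesOptions(30));

console.log(ttlTimeSeriesOptions(30).expireAfterSeconds); // 2592000
```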

When considering granularity and data retention policies, evaluate the specific requirements of your automotive application. Consider the level of precision needed for analysis, the data ingestion rate, and the desired storage and query performance. By carefully evaluating these factors and understanding the behavior of TTL in time series collections, you can design a time-series data model in MongoDB that optimizes both storage efficiency and query performance while meeting your application's needs.

Retrieving latest documents

In automotive applications, retrieving the latest documents for each unique meta key can be a common requirement. MongoDB provides an efficient approach to achieve this using the `DISTINCT_SCAN` stage in the aggregation pipeline. Let's explore how you can use this feature, along with an automotive example.

The `DISTINCT_SCAN` stage is designed to perform distinct scans on sorted data in an optimized manner. By leveraging the sorted nature of the data, it efficiently speeds up the process of identifying distinct values.

To illustrate its usage, let's consider a scenario where you have a time series collection of vehicle data that includes meta information and timestamps. You want to retrieve the latest document for each unique vehicle model. Here's an example code snippet demonstrating how to accomplish this:

```javascript
db.vehicleData.aggregate([
 { $sort: { metaField: 1, timestamp: -1 } },
 {
   $group: {
     _id: "$metaField",
     latestDocument: { $first: "$$ROOT" }
   }
 },
 { $replaceRoot: { newRoot: "$latestDocument" } }
])
```

In the above code, we first use the `$sort` stage to sort the documents based on the `metaField` field in ascending order and the `timestamp` field in descending order. This sorting ensures that the latest documents appear first within each group.

Next, we employ the `$group` stage to group the documents by the `metaField` field and select the first document using the `$first` operator. This operator retrieves the first document encountered in each group, corresponding to the latest document for each unique meta key.

Finally, we utilize the `$replaceRoot` stage to promote the `latestDocument` to the root level of the output, effectively removing the grouping and retaining only the latest documents.

By utilizing this approach, you can efficiently retrieve the latest documents per each meta key in an automotive dataset. The `DISTINCT_SCAN` stage optimizes the distinct scan operation, while the `$first` operator ensures accurate retrieval of the latest documents.

It's important to note that the `DISTINCT_SCAN` stage is an internal optimization technique of MongoDB's aggregation framework. It is automatically applied when the conditions are met, so you don't need to specify or enable it in your aggregation pipeline explicitly.
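Because the optimization is applied automatically, the way to confirm it kicked in is to inspect the query plan with `explain` and look for a `DISTINCT_SCAN` stage; note that it generally requires a compound index compatible with the sort. A sketch reusing the pipeline above (the `explain` call needs a live connection and is shown commented):

```javascript
// The pipeline from the example above, held as a plain array so it can be
// passed to explain() or aggregate() alike.
const latestPerMetaPipeline = [
  { $sort: { metaField: 1, timestamp: -1 } },
  { $group: { _id: "$metaField", latestDocument: { $first: "$$ROOT" } } },
  { $replaceRoot: { newRoot: "$latestDocument" } }
];

// In mongosh (not executed here): search the output for "DISTINCT_SCAN".
// db.vehicleData.explain("executionStats").aggregate(latestPerMetaPipeline);

console.log(latestPerMetaPipeline.length); // 3 stages
```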

Time series collection limitations

While MongoDB Time Series brings valuable features for managing time-series data, it also has certain limitations to consider. Understanding these limitations can help developers make informed decisions when utilizing MongoDB for time-series data storage:

● Unsupported Features: Time series collections in MongoDB do not support certain features, including transactions and change streams. These features are not available when working specifically with time series data.

● Aggregation $out and $merge: The $out and $merge stages of the aggregation pipeline, commonly used for storing aggregation results in a separate collection or merging results with an existing collection, are not supported in time series collections. This limitation affects the ability to perform certain aggregation operations directly on time series collections.

● Updates and Deletes: Time series collections only support insert operations and read queries. This means that once data is inserted into a time series collection, it cannot be directly modified or deleted on a per-document basis. Any updates or manual delete operations will result in an error.

MongoDB change streams for real-time data monitoring

MongoDB Change Streams provide a powerful feature for real-time data monitoring in MongoDB. Change Streams allow you to capture and react to any changes happening in a MongoDB collection in a real-time manner. This is particularly useful in scenarios where you need to track updates, insertions, or deletions in your data and take immediate actions based on those changes.

Change Streams provide a unified and consistent way to subscribe to the database changes, making it easier to build reactive applications that respond to real-time data modifications.

```javascript
// MongoDB Change Streams for Real-Time Data Monitoring

const MongoClient = require('mongodb').MongoClient;

// Connection URL
const url = 'mongodb://localhost:27017';

// Database and collection names
const dbName = 'mydatabase';
const collectionName = 'mycollection';

// Create a change stream
MongoClient.connect(url, function(err, client) {
 if (err) throw err;

 const db = client.db(dbName);
 const collection = db.collection(collectionName);

 // Create a change stream cursor with filtering
 const changeStream = collection.watch([{ $match: { operationType: 'delete' } }]);

 // Set up event listeners for change events
 changeStream.on('change', function(change) {
   // Process the delete event
   console.log('Delete Event:', change);
   // Perform further actions based on the delete event
 });

 // Close the connection
 // client.close();
});
```

In this updated example, we use the `$match` stage in the change stream pipeline to filter for delete operations only. The `$match` stage is specified as an array in the `watch()` method. The `{ operationType: 'delete' }` filter ensures that only delete events will be captured by the change stream.

Now, when a delete operation occurs in the specified collection, the `'change'` event listener will be triggered, and the callback function will execute. Inside the callback, you can process the delete event and perform additional actions based on your application's requirements. It's important to note that the change stream will only provide the document ID for delete operations. The actual content of the document is no longer available. If you need the document content, you need to retrieve it before the delete operation or store it separately for reference.
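One way to keep the content available, as mentioned above, is an application-side cache of documents keyed by `_id`: refreshed on inserts and updates, consulted when a delete event arrives. A minimal sketch (all names are illustrative, and a real deployment would bound the cache's size or use external storage):

```javascript
// In-memory cache so delete events (which carry only the _id) can be
// resolved back to the last-seen document content. Illustrative only.
const docCache = new Map();

function rememberDocument(doc) {
  docCache.set(String(doc._id), doc); // refresh the cached copy
}

function resolveDelete(changeEvent) {
  const id = String(changeEvent.documentKey._id);
  const lastKnown = docCache.get(id) || null; // null if never seen
  docCache.delete(id);
  return lastKnown;
}

// Wiring into a change stream (assumed driver connection, not run here):
// changeStream.on('change', (c) => {
//   if (c.operationType === 'delete') console.log('removed:', resolveDelete(c));
//   else if (c.fullDocument) rememberDocument(c.fullDocument);
// });
```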

One important aspect to consider is related to the inability to distinguish whether a document was removed manually or due to the Time to Live (TTL) mechanism. Change streams do not provide explicit information about the reason for document removal. This means that when a document is removed, the application cannot determine if it was deleted manually by a user or automatically expired through the TTL mechanism. Depending on the use case, it may be necessary to implement additional logic or mechanisms within the application to handle this distinction if it is critical for the business requirements.

Here are some key aspects and benefits of using MongoDB Change Streams for real-time data monitoring:

● Real-Time Event Capture: Change Streams allow you to capture changes as they occur, providing a real-time view of the database activity. This enables you to monitor data modifications instantly and react to them in real-time.

● Flexibility in Filtering: You can specify filters and criteria to define the changes you want to capture. This gives you the flexibility to focus on specific documents, fields, or operations and filter out irrelevant changes, optimizing your monitoring process.

● Data Integration and Pipelines: Change Streams can be easily integrated into your existing data processing pipelines or applications. You can consume the change events and perform further processing, transformation, or analysis based on your specific use case.

● Scalability and High Availability: Change Streams are designed to work seamlessly in distributed and sharded MongoDB environments. They leverage the underlying replica set architecture to ensure high availability and fault tolerance, making them suitable for demanding and scalable applications.

● Event-Driven Architecture: Using Change Streams, you can adopt an event-driven architecture for your MongoDB applications. Instead of continuously polling the database for changes, you can subscribe to the change events and respond immediately, reducing unnecessary resource consumption.

MongoDB Change Streams provide a powerful mechanism for real-time data monitoring, enabling you to build reactive, event-driven applications and workflows. By capturing and processing database changes in real-time, you can enhance the responsiveness and agility of your applications, leading to improved user experiences and efficient data processing.

Another consideration is that low-frequency events may lead to an invalid resume token. Change streams rely on a resume token to keep track of the last processed change. In cases where there are long periods of inactivity or low-frequency events, the resume token may become invalid or expired. Therefore, the application must handle this situation gracefully and take appropriate action when encountering an invalid resume token. This may involve reestablishing the change stream or handling the situation in a way that ensures data integrity and consistency.
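A sketch of handling an invalidated token: classify the error, then either resume from the stored token or fall back to a fresh stream. The error shapes here are assumptions (recent MongoDB versions report a `ChangeStreamHistoryLost` error, code 286, while older messages mention the resume token directly), and the driver calls are commented out because they need a live connection:

```javascript
// Decide whether a change-stream error means the stored resume token is
// no longer usable. Error shapes are assumptions; see the note above.
function isResumeTokenLost(err) {
  if (!err) return false;
  return err.code === 286 || /resume token/i.test(err.message || "");
}

// Reopen logic (assumed driver API, not executed here):
// function watchWithResume(collection, storedToken) {
//   const options = storedToken ? { resumeAfter: storedToken } : {};
//   try { return collection.watch([], options); }
//   catch (err) {
//     if (isResumeTokenLost(err)) return collection.watch([]); // start fresh
//     throw err;
//   }
// }

console.log(isResumeTokenLost({ code: 286 })); // true
```

Starting fresh means events between the lost token and the new stream may be missed, so a production system should also reconcile state (for example, by re-reading recent data) after such a restart.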

It's important to note that while Change Streams offer real-time monitoring capabilities, they should be used judiciously, considering the potential impact on system resources. Monitoring a large number of collections or frequently changing data can introduce additional load on the database, so it's essential to carefully design and optimize your Change Streams implementation to meet your specific requirements.

Conclusion

By harnessing the capabilities of MongoDB, developers can unlock a world of possibilities in modern application development. From its NoSQL model to efficient time-series data management, batch writes, data retrieval, and real-time monitoring with Change Streams, MongoDB provides a powerful toolkit. By following best practices and understanding its limitations, developers can maximize the potential of MongoDB, resulting in scalable, performant, and data-rich applications.

MongoDB can be likened to a Swiss Army Knife in the world of database systems, offering a versatile set of features and capabilities. However, it is important to note that MongoDB is not a one-size-fits-all solution for every use case. This article series aims to showcase the capabilities and potential use cases of MongoDB.

While MongoDB provides powerful features like time-series data management and change streams for real-time data monitoring, it is essential to consider alternative solutions as well. Depending on factors such as team skills, the tools a company has already adopted, and specific project requirements, it may be worthwhile to explore other options, such as leveraging IaaS provider-native solutions like DynamoDB for specific use cases within the automotive domain.

Furthermore, it is important to highlight that both articles focus on using MongoDB in automotive primarily for hot storage; cold storage for automotive applications is not covered.

When starting a new project, it is crucial to conduct thorough research, consider the specific requirements, and evaluate the available options in the market. While MongoDB may provide robust features for certain scenarios, dedicated time-series databases like InfluxDB may offer a more tailored and specialized solution for specific time-series data needs. Choosing the right tool for the job requires careful consideration and an understanding of the trade-offs and strengths of each option available.

written by
Daniel Bryła
Automotive
Software development

8 in-vehicle E-Commerce applications that pave the way for a seamless drive

The automotive landscape is undergoing a transformation unlike any other. Innovative e-commerce applications empower drivers to seamlessly shop, make payments, and carry out service transactions while on the move. By providing station location information, navigation assistance, and secure payment mechanisms, these apps are charting a new course in the automotive industry. Automakers and vehicle service providers are embracing this transformative trend with open arms, recognizing the multifaceted opportunities it presents.

In this article, we’ll explore 8 types of in-vehicle e-commerce applications every automotive company needs to consider providing solutions supporting the most sought-after services.

The monetary potential of in-car payments

Projections suggest that the value of in-car payments could surge to approximately $86 billion in 2025, with the market poised to surpass an impressive $625 billion by 2030.

A crucial prerequisite for commerce-readiness in a vehicle is connectivity: specifically, the ability to connect to the internet. The trend towards connectivity is burgeoning, with 95% of new automobile sales worldwide anticipated to be connected by 2030.

E-commerce applications that every automotive company needs to consider

In-vehicle fueling e-commerce applications

Imagine having the power to locate the nearest and most budget-friendly gas stations, all while enjoying seamless navigation and secure payment options. Picture this: you're cruising down the road, and your car's dashboard becomes a fueling command center, letting you fill up your tank with a few simple taps. It's not just about getting from A to B anymore – it's about doing it smarter and with style.

CarPlay, available in select vehicles, puts your favorite apps right at your fingertips. Say goodbye to the hassle of finding parking spaces, paying tolls, or waiting for a tow truck. Now, you can seamlessly integrate fueling apps into your CarPlay dashboard. With real-time price comparisons and integration with Apple Maps, this is the fueling revolution you've been waiting for!

Parking apps

No longer do drivers need to endure the frustration of searching for parking spaces in congested cities. In-car parking e-commerce applications serve as an integrated solution, consolidating navigation, reservation, and payment functions into a singular platform. This harmonious convergence simplifies the lives of city motorists, offering respite from the urban hustle.

• Smart parking apps, acting as navigational guides to available parking spaces, mitigate the time spent searching for elusive spots while reducing congestion.
• The advent of parking payment apps has brought an end to the era of physical tickets and coins, replacing them with digital transactions that seamlessly facilitate parking payments.
• For those moments of forgetfulness, parked car locator apps come to the rescue, utilizing GPS technology to guide users to their parked vehicles.
• Navigation and booking apps usher in a new era of proactive parking management. Drivers can reserve parking spots in advance, leveraging navigational assistance to reach their designated spaces. In tandem with a flexible payment system, these apps epitomize convenience.

Here's a lineup of apps that can be integrated into a car's infotainment system, along with their integration potential:

For example, the Smart Parking cloud solution and app, available on Android and iOS, guides drivers to open parking spots, offers insights on parking conditions and fees, and enables contactless payment. Its integration into a car's infotainment system provides real-time parking availability info and navigation to nearby spaces.

Another example is ŠKODA, which launched the Pay to Park service, allowing users to pay for parking either via a mobile app or through an infotainment system.

In 2022, EasyPark and Renault announced a partnership centered on advanced parking automation. The EasyPark app will be built into Renault's newest model, the Megane E-Tech Electric. With this setup, the app's features become fully incorporated into the vehicle's infotainment system. The EasyPark app is available on both the App Store and Google Play, and its goal is to make parking more convenient for drivers.

Tolling apps

Apps tailored for this purpose eliminate the need for physical payments at toll booths. Instead, they rely on technology such as GPS and RFID to seamlessly detect vehicles entering tolled areas, automatically deducting the appropriate toll amounts. Real-time toll rate updates, transaction histories, and payment method management are seamlessly accessible from the vehicle's dashboard interface.

Leading the charge in this domain are in-vehicle tolling apps like the Uproad® Pay As You Go Toll App. Integrated with the Parkopedia in-car payment platform, the app offers toll payment flexibility, rate notifications, and route planning insights.

Integrated Toll Module® (ITM®) is another example. ITM® is a technology developed by Audi that allows drivers to use touchless transactions to pay to use compatible toll roads. It is integrated into the vehicle's infotainment system.

In-car dining

Hungry on the road? No worries! In-car digital food ordering is here to save the day. Whether you're craving a quick snack or a gourmet meal, these apps let you order with a few taps, bringing a world of flavors to your dashboard. Your car becomes your personal dining concierge, connecting you to restaurants, takeout, and more.

 Apple CarPlay and Android Auto have paved the way for fast-food and quick-service restaurant apps, integrating seamlessly with the vehicle's interface for a streamlined ordering process.

Notably, BMW's collaboration with the Olo food-ordering platform exemplifies the commitment to in-car dining, enabling food ordering directly through the car's infotainment system.

Shopping applications

With the rise of connected vehicles and smart technology, online marketplaces are now at the driver's fingertips. Modern vehicles equipped with advanced infotainment systems allow users to access these online bazaars, peruse products, and make purchases while en route. Android Auto and Apple CarPlay seamlessly bring smartphone interfaces to the dashboard, fostering a safer and more integrated shopping experience.

Connected services in vehicles equipped with internet connectivity open up a world of e-commerce possibilities, enabling users to browse and shop directly from the dashboard, while the click-and-collect model takes this a step further, facilitating online orders with seamless pickup from nearby stores upon arrival.

Voice-controlled virtual assistants, from Amazon Alexa to Google Assistant, further elevate the convenience. These assistants allow drivers to search for products, place orders, and manage shopping lists through voice commands, keeping their attention on the road. The availability of branded voice assistants from manufacturers like BMW and Mercedes-Benz emphasizes personalization, catering to the preferences of individual drivers.

In-vehicle repair and maintenance solutions

In-vehicle repair and maintenance apps have emerged as indispensable tools for car owners, ensuring that their vehicles remain in peak condition. These apps provide reminders for essential maintenance tasks, track service records, and notify users when a visit to the mechanic is due. Notable features include monitoring regular tasks like oil changes, setting reminders for upcoming maintenance, tracking fuel usage, supporting multiple vehicles, and logging expenses.

Popular apps include Simply Auto, CARFAX Car Care, Autosist, Fuelly, and myCARFAX, available for iOS and Android users.

Multimedia apps on the move

Whether it's streaming music, enjoying audiobooks, or tuning into radio stations, these apps enhance the journey, making long drives more enjoyable.

Communication apps enable drivers to stay connected without compromising safety, facilitating messages through voice commands.

While parked, passengers can indulge in streaming videos for added entertainment, ensuring a pleasant experience for all occupants.

Augmented reality windshield displays are an interesting innovation. These applications overlay real-time information onto the windshield, enhancing navigation and providing valuable insights at a glance. With BMW's "mixed-reality slider" and other augmented reality initiatives, the driving experience becomes an immersive fusion of the digital and physical worlds.

The evolution of in-vehicle electric vehicle (EV) apps

As the automotive industry embraces the electric vehicle revolution, in-vehicle applications are evolving to cater to the needs of EV owners. Notable examples are the ChargePoint app on Android Auto and the Sheeva.ai EV charging solution.

• Electric vehicle charging station locator apps offer a lifeline to electric vehicle enthusiasts, helping them find nearby charging stations and monitor availability in real-time.
• Community-driven apps foster a sense of camaraderie among EV drivers, sharing insights and travel tips for a seamless electric driving experience.
• Intelligent charging management apps optimize EV charging schedules based on electricity rates and grid demand, maximizing efficiency and cost-effectiveness.

In-Car shopping of the future: What we can expect

Future improvements and advancements on the horizon, as the landscape of in-car commerce takes shape, include:

• More complex in-car payment systems, including biometric authentication and encrypted payment methods.
• A wider selection of food, gas, and auto maintenance services offered for in-car purchase in the near future.
• Improved interaction between in-car commerce and existing payment systems, including mobile wallet apps and internet payment platforms.

Closing thoughts: Navigating the future of in-vehicle E-Commerce

In-vehicle applications have forever changed the landscape of driving, transcending traditional transportation to offer a multifaceted experience. As automotive companies navigate this landscape, understanding the myriad opportunities and conveniences e-commerce applications present is crucial. By embracing these transformative trends, automotive companies can move towards a future where the driving experience is not just about the destination - it's about the extraordinary journey itself.


written by
Adam Kozłowski
written by
Marcin Wiśniewski
Automotive
Data platforms

How predictive maintenance changes the automotive industry

Ever since Henry Ford implemented the first production line and launched mass production of the Ford Model T, the automotive industry has been on the constant lookout for ways to boost performance. This aspect has become even more relevant today, given constant market and social unrest. Predictive maintenance comes to the rescue of supply chain management and product lifecycle optimization. Not only OEMs but the entire automotive industry, including insurers, car rental companies, and vehicle owners, is benefiting from the implementation of this technology.

Predictive maintenance explained

Predictive maintenance is an advanced maintenance approach that utilizes data science and predictive analytics to anticipate when equipment or machinery requires maintenance before it faces a breakdown.

The primary aim is to schedule maintenance at optimal times, considering convenience and cost-effectiveness while maximizing the equipment's longevity. By identifying potential issues before they become critical, predictive maintenance significantly reduces the likelihood of equipment breakdowns.

Various types of maintenance strategies are employed in different industries:

1. Reactive Maintenance: Also known as "run-to-failure," this method involves waiting for equipment to fail before conducting maintenance. Therefore, unscheduled downtime and higher repair costs may occur.
2. Periodic Maintenance: This approach entails performing maintenance tasks at regular intervals, regardless of the equipment's condition. It helps prevent unexpected breakdowns but may lead to unnecessary maintenance if done too frequently.
3. Smart Maintenance: Smart maintenance utilizes advanced technologies like IoT devices and sensors to monitor equipment in real-time and identify anomalies or potential failures.
4. Condition-Based Maintenance: This strategy relies on monitoring the equipment's condition while it is in operation. Maintenance is only carried out when data indicates a decline in performance or a deviation from normal parameters, optimizing maintenance schedules and reducing unnecessary work.
5. Predictive Maintenance: The most advanced type of maintenance uses real-time operational data and predictive analytics to forecast when maintenance is required. It aims to schedule maintenance before equipment failure occurs based on data-driven predictions, thus minimizing downtime, reducing costs, and prolonging equipment lifespan.

Predictive maintenance employs various techniques, such as vibration analysis, acoustic monitoring, infrared technology, oil analysis, and motor circuit analysis. These methods enable continuous equipment condition monitoring and early detection of potential failures, facilitating timely maintenance interventions.

Differentiation between predictive maintenance and preventive maintenance

Predictive maintenance hinges on the real-time condition of assets and is implemented only when the need arises. Its purpose is to anticipate potential failures by monitoring assets while they are actively operational. Unlike preventive maintenance, this approach is rooted in the current operational state of an asset rather than statistical analysis and predetermined schedules.

Essential steps in creating a predictive maintenance solution

Predictive maintenance solutions utilize a combination of sensors, artificial intelligence, and data science to optimize equipment maintenance.

The development of such solutions varies depending on equipment, environment, process, and organization, leading to diverse perspectives and technologies guiding their creation. However, there are steps common to every project: data collection and analysis, model development and deployment, as well as continuous improvement.

Here is a step-by-step process of how solutions are developed in the automotive industry:

  • Data Collection: Relevant data is collected from sensors, equipment logs, vehicle diagnostics, telemetry, and other sources. This data includes information about the performance, condition, and behavior of the vehicles, such as engine temperature, fuel consumption, mileage, and more. Telematics systems can provide real-time data on vehicle location, speed, and usage patterns, while maintenance logs record historical maintenance activities, repairs, and part replacements.
  • Data Preprocessing: The collected data is organized and prepared for analysis. Data preprocessing involves cleaning the data by removing outliers or erroneous values, handling missing values through imputation or interpolation, and converting the data into a suitable format for analysis.
  • Feature Engineering: Important features or variables that can provide insights into the health and performance of the vehicles are selected from the collected data. These features can include engine vibration, temperature, fuel consumption, mileage, and more. The feature selection step involves identifying the most relevant features that have a strong correlation with the target variable (e.g., equipment failure). It helps to reduce the dimensionality of the data and improve the model's efficiency and interpretability. The selected features are then transformed to make them more suitable for modelling. This may include techniques such as logarithmic or exponential transformations, scaling, or encoding categorical variables.
  • Model Development: Machine learning algorithms are applied to the selected features to develop predictive models. These models learn from historical data and identify patterns and relationships between various factors and equipment failures. The algorithms used can include regression, decision trees, random forests, neural networks, and more.
  • Model Training and Validation: The developed models are trained using historical data and validated to ensure their accuracy and performance. This involves splitting the data into training and testing sets, evaluating the model's performance metrics, and fine-tuning the model if necessary.
  • Deployment and Monitoring: The trained models are deployed into the predictive maintenance system, which continuously monitors real-time data from sensors and other sources. Telematics systems collect GPS and vehicle-specific data and transmit it to a central server through various channels (cellular network, satellite communication, 4G mobile data, GPRS). The system detects anomalies, recognizes patterns, and provides insights into the health of the vehicles. It can alert maintenance teams when potential issues are detected.
  • Continuous Improvement: The predictive maintenance solution is continuously improved by collecting feedback, monitoring its performance, and updating the models and algorithms as new data becomes available.
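To make the anomaly-detection step above more concrete, here is a minimal sketch of flagging suspicious telemetry readings with a z-score threshold. The data, field names, and threshold are purely illustrative assumptions, not part of any real fleet system:

```javascript
// Flag telemetry readings whose z-score exceeds a threshold.
// Hypothetical sample data; a real system would stream readings from sensors.
function detectAnomalies(readings, threshold = 2.0) {
  const mean = readings.reduce((sum, r) => sum + r, 0) / readings.length;
  const variance =
    readings.reduce((sum, r) => sum + (r - mean) ** 2, 0) / readings.length;
  const stdDev = Math.sqrt(variance);
  return readings
    .map((value, index) => ({
      index,
      value,
      z: stdDev === 0 ? 0 : (value - mean) / stdDev, // distance from the mean in std devs
    }))
    .filter((r) => Math.abs(r.z) > threshold);
}

// Engine temperatures (°C) with one obvious spike at index 4.
const temps = [90, 91, 89, 92, 130, 90, 88];
console.log(detectAnomalies(temps)); // flags only the 130 °C reading
```

A production pipeline would of course use trained models rather than a fixed threshold, but the shape of the computation (score each reading, alert on outliers) is the same.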

Most common problems in deploying predictive maintenance solutions

Implementing predictive maintenance solutions in a fleet of vehicles or in a vehicle factory is a process that requires time, consistency and prior testing. Among the main challenges of rolling out this technology, the following aspects in particular are noteworthy.


Data integration

Integrating data from many sources is a significant barrier to implementing predictive maintenance solutions. To accomplish this with minimal delay and maximum security, it is necessary to streamline the transfer of data from machines to ERP systems. To collect, store, and analyze data from many sources, businesses must have the proper infrastructure in place.

Insufficient data

Lack of data is a major hindrance to implementing predictive maintenance systems. Large amounts of information are needed to develop reliable models for predictive maintenance. Inadequate information might result in inaccurate models, which in turn can cause costly consequences like premature equipment breakdowns or unnecessary maintenance.

To get over this difficulty, businesses should collect plenty of data for use in developing reliable models. They should also check that the data is relevant to the monitored machinery and of high quality. Businesses can utilize digital twins, or digital representations of physical assets, to mimic the operation of machinery and collect data for use in predictive maintenance systems.
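As a rough illustration of the digital-twin idea, the sketch below generates synthetic engine-temperature telemetry for a virtual vehicle whose readings drift upward as simulated wear accumulates. All parameter names and values are hypothetical and uncalibrated; the point is only that a simulated asset can produce training data when real data is scarce:

```javascript
// Minimal digital-twin sketch: simulate engine-temperature telemetry so a
// predictive model has data to train on. Parameters are illustrative only.
function simulateTelemetry(steps, { baseTemp = 90, drift = 0.05, noise = 1.5 } = {}) {
  const samples = [];
  let wear = 0; // accumulated wear gradually pushes temperature up
  for (let t = 0; t < steps; t++) {
    wear += drift;
    const reading = baseTemp + wear + (Math.random() * 2 - 1) * noise;
    samples.push({ t, engineTempC: Number(reading.toFixed(2)) });
  }
  return samples;
}

const data = simulateTelemetry(100);
console.log(data[0], data[99]); // later samples trend warmer as wear accumulates
```

A real digital twin would model far more state (load, RPM, coolant flow), but even a toy simulator like this can be used to sanity-check a detection pipeline before real sensor data arrives.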

Process complexity

Transitioning from preventive to predictive maintenance is complex and time-intensive. It requires comprehensive steps beyond technology, including assembling a skilled team and managing upfront costs. Without qualified experts versed in software and process intricacies, project success is doubtful.

High costs

The implementation of predictive maintenance programs comes with substantial costs. These upfront expenses pose challenges, including the need to invest in specialized sensors for data collection, procure effective data analysis tools capable of managing complexity, and possibly hire or train personnel with technical expertise.

To address these hurdles, collaboration with specialized vendors and the utilization of cloud-based solutions can prove cost-effective. Additionally, digital twin technology offers a way to simulate equipment behavior and minimize reliance on physical sensors, potentially reducing overall expenses.

Privacy and security issues

The implementation of predictive maintenance involves extensive data collection and analysis, which can give rise to privacy concerns. Companies must adhere to applicable data protection laws and regulations, and establish proper protocols to safeguard the privacy of both customers and employees. Even though predictive maintenance data may be anonymized and not directly linked to specific individuals, it still necessitates robust security measures, since preventing data breaches and unauthorized access to vital company information is crucial for overall success.

What are the benefits of predictive maintenance?

Life cycle optimization, stock management, even recycling management: in each of these fields, predictive maintenance can bring substantial benefits. And not only for OEMs, but also for fleet operators, transportation and logistics companies, and even end users.

Below we list the key benefits of implementing predictive maintenance in an automotive-related company:

  • Extended lifespan: Predictive maintenance technology detects early signs of wear and potential malfunctions in vehicle components such as engines, transmissions, and brakes. By addressing these issues proactively, vehicles experience fewer major breakdowns and continue to operate efficiently over a longer period.
  • Cost savings: By addressing issues at an early stage, automotive companies can avoid expensive breakdowns and prevent further damage. This proactive approach not only reduces the need for costly replacement parts but also minimizes the labor and operational costs associated with major repairs, resulting in significant long-term cost savings.
  • Minimized downtime: Through continuous monitoring and analysis, predictive maintenance predicts when maintenance or repairs are needed and schedules them during planned downtime. This minimizes the likelihood of unexpected breakdowns that can disrupt operations and lead to extended periods of vehicle inactivity. By strategically timing maintenance activities, vehicles spend more time on the road.
  • Increased efficiency: Any issues are detected early, enabling timely corrective actions. This proactive approach leads to improved fuel economy, reduced emissions, and overall enhanced efficiency. Vehicles operate at their peak performance, contributing to a more sustainable and environmentally friendly fleet.
  • Enhanced security: Constant monitoring for abnormal vibrations, temperature variations, and fluid leaks ensures that potential issues compromising vehicle safety and security are detected promptly. By addressing these concerns before they escalate, predictive maintenance contributes to ensuring the security of both the vehicle and its occupants. This is particularly valuable in critical applications where reliable vehicle performance is paramount, such as emergency response scenarios.
  • Avoiding over-maintenance: Over-maintaining corporate resources can have the same negative consequences as failing to maintain them on time. With predictive maintenance, you can focus on maintaining crucial resources at the best possible time and with the best possible results.
  • Compliance with required standards and regulations: Laws and regulations related to vehicle production are constantly evolving and pushing OEMs to make numerous production changes (e.g., legislation related to EV production). Predictive maintenance allows you to better meet the new expectations of legislators and monitor the points of production that are most dependent on the legal context.
  • Easier management of parts and materials: As connected-car diagnostic systems become more sophisticated, drivers have the option to make small repairs sooner and keep their vehicles in better condition. All this means that OEMs and licensed repair shops need fewer parts and can better manage supply chains.

Predictive maintenance is clearly not a one-size-fits-all solution for all sectors. Notably, it works well for high production volumes and short lead times, and anywhere you need to ensure reliability, security, and convenience.

The automotive industry is a perfect fit for this model. As shown in the examples featured in the second part of the article, the top players in the market are tapping into this technology.

According to Techsci Research, "The global predictive maintenance market was valued at USD 4.270 billion in 2020 and is projected to grow to around USD 22.429 billion by 2026."

written by Adam Kozłowski and Marcin Wiśniewski
Automotive

Collaboration between OEMs and cloud service providers: Driving future innovations

Collaboration between Cloud Service Providers (CSPs) and Automotive Original Equipment Manufacturers (OEMs) lies at the heart of driving innovation and progress in the automotive industry.

The partnerships bring together the respective strengths and resources of both parties to fuel advancements in software-defined vehicles and cutting-edge technologies.

This article delves into these transformative collaborations, which represent a critical junction facilitating the convergence of automotive engineering and cloud computing technologies.

Why OEMs and Cloud Service Providers cooperate

CSPs are crucial in supporting the automotive industry by providing the necessary cloud infrastructure and services. This includes the computing power, storage capacity, and networking capabilities needed to process the data generated by software-defined vehicles.

On the other hand, OEMs are responsible for designing and manufacturing vehicles, which heavily rely on sophisticated software systems to control various functions, ranging from safety and infotainment to navigation and autonomous driving capabilities. To seamlessly integrate software systems into the vehicles, OEMs collaborate with CSPs, leveraging the power of cloud technologies.

Collaboration between CSPs and automotive companies spans several key areas to elevate vehicle functionality and performance. These areas include:

  • designing and deploying cloud infrastructure to support the requirements of connected and autonomous vehicles
  • handling and analyzing the vast amounts of vehicle-generated data
  • facilitating seamless communication among vehicles and with other devices and systems
  • ensuring data security and privacy
  • delivering over-the-air (OTA) updates swiftly and efficiently for vehicle software
  • testing autonomous vehicle technology through cloud computing

Benefits of collaboration

The benefits of such collaboration are significant, offering continuous software innovation and improved data analysis, as well as reduced time-to-market, cost savings, product differentiation, and a competitive edge for new entrants in the automotive industry.

Cloud services enable automotive companies to unlock new possibilities, enhance vehicle performance, and deliver a seamless driving experience for customers. Moreover, the partnership between CSPs and automotive OEMs has the potential to revolutionize transportation, as it facilitates efficient and seamless interactions between vehicles, enhancing road safety and overall convenience for drivers and passengers.

In terms of collaboration strategies, automotive OEMs have various options, such as utilizing public cloud platforms, deploying private clouds for increased data security, or adopting hybrid approaches that combine the advantages of both public and private clouds. The choice of strategy depends on each company's specific data storage and security requirements.

Real-life examples of cooperation between Cloud Service Providers and automotive OEMs

Several real-life examples demonstrate the successful collaboration between cloud service providers and automotive OEMs. It's important to note that some automotive companies collaborate with more than one cloud service provider, showcasing the industry's willingness to explore multiple partnerships and leverage different technological capabilities.

In the automotive sector, adopting a multi-cloud strategy is common but complicated due to diverse cloud usage. Carmakers employ general-purpose SaaS enterprise applications and cloud infrastructure, along with big data tools for autonomous vehicles and cloud-based resources for car design and manufacturing. They also seek to control software systems in cars, relying on cloud infrastructure for updates and data processing. Let's have a look at how different partnerships with cloud service providers are formed depending on the various business needs.

Microsoft

Mercedes-Benz and Microsoft have joined forces to enhance efficiency, resilience, and sustainability in car production. Their collaboration involves linking Mercedes-Benz plants worldwide to the Microsoft Cloud through the MO360 Data Platform. This integration improves supply chain management and resource prioritization for electric and high-end vehicles.

Additionally, Mercedes-Benz and Microsoft are teaming up to test an in-car artificial intelligence system. This advanced AI will be available in over 900,000 vehicles in the U.S., enhancing the Hey Mercedes voice assistant for seamless audio requests. The ChatGPT-based system can interact with other applications to handle things like making restaurant reservations or purchasing movie tickets, and it will make voice commands more fluid and natural.

Renault, Nissan, and Mitsubishi have partnered with Microsoft to develop the Alliance Intelligent Cloud, a platform that connects vehicles globally, shares digital features and innovations, and provides enhanced services such as remote assistance and over-the-air updates. The Alliance Intelligent Cloud also connects cars to "smart cities" infrastructure, enabling integration with urban systems and services.

Volkswagen and Microsoft are building the Volkswagen Automotive Cloud together, powering the automaker's future digital services and mobility products, and establishing a cloud-based Automated Driving Platform (ADP) using Microsoft's Azure cloud computing platform to accelerate the introduction of fully automated vehicles.

Volkswagen Group's vehicles can share data with the cloud via Azure Edge services. Additionally, the Volkswagen Automotive Cloud will enable the updating of vehicle software.

BMW and Microsoft: BMW has joined forces with Microsoft Azure to elevate its ConnectedDrive platform, striving to deliver an interconnected and smooth driving experience for BMW customers. This collaboration capitalizes on the cloud capabilities of Microsoft Azure, empowering the ConnectedDrive platform with various services, including real-time traffic updates, remote vehicle monitoring, and engaging infotainment features.

In 2019, BMW and Microsoft announced that they were working on a project to create an open-source platform for intelligent, multimodal voice interaction.

Hyundai-Kia and Microsoft joined forces to create advanced in-car infotainment systems. The collaboration began in 2008 when they partnered to develop the next-gen in-car infotainment. Subsequently, in 2010, Kia introduced the UVO voice-controlled system, a result of their joint effort, utilizing Windows Embedded Auto software.

The UVO system incorporated speech recognition to maintain the driver's focus on the road and offered compatibility with various devices. In 2012, Kia enhanced the UVO system by adding a telematics suite with navigation capabilities. Throughout their partnership, their primary goal was to deliver cutting-edge technology to customers and prepare for the future.

In 2018, Hyundai-Kia and Microsoft announced an extended long-term partnership to continue developing the next generation of in-car infotainment systems.

Amazon

The Volkswagen Group has transformed its operations with the Volkswagen Industrial Cloud on AWS. This cloud-based platform uses AWS IoT services to connect data from machines, plants, and systems across over 120 factory sites. The goal is to revolutionize automotive manufacturing and logistics, aiming for a 30% increase in productivity, a 30% decrease in factory costs, and €1 billion in supply chain savings.

Additionally, the partnership with AWS allows the Volkswagen Group to expand into ridesharing services, connected vehicles, and immersive virtual car-shopping experiences, shaping the future of mobility.

The BMW Group has built a data lake on AWS, processing 10 TB of data daily and deriving real-time insights from vehicle and customer telemetry data. The BMW Group utilizes its Cloud Data Hub (CDH) to consolidate and process anonymous data from vehicle sensors and various enterprise sources. This centralized system enables internal teams to access the data to develop customer-facing and internal applications effortlessly.

Rivian, an electric vehicle manufacturer, runs powerful simulations on AWS to reduce the need for expensive physical prototypes. By leveraging the speed and scalability of AWS, Rivian can iterate and optimize its vehicle designs more efficiently.

Moreover, AWS allows Rivian to scale its capacity as needed. This is crucial for handling the large amounts of data generated by Rivian's Electric Adventure Vehicles (EAVs) and for running data insights and machine learning algorithms to improve vehicle health and performance.

Toyota Connected, a subsidiary of Toyota, uses AWS for its core infrastructure on the Toyota Mobility Services Platform. AWS enables Toyota Connected to handle large datasets, scale to more vehicles and fleets, and improve safety, convenience, and mobility for individuals and fleets worldwide. Using AWS services, Toyota Connected managed to increase its traffic volume by a remarkable 18-fold.

Back in April 2019, Ford Motor Company, Autonomic, and Amazon Web Services (AWS) joined forces to enhance vehicle connectivity and mobility experiences. The collaboration aimed to revolutionize connected vehicle cloud services, opening up new opportunities for automakers, public transit operators, and large-scale fleet operators.

During the same period, Ford collaborated with Amazon to enable members of Amazon's loyalty club, Prime, to receive package deliveries in their cars, providing a secure and convenient option for Amazon customers.

Honda and Amazon have collaborated in various ways. One significant collaboration is the development of the Honda Connected Platform, which was built on Amazon Web Services (AWS) using Amazon Elastic Compute Cloud (Amazon EC2) in 2014. This platform serves as a data connection and storage system for Honda vehicles.

Another collaboration involves Honda migrating its content delivery network to Amazon CloudFront, an AWS service. This move has resulted in cost optimization and performance improvements.

Stellantis and Amazon have announced a partnership to introduce customer-centric connected experiences across many vehicles. Working together, Stellantis and Amazon will integrate Amazon's cutting-edge technology and software know-how throughout Stellantis' organization. This will encompass various aspects, including vehicle development and the creation of connected in-vehicle experiences.

Furthermore, the collaboration will place a significant emphasis on the digital cabin platform known as STLA SmartCockpit. The joint effort will deliver innovative software solutions tailored to this platform, and the planned implementation will begin in 2024.

Kia has engaged in two collaborative efforts with Amazon. Firstly, they have integrated Amazon's AI technology, specifically Amazon Rekognition, into their vehicles to enable advanced image and video analysis. This integration facilitates personalized driver-assistance features, such as customized mirror and seat positioning for different drivers, by analyzing real-time image and video data of the driver and the surrounding environment within Kia's in-car system.

Secondly, Kia has joined forces with Amazon to offer electric vehicle charging solutions. This partnership enables Kia customers to conveniently purchase and install electric car charging stations through Amazon's wide-ranging products and services, making the process hassle-free.

Even Tesla, the electric vehicle manufacturer, had collaborated with AWS to utilize its cloud infrastructure for various purposes, including over-the-air software updates, data storage, and data analysis, until the company's cloud account was hacked and used to mine cryptocurrency.

Google

By partnering with Google Cloud, Renault Group aims to achieve cost reduction, enhanced efficiency, flexibility, and accelerated vehicle development. Additionally, they intend to deliver greater value to their customers by continuously innovating software.

Leveraging Google Cloud technology, Renault Group will focus on developing platforms and services for the future of Software Defined Vehicles (SDVs). These efforts encompass in-vehicle software for the "Software Defined Vehicle" Platform and cloud software for a Digital Twin.

The Google Maps platform, Cloud, and YouTube will be integrated into future Mercedes-Benz vehicles equipped with their next-generation operating system, MB.OS. This partnership will allow Mercedes-Benz to access Google's geospatial offering, providing detailed information about places, real-time and predictive traffic data, and automatic rerouting. The collaboration aims to create a driving experience that combines Google Maps' reliable information with Mercedes-Benz's unique luxury brand and ambience.

Volvo has partnered with Google to develop a new generation of in-car entertainment and services. Volvo will use Google's cloud computing technology to power its digital infrastructure. With this partnership, Volvo's goal is to offer hands-free assistance within their cars, enabling drivers to communicate with Google through their Volvo vehicles for navigation, entertainment, and staying connected with acquaintances.

Ford and Google have partnered to transform the connected vehicle experience, integrating Google's Android operating system into Ford and Lincoln vehicles and utilizing Google's AI technology for vehicle development and operations. Google plans to give drivers access to Google Maps, Google Assistant, and other Google services.

Furthermore, Google will assist Ford in various areas, such as in-car infotainment systems, over-the-air updates, and the utilization of artificial intelligence technology.

Toyota and Google Cloud are collaborating to bring Speech On-Device, a new AI product, to future Toyota and Lexus vehicles, providing AI-based speech recognition and synthesis without relying on internet connectivity.

Toyota intends to utilize the vehicle-native Speech On-Device in its upcoming multimedia system. By incorporating it as a key element of the next-generation Toyota Voice Assistant, a collaborative effort between Toyota Motor North America Connected Technologies and Toyota Connected organizations, the system will benefit from the advanced technology Google Cloud provides.

In 2015, Honda and Google embarked on a collaborative effort to introduce the Android platform to cars. Through this partnership, Honda integrated Google's in-vehicle-connected services into their upcoming models, with the initial vehicles equipped with built-in Google features hitting the market in 2022.

As an example, the 2023 Honda Accord midsize sedan has been revamped to include Google Maps, Google Assistant, and access to the Google Play store through the integrated Google interface.

Conclusion

The collaboration between cloud service providers and industries such as automotive has revolutionized the way businesses operate and leverage data. Organizations can enhance efficiency, accelerate technological advancements, and unlock valuable insights by harnessing the power of scalable cloud platforms. As cloud technologies continue to evolve, the potential for innovation and growth across industries remains limitless, promising a future of improved operations, cost savings, and enhanced decision-making processes.

written by Adam Kozłowski and Marcin Wiśniewski
Data platforms
Automotive

Driving success in automotive applications: Data management with MongoDB

MongoDB, a widely used NoSQL document-oriented database, offers developers a powerful solution for modern application development. With its flexible data model, scalability, high performance, and comprehensive tooling, MongoDB enables developers to unlock the full potential of their projects. By leveraging MongoDB's JSON-like document storage and robust querying capabilities, developers can efficiently store and retrieve data, making it an ideal choice for contemporary applications. Read the article to learn about data management with MongoDB.

Flexible data model for adaptability

One of the primary advantages of MongoDB's NoSQL model is its flexible data model, which allows developers to adapt swiftly to changing requirements and evolving data structures. Unlike traditional relational databases that rely on predefined schemas, MongoDB's schema-less approach enables developers to store documents in a JSON-like format. This flexibility allows for easy modifications to data structures without the need for expensive and time-consuming schema migrations.

Consider an automotive application that needs to store vehicle data. With MongoDB, you can store a vehicle document that captures various attributes and information about a specific car. Here's an example of a vehicle document in MongoDB:

[code language="javascript"]
{
  "_id": ObjectId("617482e5e7c927001dd6dbbe"),
  "make": "Ford",
  "model": "Mustang",
  "year": 2022,
  "engine": {
    "type": "V8",
    "displacement": 5.0
  },
  "features": ["Bluetooth", "Backup Camera", "Leather Seats"],
  "owners": [
    { "name": "John Smith", "purchaseDate": ISODate("2022-01-15T00:00:00Z") },
    { "name": "Jane Doe", "purchaseDate": ISODate("2023-03-10T00:00:00Z") }
  ]
}
[/code]

In the above example, each document represents a vehicle and includes attributes such as make, model, year, engine details, features, and a sub-document for owners with their respective names and purchase dates. This flexibility allows for easy storage and retrieval of diverse vehicle data without the constraints of a fixed schema.

Scalability for growing demands

Another key aspect of MongoDB's NoSQL model is its ability to scale effortlessly to meet the demands of modern automotive applications. MongoDB offers horizontal scalability through its built-in sharding capabilities, allowing data to be distributed across multiple servers or clusters. This ensures that MongoDB can handle the increased load as the volume of vehicle data grows by seamlessly distributing it across the available resources.

For instance, imagine an automotive application collecting data from a connected car fleet. As the fleet expands and generates a substantial amount of telemetry data, MongoDB's sharding feature can be employed to distribute the data across multiple shards based on a chosen shard key, such as the vehicle's unique identifier. This allows for parallel data processing across the shards, resulting in improved performance and scalability.

[code language="javascript"]
// Enable sharding on a database
sh.enableSharding("automotive_db");

// Define the shard key as a hash of the vehicle's unique identifier
sh.shardCollection("automotive_db.vehicles", { "_id": "hashed" });
[/code]

In the above example, we enable sharding on the "automotive_db" database and shard the "vehicles" collection using a hash of the vehicle's unique identifier ("_id") as the shard key. This ensures that vehicle data is evenly distributed across multiple shards, allowing for efficient data storage and retrieval as the number of vehicles increases.

Leveraging MongoDB's querying capabilities for automotive data

MongoDB provides a powerful and expressive querying language that allows developers to retrieve and manipulate data easily. MongoDB offers a rich set of query operators and aggregation pipelines to meet your needs, whether you need to find vehicles of a specific make, filter maintenance records by a particular date range, or perform complex aggregations on vehicle data.

Let's explore some examples of MongoDB queries in the context of an automotive application:

[code language="javascript"]
// Find all vehicles of a specific make
db.vehicles.find({ make: "Ford" });

// Find vehicles with maintenance records performed by a specific mechanic
db.vehicles.find({ "maintenanceRecords.mechanic": "John Smith" });

// Retrieve maintenance records within a specific date range
db.vehicles.aggregate([
  { $unwind: "$maintenanceRecords" },
  {
    $match: {
      "maintenanceRecords.date": {
        $gte: ISODate("2022-01-01T00:00:00Z"),
        $lt: ISODate("2022-12-31T23:59:59Z")
      }
    }
  }
]);
[/code]

In the above examples, we use the `find` method to query vehicles based on specific criteria such as make or mechanic. We also utilize the `aggregate` method with aggregation stages like `$unwind` and `$match` to retrieve maintenance records within a particular date range. These queries demonstrate the flexibility and power of MongoDB's querying capabilities for handling various scenarios in the automotive domain.
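To see what these query shapes actually select, here is the equivalent filtering expressed in plain JavaScript over a tiny hypothetical in-memory dataset (the sample vehicles and dates are invented for illustration; a real application would run the queries against a cluster):

```javascript
// Plain-JavaScript equivalents of the queries above, run against a small
// hypothetical in-memory dataset so the behavior is visible without a cluster.
const vehicles = [
  { make: "Ford", maintenanceRecords: [{ mechanic: "John Smith", date: new Date("2022-06-01") }] },
  { make: "Toyota", maintenanceRecords: [{ mechanic: "Jane Doe", date: new Date("2021-03-15") }] },
];

// db.vehicles.find({ make: "Ford" })
const fords = vehicles.filter((v) => v.make === "Ford");

// db.vehicles.find({ "maintenanceRecords.mechanic": "John Smith" })
// matches if ANY array element has that mechanic, like MongoDB's dotted path
const byMechanic = vehicles.filter((v) =>
  v.maintenanceRecords.some((r) => r.mechanic === "John Smith")
);

// $unwind flattens the arrays, $match keeps records in the date range
const recordsIn2022 = vehicles
  .flatMap((v) => v.maintenanceRecords)
  .filter((r) => r.date >= new Date("2022-01-01") && r.date < new Date("2023-01-01"));

console.log(fords.length, byMechanic.length, recordsIn2022.length); // 1 1 1
```

The dotted-path match in particular is worth internalizing: `"maintenanceRecords.mechanic"` matches a document if any element of the array satisfies the condition, which the `some` call mirrors here.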

Optimizing data management with MongoDB

Efficient data management is crucial for maximizing the performance and effectiveness of automotive applications. MongoDB provides various features and best practices to optimize data management and enhance overall system efficiency. This section will explore practical tips and techniques for optimizing data management with MongoDB.

Data compression for large result sets

When dealing with queries that return large result sets, enabling data compression can significantly reduce the time required for data transfer and improve overall performance. MongoDB supports data compression at the wire protocol level, allowing for efficient compression and decompression of data during transmission.

To enable data compression using the MongoDB URI connection string, include the `compressors` option with the desired compression algorithm.

[code language="javascript"]
mongodb+srv://<username>:<password>@<cluster>/<database>?compressors=snappy
[/code]

In the above example, the `compressors` option is set to `snappy`, indicating that data compression using the Snappy algorithm should be enabled. This configuration ensures that data is compressed before being sent over the network, reducing the amount of data transmitted and improving query response times.

Configuring data compression directly within the URI connection string also keeps the setup technology-independent. Whether you are using the MongoDB Node.js driver, the Python driver, or any other implementation, the consistent URI syntax enables seamless use of data compression across drivers. By employing compression through the connection string, automotive applications can optimize data transfer, reduce network latency, and achieve faster query execution and improved system performance, regardless of the programming language or driver in use.

Optimizing read preferences

When it comes to optimizing read preferences in MongoDB for automotive applications, it is crucial to choose wisely based on the specific use case and the trade-offs dictated by the CAP (Consistency, Availability, Partition tolerance) theorem. The CAP theorem states that in a distributed system, achieving all three properties simultaneously is impossible.

In scenarios where data consistency is of utmost importance, opting for the `primary` read preference is recommended. With the `primary` preference, all reads are served exclusively from the primary replica, ensuring strong consistency guarantees. This is particularly valuable in applications where data integrity and real-time synchronization are critical.

However, it's important to recognize that prioritizing strong consistency might come at the cost of availability and partition tolerance. In certain automotive use cases, where read availability and scalability are paramount, it may be acceptable to sacrifice some level of consistency. This is where the `secondaryPreferred` read preference can be advantageous. By configuring `secondaryPreferred`, MongoDB allows reads to be distributed across secondary replicas in addition to the primary replica, enhancing availability and load balancing. Nevertheless, the secondary replicas might lag behind due to replication delays, so reads can return slightly stale data.

In summary, when optimizing read preferences for automotive applications, consider the implications of the CAP theorem and select the read preference that best balances consistency, availability, and partition tolerance for your use case. Prioritize strong consistency with the `primary` preference when real-time data synchronization is vital, and consider the `secondaryPreferred` preference when read availability and scalability are paramount, accepting the possibility of eventually consistent reads.
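As with compression, the read preference can be selected directly in the URI connection string, which keeps the choice driver-independent. A sketch, using the same placeholders as the compression example above:

```
mongodb+srv://<username>:<password>@<cluster>/<database>?readPreference=secondaryPreferred
```

Replacing `secondaryPreferred` with `primary` restores strictly primary reads.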

Utilizing appropriate clients for complex queries

While MongoDB Atlas provides a web-based UI with an aggregation pipeline builder for executing complex queries, there are cases where the web UI does not operate on the full data set and can return partial results. This limitation can arise from factors such as query complexity, data size, or network constraints.

To overcome this limitation and ensure accurate and comprehensive query results, it is recommended to use appropriate clients such as `mongosh` or desktop clients. These clients offer a more interactive and flexible environment for executing complex queries and provide direct access to MongoDB's features and functionality.

Using `mongosh`, for example, allows you to connect to your MongoDB Atlas cluster and execute sophisticated queries directly from the command-line interface. This approach gives you complete control over the execution of your queries and lets you work with large data sets without the limitations imposed by the web-based UI.

Here is an example of using `mongosh` to execute a complex aggregation query:

[code language="javascript"]
// Execute a complex aggregation query
const pipeline = [
  { $match: { make: "Tesla" } },
  { $group: { _id: "$model", count: { $sum: 1 } } },
  { $sort: { count: -1 } }
];
db.vehicles.aggregate(pipeline);
[/code]

Additionally, desktop clients provide a graphical user interface for visualizing query results, exploring data structures, and analyzing query performance. These clients often offer advanced query-building tools, query profiling capabilities, and result visualization options, empowering developers to optimize their queries and gain valuable insights from their automotive data.

Handling large data loads

In automotive applications, dealing with large data loads is common, especially when collecting time-series data from multiple sensors or sources simultaneously. MongoDB provides several features and best practices to handle these scenarios efficiently.

  • Bulk Write Operations: MongoDB offers bulk write operations, which allow you to perform multiple insert, update, or delete operations in a single request. This can significantly improve the performance of data ingestion by reducing network round trips and server-side processing overhead. By batching your write operations, you can efficiently handle large data loads and optimize the insertion of time-series data into the collection.
  • Indexing Strategies: Efficient indexing is crucial for handling large data loads and enabling fast queries in MongoDB. When designing indexes for your automotive application, consider the specific queries you'll perform, such as retrieving data based on vehicle models, sensor readings, or other relevant fields. Properly chosen indexes can significantly improve query performance and reduce the time required to process large data loads.
  • Parallel Processing: In scenarios where you need to handle massive data loads, parallel processing can be beneficial. MongoDB allows you to distribute data ingestion tasks across multiple threads or processes, enabling concurrent data insertion into the collections. By leveraging parallel processing techniques, you can take advantage of the available computing resources and speed up the data ingestion process.
  • Connection Pooling: Establishing a connection to the MongoDB server for each data load operation can introduce overhead and impact performance. To mitigate this, MongoDB provides connection pooling, which maintains a pool of open connections to the server. Connection pooling allows efficient reuse of connections, eliminating the need to establish a new connection for every operation. This can significantly improve the performance of large data loads by reducing connection setup overhead.
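To make the first point concrete, here is a minimal, driver-free Python sketch of the batching idea behind bulk writes. The batch size and document shape are invented for the example; with a real driver, each batch would be sent as a single bulk call (for example, one `insert_many`) instead of thousands of individual inserts.

```python
def batched(docs, batch_size):
    """Yield successive fixed-size batches from a list of documents."""
    for start in range(0, len(docs), batch_size):
        yield docs[start:start + batch_size]

# Hypothetical telemetry documents for a fleet of 2,500 vehicles.
telemetry = [{"vehicle_id": f"VIN-{n:04d}", "fuel_level": 0.5} for n in range(2500)]

# Each batch would be one bulk write request, reducing network round trips
# from 2,500 (one per document) down to 3 (one per batch).
batches = list(batched(telemetry, 1000))
print(len(batches))  # 3 batches: 1000 + 1000 + 500 documents
```

The right batch size is a trade-off: larger batches mean fewer round trips but more memory per request, so it is worth measuring against your own ingestion workload.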

Conclusion

MongoDB, a leading NoSQL document-oriented database, provides a versatile data management solution for the automotive industry. Its flexible data model allows developers to adapt swiftly to changing requirements and evolving data structures without the need for expensive schema migrations. With scalable sharding capabilities, MongoDB handles the growing demands of modern automotive applications, ensuring efficient data storage and retrieval as the volume of vehicle data increases. Leveraging MongoDB's powerful querying language, developers can easily retrieve and manipulate automotive data with rich query operators and aggregation pipelines. By optimizing data management techniques such as data compression, read preferences, appropriate client usage, and efficient handling of large data loads, MongoDB empowers automotive applications with enhanced performance and scalability.

But our exploration doesn't stop here. In the next part of this article, we will delve into MongoDB's time-series and change stream features, uncovering how they further enhance the capabilities of automotive applications. Stay tuned for the second installment, where we will discover even more ways to drive success in automotive applications with MongoDB. Together, we will unlock the full potential of MongoDB's advanced features and continue shaping the future of data management in the automotive industry.

written by
Daniel Bryła
Software development

How to manage an M5Stack Core2 for AWS. Part 2 – C/C++

The first article discussed M5Stack management based on the Micropython language. Now we need to dive much deeper into the rabbit hole: let's try to use C and C++ only. The most important advantage of using C is the possibility of full, low-level control of all controller aspects. The most important disadvantage of using C is the necessity of full, low-level control of all controller aspects. Well… with great power comes great responsibility.

FreeRTOS

AWS FreeRTOS is a real-time operating system dedicated to the AWS cloud and resource-constrained devices. There is a lot of code to write this time, so we'll use an example directly from AWS. There is no need to burn any firmware with the burning tool; however, we still need to pass the USB port to the WSL environment using usbip, as we've done in the "Micropython" section of the first chapter.

You can download the code we're going to use from https://github.com/m5stack/Core2-for-AWS-IoT-EduKit.git. The only subdirectory we need is Blinky-Hello-World, but the repository is really small, so using a sparse checkout is pointless, and you can simply clone the entire repo.

Open VSCode and install a plugin called PlatformIO. There is a bug in PlatformIO that prevents you from seeing any files from your WSL environment in the PlatformIO browser (Windows WSL: I can't open any files (projects) in the PIO open browser. · Issue #2316 · platformio/platformio-home · GitHub). To fix it, close VSCode, edit the ~/.platformio/packages/contrib-piohome/main.*.min.js file in Windows, replace "\\": "/" with "/": "/", and open VSCode again.

To verify the connection between PlatformIO and your controller, open PlatformIO from the leftmost menu and then pick "Devices" from the main left menu. You should see /dev/ttyUSB0 in the center part of the screen. Please remember to pass the USB device to WSL using usbip and to allow all users to use the port with chmod.

If everything looks good so far, you can open the Blinky-Hello-World directory (not the entire cloned repository) as a project from the PlatformIO home screen. Now you can follow the essential elements of the official instructions provided below.

You need AWS CLI v2 installed on your machine. If you don't have it, you can install it using the official manual: Installing or updating the latest version of the AWS CLI - AWS Command Line Interface (amazon.com). Now ensure you have a valid token and can interact with your AWS account using the CLI (I propose listing some resources as verification, e.g., aws s3 ls).

We will use the built-in script to create a Thing in AWS IoT. Just open a terminal using PlatformIO (a standard bash terminal won't work, so you need to open it from Miscellaneous -> New Terminal in the main PlatformIO menu in VSCode), make sure you're in the Blinky-Hello-World directory, and run pio run -e core2foraws-device_reg -t register thing. The script will create the Thing and download the necessary certificate/key files. You can do it manually if you don't trust such scripts; however, this one is created by the AWS team, so I believe it's trustworthy.

In the AWS IoT console, go to Manage -> All devices -> Things and see the new Thing created by the script. The Thing name is autogenerated; in my case, it's 0123FAA32AD40D8501.

OK, the next step is to allow the device to connect to the Internet. There is another script to help you with this task. Call pio run --environment core2foraws --target menuconfig. You'll see a simple menu. Navigate to AWS IoT EduKit Configuration and set up WiFi SSID and WiFi Password. Be aware that your network's SSID and password will now be stored as plaintext in a few files in your code.

Let's build the application. Just call pio run --environment core2foraws from the PlatformIO terminal and then pio run --environment core2foraws --target upload --target monitor to run it on your device and monitor the logs.

Now you can use the MQTT test client from the AWS IoT console to send anything to the <<thing name>>/blink topic. In my case, it's 0123FAA32AD40D8501/blink. The message payload doesn't matter for this example. Just send something to start blinking and send anything again to stop it.

As you can see, we have done a lot just to communicate between the AWS Cloud and the controller. It was much simpler with Micropython, and even more so with UiFlow. However, C is much more powerful, and what's most important here, we can extend it with libraries.

TensorFlow Lite for Microcontrollers

TensorFlow is an end-to-end open-source platform for machine learning. TensorFlow Lite is a library for deploying models on mobile, microcontrollers, and other edge devices. TensorFlow Lite for Microcontrollers is a lightweight version of TensorFlow Lite designed to run machine learning models on microcontrollers and other devices with only a few kilobytes of memory. The core runtime fits in 16 KB on an Arm Cortex M3 and can run many basic models. It doesn't require operating system support, any standard C or C++ libraries, or dynamic memory allocation.

TensorFlow Lite is not designed to work on ESP32 processors, so the only option available for the M5Stack is TensorFlow Lite for Microcontrollers. It has some limitations: it supports just a limited subset of TensorFlow operations and devices, it requires manual memory management in the low-level C++ API, and it doesn't support on-device training. Therefore, to build a "learning at the edge" solution, you need a more powerful IoT Edge device, e.g., a Raspberry Pi. But you can still run ML models on the M5Stack controller.

Now, let's try to modify our Blinky-Hello-World to add the TensorFlow Lite for Microcontrollers library.

TensorFlow Lite for Microcontrollers in FreeRTOS

The first issue to solve is where to get the TensorFlow source code. The main TensorFlow repository states that the code has moved to a standalone one (https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/micro), even though most of the documentation and examples still point there. The standalone repository used to contain a makefile for ESP32, but it seems to have been deleted when the examples were moved to yet another repository (https://github.com/tensorflow/tflite-micro/commit/66cfa623cbe1c1ae3fcc8a4903e9fed1a345548a). Today, the best source seems to be this repository: https://github.com/espressif/tflite-micro-esp-examples/tree/master/components.

We'll need tflite-lib, but it doesn't work without esp-nn, so you should copy both to the components directory in your Blinky-Hello-World project.

Let's modify our code, starting by including the TensorFlow headers at the beginning of the main.c file.

[code]
#include "tensorflow/lite/micro/all_ops_resolver.h"
#include "tensorflow/lite/micro/micro_error_reporter.h"
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/schema/schema_generated.h"
[/code]

Now we can try to use it. For example, just before the void app_main() function, let's declare a TF error reporter and use it in the function.

[code]
tflite::MicroErrorReporter micro_error_reporter;
tflite::ErrorReporter* error_reporter = &micro_error_reporter;

void app_main()
{
    Core2ForAWS_Init();
    Core2ForAWS_Display_SetBrightness(80);
    ui_init();

    TF_LITE_REPORT_ERROR(error_reporter,
                         "Hello TensorFlow "
                         "This is just a test message\n");

    initialise_wifi();
    xTaskCreatePinnedToCore(&aws_iot_task, "aws_iot_task", 4096 * 2, NULL, 5, NULL, 1);
    xTaskCreatePinnedToCore(&blink_task, "blink_task", 4096 * 1, NULL, 2, &xBlink, 1);
}
[/code]

Obviously, this isn't real TensorFlow usage, but it proves the library is linked and can be used however you need.

In the main directory, you must also add the new libraries tflite-lib and esp-nn to the required components in CMakeLists.txt.

[code]
set(COMPONENT_REQUIRES "nvs_flash" "esp-aws-iot" "esp-cryptoauthlib" "core2forAWS" "tflite-lib" "esp-nn")
[/code]

It looks good, but it won't work yet. During compilation with pio run --environment core2foraws, you'll find out that the entire Blinky-Hello-World is written in pure C, while the TensorFlow Lite for Microcontrollers library requires C++. The easiest way to convert it is as follows:

  1. Rename main.c to main.cc
  2. Change main.c to main.cc in the first line of main/CMakeList.txt
  3. Create extern "C" {} section for the entire main file code except for tensorflow imports.

It should look something like this:

[code]
#include "tensorflow/lite/micro/all_ops_resolver.h"
#include "tensorflow/lite/micro/micro_error_reporter.h"
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/schema/schema_generated.h"

extern "C" {

// ###### original main.c content goes here ######

tflite::MicroErrorReporter micro_error_reporter;
tflite::ErrorReporter* error_reporter = &micro_error_reporter;

void app_main()
{
    // the main function code from the listing above
}

}
[/code]

  4. In main.cc, delete the TaskHandle_t xBlink; declaration because it's already declared in another file.
  5. In platformio.ini, in the [env:core2foraws] section, add build_flags = -fpermissive to change permissive compilation errors into warnings.

Now you can build the project again. When running it with the monitor target, you'll see the "Hello TensorFlow" message in the logs, which means the TensorFlow library is included and working correctly. Now you can do whatever you want with an out-of-the-box machine learning library and AWS integration.

Arduino

As you can see, C is much more powerful but requires much more work. Let's try to connect the same building blocks (TensorFlow, AWS IoT, and the M5Stack library) using a more user-friendly environment. Arduino is an open-source electronic prototyping platform enabling users to create interactive electronic objects. Let's try to combine the official M5Stack Core2 for AWS Arduino IDE manual (https://docs.m5stack.com/en/quick_start/core2_for_aws/arduino) with TensorFlow Lite for Microcontrollers (https://github.com/tanakamasayuki/Arduino_TensorFlowLite_ESP32).

Hello world!

First, install the Arduino IDE from the official page: https://www.arduino.cc/en/software. I assume you already have the CP210x driver installed and the USB mode selected on your device.

Open the IDE, go to File -> Preferences, and add the boards' management URL: https://m5stack.oss-cn-shenzhen.aliyuncs.com/resource/arduino/package_m5stack_index.json. Then open the Boards manager from the left menu and install M5Stack-Core2. Now connect the controller to the computer and choose it from the top drop-down menu.

To use the M5Stack-specific library in the code, you need to open Sketch -> Include Libraries -> Library catalog and install M5Core2.

Now you can write the simple "Hello World!" code and run it with the green arrow in the IDE top menu.

[code]
#include <M5Core2.h>

void setup()
{
    M5.begin();
    M5.Lcd.print("Hello World");
}

void loop() {}
[/code]

Sometimes Arduino cannot reset the controller via the RTS pin, so you need to reboot it manually after writing new code to it. So far, so good.

TensorFlow and AWS integration

The Arduino_TensorFlowLite_ESP32 library is not designed to be used with the M5Stack, so let's adapt it. Clone the library and copy the Hello World example to another directory; you can now open it from the Arduino IDE. It's a fully working example of using a TensorFlow model. To adapt it to the M5Core2 library, add #include <M5Core2.h> at the beginning of hello_world.ino and M5.begin(); at the beginning of the void setup() function. You can also add M5.Axp.SetLed(true); after this line to turn on the small green LED and ensure the device is running.

Now, start the application. You can see the TensorFlow output in the Serial Monitor tab; just change the baud rate to 115200 to make it human-readable.

Can we mix it with AWS IoT integration? Yes, we can. We will use the PubSubClient library by Nick O'Leary, so open the library catalog in the Arduino IDE and install it, and then let's connect to AWS IoT and MQTT.

Using the Arduino IDE, create a new file secrets.h. We need a few declarations there:

[code]
#define AWS_IOT_PUBLISH_TOPIC "m5stack/pub"
#define AWS_IOT_SUBSCRIBE_TOPIC "m5stack/sub"

#define WIFI_SSID "ThisIsMyWiFiSSID"
#define WIFI_PASSWORD "Don't use so easy passwords!"

int8_t TIME_ZONE = 2;

#define MQTT_HOST "xxxx.iot.eu-west-1.amazonaws.com"
#define THINGNAME "UiFlow_test"

static const char* ca_cert = R"KEY(-----BEGIN CERTIFICATE-----
…
-----END CERTIFICATE-----)KEY";

static const char* client_cert = R"KEY(-----BEGIN CERTIFICATE-----
…
-----END CERTIFICATE-----)KEY";

static const char* privkey = R"KEY(-----BEGIN RSA PRIVATE KEY-----
…
-----END RSA PRIVATE KEY-----)KEY";
[/code]

AWS_IOT_PUBLISH_TOPIC and AWS_IOT_SUBSCRIBE_TOPIC are the test topics we're going to use in this example. WIFI_SSID and WIFI_PASSWORD are our WiFi credentials. TIME_ZONE is the time zone offset. MQTT_HOST is the public AWS IoT endpoint (the same as in the first UiFlow example). THINGNAME is the name of the Thing in AWS (I've used the same as in the UiFlow example). client_cert and privkey need to be copied from the secrets generated when creating the Thing for the UiFlow example. ca_cert is the public key of the AWS certificate authority, so you can obtain it from the Thing creation wizard (certificate step) or from https://good.sca1a.amazontrust.com/.

Now it's time to adapt the main hello_world.ino file. We should add new imports (including our secrets.h file).

[code]
#include <WiFiClientSecure.h>
#include <PubSubClient.h>
#include "secrets.h"
#include <time.h>
[/code]

Then we need a few new fields.

[code]
WiFiClientSecure net;
PubSubClient client(net);
time_t now;
time_t nowish = 1510592825;
[/code]

The field nowish is just some timestamp in the past.

In the setup() function, we need to open a WiFi connection to our local network and the Internet, set up the time to check certificates, install the certificates, set up the MQTT client, and open the AWS IoT connection.

[code]
delay(3000);
WiFi.mode(WIFI_STA);
WiFi.begin(WIFI_SSID, WIFI_PASSWORD);
WiFi.waitForConnectResult();
while (WiFi.status() != WL_CONNECTED)
{
    Serial.print(".");
    delay(1000);
}
M5.Lcd.println(String("Attempting to connect to SSID: ") + String(WIFI_SSID));
M5.Lcd.println(WiFi.localIP());

M5.Lcd.print("Setting time using SNTP");
configTime(TIME_ZONE * 3600, 0 * 3600, "pool.ntp.org", "time.nist.gov");
now = time(nullptr);
while (now < nowish)
{
    delay(500);
    Serial.print(".");
    now = time(nullptr);
}
M5.Lcd.println("done!");
struct tm timeinfo;
gmtime_r(&now, &timeinfo);
M5.Lcd.print("Current time: ");
M5.Lcd.print(asctime(&timeinfo));

net.setCACert(ca_cert);
net.setCertificate(client_cert);
net.setPrivateKey(privkey);
client.setServer(MQTT_HOST, 8883);
client.setCallback(messageReceived);

M5.Lcd.println("Connecting to AWS IOT");
while (!client.connect(THINGNAME))
{
    Serial.print(".");
    delay(1000);
}
if (!client.connected())
{
    M5.Lcd.println("AWS IoT Timeout!");
    return;
}
client.subscribe(AWS_IOT_SUBSCRIBE_TOPIC);
M5.Lcd.println("AWS IoT Connected!");
[/code]

This is the entire code needed to set up the application, but I propose splitting it into multiple smaller and more readable functions. As you can see, I use the Serial output for debugging.

To receive messages, we need a new function (the name matches the declaration in client.setCallback(messageReceived);).

[code]
void messageReceived(char *topic, byte *payload, unsigned int length)
{
    M5.Lcd.print("Received [");
    M5.Lcd.print(topic);
    M5.Lcd.print("]: ");
    for (int i = 0; i < length; i++)
    {
        M5.Lcd.print((char)payload[i]);
    }
    M5.Lcd.println();
}
[/code]

The last thing to do is to loop the client with the entire application. To do that, just add a one-liner to the loop() function:

[code]
client.loop();
[/code]

You need another one-liner to send something to AWS, but I've added two more to make it visible on the controller's display.

[code]
M5.Lcd.println("Sending message");
client.publish(AWS_IOT_PUBLISH_TOPIC, "{\"message\": \"Hello from M5Stack\"}");
M5.Lcd.println("Sent");
[/code]

The communication works both ways. You can subscribe to m5stack/pub using the MQTT test client in the AWS console to read messages from the controller, and you can publish to m5stack/sub to send messages to the controller.

As you can see, using Arduino is easier than using FreeRTOS, but unfortunately, it's a little bit babyish. Now we'll try to avoid all IDEs and use the pure console only.

Espressif IoT Development Framework

Basically, there are three ways to burn software to the controller from a Linux console: Arduino, esptool.py, and ESP-IDF. When you create a new project using PlatformIO, you can pick Arduino or ESP-IDF. Now let's try to remove the IDE from the equation and use pure bash.

First of all, you need to install a few prerequisites, and then download and install the library.

[code]
sudo apt install git wget flex bison gperf python3 python3-venv cmake ninja-build ccache libffi-dev libssl-dev dfu-util libusb-1.0-0
mkdir -p ~/esp
cd ~/esp
git clone --recursive https://github.com/espressif/esp-idf.git
cd ~/esp/esp-idf
./install.sh esp32
./export.sh
[/code]

Please note that you need to run the install and export scripts (the last two commands) whenever you open a new WSL console. Some examples are downloaded along with the library. Run one of them to check that everything works.

[code]
cd examples/get-started/hello_world/
idf.py set-target esp32
export ESPPORT=/dev/ttyUSB0
idf.py build flash monitor
[/code]

You should see output like this:

[code]
Hello world!
This is an esp32 chip with 2 CPU core(s), WiFi/BT/BLE, silicon revision 300, 2MB external flash
Minimum free heap size: 295868 bytes
Restarting in 10 seconds...
Restarting in 9 seconds...
Restarting in 8 seconds...
Restarting in 7 seconds...
Restarting in 6 seconds...
Restarting in 5 seconds...
Restarting in 4 seconds...
Restarting in 3 seconds...
Restarting in 2 seconds...
Restarting in 1 seconds...
Restarting in 0 seconds...
Restarting now.
[/code]

To stop the serial port monitor, press CTRL + ]. Be aware that the application is still running on the controller; you need to power off the device with the hardware button on the side to stop it.

If you want to use TensorFlow Lite for Microcontrollers with ESP-IDF, you need to create a new project and add the proper library. You can use the command idf.py create-project <<project_name>> to create a project. My project name is hello_tf. The script creates a pure C project, so we need to rename the hello_tf.c file to hello_tf.cc. Then we can copy the tflite-lib and esp-nn libraries from the FreeRTOS example and place them in the components directory. The main/CMakeLists.txt content should look like this:

[code]
set(COMPONENT_SRCS "hello_tf.cc")
set(COMPONENT_REQUIRES "tflite-lib" "esp-nn")
register_component()
[/code]

As you can see, the default component sources definition is changed, and the new libraries are added. Now let's see the main hello_tf.cc file content.

[code]
#include "tensorflow/lite/micro/all_ops_resolver.h"
#include "tensorflow/lite/micro/micro_error_reporter.h"
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/schema/schema_generated.h"

extern "C" {

tflite::MicroErrorReporter micro_error_reporter;
tflite::ErrorReporter* error_reporter = &micro_error_reporter;

void app_main(void)
{
    TF_LITE_REPORT_ERROR(error_reporter, "Hello from TensorFlow\n");
}

}
[/code]

As you can see, we had to use the extern "C" block again because, by default, ESP-IDF runs the void app_main() function from a C, not C++, context. To run the application, run idf.py build flash monitor.

In the same way, you can add any other libraries you need, but without PlatformIO, dependency management is tricky, especially for the core2forAWS library with its multiple dependencies. Alternatively, you can use https://github.com/m5stack/M5Stack-IDF as a library with M5Stack dependencies to control the I/O devices of the controller.

Summary

As I wrote at the beginning of this article, with C++ you can do much more; however, you are forced to manage the entire device by yourself. Yes, you can use AWS integration, the M5Stack I/O interfaces, and the TensorFlow library (in its TensorFlow Lite for Microcontrollers version) together, but it requires a lot of code. Can we join the advantages of Micropython and C? Let's try to do it in the last chapter.

written by
Damian Petrecki
Our experts

How to ensure business continuity and workplace readiness in case of unexpected events

While the ongoing COVID-19 outbreak is affecting millions of people and causing numerous disruptions to the global economy, technology companies can take significant steps to assure business continuity for their employees and stakeholders. This demanding period is also a test of company policies and may lead to lasting changes in the way we work and run projects.

When the whole world stops to slow the spread of COVID-19 and various industries suffer due to the lockdown, technology companies should focus on providing their services to help those who are on the front line of the crisis and to help the global economy recover from the pandemic's unpleasant consequences. This is a time that puts to the test strategies and preparations for working entirely remotely, often without physical access to office buildings, while still delivering services at the highest level.

We share with you what we have done to prepare for a situation in which our entire team has to work remotely while delivering services for companies located around the globe. We asked several of our colleagues - from IT and people operations to project managers and developers - how they contributed to business continuity planning and what it is like to work from home these days. And while the outbreak is a serious danger, we have to learn from the situation and do our homework to minimize issues in the future, as no one can guarantee that something similar won't happen again.

Providing a toolset, remote access, and office coordination while the entire team is distributed

The last three weeks have shown that agile companies - those building distributed teams and using cloud technologies with remote access to the proper tools - are able to adjust to a fully remote model of work much more easily. The current emergency has cut short the numerous discussions questioning the necessity of moving enterprises to the cloud, providing employees with mobile workstations, and planning for periods when a company has to operate without physical access to the infrastructure located in its headquarters. Businesses that embraced such a strategic business continuity plan avoided chaotic operations and disruptions in service delivery.

Here we dive into the list of things necessary to keep the business going:

While a fully equipped workstation that enables employees to work effectively and focus on their tasks seems obvious, it becomes even more critical when you have to back up developers and designers with high-performance devices needed to run more sophisticated software. So whenever you are planning your purchasing, take into account that the devices you’re buying may have to be used for weeks in domestic conditions.

To make sure that members of your team can smoothly move to remote work and communicate flawlessly with their peers and your customers, you should use tools that accelerate collaboration and simplify access to other people. Our typical toolset consists of Slack, Zoom, Dropbox, Office 365 - including remote access to the mailbox - and Jira. It can be extended according to a given team's needs.

Nowadays, we have to prepare to onboard and gear up our employees remotely. How does it work at Grape Up? We send a full package that consists of a laptop with the entire system configured and equipped with VPN access and the tools needed to start the job, headphones, a monitor, and a keyboard.

A VPN is now obligatory in order to allow everyone at the company to use databases, internal systems, network drives, and knowledge management platforms. As many people need flexible access to these resources, it’s highly recommended to use the VPN on a daily basis, verify how it works, and stop thinking of it as something needed only in emergency circumstances - it’s now the new normal. Among other important advantages, a VPN helps your company with security, provided you manage access properly and monitor for attack attempts.

Current circumstances and uncertainty may lead to an increase in scams and phishing. And while a VPN and the technologies we use increase our safety, we have to remember that proper communication can enhance security even more. It’s your job to make everyone aware of what they may face and how to handle it.

To sum up, in order to ensure that your business operations and service delivery perform impeccably in case of emergency, you need to prepare your company to work without physical access to your headquarters. It’s also fundamental to protect your business with the right backups in case of worst-case scenarios.

And here appears one of the most challenging things - the human factor. Make sure that your firm applies the right policies to tear down silos, so that if someone is unreachable or facing an emergency, there is another person with the knowledge and access rights to take over that role.

Office management in a remote mode

How does shifting to work from home impact office management? In modern and agile organizations, office coordination is often done remotely, as many teams run projects in various locations. A situation like the one happening right now shows that it is essential to provide your employees with mobile and flexible workstations. Being responsible for office management in a time of going fully remote means ensuring that every workspace is safe and well protected against any fraud trying to take advantage of the demanding circumstances.

By coordinating all the supplies and reducing things that are not needed when the whole team won’t be on-site for an unknown period of time, a company can achieve impressive cost savings. It is also important to have a plan to make all your workplaces ready to reopen when the situation changes, so your employees can easily get back on track.

People operations, continuous engagement and remote learning

Security and taking care of the entire team is the number one priority. In a business that can easily be run remotely, working from home is the best-case scenario. Companies that create a culture that empowers people to work independently, values open communication through various channels, and encourages engagement even in challenging conditions can avoid disruptions in their services.

How do we do it at Grape Up? Our company's culture is built on openness and collaboration - we value our weekly Lunch & Learn sessions, designed to grow together and spend some time building relationships. The key here is to think of it as a long-term process, not as a scenario for a demanding time.

While working remotely and willing to develop their skills, employees need well-documented resources - internal wikis, tutorials, guides, and knowledge bases. At Grape Up, we promote learning by tackling real problems together and the approach of “try, and I will assist you” over “I will tell you how to do it”. Our people continue helping each other develop their skills, even when pair programming is done from a distance.

Key to business continuity - service delivery management

Project Managers, Product Owners, Scrum Masters, and Service Delivery Managers play a vital role in providing business continuity and ensuring that customers are satisfied with the services, projects develop in the right direction, and the whole team is engaged and has all the tools to work comfortably.

According to the leaders of our project teams, their job, more than ever, comes down to making sure that everyone is on the same page. How do they achieve it? By simplifying communication. Following the progress and letting everyone know how things stand during daily calls helps sustain engagement and chase common goals. But it's also important to do it carefully - spending a significant part of the day on calls and video meetings may have the opposite effect.

So while many things are similar to the typical working routine, what has changed? Pair programming is quite challenging now. To deal with it, we have worked out a trade-off: developers work in pairs (remotely, of course) for half of the day and by themselves for the other half.

As our management team often emphasizes, the situation requires more empathy and understanding, both for customers and colleagues. Many people feel confused, and some may be affected or overwhelmed - it’s extremely important to be honest, to inform about possible obstacles and inconveniences, to improve what is possible, and to anticipate potential difficulties.

Coding from home

Working from home while being responsible for providing services that are crucial for many companies' existence is nothing new to the development teams. What do they need to focus on to build solutions that empower entire industries to move forward?

First of all, a company that intends to perform well in a remote, distributed mode has to start with creating a culture that supports collaborative relationships between project members and customer representatives. Understanding, trust, and open communication are the foundations of every fruitful cooperation. They are extremely valuable when you cannot work face to face or take time to get to know each other in a typical environment.

This leads us to the second thing - engagement. Teams that value creativity, encourage people to care about projects, and motivate others to actively chase project goals can achieve impressive results, even when the circumstances are difficult and communication among members is limited to online channels.

In highly demanding situations, being responsive and always open to helping your customers, both with planned tasks and with extraordinary issues, builds a special bond and gives your business partners confidence that you will assist their teams even when things are getting worse.

While working remotely, communication that enables asking questions and diving into complicated topics is the most effective way to avoid misunderstandings, especially when it comes to task requirements and problem analysis. The role of a company leader should be focused on building a culture that supports dialogue and transparency - it has never been more important to talk about challenges, faced issues, and daily work. Every member of a team can help make work more effective by sharing their experiences.

Along with the set of tools described above, the development teams use two extremely useful apps: Pointing Poker, for estimating tasks in the browser, and Mural, for creating a board of good and bad experiences during a retrospective.

It’s time for agile companies to provide business continuity and help the global economy recover

By moving to a remote work mode, we can all help our authorities fight the spread of COVID-19. The safety of employees and their families is a priority for enterprises that feel responsible for the people who build their organizations. This crisis is reshaping the global economy and affecting numerous industries. Agile companies that are designed to adjust easily to changing conditions and can provide business continuity during difficult times empower their partners to mitigate the struggles and recover.

written by
Szymon Kozak
Software development

How to manage an M5Stack Core2 for AWS. Part 3 – best of Micropython and C meld together

In the first part of the article, we covered Micropython usage with the UiFlow and VS Code environments. In the second one, we used C/C++ for more granular device control. This time, we’ll try to run Micropython with an external library added.

Micropython with interactive console

There are two projects that enable Micropython for the ESP32 to run external libraries. The first one is the official M5Stack project https://github.com/m5stack/Core2forAWS-MicroPython, and the second one is the unofficial https://github.com/mocleiri/tensorflow-micropython-examples. The common part is that you can run an interactive Micropython console on the controller over the serial port. Unfortunately, this is the only way to go - there is no IDE, and you can’t upload a complex, multi-file application.

To open the console, you can simply use Putty and connect to the proper COM port (COM3, in my case) at 115200 baud.

To run the first project, the best way is to follow the official README documentation, but there is a bug in the code here:

https://github.com/m5stack/Core2forAWS-MicroPython/blob/master/ports/esp32/makelfs2.py#L20 .

One file is opened in ‘w’ (write) mode and another in ‘rb’ (read bytes). You need to change ‘w’ to ‘wb’ to run any example from the README. It’s a good codebase because it’s small and officially maintained by M5Stack. It contains upip, so you can install more official libraries after connecting to the Internet. You can also extend the codebase with more libraries before the build (some extra libraries are available in another official repository: https://github.com/m5stack/micropython-lib ). However, TensorFlow is a complex library with multiple dependencies, so using the unofficial project is easier.
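To see why the mode matters, here is a minimal CPython sketch (not the makelfs2.py code itself) showing that writing bytes through a text-mode handle fails with a TypeError, while ‘wb’ round-trips the bytes intact - exactly the class of bug behind this fix:

```python
import os
import tempfile

data = b"\x00\x01\xffbinary payload"
path = os.path.join(tempfile.mkdtemp(), "out.bin")

# Writing bytes to a handle opened in 'w' (text) mode raises TypeError.
try:
    with open(path, "w") as f:
        f.write(data)
    text_mode_failed = False
except TypeError:
    text_mode_failed = True

# In 'wb' (write bytes) mode, the same data round-trips untouched.
with open(path, "wb") as f:
    f.write(data)
with open(path, "rb") as f:
    assert f.read() == data

print(text_mode_failed)  # True
```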

The Tensorflow Micropython Examples project offers pre-built images that can be downloaded directly from GitHub. For our controller, you need the ESP32 version (not ESP32-S3) for 16MB memory.

Just open the GitHub Actions page https://github.com/mocleiri/tensorflow-micropython-examples/actions/workflows/build_esp32.yml , pick the newest green build, and download the artifact.

Micropython and C++

Then extract the zip package and burn it using the command ‘esptool.py -p /dev/ttyUSB0 -b 460800 --before default_reset --after hard_reset --chip esp32 write_flash --flash_mode dio --flash_size detect --flash_freq 40m 0x1000 bootloader/bootloader.bin 0x8000 partition_table/partition-table.bin 0x10000 micropython.bin’. Ensure you have passed the USB port to WSL using usbip and allowed all users to use it with chmod (see the first part for details).

With this project, you can import the microlite library, which is exactly the one you need - TensorFlow Lite for Microcontrollers. If you want to extend this project with your own libraries, you can add those to the cloned source code and build it with the following commands:

git submodule init

git submodule update --recursive

cd micropython

git submodule update --init lib/axtls

git submodule update --init lib/berkeley-db-1.xx

cd ..

source ./micropython/tools/ci.sh && ci_esp32_setup_helper v4.3.1

source ./esp-idf/export.sh #as you can see, esp-idf is already included

pip3 install Pillow

pip3 install Wave

rm -rf ./micropython-modules/microlite/tflm

cd ./tensorflow

../micropython-modules/microlite/prepare-tflm-esp.sh

cd ../micropython

make -C mpy-cross V=1 clean all

cd ../boards/esp32/MICROLITE_SPIRAM_16M

rm -rf build

idf.py clean build

Your binaries are located under the boards/esp32/MICROLITE_SPIRAM_16M/build directory.

This way, you can add more libraries or your own program to the image, but you still need to use the interactive console to run it.

Summary

This three-part workshop aimed to utilize the M5Stack Core2 for AWS controller with an AWS IoT connection and ML included. Let’s try to sum up all the possible ways to do that.

Micropython

  •     UiFlow (Python) – Pros: low learning curve, easy UI; OTA updates. Cons: no TensorFlow available; not adequate for complex use cases.
  •     VS Code with the vscode-m5stack-mpi plugin (Python) – Pros: full IDE (Visual Studio Code). Cons: no TensorFlow available.
  •     Tensorflow Micropython Examples (Micropython with TensorFlow Lite for Microcontrollers) – Pros: capability to build the project with more libraries or custom code included. Cons: necessity to run the code from the interactive Python console.

C/C++

  •     VS Code with PlatformIO and FreeRTOS – Pros: all libraries available. Cons: complex code (C/C++); complex configuration.
  •     Arduino IDE – Pros: all libraries available; easy and powerful enough IDE. Cons: Arduino control loop limitation.
  •     ESP-IDF – Pros: small solution, without even a dedicated IDE (plugins for VS Code or Eclipse available). Cons: complex dependency management.

As you can see, we’ve tried various approaches and discovered their advantages and disadvantages. Now, you can decide if you wish to have complete control and use pure C/C++, or maybe you prefer much more friendly Python. You can choose which IDE or at least plugin you’d like to use, and whether you want to utilize OTA to update an entire firmware or only to transfer data between your devices and the cloud.

written by
Damian Petrecki
Software development
Automotive

Android Automotive OS 14 is out – build your own emulator from scratch!

Android Automotive OS 14 has arrived, and it marks a significant evolution in the way users interact with their vehicle's system. This version brings an enhanced user experience, an improved Android API, and better OS-level security (as does non-automotive Android 14). In this short article, we'll walk you through a tutorial on creating your own emulator from scratch, but first, here are some of the standout features and improvements introduced in Android Automotive OS 14!

Android Automotive 14 noteworthy new features

  •     Enhanced UI:    Now with an optional, improved home screen adaptation to the portrait mode for better vehicle compatibility.
  •     Multi-User Upgrades:    Supports parallel sessions with custom sound zones and multiple displays.
  •     Remote Access:    Enables the system to wake up, execute a task, and then shut down via external requests.
  •     Extended VHAL:    More ADAS and non-ADAS properties included to represent activation status and the system state.
  •     App Quick Actions:    A feature that allows applications to showcase quick actions.
  •     Infotainment Reference Design:    The starting point for developers to create apps for Android Automotive OS.
  •     New Boot Animation:    Well, as usual 😊

To learn about all the new features provided in Android Automotive 14, follow this link: https://source.android.com/docs/automotive/start/releases/u_udc_release?hl=en

Steps to building an emulator

The best operating system for building an AAOS emulator is Ubuntu 18.04 or higher. If you use a different operating system, you must follow some extra steps. For instance, you may need to install repo from https://gerrit.googlesource.com/git-repo instead of using a package manager.

1)     First, you need to install the required dependencies

sudo apt install git-core gnupg flex bison build-essential zip curl zlib1g-dev libc6-dev-i386 libncurses5 x11proto-core-dev libx11-dev lib32z1-dev libgl1-mesa-dev libxml2-utils xsltproc unzip fontconfig repo

2)     Then configure Git by setting your name and email address

git config --global user.name "Your name"
git config --global user.email your@email

3) After configuring Git, you can download the source code from the Git repository

repo init -u https://android.googlesource.com/platform/manifest -b android-14.0.0_r54 --partial-clone --clone-filter=blob:limit=10M && repo sync

You can skip --partial-clone and --clone-filter; however, this will result in longer download times. It’s recommended to check for the latest android-14.0.0_rXX tag before downloading, which can be found on this page: https://android.googlesource.com/platform/manifest/+refs .

Keep in mind that downloading takes a lot of time, because the sources take about 150GB even with partial-clone and clone-filter enabled.

4)     In the next step, set up environment variables using the script provided

. build/envsetup.sh

This script replaces your JAVA_HOME and modifies PATH, so be aware that your shell may behave differently now.

5)     Select the system to build

lunch sdk_car_portrait_x86_64-eng

You can create a landscape build by removing "portrait". Also, change x86_64 to arm64 if you want to run the system on a Mac. For more details on building on a Mac, check out this article .

6)     Create the system and the emulator image

m && m emu_img_zip

The first command will take hours to complete. Take a break: go running, biking, hiking, or whatever drives you. You can adjust the build system's thread pool usage with the -j parameter, e.g., m -j 16; the default is the CPU count of your machine.
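If you prefer to pin the job count explicitly rather than rely on the default, you can derive it from the CPU count. A minimal shell sketch (the cap of 16 is an arbitrary example to keep the machine responsive; `nproc` is assumed available, as it is on Ubuntu):

```shell
# Derive a -j value from the CPU count, capped at 16
CORES=$(nproc)
JOBS=$(( CORES < 16 ? CORES : 16 ))
echo "running: m -j ${JOBS}"
# m -j "${JOBS}" && m -j "${JOBS}" emu_img_zip   # run inside the lunch'ed shell
```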

7)     Copy the emulator image to the Android Studio emulator directory

mkdir -p /mnt/c/Users/<user>/AppData/Local/Android/Sdk/system-images/android-34/custom_aaos_14/ && unzip -o out/target/product/emulator_x86_64/sdk-repo-Linux-system-images-eng.dape.zip -d /mnt/c/Users/<user>/AppData/Local/Android/Sdk/system-images/android-34/custom_aaos_14/

I assume you work on a Windows machine with WSL. Please adapt the above command to your Android SDK directory if you are working on native Linux.

Create a package.xml file in the /mnt/c/Users/<user>/AppData/Local/Android/Sdk/system-images/android-34/custom_aaos_14/x86_64 directory with this content . The provided file is based on existing package.xml files from other emulator images.

Adjust “tag”, “vendor”, and “display name” in the file above if needed. Make sure to match <localPackage obsolete="false" path="system-images;android-34;custom_aaos_14;x86_64"> with the path where you placed the emulator image.
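In case the linked file is not at hand, here is a hypothetical sketch of what such a package.xml typically looks like. The schema namespace version (sys-img2/NN), the xsi:type value, and all id/display values below are illustrative assumptions only - copy a real package.xml from another installed system image and adjust it rather than using this verbatim:

```xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<!-- Hypothetical sketch; take the namespace version and structure
     from a real package.xml on your machine. -->
<ns3:sdk-sys-img xmlns:ns3="http://schemas.android.com/sdk/android/repo/sys-img2/03"
                 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <localPackage obsolete="false" path="system-images;android-34;custom_aaos_14;x86_64">
    <type-details xsi:type="ns3:sysImgDetailsType">
      <api-level>34</api-level>
      <tag>
        <id>custom_aaos_14</id>            <!-- the "tag" to adjust -->
        <display>Custom AAOS 14</display>
      </tag>
      <vendor>
        <id>custom</id>                    <!-- the "vendor" to adjust -->
        <display>Custom build</display>
      </vendor>
      <abi>x86_64</abi>
    </type-details>
    <revision><major>1</major></revision>
    <display-name>Custom AAOS 14 x86_64 System Image</display-name> <!-- the "display name" -->
  </localPackage>
</ns3:sdk-sys-img>
```

The localPackage path must match the directory layout under system-images, as noted above.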

8)     Now it’s time to create a new emulator in Android Studio

Open "Device Manager" and select "Create Virtual Device". In the left-hand menu, choose "Automotive" and add a new hardware profile using the button in the lower-left corner of the panel.

Select “Android Automotive” as the device type. Choose the correct resolution for your build; for example, I selected a resolution of 1152x1536 for a 10-inch device to create a portrait build. Next, allocate at least 1536 MB of RAM to your device. Then, choose only one supported device state - "Portrait" or "Landscape" - according to your build. Finally, disable any unnecessary sensors and the skin for AAOS compatibility.

9)     Accept and select your new hardware profile. Then, move on to the next step

10) Pick your emulator image (you can find it using the tag and vendor configured in package.xml)

11) On the final screen, enter a name and complete the configuration process

12) To start the emulator, go to the "Device Manager" and launch it from there

13) You’re all set! Enjoy!

Get started on creating your very own Android Automotive OS 14 emulator by following the steps outlined in this article. Explore the possibilities of car technology and discover what the future has in store. You can find an AAOS “Hello World” example in our article How to Build Your First App for Android Automotive OS . Start building, try out the various features, and have fun with your new setup!

written by
Damian Petrecki