Automotive

What's new in the truck industry

 64% of truck industry CEOs say the future success of their organization hinges upon the digital revolution. This should come as no surprise, as transportation as we knew it a decade or two ago is slowly fading into obscurity.

Operating standards in the industry are improving, and values such as speed, efficiency, eco-friendliness, and safety are reverberating in announcements at industry conferences and in truck industry reports.

Self-driving, fully autonomous vehicles, mainly electrically powered and based on AI and the Internet of Things, are transforming 21st-century transportation. It is well worth taking a look at examples of solutions implemented by innovators with substantial development capital.

Solutions that translate into safety and driving performance for larger vehicles

Some innovations in particular are shifting the industry forward, quite literally. Developments like autonomous vehicles, electric-powered trucks, Big Data, and cloud computing have changed the way goods and people are transported. Smart analytics allows for more efficient supply chains, but not only that: it also enhances driving safety and the experience of traveling long distances.

High-tech trucks break down less frequently and cause fewer accidents. And self-driving technologies, which are still being developed, enable you to save time and money.

AI, Big Data, and the Internet of Things

Better location tracking, improved ambient sensing, and enhanced fleet management: all these benefits can be achieved by implementing IoT solutions.

Composed of devices and detectors in the vehicle and in the road infrastructure, the network is a space for the continuous exchange of data in real time. It provides information about conditions on the route, but also whether the cargo is stable (tilt at the level of the pallet or package) and whether the tire pressure is at the right level. This facilitates the work of drivers, shippers, and management.
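
To make this more concrete, here is a minimal, purely illustrative Java sketch of how such readings could be turned into alerts; the class names, fields, and thresholds are our own assumptions, not part of any particular platform:

import java.util.ArrayList;
import java.util.List;

// Hypothetical telemetry reading coming from in-vehicle IoT sensors.
record TelemetryReading(String truckId, double tirePressureBar, double cargoTiltDegrees) { }

public class TelemetryAlerts {

    // Example thresholds; a real fleet would tune these per vehicle and cargo type.
    private static final double MIN_TIRE_PRESSURE_BAR = 7.0;
    private static final double MAX_CARGO_TILT_DEGREES = 5.0;

    static List<String> checkReading(TelemetryReading r) {
        List<String> alerts = new ArrayList<>();
        if (r.tirePressureBar() < MIN_TIRE_PRESSURE_BAR) {
            alerts.add(r.truckId() + ": tire pressure low (" + r.tirePressureBar() + " bar)");
        }
        if (r.cargoTiltDegrees() > MAX_CARGO_TILT_DEGREES) {
            alerts.add(r.truckId() + ": cargo tilt above safe level (" + r.cargoTiltDegrees() + " degrees)");
        }
        return alerts;
    }

    public static void main(String[] args) {
        // One simulated reading received from the road.
        TelemetryReading reading = new TelemetryReading("TRUCK-042", 6.4, 2.1);
        checkReading(reading).forEach(System.out::println);
    }
}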

This solution is applied, for instance, at one of the world's leading logistics companies, Kuehne + Nagel, which uses IoT sensors and a cloud-based platform in its daily operations. It simply works.

The use of artificial intelligence algorithms is equally important. Advanced Big Data analytics, coupled with AI, allows companies to make decisions based on accurate, high-quality data. According to Supply Chain Management World research, 64 percent of executives believe that big data and related technologies will empower and change the industry forever, because they will further improve performance forecasting and goal setting.

Performance indicators are measured in this way by the logistics company  Geodis. With their proprietary Neptune platform, they leverage real-time coordination of transportation activities. With one app and a few clicks, carriers and customers can manage all activities during transport.

Failure prevention

Software-based solutions in the trucking industry are eradicating a number of issues that have long been the bane of the industry. These include breakdowns, which can take an operationally significant part of a fleet out of circulation. With the right tools, you can find out about such incidents even before they happen.

Drivers of the new Mercedes-Benz eActros, for example, have recently been able to make use of the intelligent Mercedes-Benz Uptime system. The service is based on more than 100 specific rules that continuously monitor processes such as charging and, on top of that, track the voltage history of the high-voltage battery.

All information required in terms of reliability is available to customers via a special portal in the cloud. In this way, the German manufacturer wants to keep unexpected faults to a minimum and facilitate the planning of maintenance work for the fleet.

Self-driving vehicles

Automated trucks equipped with short- and long-distance radars, sensors, cameras, 3D mapping, and laser detection are poised to revolutionize the industry. They are also a potential answer to the driver shortage, although we will still have to wait a while for fully autonomous trucks.

However, there are many indications that there will be increased investment in such solutions. Just take a look at the proposals from tech giants in the US like  Tesla, Uber, Cruise, and Waymo.

The latter offers its Waymo Via solution, promising van and bus operators an unparalleled autonomous driving experience. The Waymo Driver, an intelligent driving assistant trained on simulations of the most challenging driving scenarios, is capable of making accurate decisions in real road environments. It sees and detects what is happening on the road and handles complex tasks such as accelerating, braking, and navigating a wide turning circle.

Sustainable drive

The sustainability trend is now powering multiple industries, with the truck industry being no exception. So it should come as no surprise that a rising number of large transport vehicles are being electrified.

Tesla is investing in electric trucks on two fronts: in addition to making its Semi an electric vehicle, Elon Musk's brand has created its own charging infrastructure, the Tesla Supercharger Network. As a result, the Semi can drive 800 km on a full battery, and an additional 600 km of range can be regained after 30 minutes of charging.

Another giant, Volkswagen, is following a similar approach. It is investing in electric trucks with solid-state batteries, which, unlike lithium-ion batteries, provide greater safety and improved quick-charging capability. In the long run, this is intended to increase range by up to 250%.

The mission to reduce CO2 emissions in truck transport is also being actively promoted by VOLTA. Their all-electric trucks are designed to cut exhaust tailpipe emissions by 1,191,000 tonnes by 2025. A similarly ambitious goal has been set by England's Tevva Electric Trucks, whose vehicles are expected to reduce CO2 emissions by 10 million tons within the next decade.

Giants already know what's at stake

Companies like Tesla, Nikola Corporation, Einride, Daimler, and Volkswagen already understand the need to enter the electric vehicle market with bold proposals. Major players in the automotive market are also targeting synergistic collaborations. For instance, BMW, Daimler, Ford, and Volkswagen are teaming up to build a high-powered European charging network. Each charging point will be 350 kW and use the Combined Charging System (CCS) standard to work with most electric vehicles, including trucks.

Another major collaboration involves Volkswagen Group Research and the American company QuantumScape. The latter is conducting research on solid-state lithium metal batteries for large electric cars. This partnership is expected to enable the production of solid-state batteries on an industrial level.

Smooth energy management

Truck electrification is not all that is needed. It is also essential that electric vehicles have an adequate range and unhindered access to charging infrastructure. In addition, optimizing consumption and increasing energy efficiency remain challenges in their own right.

It is with these needs in mind that Proterra has developed its Proterra APEX connected vehicle intelligence software to assist electric fleets with real-time energy management. Batteries are constantly monitored, and real-time alerts appear on dashboards. Fleet managers also have access to configurable reports.

Meanwhile, the Fleetboard Charge Management developed by  Mercedes offers a comprehensive view of all interactions between e-trucks and the company's charging stations. Users can see what the charging time is and monitor the current battery status. Beyond that, they can view the history of previous events. They can also adjust individual settings such as departure times and final expected battery status.

Truck Platooning

More technologically advanced trucks can be linked together. Platooning, or interconnected lines of vehicles traveling in a single formation, allows for substantial savings. Instead of multiple trucks "scattered" along the road, the idea is to have a single, largely predictable string of vehicles moving in a highly efficient and low-emission manner.

How is this possible? The answer is simple: telematics. Telecommunication devices enable the seamless sending, receiving, and storing of information. Josh Switkes, a founder of Peloton, a leader in automated vehicles, explains how the system functions: "We're sending information directly from the front truck to the rear truck, information like engine torque, vehicle speed, and brake application."
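
As a rough, hypothetical sketch of the kind of front-to-rear data exchange Switkes describes (the message fields and the follower logic below are our own simplification, not Peloton's implementation):

// Illustrative V2V message sent from the lead truck to the follower.
record PlatoonMessage(double engineTorqueNm, double vehicleSpeedKmh, boolean brakeApplied) { }

public class PlatoonFollower {

    private double targetSpeedKmh;

    // React to the lead truck's state with minimal delay.
    void onMessage(PlatoonMessage msg) {
        if (msg.brakeApplied()) {
            // Mirror the lead truck's braking immediately to keep the gap safe.
            targetSpeedKmh = Math.max(0, targetSpeedKmh - 10);
        } else {
            // Otherwise track the lead truck's speed.
            targetSpeedKmh = msg.vehicleSpeedKmh();
        }
        System.out.println("Follower target speed: " + targetSpeedKmh + " km/h");
    }

    public static void main(String[] args) {
        PlatoonFollower follower = new PlatoonFollower();
        follower.onMessage(new PlatoonMessage(1200.0, 82.0, false));
        follower.onMessage(new PlatoonMessage(300.0, 78.0, true));
    }
}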

Although platooning is not yet widespread, it may soon become a permanent fixture on European roads thanks to  Ensemble . As part of this project, specialists, working with brands such as  DAF, DAIMLER, MAN, IVECO, SCANIA, and VOLVO Group, are analyzing the impact of platooning on infrastructure, road safety, and traffic flow. However, the fuel savings alone are already said to be 4.5% for the leading truck and 10% for the truck following it.

Smart sensors

Developers of automotive and truck industry technologies are focusing particularly on safety issues. These can be aided by intelligent sensors that allow a self-driving vehicle to generate alerts and take proactive action. This is how VADA, Volvo's active driver assistance system, works; it already comes as standard on the Volvo VNR and VNL models.

The advanced collision warning system, which combines radar sensors with a camera, alerts the driver seconds before an imminent collision. If you are too slow to react, the system can implement emergency braking automatically in order to avoid a crash.
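
A deliberately simplified illustration of the underlying idea is a time-to-collision check: if the gap divided by the closing speed falls below a reaction threshold, warn the driver or brake. The thresholds and method names below are assumptions for illustration, not Volvo's actual logic:

public class CollisionWarning {

    // Illustrative thresholds; production systems fuse radar and camera data
    // and use far more sophisticated models.
    private static final double WARN_TTC_SECONDS = 3.0;
    private static final double BRAKE_TTC_SECONDS = 1.5;

    // Time to collision in seconds, or +infinity if the gap is not closing.
    static double timeToCollision(double gapMeters, double closingSpeedMps) {
        return closingSpeedMps <= 0 ? Double.POSITIVE_INFINITY : gapMeters / closingSpeedMps;
    }

    static String decide(double gapMeters, double closingSpeedMps) {
        double ttc = timeToCollision(gapMeters, closingSpeedMps);
        if (ttc < BRAKE_TTC_SECONDS) return "EMERGENCY_BRAKE";
        if (ttc < WARN_TTC_SECONDS) return "WARN_DRIVER";
        return "OK";
    }

    public static void main(String[] args) {
        System.out.println(decide(60, 10));  // 6.0 s to collision -> OK
        System.out.println(decide(25, 10));  // 2.5 s -> WARN_DRIVER
        System.out.println(decide(10, 10));  // 1.0 s -> EMERGENCY_BRAKE
    }
}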

Innovative design

Changes are also taking place at the design stage of large vehicles. This is particularly emphasized by the makers of these cutting-edge models. One of the leaders in this field is  VOLTA , which advertises its ability to create "the world's safest commercial vehicles".

Their Volta Zero model provides easy, low-level boarding and alighting from either side, directly onto the sidewalk. That's possible because the vehicle doesn't have an internal combustion engine, so the engineers were able to rethink previously established design rules.

Dynamic route mapping and smart monitoring

While GPS is nothing new, the latest software uses the technology to a more advanced degree, for instance for so-called dynamic route mapping: selecting the shortest, most convenient route while allowing for possible congestion. Importantly, this works flexibly, adapting not only to road conditions but also, for example, to unexpected changes in loading.

The Volta Zero also relies on advanced route and vehicle monitoring. Using the Sibros OTA Deep Logger, operators can receive up-to-date information on individual vehicles and the entire fleet.

Shipping is not like it used to be

Alongside the changes in the passenger car market, a similar revolution is underway in the truck and van industry. This transformation is needed because the challenge is not only a shortage of professional drivers but also the pressure to reduce transportation costs while increasing volume. So any loss-reduction initiative is of paramount value.

As for the solutions mentioned in this article, they will certainly not all be widely implemented in the next few years. For example, it is difficult to expect only electric-powered autonomous trucks to be on the road as early as 2027. What can be widely rolled out now is, for example, optimization of cargo loading (by predicting when the truck will arrive), better route finding (via advanced GPS), or predictive maintenance (early repair before a fault generates logistics costs). Full electrification and autonomy are only the step after that.

Regardless of how the truck industry evolves over the next few years, it is certain that the changes will be built on digital transformation, advanced software, and smart components.

All this is geared to enhance mobility services, bringing aspects such as driving comfort, business efficiency, and safety to a new level. This is a fact well known to the big OEM players and to the tech and automotive companies that compete with each other on innovation year after year.

written by
Adam Kozłowski
written by
Marcin Wiśniewski
Software development

AAOS Hello World: How to build your first app for Android Automotive OS

Android Automotive OS is getting more recognition as automotive companies look to provide their customers with a more tailored experience. Here we share our guide to building your first app for AAOS.

Before you start, read our first article about AAOS and our review so you know what to expect. Let's try making a simple Hello World app for Android Automotive. To get an IDE, go to Android Studio Preview | Android Developers and download a canary build:


In the next step, prepare the SDK: in the SDK Manager, check and download an Automotive system image. You can pick anything from API 32, Android 9, or Android 10, but I do not recommend the newest one, as it is currently very laggy and crashes a lot. There are also Volvo and Polestar images.

For those, you need to add the following links under SDK Update Sites:

 https://developer.volvocars.com/sdk/volvo-sys-img.xml

 https://developer.polestar.com/sdk/polestar2-sys-img.xml

Start a new project: go to File > New Project and choose Automotive with No Activity.


A nice and clean project should be created, without any classes. Go to build.gradle, add the car app library to the dependencies, and refresh the project to pull in our new dependency:

implementation "androidx.car.app:app-automotive:1.2.0-rc01"

Let's write some code, starting with our screen class. Name it as you like, make it extend the Screen class from the androidx.car.app package, and implement the required methods:

import androidx.annotation.NonNull;
import androidx.car.app.CarContext;
import androidx.car.app.Screen;
import androidx.car.app.model.Action;
import androidx.car.app.model.Pane;
import androidx.car.app.model.PaneTemplate;
import androidx.car.app.model.Row;
import androidx.car.app.model.Template;

public class GrapeAppScreen extends Screen {

    public GrapeAppScreen(@NonNull CarContext carContext) {
        super(carContext);
    }

    @NonNull
    @Override
    public Template onGetTemplate() {
        // A single row with our title text
        Row row = new Row.Builder()
                .setTitle("That's our Grape App!")
                .build();

        // Wrap the row in a Pane and show the app icon as the header action
        return new PaneTemplate.Builder(
                new Pane.Builder()
                        .addRow(row)
                        .build())
                .setHeaderAction(Action.APP_ICON)
                .build();
    }
}

That should create a simple screen with our icon and title. Now create another class extending CarAppService from the same package and, likewise, implement the required methods. For the purpose of this tutorial, return the static validator that allows all hosts from createHostValidator(), and return a brand-new Session with our screen from onCreateSession(), passing the CarContext obtained from the Session class's getCarContext() method:

import android.content.Intent;
import androidx.annotation.NonNull;
import androidx.annotation.Nullable;
import androidx.car.app.CarAppService;
import androidx.car.app.Screen;
import androidx.car.app.Session;
import androidx.car.app.validation.HostValidator;

public class GrapeAppService extends CarAppService {

    public GrapeAppService() {
    }

    @NonNull
    @Override
    public HostValidator createHostValidator() {
        // For this tutorial we accept every host; restrict this in production
        return HostValidator.ALLOW_ALL_HOSTS_VALIDATOR;
    }

    @NonNull
    @Override
    public Session onCreateSession() {
        return new Session() {
            @NonNull
            @Override
            public Screen onCreateScreen(@Nullable Intent intent) {
                return new GrapeAppScreen(getCarContext());
            }
        };
    }
}

Next, move to AndroidManifest.xml and add the required features inside the main manifest tag:

<uses-feature
    android:name="android.hardware.type.automotive"
    android:required="true" />
<uses-feature
    android:name="android.software.car.templates_host"
    android:required="true" />
<uses-feature
    android:name="android.hardware.wifi"
    android:required="false" />
<uses-feature
    android:name="android.hardware.screen.portrait"
    android:required="false" />
<uses-feature
    android:name="android.hardware.screen.landscape"
    android:required="false" />

Inside the application tag, add our service and activity. Don't forget minCarApiLevel; leaving it out will throw an exception on app start:

<application
    android:allowBackup="true"
    android:appCategory="audio"
    android:icon="@mipmap/ic_launcher"
    android:label="@string/app_name"
    android:roundIcon="@mipmap/ic_launcher_round"
    android:supportsRtl="true"
    android:theme="@style/Theme.GrapeApplication">

    <meta-data
        android:name="androidx.car.app.minCarApiLevel"
        android:value="1" />

    <service
        android:name="com.grapeup.grapeapplication.GrapeAppService"
        android:exported="true">
        <intent-filter>
            <action android:name="androidx.car.app.CarAppService" />
        </intent-filter>
    </service>

    <activity
        android:name="androidx.car.app.activity.CarAppActivity"
        android:exported="true"
        android:label="GrapeApp Starter"
        android:launchMode="singleTask"
        android:theme="@android:style/Theme.DeviceDefault.NoActionBar">
        <intent-filter>
            <action android:name="android.intent.action.MAIN" />
            <category android:name="android.intent.category.LAUNCHER" />
        </intent-filter>
        <meta-data
            android:name="distractionOptimized"
            android:value="true" />
    </activity>

</application>

Now we can install our application on the device. Verify that you have an Automotive emulator created, use the automotive run configuration, and hit Run. The app runs inside the Google Automotive App Host, so if this is your first application on this device, you may be asked to go to the Play Store and install the host first.

That’s how it looks:


The last thing: we'll add an action button that pops a Toast. Modify onGetTemplate() in the Screen class, adding an Action and an ActionStrip:

Action action = new Action.Builder()
        .setOnClickListener(
                () -> CarToast.makeText(getCarContext(), "Hello!", CarToast.LENGTH_SHORT).show())
        .setTitle("Say hi!")
        .build();

ActionStrip actionStrip = new ActionStrip.Builder()
        .addAction(action)
        .build();

Add it to the PaneTemplate:

return new PaneTemplate.Builder(
        new Pane.Builder()
                .addRow(row)
                .build())
        .setActionStrip(actionStrip)
        .setHeaderAction(Action.APP_ICON)
        .build();

That’s our HelloWorld app:

Now you have the Hello World example app up and running using the Car App Library. The library takes care of displaying and arranging everything on the screen for us; our only responsibility is to add the screens and actions we want (plus a bit of configuration). Check out the Car App Library to explore more of what it can do, play around with creating your own app, and check our blog soon for more AAOS app development content.

written by
Grape up Expert
Automotive

Software-defined vehicle and fleet management

With the development of artificial intelligence, the Internet of Things, and cloud solutions, the amount of data we can retrieve from a vehicle is expanding every year. Manufacturers are getting better at converting this data into new services and at enhancing their own offerings based on the information received from connected car systems. Can software-defined vehicle solutions be successfully applied to fleet management systems covering hundreds or even thousands of vehicles? Of course they can, and even should! This is what today's market, which is becoming steadily more car-sharing and micromobility-based, expects and needs.

Netflix, Spotify, Glovo, and Revolut have taught us that entertainment, ordering food, or banking is now literally at our fingertips, available here and now, whenever we need or want it. Contactless, mobile-first processes that reduce queues and provide flexibility are now entering every area of the economy, including transportation and the automotive industry.

Three factors are dramatically changing attitudes toward owning a car or choosing a means of transport: saving time, saving money, and ecological trends. Companies such as Uber, Lyft, or Bird cater to the needs of a younger generation that prefers renting over ownership.

The data-driven approach has become a cornerstone for automotive companies - both new, emerging startups and older, decades-old business models, such as car rental companies. None of the companies operating in this market can exist without a secure and well-thought-out IT platform for fleet management. At least if they want to stay relevant and compete.

It is the software, on an equal footing with, or even ahead of, the unique offering, that determines the success of such a company and allows it to manage a fleet of vehicles that sometimes includes hundreds, if not thousands, of units.

Depending on the purpose of the vehicles, the business model, and the scale of operations, solutions based on software will obviously vary, but they will be beneficial to both the fleet manager and the vehicle renter. They allow you to have an overall view of the situation,  extract more useful information from received data and reasonably scale costs.

Among the potential entities that should be interested in improvements in this matter, the following types of fleets can be specifically mentioned:

  •  city e-scooters, bicycles, and scooters;
  •  car rentals;
  •  city bus fleets;
  •  tour operators;
  •  transport and logistics companies;
  •  cabs;
  •  public utility vehicles (e.g., fire departments, ambulances, or police cars) and government limousines;
  •  automobile mechanics;
  •  small private fleets (e.g., construction or haulage companies);
  •  insurers' fleets;
  •  automobile manufacturers' fleets (e.g., replacement or test vehicles).

The benefits of managing your fleet with cloud software and the Internet of Things (IoT)

Real-time vehicle monitoring (GPS)

A sizeable fleet implies a lot of responsibility and potentially a ton of problems. That's why it's so important to promptly locate each vehicle in the fleet and monitor it in real time:

  •  the distance along the route,
  •  the place where the car was parked,
  •  place of breakdown.

This is especially useful in the context of a bus fleet, but also in the  sharing-economy group of vehicles : city e-scooters, bicycles, and scooters. In doing so, the business owner can react quickly to problems.

Recovering lost or stolen vehicles

The real-time location updates, enabled by IoT and wireless connectivity, also help in emergencies, because they allow you to recover a stolen or abandoned vehicle.

These benefits will be appreciated, for example, by people in charge of logistics transport fleets; after all, vehicles can be stolen from overnight parking lots. In turn, the fight against abandoned electric two-wheelers will certainly interest micromobility startups, which often receive complaints about scooters left outside the permitted zone or in unusual places, such as fields or ditches beyond the end of the sidewalk.

Predictive maintenance

We should also mention  advanced predictive analytics for parts and components such as brakes, tires, and engines. The strength of such solutions is that you receive a warning (vehicle health alerts) even before a failure occurs.

The result? Reduced downtime, better resource planning, and streamlined decision-making. According to estimates, these are savings of $2,000 per vehicle per year.
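
As a rough illustration of the idea (not any vendor's actual algorithm), predictive maintenance can be as simple as extrapolating a monitored wear indicator and flagging the vehicle before it crosses a critical threshold:

public class PredictiveMaintenance {

    // Estimate how many days remain until a wear indicator (e.g. brake pad
    // thickness in mm) drops below a critical threshold, assuming a roughly
    // linear wear trend. Purely illustrative.
    static double daysUntilThreshold(double[] dailyMeasurements, double threshold) {
        int n = dailyMeasurements.length;
        double wearPerDay = (dailyMeasurements[0] - dailyMeasurements[n - 1]) / (n - 1);
        if (wearPerDay <= 0) return Double.POSITIVE_INFINITY; // no measurable wear
        return (dailyMeasurements[n - 1] - threshold) / wearPerDay;
    }

    public static void main(String[] args) {
        // Brake pad thickness measured over five days (mm).
        double[] padThickness = {9.0, 8.8, 8.6, 8.4, 8.2};
        double days = daysUntilThreshold(padThickness, 3.0);
        if (days < 30) {
            System.out.printf("Schedule brake service within %.0f days%n", days);
        } else {
            System.out.printf("No action needed (~%.0f days of margin)%n", days);
        }
    }
}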

More convenient vehicle upgrades - comprehensive OTA (Over-the-Air)

Over-the-Air (OTA) car updates are vital for safety and usability. Interconnected and networked vehicles  can be updated in one go , simultaneously. This saves the time otherwise required to manually configure each system one by one. In addition, operations can also be performed on vehicles that happen to be out of the country.

Such a facility applies to virtually all industries relying on extensive fleets, especially in the logistics, transportation, and tourism sectors.
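
Conceptually, a fleet-wide OTA rollout boils down to comparing each connected vehicle's installed version with a target version and scheduling the update for every vehicle in one pass, wherever it happens to be. A toy sketch, with all names and versions invented for illustration:

import java.util.List;
import java.util.Map;

public class OtaRollout {

    // Hypothetical fleet inventory: vehicle id -> installed software version.
    static final Map<String, String> FLEET = Map.of(
            "VAN-001", "1.4.0",
            "VAN-002", "1.5.0",
            "TRUCK-007", "1.4.2");

    static void rollout(String targetVersion) {
        // Every connected vehicle is addressed in one pass: no per-vehicle,
        // on-site configuration, and location does not matter.
        List<String> toUpdate = FLEET.entrySet().stream()
                .filter(e -> !e.getValue().equals(targetVersion))
                .map(Map.Entry::getKey)
                .toList();
        toUpdate.forEach(id ->
                System.out.println("Scheduling OTA update of " + id + " to " + targetVersion));
    }

    public static void main(String[] args) {
        rollout("1.5.0");
    }
}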

Intermediation in renting

A growing number of services are focusing on service that is fast, simplified, and preferably remote. For instance, many room and apartment rentals on Airbnb rely on self-service check-in and check-out, using special lockboxes and codes.

Similar features are offered by  software-defined vehicles , which can now be rented "off the street", without the need for service staff. The customer simply selects a vehicle and, via a smartphone app, unlocks access to it. Quick, easy, and instant.

Loyalty scheme for large fleets

Vehicle and software providers are well aware that new technology comes with great benefits, but also with a degree of investment. In order to make such commitments easier to decide upon, attractive loyalty schemes are being rolled out for larger fleets.

So as a business owner you reap double benefits. And at the same time you test, on lucrative terms, which solutions work best for you.

Improved fleet utilization

Cloud and IoT software enables more practical use of the entire fleet of available vehicles and accurately pinpoints bottlenecks or areas where the most downtime occurs.

This is an invaluable asset in the context of productivity-driven businesses, where even a few hours of delay can result in significant losses.

In addition, artificial intelligence (AI)-based predictions (for example, information about an impending failure) offered to commercial fleets provide fleet managers with more anticipatory data, which can significantly cut business costs. Other benefits include improved emissions control and higher environmental standards.

Increasing safety

Reducing the risk of the system being hacked to a minimum contributes to the security of a fleet-based business.

Case study: Ford Pro™ Telematics

Revenue based on software and digital services is not a bad deal for informed participants in the business environment. Some big players, like Ford, have based their entire business model on this idea. With their Ford Pro™ series of solutions, they want to become an accelerator for highly efficient and sustainable business. Their offering is built on market-ready commercial vehicles to suit almost any business need, as well as all-electric trucks and vans, and they are developing telematics in particular.

Ford Chief Executive Jim Farley puts it bluntly: "We are the Tesla of this industry."

Bold assumptions? Yes, but also an equally bold implementation. Created in May 2021, a standalone Ford Pro™ unit is to focus exclusively on commercial and government customers. The new model also serves as a prelude to expanding digital service offers for retail customers.

The objective is to increase Ford Pro's annual revenue to $45 billion by 2025, up 67% from 2019.

Streamlined vehicle repairs

Managing a large group of vehicles also necessitates regular inspections and repairs, and at different times for different vehicles. This entails the need to control each unit individually.

The risk is that information about a problem may not reach decision-makers in time; moreover, instead of focusing on the service and the product, executives end up constantly responding to anomalies. New technologies partially eliminate this problem.

As part of the Ford Pro Telematics Essentials package, vehicle owners receive real-time alerts on vehicle status in the form of engine diagnostic codes, vehicle recalls, and more. There's also a  scheduled service tracking feature and, in the near future, remote locking/unlocking, which will further enhance fleet management.

Driver behavior insights

Human-centered technology can help improve driver performance and road safety. Various sensors and detectors inside Ford vehicles provide a lot of interesting  information about the driver's behavior. They monitor the frequency and suddenness of actions such as braking or accelerating. Knowledge of this type of behavior allows for better fleet planning and improved driver safety.

Fuel efficiency analysis

Fuel is one of the major business costs for companies managing a large number of vehicles. Ford Pro™ Telematics, therefore, approaches customers with a solution to  monitor fuel consumption and engine idle time.

This functionality is designed to  optimize performance and reduce expenses. Better exhaust control also indirectly lowers operating costs.

Manage all-electric vehicle charging with E-Telematics

Telematics also provides an efficient way to manage a fleet consisting of electric vehicles. There are many indications that due to increasingly stringent environmental standards, they will form the backbone of various operations.

That's why Ford has developed its own E-Telematics software. It enables  comprehensive monitoring of the charging status of the electric vehicle fleet. In addition, it helps drivers find and pay for public charging points and facilitates reimbursement for charging at home.

The system also offers the ability to accurately compare the efficiency and economic benefits of electric vehicles versus gas-powered ones.

Better cooperation with insurers

Cloud-based advanced telematics software not only provides a better customer experience. What also counts is a streamlined collaboration with insurance providers and the delivery of vehicle rental services to clients of such companies.

This, of course, requires a special tool that enables:

  •  remote processing of the case reported by the customer,
  •  making the information available to the rental company,
  •  allowing rental company personnel to provide a vehicle that meets the driver's needs.

The goal is to provide  replacement cars for the customers of partnering insurers .

Touchless and counter-less experience

This includes verifying a customer and unlocking a car using a mobile app, which translates into greater customer satisfaction and enables new business models. With mobile apps available in app stores, queues can be shortened and the rental process simplified: it becomes more intuitive and focused on user experience and benefits, because nowadays customers expect mobile, contactless service.

Case study: car rental

A leading rental enterprise teamed up with Grape Up to provide counter-less rental services and a touchless experience for its customers. By leveraging a powerful touchless platform and the telematics system used by the rental enterprise, the company was able to build a more customer-friendly solution and tackle further business challenges, such as efficient stolen car recovery and insurance replacement rentals.

Software-defined vehicle solutions in vehicle fleets. How to implement them sensibly?

Technological changes that we are experiencing in the entertainment industry or e-commerce have also made their way into the automotive sector as well as micro-mobility and car rentals. There are many indications that there is no turning back.

Solutions such as real-time tracking, predictive maintenance, and driverless rental are the future. They help manufacturers execute their key processes more efficiently and track and manage their fleets effectively. In turn, the end customer receives an intuitive and convenient tool that fosters brand loyalty and makes life easier.

Of course, these solutions need to be implemented properly, and software quality plays a large role here. The key is an efficient flow of data and smooth cooperation with the devices inside the vehicle. That is why it is worth partnering with a company that has not only the appropriate technological competence but also the knowledge and experience gained from similar projects and implementations for the automotive industry.

written by
Adam Kozłowski
written by
Marcin Wiśniewski
AI
Automotive

Cloud solutions and AI software to serve the transport in cities of the future

Cities of the future are spaces that are comfortable to live in, eco-friendly, safe, and intelligently managed. It's hard to imagine such a futuristic scenario without the use of advanced technology, preferably technology that combines various elements within one coherent data processing system. Especially great potential lies in solutions at the crossroads of automotive, telematics, and AI. Let's dive into transport in the cities of the future.


GPS data

GPS technology gives developers the ability to monitor vehicle positions in real time and, on top of that, to generate data on parameters such as speed, distance, and travel time. Using this kind of telemetry information, combined with fuel level, speed limits, traffic information, and the estimated time of arrival, urban transportation passengers can be instantly alerted to ongoing, and even predicted, delays and problems en route.

An advanced version of this system can also propose different routes, to avoid building up traffic in congested areas and reduce the average travel time of the passengers, making everyone happy.
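
A minimal sketch of the alerting idea described above: estimate the arrival time from the remaining distance and the current average speed reported by telemetry, and notify passengers only when the predicted delay becomes noticeable. All numbers and names here are illustrative assumptions:

public class DelayAlert {

    // Estimated time of arrival in minutes, given the remaining distance and
    // the current average speed reported by the vehicle's GPS/telemetry unit.
    static double etaMinutes(double remainingKm, double avgSpeedKmh) {
        return avgSpeedKmh <= 0 ? Double.POSITIVE_INFINITY : remainingKm / avgSpeedKmh * 60;
    }

    public static void main(String[] args) {
        double scheduledMinutes = 20;          // timetable promise
        double eta = etaMinutes(12.0, 18.0);   // heavy traffic: 12 km at 18 km/h -> 40 min

        // Alert passengers only when the predicted delay is noticeable.
        double delay = eta - scheduledMinutes;
        if (delay > 5) {
            System.out.printf("Delay alert: bus expected %.0f minutes late%n", delay);
        }
    }
}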

Sensors and detectors

Installed along roads, such elements collect data on traffic volumes, vehicle speeds, and which lanes are being occupied. Sensors embedded in roads are used today by about 25% of smart city stakeholders in the United States (Otonomo study).

Additionally, the so-called agglomerative clustering algorithm helps to identify clusters of places or destinations.

Connected Vehicles (CV)

The latest generation of intelligent transportation systems works closely with the Internet of Things (IoT), specifically the  Internet of Vehicles(IoV). This allows for increased efficiency, mobility, and safety of autonomous cars.

Wireless connectivity provides communication in both indoor and outdoor environments. The following interactions are possible:

  •  Vehicle-to-vehicle (V2V),
  •  Vehicle to Vehicle Sensor (V2S),
  •  Vehicle-road infrastructure (V2R),
  •  Vehicle-Internet (V2I).

In the latter case,  the vehicle couples with ITS infrastructure: traffic signs, traffic lights, and road sensors.

Digital twin (case study: Antwerp)

A  digital twin is a kind of bridge between the digital and physical worlds . It supports decision-makers in their complex decisions about the quality of life in the city, allowing them to budget even more effectively.

In the Belgian city of Antwerp a digital twin, a 3D digital replica of the city, was launched in 2018. The model  features real-time values from air quality and traffic sensors.

The city authorities can see exactly  what the concentration of CO2 emissions and noise levels are in the city center. They also notice to what extent limited car traffic in certain city areas affects traffic emissions.

Automated Highway Systems (AHS)

Connected vehicles have opened the way for further innovations. Automated highway systems will be among them. Fully autonomous cars will move along designated lanes. The flow of cars will be controlled by a central city system.

The new solution will allow the causes of highway congestion to be pinpointed and will reduce the likelihood of collisions.

Traffic zone division

Traditional traffic zoning takes into account the social and economic factors of an area. Today, however, zoning can be based on much better data, downloaded in real time from smartphones. You can see exactly where most vehicles are accumulating at any given time. These are not always "obvious" places: at 3 p.m., for instance, there may be heavier traffic on a small street near a large office complex than on an exit road in the city center.

This modern categorization simplifies the city's complex road network, enabling  more efficient traffic planning without artificial division into administrative boundaries.
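
As a toy illustration of zoning driven by live positions rather than administrative boundaries, the sketch below simply bins GPS points into grid cells and counts vehicles per cell; real systems would use proper clustering (for example, the agglomerative clustering mentioned earlier):

import java.util.HashMap;
import java.util.Map;

public class TrafficZones {

    // Grid size in degrees: a simplistic stand-in for clustering real-time positions.
    private static final double CELL = 0.01;

    // Map a GPS position to a grid cell id.
    static String cellOf(double lat, double lon) {
        return Math.floor(lat / CELL) + ":" + Math.floor(lon / CELL);
    }

    public static void main(String[] args) {
        double[][] positions = {
                {52.2301, 21.0101}, {52.2302, 21.0107},   // two vehicles, same cell
                {52.2405, 21.0302}                        // one vehicle elsewhere
        };

        // Count vehicles per dynamically derived "zone".
        Map<String, Integer> density = new HashMap<>();
        for (double[] p : positions) {
            density.merge(cellOf(p[0], p[1]), 1, Integer::sum);
        }
        density.forEach((cell, count) ->
                System.out.println("Zone " + cell + ": " + count + " vehicles"));
    }
}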

Possibilities vs. practice

There are some interesting findings from a study conducted in 2021 by the analytics firm Lead to Market. It aimed to determine how cities are using vehicle data to enhance urban life. Today, these are being used for:

  •  alleviating bottlenecks in cities for business travelers and tourists (36%),
  •  better management of roads and infrastructure (18%),
  •  spatial and urban planning (18%),
  •  managing accident scenes (14%),
  •  improving parking (6%),
  •  mitigating environmental impacts (2%).

Surprisingly, however, only 22% of respondents use vehicle data for real-time daily traffic management. What could be the reason behind this? Ben Wolkow, CEO of Otonomo, points to one main culprit: data dispersion. Today, the data comes from a variety of sources. Meanwhile, as he puts it, "for connected vehicle data to power smart city development in a meaningful way, they need to shift to a single connected data source."

It's good to know that data from connected vehicles currently accounts for less than one-tenth of smart city analysis. But experts agree that this will change in favor of new solutions.

Technology that shapes the city

Vehicles are becoming increasingly intelligent and connected. Hardware, software, and sensors can now be fully integrated into the digital infrastructure. On top of that, full communication between vehicles and sensors on and off the road is made possible. Wireless connectivity,  AI , edge computing, and IoT are supporting predictive and analytical processes in larger metropolitan areas.

The biggest challenge, however, is the skillful use of data and its uninterrupted retrieval. It is therefore crucial to find a partner with whom you can co-develop, for example, reliable traffic analysis software.

written by
Adam Kozłowski
written by
Marcin Wiśniewski
Automotive

Vehicle automation - where we are today and what problems we are facing

Tesla has Autopilot, Cadillac has Super Cruise, and Audi uses Travel Assist. While there are many names, their functionality is essentially similar. ADAS (advanced driver-assistance systems) assist the driver on the road and set the path we need to take toward autonomous driving. And where does your brand rank in terms of vehicle automation?

Consumer Reports data shows that 92 percent of new cars have the ability to automate speed with adaptive cruise control, and 50 percent can control both steering and speed. Although we are still two levels away from a vehicle fully controlled by algorithms (see the infographic below), which, according to independent experts, is unlikely to happen within the next 10 years (at least when it comes to traditional road traffic), ADAS systems are finding their way into new vehicles year after year, and drivers are slowly learning to use them wisely.

On the six-step scale of vehicle automation - starting at level 0, where the vehicle is not equipped with any driving technology, and ending at level 5 (fully self-driving vehicle) - we are now at level 3. ADAS systems, which are in a way the foundation for a fully automated vehicle, combine automatic driving, acceleration, and braking solutions under one roof.

However, in order for this trend to be adopted by the market and grow dynamically year by year, we need to focus on  functional software and the challenges facing the automotive industry .

The main threats facing automated driving support systems

1. The absence of a driver monitoring system

Well-designed for functionality and UX, ADAS can effectively reduce driver fatigue and stress during extended journeys. However, for this to happen it needs to be equipped with an effective driver monitoring system.

Why is this significant? When some of the driving responsibility is transferred to advanced technology, drivers may be tempted to "mind their own business", which often means scrolling through social media feeds on their smartphones. When automating driving, it is important to keep the driver engaged; they must be constantly aware that their presence is essential to driving.
Meanwhile, Consumer Reports, which surveyed dozens of such systems in vehicles from leading manufacturers, reports that just five manufacturers (BMW, Ford, Tesla, GM, and Subaru) have fitted ADAS with such technology.

According to William Wallace - safety policy manager at Consumer Reports, "The evidence is clear: if a car facilitates people’s distraction from the road, they will do it - with potentially fatal consequences. It's critical that active driving assistance systems have safety features that actually verify that drivers are paying attention and are ready to take action at all times. Otherwise, the safety risks of these systems may ultimately outweigh their benefits."

2. Lack of response to unexpected situations

According to the same institution, none of the systems tested reacted well to unforeseen situations on the road, such as construction, potholes, or dangerous objects on the roadway. Such deficiencies in functionality in current systems, therefore, create a potential risk of accidents, because even if the system guides the vehicle flawlessly along designated lanes (intermittent lane-keeping or sustained lane-keeping system) the vehicle will not warn the driver in time to take control of the car when it becomes necessary to readjust the route.

There are already solutions on the market that can effectively warn the driver of such occurrences, significantly increase driving comfort, and "delegate" some tasks to intelligent software. These are definitely further items on the list of upgrades worth making to driving automation systems in the coming years.

3. Inadequate UX and non-intuitive user experience

All technological innovations breed resistance and misunderstanding at the beginning of their development. It's up to the manufacturers and the companies developing software to support vehicle automation to create systems that are straightforward and user-friendly. Simple controls, clear displays, and transparent feedback on what the system is doing with the vehicle are an absolute "must-have" for any system. The driver needs to understand right from the outset in which situations the system should be used, when to take control of the vehicle, and what the automation has to offer.

4. Lack of consistency in symbols and terminology

Understanding the benefits and functionality of ADAS systems is certainly not made easier by the lack of market consistency. Each of the leading vehicle manufacturers uses different terminology and symbols for displaying warnings in vehicles. The buyer of a new vehicle does not know whether a system offered under one name by Toyota provides the same benefits as a completely differently named system from Ford or BMW, or how far the automation goes.

Sensory overload leads to driver frustration, misunderstanding of automation, or outright resentment, and this is reflected in consumer purchasing decisions and, thus, in the development of the systems themselves. It is challenging to track their impact on safety and driving convenience when the industry has not developed uniform naming and consistent labeling to help enforce the necessary safety features and components of such systems.

5. System errors

Automation systems in passenger cars are fairly new and still in development. It's natural that in the early stages they can make mistakes and sometimes draw the wrong conclusions from the behavior of drivers or neighboring vehicles. Unfortunately, mistakes - like the ones listed below - cause drivers to disable parts of the system or even all of it because they simply don't know how to deal with it.

  •  Lane-keeping assists freaking out in poorer weather;
  •  Steering stiffening and automatically slowing down when trying to cross the line;
  •  Sudden acceleration or braking of a vehicle with active cruise control, for example during overtaking maneuvers, when entering a curve on a highway exit, or when misreading signs on truck trailers.

 How to avoid such errors? The solution is to develop more accurate models that detect which lanes are affected by signs or traffic lights.

Vehicle automation cannot happen automatically

Considering the number of potential challenges and risks that automakers face when automating vehicles, it's clear that we're only at the beginning of the road to the widespread adoption of these technologies. This is a defining moment for their further development, which lays the foundation for further action.

On the one hand, drivers are already beginning to trust them, use them with greater frequency, and expect them in new car models. On the other hand, many of these systems still have the typical flaws and shortcomings of "infancy", which means that misunderstanding them, or being overconfident in their capabilities, can result in driver frustration or, in extreme cases, accidents. The role of automotive OEMs and software developers is to create solutions that are simple and intuitive and to listen to market feedback even more attentively than before. A gradual introduction of such solutions to the market, so that consumers have time to learn and grasp them, will certainly facilitate broader automation and, ultimately, the creation of fully automated vehicles. For now, the path leading to them is still long and bumpy.

written by
Adam Kozłowski
written by
Marcin Wiśniewski
AI
Finance
Automotive

How AI is transforming automotive and car insurance

The car insurance industry is experiencing a real revolution today. Insurers are targeting their offers more and more carefully using AI and machine learning. Such innovations significantly enhance business efficiency, reduce the risk of accidents and their consequences, and enable adaptation to modern realities.

Changes are needed today

Approximately $25 billion is "frozen" with insurers annually due to problems such as fraud, claims adjustment, delays in service garages, etc. However, customers are not always happy with the insurance amounts they receive and the fact that they often have to accept undervalued rates. The reason for this is that due to limited data, it is difficult to accurately identify the culprit of the incident. It is also often the case that compensation is based on rates lower than the actual value of the damage.

 Insurers today need to be aware of the ecosystem in which they operate . Clients are becoming more demanding and, according to an IBM Institute for Business Value (IBV) study, 50 percent of them prefer tailor-made products based on individual quotes. The very model of cooperation between businesses is also changing, as relations between insurance providers and car manufacturers are growing tighter. All of this is linked to the fact that  cars are becoming increasingly autonomous, allowing them to more closely monitor traffic incidents and driver behavior as well as manage risk. Estimates suggest there will be as many as one trillion connected devices by 2025, and by 2030 there will be an increasing percentage of vehicles with automated features (ADAS).

No wonder there's an increasing buzz about changes in the car insurance industry. And these are changes based on technology. The use of  artificial intelligence , machine learning, and  advanced data analytics in the cloud will allow for seamless adaptation to market expectations.

 CASE STUDY

 SARA Assicurazioni and Automobile Club Italia are already encouraging drivers to install ADAS systems in exchange for a 20% discount on their insurance premiums. Indeed, it has been demonstrated that such systems can slash the rate of liability claims for personal injury by 4-25% and by 7-22% for property damage.

Why is this so important for insurers who want to face the reality?

Artificial intelligence-based pricing models provide a significant reduction in the time needed to introduce new offerings and to make optimal decisions. The risk of being mispriced is also lowered, as is the time it takes to launch insurance products.

The new AI-based insurance reality is happening as we speak. Digital-first companies like Lemonade, with their high flexibility in responding to market changes, are showing customers what solutions are feasible. In doing so, they put pressure on those companies that still hesitate to test new models.


Areas of change in car insurance due to AI

Artificial intelligence and related technologies are having a huge impact on many aspects of  the insurance industry : quoting, underwriting, distribution, risk and claims management, and more.


Changes in insurance distribution

Artificial intelligence algorithms smoothly create risk profiles so that the time required to purchase a policy is reduced to minutes. Smart contracts based on blockchain instantly authenticate payments from an online account. At the same time, contract processing and payment verification is also vastly streamlined, reducing insurers' client acquisition cost.

Advanced risk assessment and reliable pricing

Traditionally, insurance premiums are determined using the "cost-plus" method. This includes an actuarial assessment of the risk premium, a component for direct and indirect costs, and a margin. Yet it has quite a few drawbacks.

One of them is the inability to easily account for non-technical price determinants, as well as the inability to react quickly to shifting market conditions.

How is risk calculated? For car insurance companies, the assessment refers to accidents, road crashes, breakdowns, theft, and fatalities.

These days, all these aspects can be controlled by leveraging AI coupled with IoT data that provides real-time insights. Customized policy pricing, for instance, can take into account GPS device data on a vehicle's location, speed, and distance traveled. This way, you can see whether the vehicle spends most of its time in the driveway or, conversely, frequently travels on highways, particularly at excessive speeds.

In addition, insurance companies can use a host of other sensor and camera data, as well as reports and documents from previous claims. With all this information gathered, algorithms are able to reliably determine risk profiles.
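
A deliberately simplified sketch of how such telematics aggregates could feed a risk score; the features, weights, and formula are invented purely for illustration and are far cruder than what insurers actually use:

public class TelematicsRiskScore {

    // Toy risk score in [0, 1]: higher means riskier.
    // Inputs are illustrative telematics aggregates for one policy period.
    static double riskScore(double annualKm,
                            double shareOfKmOverSpeedLimit,
                            double hardBrakingPer100Km) {
        // Invented weights purely for demonstration.
        double exposure = Math.min(annualKm / 40_000.0, 1.0);       // mileage exposure
        double speeding = Math.min(shareOfKmOverSpeedLimit, 1.0);   // speeding share
        double braking = Math.min(hardBrakingPer100Km / 10.0, 1.0); // harsh braking rate
        return 0.4 * exposure + 0.35 * speeding + 0.25 * braking;
    }

    public static void main(String[] args) {
        // A mostly-parked city car vs. a high-mileage, fast highway driver.
        System.out.printf("Low-risk profile:  %.2f%n", riskScore(6_000, 0.02, 0.5));
        System.out.printf("High-risk profile: %.2f%n", riskScore(45_000, 0.20, 6.0));
    }
}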

 CASE STUDY

 Ant Financial, a Chinese company that offers an ecosystem of merged digital products and services, specializes in creating highly detailed customer profiles. Their technology is based on artificial intelligence algorithms that assign car insurance points to each customer, similarly to credit scoring. They take into account such detailed factors as lifestyle and habits. Based on this, the app shows an individual score, assigning a product that matches the specific policyholder.

An in-depth analysis of claims

The cooperation between an insurance company and its client is based on the premise that both parties seek to avoid potential losses. Unfortunately, accidents, breakdowns, or thefts sometimes occur, and a claims process must be carried out. Artificial intelligence, integrated IoT data, and telematics come in handy irrespective of the type of claim being handled.

  •  These technologies are suitable for, among other things, automatically generating not only damage information but also repair cost estimates.
  •  Machine learning techniques can estimate the average cost of claims for various client segments.
  •  Sending real-time alerts, in turn, enables the implementation of predictive maintenance.
  •  Once an image has been uploaded, an extensive database of parts and prices can be created.

The drivers themselves gain control as they can carry out the process of registering the damage from A to Z:  take a photo, upload it to the insurer's platform and get an instant quote for the repair costs. From now on, they are no longer reliant on workshop quotes, which were often highly overestimated in line with the principle: "the insurer will pay anyway".

Fraud prevention

29 billion dollars in annual losses. That is how much auto insurers lose to fraud, as fraudsters try to scam companies out of insurance money based on illegally orchestrated events. How can this be prevented? The answer is AI.

 Analyzed data retrieved from cameras and sensors can reconstruct the details of a car accident with high precision. So, having an accident timeline generated by artificial intelligence facilitates accident investigation and claims management.

 CASE STUDY

 An advanced AI-based incident reconstruction has been tested lately on 200,000 vehicles as part of a collaboration between Israel's Project Nexar and a Japanese insurance company.

Assistance in the event of accidents

According to data from the OECD, car accident fatalities could be reduced by 44 percent if emergency medical services had access to real-time information about the injuries of involved parties.
Still, real-time assistance has great potential not only for public services but also in the context of auto insurance.

By leveraging AI, insurers can provide drivers with quick, semi-automated responses during collisions and accidents. For example, a chatbot can instruct the driver on how to behave, how to call for help, or how to help fellow passengers. All this is essential when it comes to saving lives, and at the same time it is a way of reducing the consequences of an accident.

Transparent decision making (client perspective)

New technologies offer solutions to many problems not only for insurers but also for clients. The latter often complain about discrimination and unfair, from their point of view, calculations of policies and compensation.

"Smart automated gatekeepers" are superior in multiple ways to the imperfect solutions of traditional models. This is because, based on a number of reliable parameters, they facilitate the creation of more authoritative and personalized pricing policies. Data-rich and automated risk and damage assessments pay off for consumers because they have decision-making power based on how their actions affect insurance coverage.

The opportunities and future of AI in car insurance

McKinsey's analysis says that across functions and use cases AI investments are worth $1.1 trillion in potential annual value for the insurance industry.

The direction of change is twofold: first, increasingly connected and software-equipped vehicles with more sensors; second, the evolving analytical skills of insurers. Data-driven vehicles will certainly make repair cost estimates, and consequently claims payments, more reliable and consistent in real time. And when it comes to planning offers and understanding the client, AI is an enabler of change for personalized, real-time service (24/7 virtual assistance) and for creating flexible policies. All signs indicate that "abstract" parameters such as education or earnings will cease to play a major role in this regard.

Tech impacting insurtech

As can be inferred from the diagram above, the greater the impact of a given technology on an insurance company's business, the longer the time required for its implementation. Therefore, it is vital to consider the future on a macro scale, by planning the strategy not for 2 years, but for 10.

 The decisions you make today have a bearing on improving operational efficiency, minimizing costs, and opening up to individual client needs, which are becoming more and more coupled with digital technologies.

written by
Grape up Expert
AI
Automotive

Predictive transport model and automotive. How can smart cities use data from connected vehicles?

There are many indications that the future lies in technology. Specifically, it belongs to connected and autonomous vehicles (CAVs), which, combined with 5G, AI, and machine learning, will form the backbone of the smart cities of the future. How will data from vehicles revolutionize city life as we've known it so far?

Data is "fuel" for the modern, smart cities

The UN estimates that  by 2050, about 68 percent of the global population will live in urban areas. This raises challenges that we are already trying to address as a society.

Technology will significantly support  connected, secure, and intelligent vehicle communication using vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), and vehicle-to-everything (V2X) protocols. All of this is intended to promote better management of city transport, fewer delays, and environmental protection.

This is not just empty talk, because  in the next few years 75% of all cars will be connected to the internet, generating massive amounts of data. One could even say that data will be a kind of "fuel" for  mobility in the modern city . It is worth tapping into this potential. There is much to suggest that cities and municipalities will find that such innovative traffic management, routing, and congestion reduction will translate into better accessibility and increased safety.  In this way, many potential crises related to overpopulation and excess of cars in agglomerations will be counteracted.

What can smart cities use connected car data for?

Traffic estimation and prediction systems

Data from connected cars, in conjunction with Internet of Things (IoT) sensor networks, will help forecast traffic volumes. Alerts about traffic congestion and road conditions can also be released based on this.

Parking and signalization management

High-performance computing and high-speed transmission platforms with industrial AI/5G/edge computing technologies help, among other things, to efficiently control traffic lights and identify parking spaces, reducing the time vehicles spend circling in search of a space and the fuel wasted in the process.

Responding to accidents and collisions

Real-time processed data can also be used  to save the lives and health of city traffic users. Based on  data from connected cars , accident detection systems can determine what action needs to be taken (repair, call an ambulance, block traffic). In addition, GPS coordinates can be sent immediately to emergency services, without delays or telephone miscommunication.

Such solutions are already being used by European warning systems, with the recently launched eCall system being one example. It works in vehicles across the EU and, in the case of a serious accident, will automatically connect to the nearest emergency network, allowing data (e.g. exact location, time of the accident, vehicle registration number, and direction of travel) to be transmitted, and then dial the free 112 emergency number. This enables the emergency services to assess the situation and take appropriate action. In case of eCall failure, a warning is displayed.

Reducing emissions

Less car traffic, or more sustainable traffic, means fewer harmful emissions into the atmosphere. Besides, data-driven simulations enable short- and long-term planning, which is essential for low-carbon strategies.

Improved throughput and reduced travel time

Research clearly shows that connected and automated vehicles add to the comfort of driving. The more such cars there are on the streets, the better the road capacity on highways.

As this happens, travel time also decreases, by roughly 17-20 percent. Less congestion means fewer minutes spent in traffic jams. This, of course, generates savings for drivers (lower fuel consumption) and for the environment (lower emissions).

Traffic management (case studies: Hangzhou and Tallinn)

Intelligent traffic management systems (ITS) today benefit from  AI . This is apparent in the Chinese city of Hangzhou, which prior to the technology-transportation revolution ranked fifth among the most congested cities in the Middle Kingdom.

Data from connected vehicles there helps to efficiently manage traffic and reduce congestion in the city's most vulnerable districts. It also notifies local authorities of traffic violations, such as running red lights. All this without investing in costly municipal infrastructure over a large area. Plus, built-in vehicle telematics requires no maintenance, which also reduces operating costs.

A similar model was developed in Estonia by the Tallinn Transport Authority in conjunction with the German software company PTV Group. A continuously updated map illustrates, among other things, the road network and traffic frequency in the city.

Predictive maintenance in public transport

Estimated downtime costs for high-utilization fleets, such as buses and trucks, range from $448 to $760 daily. Consider the problem of a single bus breaking down in a city: delays suddenly affect not just one line but many, creating chaos and stoppages.
Fortunately, with the trend of equipping more and more vehicles with telematics systems, predictive maintenance will be easier to implement. This will significantly increase the usability and safety of networked buses, while maintenance time and costs will drop.

Creating smart cities that are ahead of their time

"Connected vehicle data not only make smart cities much smarter, but when leveraged for real-time safety, emergency planning, and reducing congestion, it saves countless lives and enables a better, cleaner urban experience," said Ben Wolkow, CEO of Otonomo.

The  digitization of the automotive sector is accelerating the trend of smart and automated city traffic management.  A digital transport model can forecast and analyze the city’s mobility needs to improve urban planning.

written by
Adam Kozłowski
written by
Marcin Wiśniewski
Finance

How insurers can use telematics technology to improve customer experience and increase market competitiveness

As technology evolves in the car insurance and data-defined vehicle markets, the generational cross-section of people who use them is also undergoing transformation. Currently, two generations in particular are coming to the fore: Millennials and Gen Z. They are interested in service and product offerings that are as personalized and tailor-made as possible, rather than generic. Add to that the rapid development of vehicle connectivity, and now is the perfect time for insurers to roll out and scale up their telematics product offerings.

Based on research by Allison+Partners, consumers within the youngest generation view the car simply as yet another life-enhancing device. What's more, about 70% of Generation Z consumers don't have a driver's license, and 30% of this group have no intention or desire to get one. This makes them more interested in carpooling using an autonomous vehicle; more than 45 percent of respondents feel comfortable with this. Thus, "urban mobility" players such as Uber and Lyft, as well as e-scooter providers and on-demand car rental services, are in demand. All of these are capable of filling the gaps in public transport infrastructure.

There is another interesting point. Those who do decide to get a driver's license and buy a car on their own face the hassle of costly insurance. The amounts are especially exorbitant for the least experienced drivers, who in fact pay more for the year of birth stamped on their ID cards. There is even talk of an "age tax" phenomenon. Interestingly, young age does not always go hand in hand with traffic violations or dangerous driving behavior. Generations Y and Z are therefore advocating that car insurance should be reassessed, with more emphasis on personalization.

Telematics - technology that supports contemporary society's needs


Consumers today, particularly younger ones, expect cars to be as innovative as possible, as this directly translates into comfort and safety. Above all, personalized experiences, reliable connection, and comfort count. All this is a recipe for success in the  future of mobility as a service .

Individual online services should be consolidated into comprehensive mobility platforms. This will ensure that the user no longer has to switch between applications, and autonomous driving will generate new opportunities for innovative business models.

 Utilizing telematics data opens the door to improving customer experience, unlocking new revenue streams, and increasing market competitiveness.

 A modular and extensible telematics architecture provides tremendous opportunities for all pro-change agents.

This allows you to organize data handling, filter it and add missing information at various stages of processing. And it will not be an overstatement to say that in a technology-oriented information society, it is data that is the most valuable resource these days.

Such data can be used in all business processes, shaping the customer experience and responding to customer needs, especially those of the younger generations, namely Millennials and Gen Z.

Telematics platforms and the future of the automotive industry

The processed information is also used to train AI/ML models and to monitor system behavior. Since it is extracted and processed in real time, you gain the added value of rapid response, which in turn results in service satisfaction and sustained business performance.


On the other hand, driving data collected from various sources offer a full insight into what is actually happening on the road, how drivers behave, and what decisions they make while driving. Insurers benefit from this, but so do car manufacturers and companies that deal with  shared mobility in its broader sense.
All these advantages speak for themselves. And this is just the beginning, because the future of telematics looks very bright. The results of IoT Analytics research indicate that by 2025 the total number of IoT devices will exceed 27 billion globally. For comparison, there are currently 1.06 billion passenger cars on the roads around the world, and specialists predict that in just these few years the number of connected vehicles will grow by more than 400 million.

What will the end customer and the insurers themselves gain from applying telematics technology in automotive insurance?

Crash detection

The real-time nature of a telematics platform makes it possible to detect accidents instantly and take proactive action to mitigate the damage. Car location and sensor data can be used to trigger crash alerts, coordinate the dispatch of emergency services, and reconstruct the crash timeline.

Such solutions are already being implemented, for example at IBM. Its Telematics Hub enables the management of crash and accident data in real time and with a low probability of error. The tool can distinguish a false event from a real one, generate incident reports, and evaluate driver behavior.

Roadside assistance

According to Highways England, there are over 224,000 car breakdowns a year on England's busiest roads - an average of 25 cars per hour. In the U.S., meanwhile, there are 1.76 million calls for roadside assistance per year.

Roadside assistance is an optional add-on to drivers' personal car insurance. It's a popular service among drivers, but developing it further requires leveraging real-time data processing. Insurance providers that offer remote service or that minimize time spent on the side of the road are gaining a competitive advantage.

Monitoring vehicle activity makes it possible to pinpoint the vehicle's location in case of an emergency. Once the breakdown is reported, assistance can be dispatched to the customer's position, and the nearest available replacement vehicle can be booked.

UBI & BBI

Behavior-based insurance - BBI (pay-how-you-drive) - and usage-based insurance - UBI (pay-as-you-drive) - are the future of car insurance programs. Together with value-added services like automated crash detection or roadside assistance, they will determine the competitiveness and market share of insurance companies.
Handling large volumes of real-time data from every telematics device - connected cars, mobile apps, and black boxes - to extract crucial information and offer insights to customers requires a robust and scalable telematics platform.

Experts point out many advantages of UBI schemes over the conventional solutions offered so far. The most important of these are:

  •  Potential discounts.
  •  Accident investigations are easier for the authorities and insurance claims adjusters.
  •  Drivers become motivated to improve their performance and eliminate risky driving behaviors and unsafe habits.
  •  Enhancing customer loyalty.
  •  Providing personalized, value-added services to insurance plans to serve customer interests more effectively.

Stolen vehicle recovery

Targeted vehicle tracking and recovery technologies come in handy for insurance companies, which face the problem of paying out sizable amounts of compensation for stolen vehicles on a daily basis.

It's not true that younger generations are fickle and unwilling to take out insurance. Generation Z consumers and Millennials accounted for 39% of consumers buying auto insurance in 2018. This figure is increasing year on year and applies not only to compulsory insurance but also to additional plans. The problem is that in many cases theft insurance offers, if there are any, are based on statistical indicators - for instance, the rate of this type of crime in a given area - rather than actual data. Besides, customers are often dissatisfied with payouts based on market values that are lower than expected.

So here, too, data-driven individualization is needed, and that's what telematics provides.
For instance, by gathering data about customer behavior, insurers can build driver profiles that allow them to set up alerts that are triggered by unusual or suspicious behavior. Another thing is real-time vehicle tracking. The alarm service can be activated on-demand or automatically, and the car establishes a connection to the operations center. It is also possible to document theft. Information detected by the vehicle is collected and then exported and made available for viewing by the appropriate people.

Telematics technology is an answer to a need, but also a challenge

The Millennials and Generation Z expect a holistic customer experience. Digital offerings must bring together a variety of products designed to make life easier and accommodate each individual's consumer personality.

This is exactly the task facing telematics today, which is not just an incomprehensible and distant technology. It is essentially something that allows you to adapt to society's changing service and experience-related expectations.


However, the new expectations of shared mobility, autonomous vehicles, and  personalized data insurance offers are linked to new sacrifices that end customers must also be prepared to make. These include, for example, the need to share more and more data. Yet, the younger generations are already declaring their readiness.  According to the Majesco survey, almost half of generation Z are also willing to share data if they see value in doing so. Questions in the survey also referred to the car and driver data-based insurance industry.

There are also massive challenges for insurers themselves, where data processing is still only at an initial stage. The technological capabilities of individual insurance companies need to be continuously developed. Ideally, driving data, and the software used to collect and process it, should not be scattered but planned holistically. This ultimately leads to the conscious use of telematics and to better management of situations requiring insurance payouts.

Grape Up helps you realize the potential of telematics by applying automotive and insurance industry expertise to create scalable, cloud-native solutions.

written by
Grape up Expert
Automotive
AI

Mood-focused car enhancement - driving experience coupled with technological sense

Driving a car must evoke certain emotions and associations. Without them, a vehicle loses its soul and becomes a machine like any other, and it is extremely hard for it to win popularity in a market filled to the brim. For years, brands have been striving to build their individual character and stand out with features such as unique design, performance, safety, or high quality of workmanship. With the proliferation of digital technologies, there is now one more element in the OEMs' toolkit: mood-building. From now on, drivers themselves can decide how they want to feel at a given moment. It's time for mood-focused car enhancement. Digital technology will allow them to attain this state.

Up until now, remarkable driving sensations have typically been achieved by manufacturers through smooth driving, luxurious interior design or high-end sound systems. With modern technology, all of these elements can be combined into one  seamless, sensory experience . In the  vehicles of the future , the installed software will enable the creation of holistic experiences in which different senses are involved, and the driver's experience addresses sensations at both the functional level of the vehicle and the emotional level. Sound, color, scents, temperature, mood lighting, or tactile experiences (such as a massaging seat for the driver) can all create a one-of-a-kind experience that would distinguish the brand and offer the driver something that other manufacturers won't be able to give.

This suggests that sensor technology will become an important distinguishing mark in the user experience and will allow brands to more effectively influence purchase decisions and build consumer loyalty to a particular brand. According to PwC research, 86 percent of buyers are willing to pay more for a better customer experience.

Contextualizing the vehicle according to driving time, who is driving, or what mood they are in is already emerging as a trend set by major car brands.

Just as we approach the personalization of our own cell phones or computer accounts, we are already beginning to approach the personalization and contextualization of our own vehicles. As the implementations outlined below show, you can already see real-life examples of this today.
Manufacturers are using  cloud solutions and  AI not only to create a new vehicle functionality but also to induce us into a specific mood to make driving more enjoyable.

BMW My Modes use case

A whole new dimension of personalization and driving experience has recently been ventured into by the BMW brand. With its BMW iX model, it is promoting a solution called "My Modes". It features different colors and layouts of the infotainment system, with a curved display and digital cockpit. Depending on their mood, the user can change the color and sound theme (BMW IconicSounds Electric) in their vehicle.

Two popular modes are worth examining, namely Expressive and Relax. The former focuses on an active driving experience. Abstract patterns and vibrant colors stimulate action, inspire, and broaden thought paths. The experience is enhanced by interior audio that reflects the context of where you are at a given moment.

The Relax mode, as the name suggests, is designed to promote tranquility and well-being. The images displayed on the screens are inspired by nature and evoke associations of bliss and harmony. This is accompanied by discreet and serene sounds in the background.

  https://youtu.be/vg6B0FY3mc4?t=266

Ford Mindfulness concept: Attention (to) safety

Mindfulness. A keyword in automotive safety in the broadest sense, but also - increasingly - in vehicle design. Focusing attention on the present and on real needs is becoming the status quo. This approach to on-board technology helps create electrified and autonomous vehicles where the driver and passengers can travel safely and pleasantly, being present in the moment. This is being developed by the Ford brand with the  Mindfulness Concept Car.

According to Mark Higbie, senior advisor, Ford Motor Company, who helped introduce mindfulness into the Ford workplace:  A car by itself is not mindful. But how a car is used and the behaviors that it supports, can be. Ford’s goal with this concept is to create experiences that encourage greater awareness. With unique features and embedded technologies, Ford is providing drivers and passengers with new ways to be mindful while in a Ford vehicle, anywhere along the road of life.

Features perfectly suited to your needs

The Mindfulness Concept Car is a vehicle that helps reduce distractions and stress, enhance travelers' well-being, and improve in-cabin hygiene. The latter is especially important given the pandemic reality.

 Hygienic = safe
The pilot-activated Unlock Purge air conditioning system is geared to give the cabin a shot of clean, fresh air even before you enter the car. A more hygienic environment inside the car is also guaranteed by UV-C light diodes, which stop viruses and germs from multiplying.

Clean air is facilitated by a premium filter that removes almost all dust, odors, smog, allergens and bacteria-sized particles. It's an option specifically designed with allergy sufferers in mind.

 The car that takes care of your health
Modern Ford cars prioritize individual driver characteristics, including what's going on in the driver's body that could potentially affect travel safety.

The Mindfulness Concept Car uses data from external measuring devices. These take real-time physiological data from the driver. Feedback on selected health parameters is then displayed on an in-car screen.

Additionally, an electrically activated driver seat provides a stimulating impact on breathing and heart rate.

 Relaxing"here and now"
Ford's new addition allows you to fully indulge in an experience of tranquility and harmony. Mood lighting combined with temperature climate control provides specific moods inside the cockpit, such as refreshing dawn, relaxing blue sky and starry night.

Mindful driving guides are also provided in the new car concept. For instance, when the car is parked, the driver is instructed in yoga-based mini exercises that help relax the body and mind. The Powernap function, on the other hand, comes in handy during breaks on long journeys: a reclining seat, neck support and soothing sounds help drivers to fall asleep in a less stressful environment between travel points.

Speaking of relaxation, it's also interesting to note that the adaptive air conditioning provides calming cool air and simulates deep breathing. This happens especially after a dangerous incident, such as emergency braking (which is also supported by smart technology).

 Personalized premium audio
Ford's newly developed vehicle hosts a range of new technologies to improve the driving experience. This applies, for instance, to the loudspeakers, including the B&O headrest speakers and the overhead speakers. Together they provide the finest possible listening experience.

The B&O Beosonic™ equalizer enables you to select sound spaces to suit your mood, such as: "Energetic", "Relaxed", "Warm". The other playlists, in turn, are tailored to fit a specific situation and location. A troublesome traffic jam? The car itself will turn on the calming tunes.

Vehicles tailored to the users' context and mood

Improving the driver's mood in the car is slowly becoming an equally important factor as safety, functionality or economy. Vehicle interiors will therefore be increasingly adapted to individual desires and moods (human context), but also to occasions and situations (driving environment context).

OEMs are already aware of this, and major automotive giants today are testing solutions that allude to an almost spa-like salon experience. This is no coincidence. Predictions from the consulting firm Walker say that customer experience will overtake product superiority and price, which so far have been the key differentiators between companies. Emotions and experience therefore have a direct impact on purchasing decisions and brand loyalty. This can be summed up by the phrase: through the senses to the mind.

written by
Adam Kozłowski
written by
Marcin Wiśniewski
Automotive

Big picture of Spring Cloud Gateway

In my career, I had the opportunity to work in a team that maintains an API gateway system. Its development began more than 15 years ago, which is quite a long time considering the rate at which technology changes. The system was updated to Java 8, ran on a lightweight server - Apache Tomcat - and contained various tests: integration, performance, end-to-end, and unit tests. Although the gateway was maintained with diligence, its core contains a lot of hand-written request-processing logic - routing, modifying headers, converting request payloads - which nowadays can be delivered by a framework like Spring Cloud Gateway. In this article, I am going to show the benefits of this framework.

The major benefits, which are delivered by Spring Cloud Gateway:

  • support for the reactive programming model: reactive HTTP endpoints, reactive WebSockets
  • configuring request processing (routes, filters, predicates) in Java code or in YAML
  • dynamic reloading of configuration without restarting the server (integration with Spring Cloud Config Server)
  • support for SSL
  • Actuator API
  • integration with a service discovery mechanism
  • load-balancing mechanisms
  • rate-limiting (throttling) mechanisms
  • circuit breaker mechanism
  • integration with OAuth2 to provide security features

The above-mentioned features have an enormous impact on the speed and ease of creating an API gateway system. In this article, I am going to describe a couple of them.

Because the software world is a world of practice and systems cannot work only in theory, I decided to create a lab environment to prove the practical value of the Spring Cloud Gateway ecosystem. The architecture of the lab environment is shown below:

Building elements of Spring Cloud Gateway

The first feature of Spring Cloud Gateway I am going to describe is the configuration of request processing. It can be considered the heart of the gateway and one of its major responsibilities. As I mentioned earlier, this logic can be defined in Java code or in YAML files; I include an example configuration in both forms below. The basic building blocks used to create processing logic are:

  • Predicates – match requests based on their features (path, hostname, headers, cookies, query parameters)
  • Filters – process and modify requests in a variety of ways. They can be divided depending on their purpose:
  • gateway filters - modify the incoming HTTP request or outgoing HTTP response
  • global filters - special filters applied to all routes as long as certain conditions are fulfilled

Details about the different implementations of gateway building components can be found in the docs: https://cloud.spring.io/spring-cloud-gateway/reference/html/ .

Example of configuring a route with the Java DSL:
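
A minimal sketch of such a route definition, assuming the lab's PeopleOps microservice is reachable at http://peopleops:8080 (the identifiers are illustrative and may differ from the GitHub repository):

```java
import org.springframework.cloud.gateway.route.RouteLocator;
import org.springframework.cloud.gateway.route.builder.RouteLocatorBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class GatewayRoutesConfig {

    // Routes every request matching /employees/** to the PeopleOps microservice
    // and adds a header so downstream services know the call came through the gateway.
    @Bean
    public RouteLocator peopleOpsRoute(RouteLocatorBuilder builder) {
        return builder.routes()
                .route("peopleops-employees", r -> r
                        .path("/employees/**")
                        .filters(f -> f.addRequestHeader("X-Gateway", "spring-cloud-gateway"))
                        .uri("http://peopleops:8080"))
                .build();
    }
}
```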

Configuration of the same route with YAML:
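
As a sketch, the equivalent configuration expressed in YAML (again with illustrative identifiers that may differ from the files stored in the Config Server repository):

```yaml
spring:
  cloud:
    gateway:
      routes:
        - id: peopleops-employees
          uri: http://peopleops:8080
          predicates:
            - Path=/employees/**
          filters:
            - AddRequestHeader=X-Gateway, spring-cloud-gateway
```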

Spring Cloud Config Server integrated with Gateway

Someone might not be a huge fan of YAML, but using it here has a big advantage: configuration files can be stored in Spring Cloud Config Server, and once the configuration changes, it can be reloaded dynamically. To trigger this process, we need to call an Actuator API endpoint.
The picture below shows dynamic reloading of the gateway configuration. The first four steps show request processing consistent with the current configuration. The gateway passes requests from the client to the “employees/v1” endpoint of the PeopleOps microservice (step 2), and then passes the response from the PeopleOps microservice back to the client app (step 4). The next step is updating the configuration. Since the Config Server uses a git repository to store configuration, updating means committing recent changes made to the application.yaml file (step 5 in the picture). After pushing the new commits to the repo, it is necessary to send a request to the proper Actuator refresh endpoint (step 6). These two steps are enough for client requests to be passed to the new endpoint in the PeopleOps microservice (steps 7, 8, 9, 10).

Reactive web flow in the API Gateway

As the documentation says, Spring Cloud Gateway is built on top of Spring WebFlux. Reactive programming is gaining popularity among Java developers, so Spring Cloud Gateway lets you create fully reactive applications. In my lab, I created a controller in the Marketing microservice that generates article data repetitively, every 4 seconds. The browser observes this stream of responses. The picture below shows that 6 chunks of data were received in 24 seconds.

I will not dive deeply into the reactive programming style here - there are a lot of articles about the benefits of and differences between reactive and other programming styles. I just include the implementation of a simple reactive endpoint in the Marketing microservice below. It is accessible on GitHub too: https://github.com/chrrono/Spring-Cloud-Gateway-lab/blob/master/Marketing/src/main/java/com/grapeup/reactive/marketing/MarketingApplication.java
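
A minimal sketch of what such an endpoint can look like; it is simplified relative to the MarketingApplication class in the linked repository, and the path and payload are illustrative:

```java
import java.time.Duration;
import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Flux;

@RestController
public class ArticleController {

    // Streams one article entry every 4 seconds as server-sent events,
    // so the browser can observe the chunks arriving over time.
    @GetMapping(value = "/articles", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    public Flux<String> articles() {
        return Flux.interval(Duration.ofSeconds(4))
                .map(i -> "Article " + i + " generated by the Marketing microservice");
    }
}
```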

Rate limiting possibilities of Gateway

The next feature of Spring Cloud Gateway is the implementation of rate-limiting (throttling) mechanisms. This mechanism was designed to protect gateways from harmful traffic. One example might be a distributed denial-of-service (DDoS) attack, which consists of creating an enormous number of requests per second that the system cannot handle.
The filtering of requests may be based on the user principal, special header fields, or other rules. In production environments there are usually several gateway instances up and running, but this is not an obstacle for the Spring Cloud Gateway framework, because it uses Redis to store information about the number of requests per key. All instances are connected to one Redis instance, so throttling works correctly in a multi-instance environment.
To prove the advantages of this functionality, I configured rate limiting in the gateway in the lab environment and created an end-to-end test, which is illustrated in the picture below.

The parameters configured for throttling are as follows: DefaultReplenishRate = 4, DefaultBurstCapacity = 8. This means the gateway allows 4 transactions (requests) per second (TPS) for a given key. The key in my example is the value of the “Host” header field, which means that the first and second clients each have a separate limit of 4 TPS. If the limit is exceeded, the gateway replies with an HTTP 429 response. Because of that, all requests from the first client are passed to the Production service, but for the second client only half of the requests are passed on by the gateway, and the other half are answered immediately with the 429 HTTP code.
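
As a sketch, the same throttling setup can be expressed in the Java DSL roughly as follows; it assumes the reactive Redis starter is on the classpath, and the route and service names ("production", lb://production) are illustrative:

```java
import org.springframework.cloud.gateway.filter.ratelimit.KeyResolver;
import org.springframework.cloud.gateway.filter.ratelimit.RedisRateLimiter;
import org.springframework.cloud.gateway.route.RouteLocator;
import org.springframework.cloud.gateway.route.builder.RouteLocatorBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import reactor.core.publisher.Mono;

@Configuration
public class RateLimitingConfig {

    // Resolves the throttling key from the "Host" header, so each client host
    // gets its own token bucket.
    @Bean
    public KeyResolver hostHeaderKeyResolver() {
        return exchange -> Mono.justOrEmpty(
                exchange.getRequest().getHeaders().getFirst("Host"));
    }

    // replenishRate = 4 tokens per second, burstCapacity = 8 tokens.
    @Bean
    public RedisRateLimiter redisRateLimiter() {
        return new RedisRateLimiter(4, 8);
    }

    // Applies the Redis-backed rate limiter to the route leading to the Production service.
    @Bean
    public RouteLocator throttledRoutes(RouteLocatorBuilder builder,
                                        RedisRateLimiter redisRateLimiter,
                                        KeyResolver hostHeaderKeyResolver) {
        return builder.routes()
                .route("production", r -> r
                        .path("/production/**")
                        .filters(f -> f.requestRateLimiter(c -> c
                                .setRateLimiter(redisRateLimiter)
                                .setKeyResolver(hostHeaderKeyResolver)))
                        .uri("lb://production"))
                .build();
    }
}
```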

If you are interested in how I test it using REST Assured, JUnit, and ExecutorService in Java, the test is accessible here: https://github.com/chrrono/Spring-Cloud-Gateway-lab/blob/master/Gateway/src/test/java/com/grapeup/gateway/demo/GatewayApplicationTests.java

Service Discovery

The next integration subject concerns the service discovery mechanism. Service discovery is essentially a service registry: a starting microservice registers itself with the discovery service, and other applications can use its entry to find and communicate with that microservice. Integrating Spring Cloud Gateway with Eureka service discovery is simple. Without creating any request-processing configuration, requests can be passed from the gateway to a specific microservice and its concrete endpoint.

The picture below shows all the applications from my lab architecture registered during the practical test of Spring Cloud Gateway. The “Production” microservice has one entry for two instances. This is the configuration that enables load balancing by the gateway.
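
As a sketch, the gateway-side settings that enable this discovery-based routing typically look like the snippet below; the Eureka address and the lower-case-service-id flag are assumptions and may differ from the lab's docker-compose setup:

```yaml
spring:
  cloud:
    gateway:
      discovery:
        locator:
          enabled: true              # create a route for every service registered in Eureka
          lower-case-service-id: true
eureka:
  client:
    service-url:
      defaultZone: http://eureka-server:8761/eureka
```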

Circuit Breaker mention

The circuit breaker is a pattern used in case of failures related to a specific microservice. All we need to do is define Spring Cloud Gateway fallback procedures: once the connection breaks down, the request is forwarded to a fallback route. The circuit breaker offers more possibilities - for example, special actions in case of network delays - and it can all be configured in the gateway.
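
A minimal sketch of such a fallback configuration in the Java DSL, assuming a Resilience4J-based circuit breaker starter on the classpath and illustrative route and endpoint names:

```java
import org.springframework.cloud.gateway.route.RouteLocator;
import org.springframework.cloud.gateway.route.builder.RouteLocatorBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@Configuration
public class CircuitBreakerRouteConfig {

    // Wraps the route in a circuit breaker; when the downstream call fails
    // or the circuit is open, the request is forwarded to the local /fallback endpoint.
    @Bean
    public RouteLocator productionRouteWithFallback(RouteLocatorBuilder builder) {
        return builder.routes()
                .route("production-circuit-breaker", r -> r
                        .path("/production/**")
                        .filters(f -> f.circuitBreaker(c -> c
                                .setName("productionCircuitBreaker")
                                .setFallbackUri("forward:/fallback")))
                        .uri("lb://production"))
                .build();
    }
}

@RestController
class FallbackController {

    // Simple response returned to the client instead of an error from the broken service.
    @GetMapping("/fallback")
    public String fallback() {
        return "The Production service is temporarily unavailable, please try again later.";
    }
}
```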

Experiment on your own

I encourage you to conduct your own tests or to develop the system I built in your own direction. Below are two links to GitHub repositories:

  1. https://github.com/chrrono/config-for-Config-server (repository that keeps the configuration for Spring Cloud Config Server)
  2. https://github.com/chrrono/Spring-Cloud-Gateway-lab (all microservices code and docker-compose configuration)

To establish a local environment in a straightforward way, I created a docker-compose configuration. This is a link for the docker-compose.yml file: https://github.com/chrrono/Spring-Cloud-Gateway-lab/blob/master/docker-compose.yml

All you need to do is install Docker on your machine. I used Docker Desktop on my Windows machine. After executing the “docker-compose up” command in the proper location, you should see all servers up and running:

To conduct some tests I use the Postman application, Google Chrome, and my end-to-end tests (REST Assured, JUnit, and ExecutorService in Java). Do whatever you want with this code and allow yourself only one limitation: your imagination 😊

Summary

Spring Cloud Gateway is undoubtedly a huge topic. In this article, I focused on showing some basic building components, the overall intention of the gateway, and its interaction with other Spring Cloud services. I hope readers appreciate the possibilities the framework offers. If you are interested in exploring Spring Cloud Gateway on your own, I added links to a repo that can be used as a template project for your explorations.

written by
Łukasz Chronowski
Finance

How to choose a technological partner to develop data-driven innovation

Insurance companies, especially those focused on life and car insurance, are placing more and more emphasis in their offers on big data analytics and propositions based on driving behavior. We should expect that this trend will only gain ground in the future. And this raises further questions. For instance, what should be taken into account when choosing a technological partner for insurance-technology-vehicle cooperation?

Challenges in selecting a technology partner

The potential of  telematics insurance programs encourages auto insurers to move from traditional car insurance and build a competitive advantage on collected data.

No wonder technology partners are sought to support and develop increasingly innovative projects. Such synergistic collaboration brings tangible benefits to both parties.

As we explained in the article How to enable data-driven innovation for mobility insurance, the right technology partner will ensure:

  •  data security;
  •  cloud and IoT technology selection;
  •  the reliability and scalability of the proposed solutions.

Finding such a partner, on the other hand, is not easy, because it must be a company that efficiently navigates as many as three areas: AI/cloud technology, automotive, and insurance. You need a team of specialists who operate naturally in the software-defined vehicle ecosystem, and who are familiar with the characteristics of the P&C insurance market and the challenges faced by insurance clients.

Aim for the cloud. The relevance of AI and data collection and analytics technologies

Information is the most important asset of the 21st century; the global data collection market in 2021 was valued at $1.66 billion. No service based on the Internet of Things and AI could operate without a space to collect and analyze data. Therefore, the ideal insurance industry partner must deliver proprietary, field-tested, and dependable cloud solutions. Cloud services offered these days by insurance partners include:

  •  cloud R&D,
  •  cloud and edge computing,
  •  system integration,
  •  software engineering,
  •  cloud platforms development.

Connectivity between the edge device and the cloud must be stable and fast. Mobility devices often operate in limited connectivity conditions; therefore, car insurance businesses should leverage multiple methods to ensure an uninterrupted connection. Dynamic switching between cellular, satellite, and Wi-Fi communications, combined with globally distributed cloud infrastructure, results in reliable transmission and low latency.

A secure cloud platform is capable of handling an increasing number of connected devices and providing them all with the required APIs while maintaining high observability.

As a result, the data collected is precise, valid, and reliable. It provides full insight into what is happening on the road, allowing you to better develop insurance quotes. No smart data-driven automation is possible without it.

Data quality, on the other hand, also depends on the technologies implemented inside the vehicle (which we will discuss further below) and on all intermediate devices, such as the smartphone. The capabilities of a potential technology partner must therefore reach far beyond basic IT skills and the most common technologies.

Telemetry data collection

Obviously, data acquisition and collection are not enough, because information about what is happening on the road and about the usage and operation of components is, in itself, just a "record on paper". To make such a project a reality, you still need to implement advanced analytical tools and telematics solutions.
Real-time data streaming from telematics devices, mobile apps, and connected car systems gives access to driving data, driver behavior analysis, and car status. It enables companies to provide insurance policies based on customer driving habits.

Distributed AI

AI models are an integral part of modern vehicles. They predict front and rear collisions, control suspension damping based on the road ahead, and recognize road signs or lanes. Modern infotainment applications suggest routes and settings depending on driver behavior and driving conditions.

Empowering the automotive industry to build software-defined vehicles. Automotive aspect

 Today it is necessary to take into consideration a strategy towards modern, software-defined vehicles. According to Daimler AG, this can be expressed by the letters “CASE”:

  •  Connected
  •  Autonomous
  •  Shared
  •  Electric

This idea means the major focus is going to be on making cars seamlessly connected to the cloud and supporting advancements in autonomous driving based on electric power.
Digitalization and the evolution of computer hardware have caused a natural evolution of the vehicle. New SoCs (Systems on a Chip - integrated chips containing the CPU, memory, and peripherals) are multipurpose and powerful enough to handle not just a single task but multiple tasks simultaneously. It would not be an exaggeration to say that the cars of the future are smart spaces that combine external solutions (e.g. cloud computing, 5G) with components that work internally (IoT sensors). Technology solution providers must therefore work in two directions, understanding the specifics of both these ecosystems. Today, they cannot be separated.

The partner must be able to operate at the intersection of cloud technologies, AI and telemetry data collection.  Ideally, they should know how these technologies can be practically used in the car. Such a service provider should also be aware of the so-called bottlenecks and potential discrepancies between the actual state and the results of the analysis. This knowledge comes from experience and implementation of complex software-defined vehicle projects.

Enabling data-driven innovation for mobility insurance. Insurance context

There are companies on the market that are banking on the innovative combination of automotive and automation. However, you have to separate the demand of OEMs and drivers from the demand of the insurance industry.

It's vital that the technology partner chosen by an insurance company is aware of this. This, naturally, involves experience supported by a portfolio of similar clients and specific industry know-how. The right partner will understand the insurer's expectations and correctly define their needs, combining them with the capabilities of a software-defined vehicle.
From an insurer's standpoint, the key solutions will be the following:

  •  Roadside assistance. For accurately determining the location of an emergency (this is important when establishing the details of an incident with conflicting stakeholders’ versions).
  •  Crash detection. To take proactive measures geared toward mitigating the consequences.
  •  UBI and BBI. The data gathered from mobile devices, plug-in dongles, or the vehicle's embedded onboard systems can be processed and used to build risk profiles and tailored policies based on customers’ driving styles and patterns.

Technology and safety combined

The future of technology-based insurance policies is just around the corner. Simplified roadside assistance, drive safety support, stolen vehicle identification, personalized driving feedback, or crash detection - all of these enhance service delivery, benefit customers, and increase profitability in the insurance industry.

Once again, it is worth highlighting that the real challenge, as well as opportunity, is  to choose a partner that can handle different, yet consistent, areas of expertise.

If you also want to develop data-driven innovation in your insurance company, contact Grape Up. Browse our portfolio of automotive & insurance projects.

written by
Grape up Expert
AI
Automotive

Parking is plain sailing... Provided that your car is equipped with automated valet parking

Among the many vehicle functions that intelligent software increasingly performs for us, parking is certainly the one that most of us would be most willing to leave to algorithms. While a ride on the highway can be seamless and a long road trip can be smooth, the moment the engine slows down and the search for a parking space begins is, for a significant number of drivers, a real test of skill. How about getting it automated? This would be beneficial not only for the driver but also for OEMs, who can use such technology in factories and when loading and unloading vehicles onto ships or trains. Automated Valet Parking developed in the BMW iX shows that this process has already started.

Parking difficulties are influenced not only by the dynamically changing circumstances of each parking operation and the large number of factors that must be monitored, but also by overloaded parking lots and the endless race against time. According to statistics, it is in parking lots that the highest number of collisions and accidents occur, and it is this element that drivers often point to as causing them the most trouble.

According to National Safety Council statistics, over 60,000 people are injured in parking lots every year. What is more, there are more than 50,000 crashes in parking lots and garages annually. Meanwhile, according to insurer Moneybarn, 60 percent of drivers find parallel parking stressful.

Leaving security in the hands of technology

It's no wonder that car companies around the world are looking for a foothold in exactly this part of automation, which could allow them to convince users to place their confidence in fully autonomous vehicles.

Increased safety - which can definitely be influenced by the introduction of such solutions - has always been at the forefront of ratings showing driver approval of SDV (software-defined vehicle) technology. With automatic parking, the driver additionally gains time savings, convenience, and reduced stress, because they do not have to waste energy on searching for a free spot, nor think about where they parked their vehicle. An algorithm and a system of networked sensors make the parking decisions for the driver. All the driver has to do is leave the car in a special drop-off/pick-up zone and confirm parking in the application. After shopping at the mall or a meeting, the user again confirms the vehicle pick-up in the app and proceeds to the zone where their vehicle is already parked.

This stress-free handover of the car into the trusted hands of a "digital butler" opens up new service opportunities also for OEMs and companies cooperating with the automotive industry. While the driver can go shopping or go to the movies in peace, the vehicle can be serviced during this time. Among the potential applications are services such as:

  •  automatic charging in the case of an electric vehicle;
  •  OTA-based software upgrade;
  •  vehicle washing and waxing
  •  changing summer/winter tires;
  •  minor repair work - such as replacing lights or wipers.

Let's take a look at two of the most impressive use cases in this area that have appeared on the market recently. The first one is the Automated Valet Parking project, implemented in partnership with top car manufacturers and technology providers, with BMW leading the way. The second one is the offer of Nvidia, which managed to start cooperation with Mercedes-Benz in this field.

BMW Autonomous Valet Parking

Futurists of the 20th century predicted that the next century would bring us an era of robots able to perform most daily human activities on their own, in an intelligent, autonomous, and efficient way. Although this vision was a gross exaggeration, today on the market there are solutions that can clearly be described as innovative or ahead of their time.

An example? BMW and their all-electric flagship SUV, BMW iX, which communicates with external infrastructure and parks 100 percent without the driver’s input. The owner of the vehicle simply steps out of the car, handing it over to the "technological guardian".

The data exchange here takes place on three tracks: the vehicle, the smartphone app, and the underground garage infrastructure (cameras + sensors). The driver activates the Autonomous Valet Parking (AVP) option in the application, thanks to which the vehicle is able to maneuver independently around the garage without the driver's participation. And all this with maximum safety, both in terms of collision avoidance and protection of expensive items inside the vehicle.

This project would be much harder without the modern 5G network equipment provided by Deutsche Telekom. Why a fifth-generation network? Because, compared to traditional WLAN solutions, it allows capabilities to be dynamically enabled, disabled, and updated through an API.

The flexible configuration and very low latency make it possible to shape the bandwidth and prioritize the vehicle's connectivity traffic, making the connection stable, fast, and reliable. This is one of the key requirements for any Connected Car system coupled with Autonomous Vehicle capabilities - if the connection is not reliable, latency is too high, or another device takes over the bandwidth, it may result in a jerky, stuttering ride, as the data from external sensors arrives late.

However, these are not all the surprises that the BMW Group has in store for its customers. In addition to parking, the driver can also benefit from other automated service functions such as washing or intelligent refueling. The solution is universal and can also be used by other OEMs.

  https://youtu.be/iz_yKaa8QgM

Nvidia cooperates with Mercedes-Benz

There are many indications that voice assistants will keep growing. For example, in 2020, in the U.S. alone, about 20 million people were expected to make purchases via smartphone using voice-activated features [statista.com]. This trend isn't sparing the automotive industry, either, with technology providers racing to create software that would revolutionize such cumbersome tasks as parking. One of the forerunners is the semiconductor giant Nvidia, which created the Nvidia Drive Concierge service. It's an artificial intelligence-based software assistant that - literally - gives the floor to the driver, but also lets technology come to the fore.

"Hey Nvidia!" What does this voice command remind you of? Most often it is associated with another conversational voice assistance system, namely Siri. You are on the right track, because NDC works on a similar principle. The driver gives a command, and the assistant is able to recognize a specific voice, assign it to the vehicle owner and respond.
By far the most interesting functionality is the ability to integrate the software with Nvidia Drive AV autonomous technology, or on-demand parking. This works in a very intuitive way. All you have to do is get out of the vehicle, activate the function and watch as the "four wheels" steer themselves towards a parking space. And they do it in a collision-free manner, regardless of whether it's parallel, perpendicular or angled parking. It will work the same way in the reverse direction. If you want to leave a parking space, you simply hail the car, it pulls up on its own and is ready to continue its journey.

Sounds like pure abstraction? It's already happening. Nvidia has teamed up with one of the world's leading OEMs, Mercedes-Benz. Starting in 2024, all next-generation Mercedes-Benz vehicles will be powered by Nvidia Drive AGX Orin technology, along with sensors and software. For the German company, automated parking services will therefore soon become commonplace.

This is what Jensen Huang, founder and CEO of Nvidia, said about the collaboration:  Together, we're going to revolutionize the car ownership experience, making the vehicle software programmable and continuously upgradable via over-the-air updates. Every future Mercedes-Benz with the Nvidia Drive system will come with a team of expert AI and software engineers continuously developing, refining and enhancing the car over its lifetime.

Automated Valet Parking: innovation at the cutting edge of technology

Vehicle automation and the resulting cooperation between OEMs and suppliers of new technologies is now entering new dimensions. This includes an area that many drivers associate with something very cumbersome and that often generates anxiety.

The integration of Nvidia Orin systems at Mercedes-Benz or the comprehensive AVP at BMW are prime examples of how new solutions at the intersection of  AI , IoT, and 5G are becoming, to some extent, guardians of safety and guarantors of comfort from start to finish. It's also a good springboard to talk about fully automated vehicles.

written by
Adam Kozłowski
written by
Marcin Wiśniewski
Software development

How to get effective computing services: AWS Lambda

In the modern world, we are constantly faced with the need not only to develop applications but also to provide and maintain an environment for them. Writing scalable, fault-tolerant, and responsive programs is hard, and on top of that, you're expected to know exactly how many servers, CPUs, and how much memory your code will need to run - especially when running in the cloud. In addition, developing cloud-native applications and microservice architectures makes our infrastructure more and more complicated.

So, how can you avoid worrying about the underlying infrastructure while deploying applications? How do you get computing services that are easy to use and manage? The answer lies in serverless applications, and AWS Lambda in particular.

What you will find in this article:

  1.  What is serverless and what can we use it for?
  2.  Introduction to AWS Lambda
  3.  The role of AWS Lambda in serverless applications
  4.  Coding and managing an AWS Lambda function
  5.  Some tips about working with AWS Lambda functions

 What is serverless?

Serverless computing is a cloud computing execution model in which the cloud provider allocates machine resources on-demand, taking care of the servers on behalf of their customers. Despite the name, it does not involve running code without servers, because code has to be executed somewhere eventually. The name “serverless computing” is used because the business or person that owns the system does not have to purchase, rent, or provision servers or virtual machines for the back-end code to run on. But with provided infrastructure and management you can focus on only writing code that serves your customers.

Software Engineers will not have to take care of operating system (OS) access control, OS patching, provisioning, right-sizing, scaling, and availability. By building your application on a serverless platform, the platform manages these responsibilities for you.

The main advantages of AWS Serverless tools are:

  •  No server management – You don’t have to provision or maintain any servers. There is no software or runtime to install or maintain.
  •  Flexible scaling – You can scale your application automatically.
  •  High availability – Serverless applications have built-in availability and fault tolerance.
  •  No idle capacity – You don't have to pay for idle capacity.
  •  Major languages are supported out of the box - AWS Serverless tools can be used to run Java, Node.js, Python, C#, Go, and even PowerShell.
  •  Out of the box security support
  •  Easy orchestration - applications can be built and updated quickly.
  •  Easy monitoring - you can write logs in your application and then import them to Log Management Tool.

Of course, using Serverless may also bring some drawbacks:

  •  Vendor lock-in - Your application is completely dependent on a third-party provider. You do not have full control of your application. Most likely, you cannot change your platform or provider without making significant changes to your application.
  •  Serverless (and microservice) architectures introduce additional overhead for function/microservice calls - There are no “local” operations; you cannot assume that two communicating functions are located on the same server.
  •  Debugging is more difficult - Debugging serverless functions is possible, but it's not a simple task, and it can eat up lots of time and resources.

Despite all the shortcomings, the serverless approach is constantly growing and becoming capable of more and more tasks. AWS takes care of more and more development and distribution of serverless services and applications. For example, AWS now provides not only Lambda functions (computing service), but also API Gateway (proxy), SNS (messaging service), SQS (queue service), EventBridge (event bus service), and DynamoDB (NoSQL database).

Moreover, the Serverless Framework makes it easy to build computing applications using AWS Lambda. It scaffolds the project structure and takes care of deploying functions, so you can get started with your Lambda extremely quickly.

Also, AWS provides a specific framework for building complex serverless applications - the Serverless Application Model (SAM). It is an abstraction that supports and combines different types of AWS tools - Lambda, DynamoDB, API Gateway, etc.

The biggest difference is that the Serverless Framework is written to deploy functions to different cloud providers, whereas SAM is an abstraction layer specifically for AWS, using not only Lambda but also DynamoDB for storage and API Gateway for creating a serverless HTTP endpoint. Another difference is that SAM Local allows you to run some services, including Lambda functions, locally.

 AWS Lambda concept

AWS Lambda is a Function-as-a-Service(FaaS) service from Amazon Web Services. It runs your code on a high-availability compute infrastructure and performs all of the administration of the compute resources, including server and operating system maintenance, capacity provisioning and automatic scaling, code monitoring, and logging.

AWS Lambda has the following conceptual elements:

  •  Function - A function is a resource that you can invoke to run your code in Lambda. A function has code to process the events that you pass into the function or that other AWS services send to the function. Also, you can add a qualifier to the function to specify a version or alias.
  •  Execution Environment - Lambda invokes your function in an execution environment, which provides a secure and isolated runtime environment. The execution environment manages the resources required to run your function. The execution environment also provides lifecycle support for the function's runtime. At a high level, each execution environment contains a dedicated copy of function code, Lambda layers selected for your function, the function runtime, and minimal Linux userland based on Amazon Linux.
  •  Deployment Package - You deploy your Lambda function code using a deployment package. AWS Lambda currently supports either a zip archive as a deployment package or a container image that is compatible with the Open Container Initiative (OCI) specification.
  •  Layer - A Lambda layer is a .zip file archive that contains libraries, a custom runtime, or other dependencies. You can use a layer to distribute a dependency to multiple functions. With Lambda layers, you can configure your Lambda function to import additional code without including it in your deployment package. This is especially useful if you have several AWS Lambda functions that use the same set of functions or libraries. For example, a layer can hold common code for logging, exception handling, and security checks; a Lambda function that needs that code is simply configured to use the layer. When a Lambda function runs, the contents of the layer are extracted into the /opt folder in the Lambda runtime environment. The layer need not be restricted to the language of the Lambda function. Layers also have some limitations: each Lambda function may have up to 5 layers configured, and the total unzipped size of the function and all its layers cannot exceed 250 MB.
  •  Runtime - The runtime provides a language-specific environment that runs in an execution environment. The runtime relays invocation events, context information, and responses between Lambda and the function. AWS offers an increasing number of Lambda runtimes, which allow you to write your code in different versions of several programming languages. At the time of writing, AWS Lambda natively supports Java, Go, PowerShell, Node.js, C#, Python, and Ruby. You can use runtimes that Lambda provides, or build your own.
  •  Extension - Lambda extensions enable you to augment your functions. For example, you can use extensions to integrate your functions with your preferred monitoring, observability, security, and governance tools.
  •  Event - An event is a JSON-formatted document that contains data for a Lambda function to process. The runtime converts the event to an object and passes it to your function code.
  •  Trigger - A trigger is a resource or configuration that invokes a Lambda function. This includes AWS services that you can configure to invoke a function, applications that you develop, or some event source.

So, what exactly is behind AWS Lambda?

From an infrastructure standpoint, every AWS Lambda is part of a container running Amazon Linux (referenced as Function Container). The code files and assets you create for your AWS Lambda are called Function Code Package and are stored on an S3 bucket managed by AWS. Whenever a Lambda function is triggered, the Function Code Package is downloaded from the S3 bucket to the Function container and installed on its Lambda runtime environment. This process can be easily scaled, and multiple calls for a specific Lambda function can be performed without any trouble by the AWS infrastructure.

The Lambda service is split into two planes. The control plane is the master component responsible for making global decisions about provisioning, maintaining, and distributing a workload. The second is the data plane, which controls the Invoke API that runs Lambda functions. When a Lambda function is invoked, the data plane allocates an execution environment to that function, or chooses an existing execution environment that has already been set up for that function, and then runs the function code in that environment.

Each function runs in one or more dedicated execution environments that are used for the lifetime of the function and then destroyed. Each execution environment hosts one concurrent invocation but is reused in place across multiple serial invocations of the same function. Execution environments run on hardware-virtualized lightweight virtual machines (microVMs). A microVM is dedicated to an AWS account but can be reused by execution environments across functions within an account. MicroVMs are packed onto an AWS-owned and managed hardware platform (Lambda Workers). Execution environments are never shared across functions, and microVMs are never shared across AWS accounts.

Even though Lambda execution environments are never reused across functions, a single execution environment can be reused for invoking the same function, potentially existing for hours before it is destroyed.

Each Lambda execution environment also includes a writeable file system, available at /tmp. This storage is not accessible to other execution environments. As with the process state, files written to /tmp remain for the lifetime of the execution environment.

 Cold start VS Warm start

When you call a Lambda function for the first time, or when no idle execution environment is available, Lambda has to create a new container, download your code into it, and initialize the runtime before executing the handler. This extra setup time is called a cold start. After the execution finishes, the container stays available for a few minutes before being terminated.

If you call the same function while that container is still available (it hasn't been terminated yet), AWS reuses it to execute your new call. This reuse of an active function container is called a warm start, and it increases the response speed of your Lambda.

Cold start VS Warm start

 Role of AWS Lambda in serverless applications

There are a lot of use cases you can use AWS Lambda for, but there are a few killer cases for which Lambda is best suited:

  1.  Operating a serverless back-end

The web frontend can send requests to Lambda functions via API Gateway HTTPS endpoints. Lambda can handle the application logic and persist data to a fully-managed database service (RDS for relational, or DynamoDB for a non-relational database).

  2.  Working with external services

If your application needs to request services from an external provider, there's generally no reason why the code for the site or the main application needs to handle the details of the request and the response. In fact, waiting for a response from an external source is one of the main causes of slowdowns in web-based services. If you hand requests for such things as credit authorization or inventory checks to an application running on AWS Lambda, your main program can continue with other elements of the transaction while it waits for a response from the Lambda function. This means that in many cases, a slow response from the provider will be hidden from your customers, since they will see the transaction proceeding, with the required data arriving and being processed before it closes.

  3.  Near-realtime notifications

Any type of notification, but particularly real-time ones, will find a use case with serverless Lambda. Once you create an SNS topic, you can set triggers that fire under certain policies. You can easily build a Lambda function to check log files from CloudTrail or CloudWatch. Lambda can search the logs for specific events or log entries as they occur and send out notifications via SNS. You can also easily implement custom notification hooks to Slack or another system by calling its API endpoint within Lambda.

  4.  Scheduled tasks and automated backups

Scheduled Lambda events are great for housekeeping within AWS accounts. Creating backups, checking for idle resources, generating reports, and other tasks which frequently occur can be implemented using AWS Lambda.

  5.  Bulk real-time data processing

There are some cases when your application may need to handle large volumes of streaming input data, and moving that data to temporary storage for later processing may not be an adequate solution. If you send the data stream to an AWS Lambda application designed to quickly pull and process the required information, you can handle the necessary real-time tasks.

  6.  Processing uploaded S3 objects

By using S3 object event notifications, you can immediately start processing your files with Lambda once they land in S3 buckets. Image thumbnail generation with AWS Lambda is a great example of this use case: the solution is cost-effective, and you don't need to worry about scaling up, as Lambda will handle any load.

 AWS Lambda limitations

AWS Lambda is not a silver bullet for every use case. For example, it should not be used for anything that you need to control or manage at the infrastructure level, nor should it be used for a large monolithic application or suite of applications.

Lambda comes with a number of “limitations”, which are good to keep in mind when architecting a solution.

There are some “hard limits” for the runtime environment: the ephemeral disk space (/tmp) is limited to 512 MB, memory can vary from 128 MB to 3 GB, and the execution timeout for a function is 15 minutes. Package constraints like the size of the unzipped deployment package (250 MB) and the number of file descriptors (1024) are also defined as hard limits.

Similarly, there are “limits” on the requests served by Lambda: the request and response body payload for a synchronous invocation can be a maximum of 6 MB, while an asynchronous invocation payload can be up to 256 KB. At the moment, the only soft limit, which you can request to be increased, is the number of concurrent executions: a safety feature that prevents any accidental recursive or infinite loops from going wild in the code by throttling the number of parallel executions.

All these limitations come from defined architectural principles for the Lambda service:

  •  If your Lambda function needs to run for hours, the workload should be moved to EC2 rather than Lambda.
  •  If the deployment package jar is greater than 50 MB in size, it should be broken down into multiple packages and functions.
  •  If the request payloads exceed the limits, you should break them up into multiple request endpoints.

It all comes down to preventing deploying monolithic applications as Lambda functions and designing stateless microservices as a collection of functions instead. Having this mindset, the “limitations” make complete sense.

 AWS Lambda examples

Let’s now take a look at some AWS Lambda examples. We will start with a simple Java application and show how to create, deploy, and trigger an AWS Lambda function. We will use the AWS Command Line Interface (AWS CLI) to manage functions and other AWS Lambda resources.

 Basic application

Let’s get started by creating the Lambda function and the roles needed for Lambda execution.
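An execution role can be created with the AWS CLI; a minimal sketch might look like this (the role name lambda-ex is just a placeholder):

aws iam create-role \
  --role-name lambda-ex \
  --assume-role-policy-document file://trust-policy.json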

This trust policy allows Lambda to use the role's permissions by giving the service principal lambda.amazonaws.com permission to call the AWS Security Token Service AssumeRole action. The content of  trust-policy.json is the following:
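(A minimal, typical version of such a policy is shown here; your account details may differ.)

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "lambda.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}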

Then let’s attach some permissions to the created role. To add permissions to the role, use the attach-role-policy command. Start by adding the AWSLambdaBasicExecutionRole managed policy.
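For example (assuming the lambda-ex role created above):

aws iam attach-role-policy \
  --role-name lambda-ex \
  --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole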

 Function code

As an example, we will create a Java 11 application using Maven.

For Java, AWS Lambda provides the following libraries:

  •  com.amazonaws:aws-lambda-java-core – Defines handler method interfaces and the context object that the runtime passes to the handler. This is a required library.
  •  com.amazonaws:aws-lambda-java-events – Different input types for events from services that invoke Lambda functions.
  •  com.amazonaws:aws-lambda-java-log4j2 – An appender library for Apache Log4j 2 that you can use to add the request ID for the current invocation to your function logs.

Let’s add the Java core library to the Maven application:
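(The exact version may differ; 1.2.1 is used here only as an example.)

<dependency>
  <groupId>com.amazonaws</groupId>
  <artifactId>aws-lambda-java-core</artifactId>
  <version>1.2.1</version>
</dependency>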

Then we need to add a handler class, which will be the entry point for our function. For a Java function, this handler class should implement the com.amazonaws.services.lambda.runtime.RequestHandler interface. It’s also possible to set generic input and output types.
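A minimal sketch of such a handler (the package and class names are just examples):

package example;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

import java.util.Map;

// Echoes a greeting built from the incoming event map.
public class Handler implements RequestHandler<Map<String, String>, String> {

    @Override
    public String handleRequest(Map<String, String> event, Context context) {
        context.getLogger().log("Received event: " + event);
        String name = event.getOrDefault("name", "world");
        return "Hello, " + name + "!";
    }
}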

Now let’s create a deployment package from the source code. For Lambda, the deployment package should be either a .zip or a .jar file. To build a jar file with all dependencies, let’s use the maven-shade-plugin.
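A typical configuration binds the shade goal to the package phase (the plugin version is an example):

<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-shade-plugin</artifactId>
      <version>3.2.4</version>
      <executions>
        <execution>
          <phase>package</phase>
          <goals>
            <goal>shade</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>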

After running the mvn package command, the resulting jar will be placed into the target folder. You can take this jar file and zip it.

Now let’s create a Lambda function from the generated deployment package.
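A sketch of the command (names, role ARN, and file paths are placeholders; the shaded jar is itself a valid zip archive, so it can be passed directly):

aws lambda create-function \
  --function-name basic-function \
  --runtime java11 \
  --handler example.Handler::handleRequest \
  --role arn:aws:iam::<account-id>:role/lambda-ex \
  --zip-file fileb://target/basic-function-1.0.jar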

Once the Lambda function is deployed, we can test it. For that, let’s use the invoke command.
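For example (with AWS CLI v2, the --cli-binary-format flag is needed to pass a raw JSON payload):

aws lambda invoke \
  --function-name basic-function \
  --cli-binary-format raw-in-base64-out \
  --payload '{"name": "Grape Up"}' \
  out.json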

Here, out.json is the file where the invocation response will be saved. After invoking the Lambda, you should see the function’s result in out.json.

 More complicated example

Now let’s take a look at a more complicated application that will show the integration between several AWS services. Also, we will show how Lambda Layers can be used in function code. Let’s create an application with API Gateway as a proxy, two Lambda functions as some back-end logic, and DynamoDB as data storage. One Lambda will be intended to save a new record into the database. The second Lambda will be used to retrieve an object from the database by its identifier.

Let’s start by creating a table in DynamoDB. For simplicity, we’ll add just a couple of fields to that table.
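A minimal sketch with a single string key (the table and attribute names are assumptions used throughout this example):

aws dynamodb create-table \
  --table-name Products \
  --attribute-definitions AttributeName=id,AttributeType=S \
  --key-schema AttributeName=id,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST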

Now let’s create a Java module where the database-access logic will live. Dependencies on the AWS DynamoDB SDK should be added to the module.
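For the AWS SDK for Java v1, that is the aws-java-sdk-dynamodb artifact (the version below is an example):

<dependency>
  <groupId>com.amazonaws</groupId>
  <artifactId>aws-java-sdk-dynamodb</artifactId>
  <version>1.12.261</version>
</dependency>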

Now let’s add common classes and models to work with the database. This code will be reused in both lambdas.

Model entity object:
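(A minimal sketch; the field names match the Products table assumed above.)

package example.model;

// Simple model object stored in the Products table.
public class Product {

    private String id;
    private String name;
    private String price;

    public Product() {
    }

    public Product(String id, String name, String price) {
        this.id = id;
        this.name = name;
        this.price = price;
    }

    public String getId() { return id; }
    public void setId(String id) { this.id = id; }

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public String getPrice() { return price; }
    public void setPrice(String price) { this.price = price; }
}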

Adapter class for the DynamoDB client:
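(A minimal sketch that simply builds and exposes a single client instance.)

package example.db;

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;

// Thin adapter so that both functions share one way of creating the DynamoDB client.
public final class DynamoDbAdapter {

    private static final AmazonDynamoDB CLIENT = AmazonDynamoDBClientBuilder.standard().build();

    private DynamoDbAdapter() {
    }

    public static AmazonDynamoDB client() {
        return CLIENT;
    }
}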

Implementation of a DAO providing the needed persistence operations:
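(A sketch using the low-level DynamoDB API; the table name is an assumption.)

package example.db;

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.dynamodbv2.model.GetItemRequest;

import example.model.Product;

import java.util.HashMap;
import java.util.Map;

// DAO with the two persistence operations our Lambdas need: save and find by id.
public class ProductDao {

    private static final String TABLE_NAME = "Products";

    private final AmazonDynamoDB client;

    public ProductDao(AmazonDynamoDB client) {
        this.client = client;
    }

    public void save(Product product) {
        Map<String, AttributeValue> item = new HashMap<>();
        item.put("id", new AttributeValue(product.getId()));
        item.put("name", new AttributeValue(product.getName()));
        item.put("price", new AttributeValue(product.getPrice()));
        client.putItem(TABLE_NAME, item);
    }

    public Product findById(String id) {
        Map<String, AttributeValue> key = new HashMap<>();
        key.put("id", new AttributeValue(id));
        Map<String, AttributeValue> item = client.getItem(
                new GetItemRequest().withTableName(TABLE_NAME).withKey(key)).getItem();
        if (item == null) {
            return null;
        }
        return new Product(item.get("id").getS(), item.get("name").getS(), item.get("price").getS());
    }
}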

Now let’s build this module and package it into a jar with dependencies. From that jar, a reusable Lambda layer will be created. Compress the fat jar as a zip archive and publish it to S3. After doing that, we will be able to create a Lambda layer.
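A sketch of those steps (bucket and layer names are placeholders; for Java, Lambda expects layer jars under the java/lib directory inside the zip):

mkdir -p layer/java/lib
cp target/product-dao-1.0.jar layer/java/lib/
(cd layer && zip -r ../product-dao-layer.zip java)

aws s3 cp product-dao-layer.zip s3://<your-bucket>/product-dao-layer.zip

aws lambda publish-layer-version \
  --layer-name product-dao-layer \
  --content S3Bucket=<your-bucket>,S3Key=product-dao-layer.zip \
  --compatible-runtimes java11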

Layer usage permissions are managed on the resource. To configure a Lambda function with a layer, you need permission to call  GetLayerVersion on the layer version. For functions in your account, you can get this permission from your user policy or from the function's resource-based policy. To use a layer in another account, you need permission on your user policy, and the owner of the other account must grant your account permission with a resource-based policy.

 Function code

Now let’s add this shared dependency to both Lambda functions. To do that, we need to define a provided-scope dependency in pom.xml:
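(The groupId and artifactId below refer to the hypothetical shared module built in the previous step.)

<dependency>
  <groupId>example</groupId>
  <artifactId>product-dao</artifactId>
  <version>1.0</version>
  <scope>provided</scope>
</dependency>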

After that, we can write Lambda handlers. The first one will be used to persist new objects into the database:
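(A sketch assuming an API Gateway proxy integration and the aws-lambda-java-events and jackson-databind dependencies; class and function names are examples.)

package example.handler;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyRequestEvent;
import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyResponseEvent;
import com.fasterxml.jackson.databind.ObjectMapper;

import example.db.DynamoDbAdapter;
import example.db.ProductDao;
import example.model.Product;

// Persists the product sent in the request body. The DAO and the JSON mapper are created
// outside the handler method so that warm invocations can reuse them (see the note below).
public class SaveProductHandler
        implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {

    private static final ProductDao DAO = new ProductDao(DynamoDbAdapter.client());
    private static final ObjectMapper MAPPER = new ObjectMapper();

    @Override
    public APIGatewayProxyResponseEvent handleRequest(APIGatewayProxyRequestEvent request, Context context) {
        try {
            Product product = MAPPER.readValue(request.getBody(), Product.class);
            DAO.save(product);
            return new APIGatewayProxyResponseEvent()
                    .withStatusCode(200)
                    .withBody(MAPPER.writeValueAsString(product));
        } catch (Exception e) {
            context.getLogger().log("Failed to save product: " + e.getMessage());
            return new APIGatewayProxyResponseEvent().withStatusCode(500);
        }
    }
}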

 NOTE: in the case of subsequent calls, AWS may reuse an existing Lambda instance instead of creating a new one. This offers performance advantages to both parties: Lambda gets to skip the container and language initialization, and you get to skip initialization in your code. That’s why it’s recommended not to put the creation and initialization of potentially reusable objects (SDK clients, database connections, mappers) into the handler body, but to move them to code that is executed only once, at the initialization step.

In the second Lambda function, we will extract the object identifier from the request parameters and fetch the record from the database by id:
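(Same assumptions as above; the identifier is read from the id query parameter.)

package example.handler;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyRequestEvent;
import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyResponseEvent;
import com.fasterxml.jackson.databind.ObjectMapper;

import example.db.DynamoDbAdapter;
import example.db.ProductDao;
import example.model.Product;

// Reads the "id" query parameter and returns the matching record, or 404 if it does not exist.
public class GetProductByIdHandler
        implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {

    private static final ProductDao DAO = new ProductDao(DynamoDbAdapter.client());
    private static final ObjectMapper MAPPER = new ObjectMapper();

    @Override
    public APIGatewayProxyResponseEvent handleRequest(APIGatewayProxyRequestEvent request, Context context) {
        try {
            String id = request.getQueryStringParameters().get("id");
            Product product = DAO.findById(id);
            if (product == null) {
                return new APIGatewayProxyResponseEvent().withStatusCode(404);
            }
            return new APIGatewayProxyResponseEvent()
                    .withStatusCode(200)
                    .withBody(MAPPER.writeValueAsString(product));
        } catch (Exception e) {
            context.getLogger().log("Failed to read product: " + e.getMessage());
            return new APIGatewayProxyResponseEvent().withStatusCode(500);
        }
    }
}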

Now create the Lambda functions as shown in the previous example. Then we need to configure layer usage for the functions. To add layers to your function, use the update-function-configuration command.
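For example (region, account id, and names are placeholders):

aws lambda update-function-configuration \
  --function-name save-product-function \
  --layers arn:aws:lambda:<region>:<account-id>:layer:product-dao-layer:1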

You must specify the version of each layer to use by providing the full Amazon Resource Name (ARN) of the layer version. While your function is running, it can access the content of the layer in the  /opt directory. Layers are applied in the order that's specified, merging any folders with the same name. If the same file appears in multiple layers, the version in the last applied layer is used.

After attaching the layer to Lambda we can deploy and run it.

Now let’s create and configure API Gateway as a proxy to Lambda functions.
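A minimal sketch of the first step (the API name is arbitrary):

aws apigateway create-rest-api --name products-api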

This operation returns JSON with the identifier of the created API. Save the API ID for use in further commands. You also need the ID of the API root resource. To get the ID, run the get-resources command.
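For example:

aws apigateway get-resources --rest-api-id <api-id>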

Now we need to create a resource that will be associated with the Lambda function and set up the integration.
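A sketch of those steps for the first function (IDs, region, and account are placeholders; the AWS_PROXY type matches the proxy-style handlers sketched above):

aws apigateway create-resource \
  --rest-api-id <api-id> \
  --parent-id <root-resource-id> \
  --path-part products

aws apigateway put-method \
  --rest-api-id <api-id> \
  --resource-id <resource-id> \
  --http-method POST \
  --authorization-type NONE

aws apigateway put-integration \
  --rest-api-id <api-id> \
  --resource-id <resource-id> \
  --http-method POST \
  --type AWS_PROXY \
  --integration-http-method POST \
  --uri arn:aws:apigateway:<region>:lambda:path/2015-03-31/functions/arn:aws:lambda:<region>:<account-id>:function:save-product-function/invocations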

Parameter  --integration-http-method is the method that API Gateway uses to communicate with AWS Lambda. Parameter  --uri is a unique identifier for the endpoint to which Amazon API Gateway can send requests.

Now let’s perform similar operations for the second Lambda (get-by-id-function) and deploy the API.
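Deploying the API to a stage might look like this (the stage name is arbitrary):

aws apigateway create-deployment --rest-api-id <api-id> --stage-name test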

Note: before testing API Gateway, you need to add permissions so that Amazon API Gateway can invoke your Lambda function when you send HTTP requests.
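For example, for the POST /products method (the statement id and ARN details are placeholders):

aws lambda add-permission \
  --function-name save-product-function \
  --statement-id apigateway-post-products \
  --action lambda:InvokeFunction \
  --principal apigateway.amazonaws.com \
  --source-arn "arn:aws:execute-api:<region>:<account-id>:<api-id>/*/POST/products"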

Now let’s test our API. First of all, we’ll try to add a new product record:
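(The invoke URL below follows the standard https://<api-id>.execute-api.<region>.amazonaws.com/<stage> pattern; the payload fields match the example entity.)

curl -X POST \
  "https://<api-id>.execute-api.<region>.amazonaws.com/test/products" \
  -H "Content-Type: application/json" \
  -d '{"id": "1", "name": "Example product", "price": "9.99"}'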

The result of this call will be the created record echoed back in the response.

Now we can retrieve the created object by its identifier:
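(Assuming the second function is wired to a GET method that reads the id query parameter.)

curl "https://<api-id>.execute-api.<region>.amazonaws.com/test/products?id=1"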

And you will get a similar result as after the POST request; the same object will be returned in this example.

 AWS Lambda tips

 Debugging Lambda locally

You can use the AWS SAM CLI together with a number of AWS toolkits to test and debug your serverless applications locally. For example, you can perform step-through debugging of your Lambda functions. The commands sam local invoke and sam local start-api both support local step-through debugging of your Lambda functions. To run AWS SAM locally with step-through debugging support enabled, specify --debug-port or -d on the command line. For example:
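(Function logical IDs and event files below are placeholders from a hypothetical SAM template.)

# invoke a single function locally with the debugger listening on port 5858
sam local invoke SaveProductFunction --event events/save-product.json -d 5858

# or run the whole API locally with debugging enabled
sam local start-api -d 5858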

Also for debugging purposes, you can use AWS toolkits which are plugins that provide you with the ability to perform many common debugging tasks, like setting breakpoints, executing code line by line, and inspecting the values of variables. Toolkits make it easier for you to develop, debug, and deploy serverless applications that are built using AWS.

 Configure CloudWatch monitoring and alerts

Lambda automatically monitors Lambda functions on your behalf and reports metrics through Amazon CloudWatch. To help you monitor your code when it runs, Lambda automatically tracks the number of requests, the invocation duration per request, and the number of requests that result in an error. Lambda also publishes the associated CloudWatch metrics. You can leverage these metrics to set CloudWatch custom alarms. The Lambda console provides a built-in monitoring dashboard for each of your functions and applications. Each time your function is invoked, Lambda records metrics for the request, the function's response, and the overall state of the function. You can use metrics to set alarms that are triggered when function performance degrades, or when you are close to hitting concurrency limits in the current AWS Region.

 Beware of concurrency limits

For those functions whose usage scales along with your application traffic, it’s important to note that AWS Lambda functions are subject to concurrency limits. When functions reach 1,000 concurrent executions, they are subject to AWS throttling rules. Future calls will be delayed until your concurrent execution averages are back below the threshold. This means that as your applications scale, your high-traffic functions are likely to see drastic reductions in throughput during the time you need them most. To work around this limit, simply request that AWS raise your concurrency limits for the functions that you expect to scale.

Also, there are some widespread issues you may face working with Lambda:

 Limitations while working with database

If you have a lot of read/write operations during one Lambda execution, you may face failures due to Lambda limitations; a frequent case is a timeout on the Lambda execution. To investigate the problem, you can temporarily increase the timeout limit on the function, but a common and highly recommended solution is to use batch operations while working with the database.

 Timeout issues on external calls

This case may occur if you call a remote API from Lambda that takes too long to respond or that is unreachable. Network issues can also cause retries and duplicated API requests. To prepare for these occurrences, your Lambda function must always be idempotent. If you make an API call using an AWS SDK and the call fails, the SDK automatically retries the call. How long and how many times the SDK retries is determined by settings that vary among each SDK. To fix the retry and timeout issues, review the logs of the API call to find the problem. Then, change the retry count and timeout settings of the SDK as needed for each use case. To allow enough time for a response to the API call, you can even add time to the Lambda function timeout setting.

 VPC connection issues

Lambda functions always operate from an AWS-owned VPC. By default, your function has full ability to make network requests to any public internet address — this includes access to any of the public AWS APIs. You should configure your functions for VPC access when you need to interact with a private resource located in a private subnet. When you connect a function to a VPC, all outbound requests go through your VPC. To connect to the internet, configure your VPC to send outbound traffic from the function's subnet to a NAT gateway in a public subnet.

written by
Viktar Reut
Software development

Cloud development overview for non-cloud developers

 Introduction

This article covers the basic concepts of web applications that are designed to run in a cloud environment, and it is intended for software engineers who are not familiar with cloud-native development but work with other programming concepts and technologies. It gives an overview of the basics from the perspective of concepts that are already known to non-cloud developers, including mobile and desktop software engineers.

 Basic concepts

Let’s start with something simple. Imagine that we want to write a web application that allows users to create an account, order products, and write reviews on them. The simplest way is to have our backend as a single app combining UI and code. Alternatively, we may split it into a frontend and a backend, which just provides an API.

Let’s focus on the backend part. The whole communication between its components happens inside of a single app, on a code level. From the executable file perspective, our app is a monolithic piece of code: it’s a single file or package. Everything looks simple and clean: the code is split into several logical components, each component has its own layers. The possible overall architecture may look as follows:

But as we try to develop our app, we'll quickly figure out that the above approach is not enough in the modern world and the modern web environment. To understand what's wrong with the app architecture, we need to figure out the key specificity of web apps compared to desktop or mobile apps. Let’s describe a few simple yet very important points. While obvious to some (even non-web) developers, these points are crucial for understanding the essential flaws of our app when running in a modern server environment.

A desktop or mobile app runs on the user's device. This means that each user has their own app copy running independently. For web apps, we have the opposite situation: in a simplified way, in order to use our app, the user connects to a server and utilizes an app instance that runs on that server. So, for web apps, all users are using a single instance of the app. Well, in real-world examples it's not strictly a single instance in most cases because of scaling, but the key point here is that the number of users at a particular moment in time is way greater than the number of app instances. In consequence, an app error or crash has an incomparably bigger user impact for web apps. I.e., when a desktop app crashes, only a single user is impacted. Moreover, since the app runs on their device, they may just restart the app and continue using it. In the case of a web app crash, thousands of users may be impacted. This brings us to two important requirements to consider.

  1.  Reliability and testability  
     Since all the code is placed in a single (physical) app, our changes to one component during development of new features may impact any other existing app component. Hence, after implementing a single feature we have to retest the whole app. If we have some bug in our new code that leads to a crash, once the app crashes it becomes unavailable to all the users. Before we figure out the cause of the crash, we have some downtime when users cannot use the app. Moreover, to prevent further crashes, we have to roll back to a previous app version. And if we delivered some fixes/updates along with the new feature, we’ll lose those improvements.
  2.  Scalability  
     Consider a situation where the number of users increases over a short period. In the case of our example app, this may happen due to, e.g., discounts or new attractive products coming in. It quickly turns out that one running app instance is not enough. We have too many requests, and the app “times out” requests it cannot handle. We may just increase the number of running instances of the app, so that each instance independently handles user orders. But after a closer look, it turns out that we actually don’t need to scale the whole app. The only part of the app that needs to handle more requests is creating and storing orders for a particular product. The rest of the app doesn’t need to be scaled. Scaling other components will result in unneeded memory growth. But since all the components are contained in a monolith (a single binary), we can only scale all of them at once by launching new instances.

The other thing to consider is network latency, which adds important limitations compared to mobile or desktop apps. Even though the UI layer itself runs directly in the browser (JavaScript), any heavy computation or CRUD operation requires an HTTP call. Since such network calls are relatively slow (compared to interactions between components in code), we should optimize the way we work with data and some server-side computations.

Let’s try to address the issues we described above.

Microservices

Let’s make a simple step and split our app into a set of smaller apps called microservices. The diagram below illustrates the general architecture of our app rethought using microservices.

This helps us solve the problems of monolithic apps and has some additional advantages.

• Implementing a new feature (component) results in adding a new service or modifying an existing one. This reduces the complexity of development and increases testability. If we have a critical bug, we can simply disable that service while the other parts of the app keep working (excluding the parts that require interaction with the disabled service) and keep any other changes/fixes not related to the new feature.

• When we need to scale the app, we may do it only for a particular component. E.g., if the number of purchases increases, we may increment the number of running instances of the Order Service without touching other ones.

• Developers in a team can work fully independently while developing separate microservices. We’re also not limited by a single language. Each microservice may be written in a different language.

• Deployment becomes easier. We may update and deploy each microservice independently. Moreover, we can use different server/cloud environments for different microservices. Each service can use its own third-party dependency services like a database or message broker.

Besides its advantages, microservice architecture brings additional complexity that is driven by the nature of microservice per se: instead of a single big app, we now have multiple small applications that have to communicate with each other through a network environment.

In terms of desktop apps, we may bring up here the example of inter-process communication, or IPC. Imagine that a desktop app is split into several smaller apps, running independently on our machine. Instead of calling methods of different app modules within a single binary we now have multiple binaries. We have to design a protocol of communication between them (e.g., based on OS native IPC API), we have to consider the performance of such communication, and so on. There may be several instances of a single app running at the same time on our machine. So, we should find out a way to determine the location of each app within the host OS.

The described specificity is very similar to what we have with microservices. But instead of running on a single machine, microservice apps run in a network, which adds even more complexity. On the other hand, we may use already existing solutions, like HTTP for communicating between services (which is how microservices communicate in most cases) and RESTful APIs on top of it.

The key thing to understand here is that all the basic approaches described below are introduced mainly to solve the complexity resulting from splitting a single app into multiple microservices.

Locating microservices

Each microservice that calls the API of another microservice (often called a client service) should know its location. In terms of calling a REST API over HTTP, the location consists of an address and a port. We could hardcode the location of the callee in the caller's configuration files or code. But the problem is that services can be instantiated, restarted, or moved independently of each other. So, hardcoding is not a solution: if the callee service location changes, the caller will have to be restarted or even recompiled. Instead, we may use the Service Registry pattern.

To put it simply, a Service Registry is a separate application that holds a table mapping a service id to its location. Each service is registered in the Service Registry on startup and deregistered on shutdown. When a client service needs to discover another service, it gets the location of that service from the registry. So, in this model, each microservice doesn't know the concrete location of its callee services, but just their ids. Hence, if a certain service changes its location after a restart, the registry is updated and its client services will be able to get the new location.

Service discovery using a Service registry may be done in two ways.

1. Client-side service discovery. A service gets the location of other services by directly querying the registry, then calls the discovered service's API by sending a request to that location. In this case, each service should know the location of the Service Registry, so its address and port should be fixed.

2. Server-side service discovery. A service sends API call requests, along with the target service id, to a special service called a Router. The Router retrieves the actual location of the target service and forwards the request to it. In this case, each service only needs to know the location of the Router.
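To make the idea more concrete, here is a minimal, purely illustrative sketch of a registry (not a real library; production systems typically rely on tools such as Eureka or Consul):

import java.net.URI;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Maps a service id to its current location; services register on startup and
// deregister on shutdown, while clients look the location up before each call.
public class ServiceRegistry {

    private final Map<String, URI> locations = new ConcurrentHashMap<>();

    public void register(String serviceId, URI location) {
        locations.put(serviceId, location);
    }

    public void deregister(String serviceId) {
        locations.remove(serviceId);
    }

    // Client-side discovery: the caller asks for the current location of, e.g., "order-service"
    // and then sends the HTTP request itself.
    public URI lookup(String serviceId) {
        URI location = locations.get(serviceId);
        if (location == null) {
            throw new IllegalStateException("No instance registered for " + serviceId);
        }
        return location;
    }
}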

Communicating with microservices

So, our application consists of microservices that communicate with each other, and each has its own API. The client of our microservices (e.g., a frontend or mobile app) should use that API, but such usage becomes complicated even with just a few microservices. As another example, in terms of desktop inter-process communication, imagine a set of service apps/daemons that manage the file system. Some may run constantly in the background, some may be launched when needed. Instead of knowing the details related to each service (e.g., its functionality/interface, its purpose, whether or not it is running), we may use a single facade daemon that exposes a consistent interface for file system management and internally knows which service to call.

Referring back to our example with the e-shop app, consider a mobile app that wants to use its API. We have 5 microservices, each with its own location. Remember also that the location can change dynamically. So, our app will have to figure out to which services particular requests should be sent. Moreover, the dynamically changing location makes it almost impossible for our client mobile app to have a reliable way to determine the address and port of each service.

The solution is similar to our previous example with IPC on the desktop. We may deploy one service at a fixed known location, that will accept all the requests from clients and forward each request to the appropriate microservice. Such a pattern is called API Gateway.

Below is a diagram demonstrating what our example microservices may look like when using an API Gateway:

Additionally, this approach allows unifying the communication protocol. That is, different services may use different protocols. E.g., some may use REST, some AMQP, and so on. With an API Gateway, these details are hidden from the client: the client just queries the Gateway using a single protocol (usually, but not necessarily, REST) and then the Gateway translates those requests into the appropriate protocol a particular microservice uses.

Configuring microservices

When developing a desktop or mobile app, we have several devices the app should run on during its lifecycle. First, it runs on the local device (either a computer or a mobile device/simulator) of the developers who work on the app. Then it's usually run on some dev device to perform unit tests as part of CI/CD. After that, it's installed on a test device/machine for either manual or automated testing. Finally, after the app is released, it is installed on users' machines/devices. Each type of device (local, dev, test, user) implies its own environment. For instance, a local app usually uses a dev backend API that is connected to a dev database. In the case of mobile apps, you may even develop using a simulator, which has its own specifics, like the lack or limitation of certain system APIs. The backend for the app's test environment has a DB with a configuration that is very close to the one used for the released app. So, each environment requires a separate configuration for the app, e.g., server address, simulator-specific settings, etc.

With a microservices-based web app, we have a similar situation. Our microservices usually run in different environments: typically dev, test, staging, and production. Hardcoding configuration is not an option for our microservices, as we typically move the same app package from one environment to another without rebuilding it. At a minimum, we may specify a configuration set per environment inside the app; while such an approach works for desktop/mobile apps, it is limiting for a web app, where the same package/file moves between environments without recompilation. A better approach is to externalize the configuration. We may store configuration data in a database or in external files that are available to our microservices, and each microservice reads its configuration on startup. The additional benefit of such an approach is that when the configuration is updated, the app may read it on the fly, without the need for rebuilding and/or redeploying it.
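As a small illustration, a microservice might read its externalized configuration from environment variables on startup (the variable names here are made up):

public class OrderServiceConfig {

    // Values come from the environment, so the same package can run unchanged in dev, test,
    // staging, and production; only the environment differs.
    public static String databaseUrl() {
        return System.getenv("ORDER_DB_URL");
    }

    public static int httpPort() {
        return Integer.parseInt(System.getenv().getOrDefault("ORDER_HTTP_PORT", "8080"));
    }
}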

Choosing cloud environment

We have our app developed with a microservices approach. The important thing to consider is where we would run our microservices. We should choose an environment that allows us to take advantage of the microservice architecture. For cloud solutions, there are two basic types of environment: Infrastructure as a Service (IaaS) and Platform as a Service (PaaS). Both have ready-to-use solutions and features that provide scalability, maintainability, and reliability, which require much more effort to achieve on-premises, and each of them has advantages compared to traditional on-premises servers.

Summary

In this article, we’ve described key features of microservices architecture for the cloud-native environment. The advantages of microservices are:

- app scalability;
- reliability;
- faster and easier development;
- better testability.

To fully take advantage of microservice architecture, we should use an IaaS or PaaS cloud environment.

written by
Andrii Biehunov
Automotive

Over-The-Air upgrade: How to develop the concept successfully

In July 2021, Porsche recalled 43,000 of its newest EVs, the Taycan and Taycan Cross Turismo. Why? Due to software issues resulting in power loss. How could this have been prevented, while reducing costs and fixing the defects in one go on all cars? The answer is short and comes from the mouths of everyone working in the automotive industry: Over-The-Air Upgrade.

Although hard to implement correctly, the cost of not having the ability to remotely upgrade software and firmware in the vehicle is huge. Today it’s not a question of “if” or “when” (the automotive industry has long known the answers to those questions); today it’s a question of “how”.

Upgrading a GPS or infotainment application is one thing, but upgrading the vehicle's firmware is another. And it does not matter whether it's a car, an e-scooter, or a smartphone. The principles are always the same. We will try to outline them in this article.

Let's start from the beginning: what are the core benefits of an over-the-air upgrade?

OTA allows for remote diagnosis. Initial diagnosis done remotely helps with better planning of repairs, as well as with predictive maintenance – both giving a better customer experience and reducing the cost for the OEMs, especially during the warranty period.

The upgrade can also happen on the production line while waiting for shipment. The vehicle always has the newest stable version of the firmware and software, reducing the amount of manual work required for the whole vehicle lifecycle.

The only part of the car life cycle where the Over-The-Air Upgrade is not really useful is aftersales.

Key benefits of implementing an over-the-air upgrade are:

  • An ability to remain compliant with evolving industry standards through vehicle lifetime.
  • It helps to reduce warranty and recall costs by reducing service center visits or help desk calls for the vehicle (it also works on the production line, while waiting for shipment).
  • The vehicle always has the newest stable version of the firmware and software, reducing the amount of manual work required for the whole vehicle lifecycle.
  • An ability to resolve issues remotely, so the customer doesn’t have to waste time traveling on-site.
  • An ability to update multiple vehicles simultaneously, reducing time required to update the whole fleet.

SOTA - the most common implementation of over-the-air upgrade

SOTA is used widely by almost every OEM to update navigation systems (maps, POIs) and sometimes other infotainment applications, like voice assistants. As opposed to a firmware update, a failure of a software update is rarely critical to vehicle operations. It can result in inconvenience when, due to an update failure, the navigation system crashes or fails to display a map.

This is also the part that can hurt the customer experience if SOTA is done without due diligence, because the software is what makes the infotainment appealing and responsive. And no one likes slow or difficult-to-use applications or services, especially when they're intended to boost driving satisfaction.

Firmware over-the-air-upgrade is a different beast

With FOTA, we play a much more demanding game. That’s why it’s important to separate software updates from firmware updates.

First, it’s just easier for a developer to focus on their part of the job, the specific application. Secondly, the firmware part is riskier and more complex, and the update might not be required that often.

The complication comes partially from the idea of replacing the operating system of the ECU/SoC and partially from the criticality of the systems. Computers controlling engine operations, ESP/TC, the gearbox, or the electronic chassis controller are required for the safe and reliable operation of the vehicle.

A failure in the firmware over-the-air update process, resulting in a critical fault of this kind of subsystem, in most cases makes the vehicle inoperable, beyond the repair capabilities of regular users. The cost of restoring the vehicle to an operational state is then fully on the manufacturer’s side. This is obviously the scenario that should be avoided at all costs.

Key requirements for implementing (F)OTA successfully

  • Automatic recovery from corrupted updates

Firmware updates should be atomic: either the whole process succeeds, or the system automatically rolls back to the previous/existing version of the software. The problem does not have to be caused by a bug in the original image; the package can be corrupted in transit, or the transfer might be interrupted, resulting in a partial package being processed.

  • Internet connectivity consistency

The parts of the firmware being updated, especially the ones related to device-to-network connectivity, should never break the SoC's connection to the internet; otherwise, the next version might never be installed automatically. This is especially important if the device does not have a way to notify the user about the problem or allow them to reconfigure the network settings.

  • Code provenance, code identity, code compatibility and code integrity – security of the executed program

A firmware update in most cases concerns critical systems. The wireless update is tempting, but it must be secure, especially regarding verifying the identity of the authors of the change and the source of the update, as well as whether the code was replaced or altered in transit. If the edge device can cryptographically verify the code signature, the update can be installed. Additionally, there should be a way for the update system to confirm that the package is built for the specific device it's being installed on.

  • Secure communication medium for package transport

All channels used for the update should be secure. Ideally, it should be a mutual TLS, but even a regular secure TLS connection is sufficient as long as the whole path is secure (both local connection and in the cloud).

  • [NICE-TO-HAVE] Sending OTA firmware updates in chunks and partial updates support

It’s easier to handle updates that are sent in chunks. When the connection is unstable, the whole download process does not have to be repeated. Additionally, if partial updates are supported, a small update takes less time to install and less bandwidth to transfer.

  • [NICE-TO-HAVE] Separate base system layer from the installed software

If the application and data layer is not part of the firmware update, it’s easier to develop the applications, safely update the system without breaking the data, and securely update the system without breaking the applications. Combined with partial updates, it also helps with making updates faster.

Unlike chip flashing over a wired connection, failure is not really an option here: if the device cannot boot, even to some basic OS functions, it is bricked. Unless you are an expert with specialized hardware, it may be really hard to directly write new firmware to the chip to overwrite the faulty or broken version.

And what if a broken package is written to the device?

It does not matter whether it was a human error, a device issue, or just really bad luck; in the end, the important part is to make sure the user does not end up with a broken vehicle. The battle-tested solution to this problem is A/B filesystems, or A/B slots.

The idea is rather simple: the system areas in storage are duplicated. In other words, there are two fully operational versions of the system installed simultaneously on a single device, and there is a programmatic switch in the bootloader that selects which OS to start.

In regular operation, a single system, let’s call it “A”, is continuously used, while the other one, “B”, is an exact copy of “A” that works as a backup. If “A” fails to start, the bootloader switches to the other version. During the update, the inactive partition is overwritten with the update packages (either the whole partition or a subset of files, depending on the type of update). If the update finishes and the checksum of the result is correct, as the last step the bootloader configuration is changed to run from the “B” slot, and the device restarts.

As previously stated, if something fails, the bootloader will switch back to the previous, working version after a failed attempt. This makes the approach safe and allows us to retry the upgrade process. Otherwise, the update is successful, and there are two approaches:

  • Leave the old version on the other partition and continue to boot from the slot selected after the update process.
  • Copy the contents of the upgraded partition to the other slot to have two copies of the same version.

The same approach is used in modern smartphones, and as a direct continuation, the same approach was selected for Android Automotive OS (AAOS), an implementation of Google's Android Open Source Project (AOSP) specific to the automotive industry.

Currently, both Volvo (including, of course, Polestar) and General Motors use AAOS in their newest vehicles as an infotainment system. Being an open system, a lot of applications can be developed for cars from different OEMs, leveraging a bigger, open market. Plus, of course, the code is open source, and a lot of work on things like the upgrade system (OTA), application delivery, and connections to subsystems (air conditioning, navigation, interior buttons) is already finished and can be reused.

Building on open and tested frameworks and code is just easier, and a proven way to update both applications and the system is an asset when starting from scratch with new infotainment firmware and software.

written by
Adam Kozłowski
written by
Marcin Wiśniewski