
Artificial Intelligence may be an overused term in recent years, yet it is hard to deny that it is the greatest technological promise of our times. Every industry strongly believes that AI will empower it to introduce innovations and improvements that drive business, increase sales, and reduce costs.
But even though AI is no longer a new thing, companies struggle with adopting and implementing AI-driven applications and systems. That applies not only to large-scale implementations (which are still very rare) but often to the very first projects and initiatives within an organization. In this article, we will shed some light on how to successfully adopt AI and benefit from it.
How to start then? The answer might sound trivial, but it goes like this: start small and grow incrementally. Just like any other innovation, AI cannot be rolled out throughout the organization at once and then harvested across various business units and departments. The very first step is to start with a pilot AI adoption project in one area, prove its value and then incrementally scale up AI applications to other areas of the organization.
But how to pick the right starting point? A good AI pilot project candidate should have certain characteristics:
At Grape Up, we help our customers choose the initial AI project candidate by following a proven process. The process consists of several steps and eventually leads to implementing a single pilot AI project in production.
We start with identifying possible areas in the organization that might be enhanced with AI, e.g., parts of processes to improve, problems to solve, or tasks to automate. This part of the process is the most essential, as it becomes the baseline for all subsequent phases. Therefore, it is crucial to execute it together with the customer, but also to ensure the customer understands what AI can do for their organization. To enable that, we explain the AI landscape, including basic technology, data, and what AI can and cannot do. We also show example AI applications from the customer's industry or similar industries.
Having that as a baseline, we move on to the more interactive part of that phase. Together with customer executives and business leaders, we identify major business value drivers as well as current pain points & bottlenecks through collaborative discussion and brainstorming. We try to answer questions such as:
This step results in a list of several (usually 5 to 10) ideas ready for further investigation on where to potentially start applying AI in the organization.
The next step aims at detailing the previously selected ideas. Again, together with the customer, we define detailed business cases describing how problems identified in step 1 could be solved and how these solutions can create business value.
Every idea is broken down into a more detailed description using the Opportunity Canvas approach - a simple model that helps define the idea better and consider its business value. Using the filled canvas as the baseline, we analyze each concept and evaluate the business impact it might deliver, focusing on business benefits and user value, but also on the expected effort and cost.
Eventually, we choose 4-8 ideas with the highest impact and the lowest effort and describe detailed use cases (from business and high-level functional perspective).
In this phase, we evaluate the technical feasibility of previously identified business cases – in particular, whether AI can address the problem, what data is needed, whether the data is available, what is the expected cost and timeframe, etc.
This step usually requires technical research to identify AI tools, methods, and algorithms that could best address the given computational problem, data analysis – to verify what data is needed vs. what data is available and often small-scale experiments to better validate the feasibility of concepts.
We finalize this phase with a list of 1-3 PoC candidates that are technically feasible to implement but more importantly – are verified to have a business impact and to create business value.
Implementation of the PoC project is the goal of this phase and involves data preparation (to create data sets whose relationship to the model targets is understood), modeling (to design, train, and evaluate machine learning models), and eventual deployment of the PoC model that best addresses the defined problem.
It results in a working PoC that creates business value and is the foundation for the production-ready implementation.

Once the customer is satisfied with the PoC results, they want to productionize the solution to fully benefit from the AI-driven tool. Moving pilots to production is also a crucial part of scaling up AI adoption. If successful projects remain just experiments and PoCs, it is difficult for a company to move forward and apply AI to other processes within the organization.
To summarize the most important aspects of a successful AI adoption:

Read our blog and stay informed about the industry's latest trends and solutions.
If we look at the claims adjustment domain from a high-level perspective, we will surely notice it is a very complex set of interdependent elements: processes, data, activities, documents, systems, and many others. Many people are involved in the process, and in many cases, they struggle with a lot of inefficiency in their daily work. This is exactly where AI comes to help. AI-based solutions and mechanisms can automate, simplify, and speed up many parts of the claims adjustment process, and eventually reduce overall adjustment costs.
Let's look at the claims adjustment process in more detail. There are multiple steps on the way: when an event that causes a loss for the customer occurs, the customer notifies the insurance company about the loss and files a claim. Then the company needs to gather all the information and documentation to understand the circumstances, assess the situation, and eventually be able to validate their responsibility and estimate the loss value. Finally, the decision needs to be made, and appropriate parties, including the customer, need to be notified about the result of the process.

At each step of this process, AI can not only introduce improvements and optimizations but also enable new possibilities and create additional value for the customer.
Let’s dive into a few examples of potential AI applications to the claims adjustment process in more detail.
The incoming correspondence related to claims is very often wrongly addressed. Statistics show that, on average, 35% of messages are incorrectly addressed. Some of them are sent to a generic corporate inbox, others to the wrong people, or sometimes even to entirely different departments. That causes a lot of confusion and requires time to reroute each message to the correct place.
AI can be very helpful in this scenario - an algorithm can analyze the subject and the content of the message, look for keywords such as the claim ID, the name of the customer, or the policy number, and automatically reroute the message to the correct recipient. Furthermore, the algorithm can analyze the context and detect whether it is a new claim report or a missing attachment that should be added to an already-filed claim. Such a solution can significantly improve effectiveness and speed up the process.
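To make the routing idea concrete, here is a minimal, rule-based sketch. The `CLM-`/`POL-` identifier formats and the queue names are our own illustrative assumptions - real claim and policy number formats vary by insurer, and a production system would pair such extraction with a trained text classifier.

```python
import re

# Hypothetical identifier formats, for illustration only.
CLAIM_ID_RE = re.compile(r"\bCLM-\d{6}\b", re.IGNORECASE)
POLICY_NO_RE = re.compile(r"\bPOL-\d{8}\b", re.IGNORECASE)

def route_message(subject: str, body: str) -> dict:
    """Extract routing hints from an incoming message (rule-based baseline)."""
    text = f"{subject}\n{body}"
    claim_ids = CLAIM_ID_RE.findall(text)
    policy_nos = POLICY_NO_RE.findall(text)
    if claim_ids:
        # References an already-filed claim: likely a follow-up or attachment.
        return {"queue": "existing-claim", "claim_id": claim_ids[0].upper()}
    if policy_nos:
        # A policy number but no claim ID: likely a new claim report.
        return {"queue": "new-claim", "policy_no": policy_nos[0].upper()}
    # Nothing recognized: fall back to a human triage queue.
    return {"queue": "manual-triage"}
```

Even this trivial baseline already removes the most common rerouting delay: messages carrying a known claim ID never land in the generic inbox.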
The automation of processing incoming documents and messages could be taken one step further. What if we used an AI algorithm to analyze the content of the message? A claim report can be sent using an official form, but also as a plain email message or even as a scanned paper document – the solution could analyze the document and extract the key information about the claim so that it can be automatically added to the claim registry system. Simultaneously, the algorithm could check if all the needed data, documents, and attachments are provided and, if not, notify the reporter appropriately. In a "traditional" approach, this part is often manual and thus takes a lot of time. Introducing an AI-based mechanism here would drastically reduce the amount of manual work, especially in the case of well-defined and repeatable cases, e.g., car insurance claims.
The appraisal of the filed claim and verification of the reported damage is another lengthy step in the claims adjustment process. The adjuster needs to verify whether the reported damage is real and whether the reported case includes damage that occurred previously. Computer vision techniques can be used here to automate and speed up the process - e.g., by analyzing pictures of the car taken by the customer after the accident, or by analyzing satellite or aerial photos of a house in the case of property insurance.
AI-driven verification can also help identify fraudulent operations and recognize costs that are not related to the filed claim. In some cases, invoices presented for reimbursement include items or services that should not be there, or whose cost is calculated using inflated rates. AI can help compare the presented invoices with estimated costs and indicate inflated rates or excess costs - for example, in the case of medical treatment or a hospital stay. Similarly, the algorithm can verify whether car repair costs are calculated correctly by analyzing the reported damage and comparing the average rate for corresponding repair services with the presented rate.
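As a sketch of the rate-verification idea, the toy function below compares invoice line items against reference rates. The data shapes and the 25% tolerance are illustrative assumptions, not a real adjudication rule; a production system would learn reference rates per region and provider.

```python
def flag_inflated_items(invoice_items, reference_rates, tolerance=0.25):
    """Flag invoice lines whose unit price exceeds the reference rate
    by more than `tolerance`, plus lines for unknown services.

    invoice_items: list of (service_code, unit_price)
    reference_rates: dict mapping service_code -> average market rate
    """
    flagged = []
    for code, price in invoice_items:
        ref = reference_rates.get(code)
        if ref is None:
            # Service not expected for this claim type: review manually.
            flagged.append((code, "unknown service"))
        elif price > ref * (1 + tolerance):
            flagged.append((code, f"rate {price / ref:.0%} of reference"))
    return flagged
```

The adjuster then only reviews the flagged lines instead of the whole invoice.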
Such automated verification helps flag potentially fraudulent situations and saves adjusters' time, letting them focus only on unclear cases rather than analyzing each one manually.
In the current COVID-19 situation, digital services and products are becoming critical for all industries. Providing policyholders with the capability to effectively use online channels and virtual services is essential for the insurance industry as well.
One of our customers wanted to speed up the processing of claims reported through their mobile application. The insurer faced a challenging issue, as 8% of claims reported through the mobile application were rejected due to the bad quality of VIN images. Adjusters had problems with deciphering the Vehicle Identification Number and had to request the same information from the customer. The whole process was unnecessarily prolonged and frustrating for the policyholder.
By introducing a custom machine learning model, trained specifically for VIN recognition instead of a generic cloud service, our customer increased VIN extraction accuracy from 60% to 90%, saving on average 1.5 hours per day for each adjuster. Previously rejected claims can now be processed quicker and without asking policyholders for information they already provided, resulting in increased NPS and overall customer satisfaction.
https://www.youtube.com/watch?v=oACNXmlUgtY
Those are just a few examples of how AI can improve claims adjustment. If you would like to know more about leveraging AI technologies to help your enterprise improve its business, tell us about your challenges and we will jointly work on tackling them.
Here we share our approach to automatic Vehicle Identification Number (VIN) detection and recognition using Deep Neural Networks. Our solution is robust in many aspects such as accuracy, generalization, and speed, and can be integrated into many areas in the insurance and automotive sectors.
Our goal is to provide a solution that allows users to take a picture with a mobile app and read the VIN present in the image. Despite all the similarities to other OCR applications and their common features, the differences are colossal.
Our objective is to create a reliable solution, and to do so, we jumped directly into analyzing real domain images.
VINs are located in many places on a car and its parts. The most readable are those printed on side doors and windshields. Here we focus on VINs from windshields.

OCR doesn’t seem to be rocket science now, does it? Well, after some initial attempts, we realized we’re not able to use any available commercial tools with success, and the problem was much harder than we had thought.
How do you like this example of Keras-OCR?

Apart from details like the fact that VINs don’t contain the characters ‘I’, ‘O’, and ‘Q’, we also face very specific distortions, proportions, and fonts.
How can we approach the problem? The most straightforward answer is to divide the system into two components:
VIN detection – cropping the characters from the big image
VIN recognition – recognizing the cropped characters
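The two-stage split can be sketched as a tiny pipeline in which detection and recognition are pluggable components. The toy image representation and box format below are purely illustrative; in the real system, each stage is a neural network.

```python
def read_vin(image, detect, recognize):
    """Two-stage VIN reading: detect and crop the text area (stage 1),
    then recognize the characters in the crop (stage 2).

    image: a 2-D grid of pixels (here: a list of rows)
    detect: callable returning an (x, y, w, h) bounding box
    recognize: callable mapping a cropped region to a string
    """
    x, y, w, h = detect(image)
    # Crop the detected region from the full image.
    crop = [row[x:x + w] for row in image[y:y + h]]
    return recognize(crop)
```

This separation lets us develop, evaluate, and swap the detector and the recognizer independently - which is exactly how the rest of this post proceeds.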
In the ideal world images like that:

Will be processed this way:

Now that we have an intuition of what the problem looks like, we can start solving it. Needless to say, there is no ready-made “VIN reading” task available on the internet, therefore we need to design every component of our solution from scratch. Let’s introduce the most important stages we’ve created, namely:
Our VIN detection solution is based on two ideas:
The CRAFT architecture predicts text areas in the image by simultaneously predicting the probability that a given pixel is the center of some character and the probability that it is the center of the space between adjacent characters. For the details, we refer to the original paper.
The image below illustrates the operation of the network:

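To give a feel for how such score maps are consumed, here is a toy post-processing step: threshold a CRAFT-style region score map and group the surviving pixels into connected components, one per character. This is our simplified illustration of the idea, not the paper's exact post-processing.

```python
def character_centers(region_score, threshold=0.5):
    """Turn a per-pixel character-center score map into character centroids.

    region_score: 2-D list of probabilities that a pixel is near a
    character center. Pixels above `threshold` are grouped into
    4-connected components; each component approximates one character.
    """
    h, w = len(region_score), len(region_score[0])
    seen = [[False] * w for _ in range(h)]
    centers = []
    for i in range(h):
        for j in range(w):
            if region_score[i][j] >= threshold and not seen[i][j]:
                # Flood-fill one component and record its centroid.
                stack, pixels = [(i, j)], []
                seen[i][j] = True
                while stack:
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and not seen[ny][nx]
                                and region_score[ny][nx] >= threshold):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                cy = sum(p[0] for p in pixels) / len(pixels)
                cx = sum(p[1] for p in pixels) / len(pixels)
                centers.append((cy, cx))
    return centers
```

The affinity map is used analogously to decide which character components belong to the same word-level box.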
Before the actual recognition, it seemed like a good idea to simplify the input image so that it contains all the needed information and no redundant pixels. Therefore, we wanted to crop the characters’ area from the rest of the background.
We intended to encourage a user to take a photo with a good VIN size, angle, and perspective.
Our goal was to be prepared to read VINs from any source, e.g., side doors. After many tests, we think the best idea is to send the area from the bounding box seen by users and then try to cut it more precisely using VIN detection. Therefore, our VIN detector can be interpreted more like a VIN refiner.
It would be remiss if we didn’t note that CRAFT performs exceptionally well at this task.
Once the text is cropped, we need to map it to a parallel rectangle. There are dozens of design decisions here, such as the choice of affine transform, the resampling method, and the target rectangle proportions for text recognition.
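As an illustration of this rectification step, the sketch below solves for the perspective (homography) transform that maps the four corners of a detected quadrilateral onto an upright rectangle, using the standard direct linear transform. A real implementation would then resample the image with this transform (e.g., via OpenCV's warping functions); the point coordinates here are made up.

```python
import numpy as np

def homography(src, dst):
    """Solve for the 3x3 perspective transform mapping four src points
    to four dst points (the standard DLT linear system)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    A = np.asarray(A, dtype=float)
    # The homography (up to scale) is the null vector of A:
    # take the singular vector for the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    return vt[-1].reshape(3, 3)

def warp_point(H, x, y):
    """Apply a homography to a single point (homogeneous coordinates)."""
    u, v, w = H @ np.array([x, y, 1.0])
    return u / w, v / w
```

Because the transform is only defined up to scale, the division by the homogeneous coordinate `w` in `warp_point` makes the normalization irrelevant.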
Having ideally cropped characters makes recognition easier. But it doesn’t mean that our task is completed.
Accurate recognition is a winning condition for this project. First, we want to focus on the images that are easy to recognize – without too much noise, blur, or distortions.
The SOTA models tend to be sequential models able to recognize entire sequences of characters (words, in popular benchmarks) without individual character annotations. It is indeed a very efficient approach, but it ignores the fact that collecting character bounding boxes for synthetic images isn’t that expensive.
As a result, we reduced the value of what is supposedly the most important advantage of sequential models. They have other advantages, but are they worth dealing with all the traps that come with them?
First of all, training an attention-based model is very hard in this case because of the following:

As you can see, the target characters we want to recognize depend on the history. Training such a model would be possible only with a massive training dataset or careful tuning, so we decided against it.
As an alternative, we can use Connectionist Temporal Classification (CTC) models, which, in contrast, predict labels independently of each other.
More importantly, we didn’t stop at this approach. We utilized one more algorithm with different characteristics and behavior.
You Only Look Once (YOLO) is a very efficient architecture commonly used for fast and accurate object detection. Treating a character as an object and recognizing it after detection seems to be an approach definitely worth trying for this project. This way, we avoid the history-dependence problem, and there are some interesting tweaks that can allow even more precise recognition in our case. Last but not least, we are able to keep more control of the system, as much of the responsibility is transferred away from the neural network.
However, VIN recognition requires a specific YOLO design. We used YOLOv2 because the later architectures add complexity in areas that do not address our problem.
As the results came in, our approach proved to be effective in recognizing individual characters from VINs.

Appropriate metrics become crucial in machine learning-based solutions, as they drive your decisions and project dynamics. Fortunately, we think simple accuracy fulfills the demands of a precise system, and we can skip further research in this area.
We just need to remember one fact: a typical VIN contains 17 characters, and missing just one of them is enough to classify the prediction as wrong. At every point of the work, we measure the Character Error Rate (CER) to understand the development better. A CER at the level of 5% (5% of characters wrong) may result in sequence accuracy lower than 75%.
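The arithmetic behind that claim is easy to check: under the simplifying assumption that the 17 characters are recognized independently, sequence accuracy is (1 - CER) raised to the 17th power.

```python
def expected_sequence_accuracy(cer, length=17):
    """Expected share of fully correct VINs if each of the `length`
    characters is recognized independently with error rate `cer`
    (a rough model; real character errors are correlated)."""
    return (1 - cer) ** length
```

With a 5% CER this gives roughly 0.42 - so under this model, well under half of the VINs are read perfectly, far below 75%.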
It's easy to notice that all OCR benchmark solutions have a much bigger effective capacity than the complexity of our task requires, while at the same time being too general. That in itself emphasizes the danger of overfitting and directs our focus to generalization ability.
It is important to distinguish hyperparameter tuning from architectural design. Apart from ensuring the information flow through the network extracts correct features, we do not dive into extended hyperparameter tuning.
We skipped one important topic: the training data.
Often, we support our models with artificial data with reasonable success, but this time the payoff is huge. Cropped synthesized texts are so similar to the real images that we can base our models on them and only fine-tune them carefully with real data.
Data generation is a laborious, tricky job. Some say your model is only as good as your data. It feels like carving: any mistake can spoil your material. Worse, you may spot it only after the training.
We have some pretty handy tools in arsenal but they are, again, too general. Therefore we had to introduce some modifications.
In the end, we generated more than 2M images. Obviously, there is no point in, nor possibility of, using all of them. Training datasets are crafted to resemble the real VINs in a very iterative process, day after day, font after font. Modeling a single General Motors font took us at least a few attempts.
But finally, we got there. No more T’s as 1’s, V’s as U’s, and Z’s as 2’s!
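Label generation itself is the easy part; the sketch below produces random 17-character VIN-like strings over the legal alphabet. Real VINs carry more structure (WMI prefix, check digit, model-year code) that a full synthesizer should model; this sketch only enforces the length and the allowed characters.

```python
import random
import string

# VINs never contain 'I', 'O', or 'Q', so neither should synthetic labels.
VIN_ALPHABET = "".join(c for c in string.ascii_uppercase + string.digits
                       if c not in "IOQ")

def random_vin(rng=random):
    """Generate a random 17-character VIN-like label for the synthesizer.

    Only the length and alphabet constraints are enforced here; structural
    constraints of real VINs are deliberately omitted in this sketch."""
    return "".join(rng.choice(VIN_ALPHABET) for _ in range(17))
```

Each label is then rendered with a chosen font, perspective, and noise model to produce a training image with exact per-character bounding boxes for free.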
We utilized many tools. All have advantages and weaknesses and we are very demanding. We need to satisfy a few conditions:
The images below are examples of VIN data generation for the recognizers:





It’s the art of AI engineering to connect so many components into a working pipeline and not mess it up.
Moreover, we have a lot of traps here. Mind these images:

VIN labels often consist of separated strings or two rows, with logos and barcodes present near the caption.
Running in under one second on a mid-range CPU alone, our solution achieves over 90% end-to-end accuracy.
This result depends on the problem definition and the test dataset. For example, we have to decide what to do with images that are impossible to read even for a human. Nevertheless, regardless of the dataset, we approached human-level performance, which is a typical reference level in Deep Learning projects.
We also managed to develop a mobile offline version of our system with similar inference accuracy but a bit slower processing time.
While working on tools designed for business, we can’t forget about the real use-case flow. With the above pipeline alone, we have no defense against photos that are impossible to read, even though we want to have one. Such situations often happen due to:
Usually, we can prevent these situations by asking users to change the angle or retake the photo before we send it to further processing.
However, the classification of these distortions is a pretty complex task! Nevertheless, we implemented a bunch of heuristics and classifiers that allow us to ensure that VIN, if recognized, is correct. For the details, you have to wait for the next post.
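One cheap, well-known plausibility check is worth mentioning already: North American VINs carry a check digit at position 9, computed from the standard character transliteration and position weights, so a recognized VIN that fails this check has certainly been misread. A sketch:

```python
# Standard VIN transliteration values and position weights.
TRANSLIT = {c: v for c, v in zip("ABCDEFGH", range(1, 9))}
TRANSLIT.update(zip("JKLMN", [1, 2, 3, 4, 5]))
TRANSLIT.update({"P": 7, "R": 9})
TRANSLIT.update(zip("STUVWXYZ", [2, 3, 4, 5, 6, 7, 8, 9]))
TRANSLIT.update({str(d): d for d in range(10)})

WEIGHTS = [8, 7, 6, 5, 4, 3, 2, 10, 0, 9, 8, 7, 6, 5, 4, 3, 2]

def check_digit_ok(vin: str) -> bool:
    """Verify the VIN check digit (position 9, North American VINs).

    The weighted sum of transliterated characters modulo 11 must equal
    the 9th character, with remainder 10 encoded as 'X'."""
    if len(vin) != 17 or any(c not in TRANSLIT for c in vin):
        return False
    total = sum(TRANSLIT[c] * w for c, w in zip(vin, WEIGHTS))
    expected = "X" if total % 11 == 10 else str(total % 11)
    return vin[8] == expected
```

Note that the check digit is mandatory only for vehicles sold in North America, so a failed check flags a VIN for review rather than rejecting it outright.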
Last but not least, we’d like to mention that, as usual, there are a lot of additional components built around our VIN Reader. Apart from the mobile application and offline on-device recognition, we’ve implemented a remote backend, pipelines, tools for tagging, semi-supervised labeling, synthesizers, and more.
Just a few years ago, artificial intelligence stirred our imagination via the voice of Arnold Schwarzenegger from "Terminator" or agent Smith from "The Matrix". It wasn't long before the rebellious robots' film dialogue replaced the actual chats we have with Siri or Alexa over our morning cup of coffee. Nowadays, artificial intelligence is more and more boldly entering new areas of our lives. The automotive industry is one of those that are predicted to speed up in the coming years. By 2030, 95-98% of new vehicles are likely to use this technology.
What will you learn from this article?
Looking at the application of AI in various industries, we can name five stages of implementation of such solutions. Today, companies from the Information and Communication Technology (ICT) and Financial Services sectors ("Matured Industries") are taking the lead. Healthcare, Retail, and Life Science ("Aspirational Industries") are following closely behind. Food & Beverages and Agriculture ("Strugglers"), as well as companies from the Chemicals and Oil and Gas sectors ("Beginners"), are bringing up the rear. The middle of the pack is the domain of Automotive and, partly related to it, Industrial Machinery.

Although these days we choose a car mainly for its engine or design, it is estimated that over the next ten years, its software will be an equally significant factor that will impact our purchasing decision.
AI will not only change the way we use our vehicles, but also how we select, design, and manufacture them. Even now, leading brands avail of this type of technology at every stage of the product life cycle - from production through use, to maintenance and aftermarket.
Let's have a closer look at the benefits a vehicle manufacturing company can get when implementing AI in its operations.

An average passenger car consists of around 30,000 separate parts, which, interestingly enough, are usually ordered from various manufacturers in different regions of the world. If, on top of that, we add a complicated manufacturing process, increasingly difficult access to skilled workers, and market dependencies, it becomes clear that potential delays or problems in the supply chain cost companies millions. Artificial intelligence can predict these complex interactions, automate processes, and prevent possible failures and mishaps.
For years, companies from the automotive industry have been trying to find ways to enhance work on the production line and increase efficiency in areas where people would get tired easily or be exposed to danger. Industrial robots have been present in car factories for a long time, but only artificial intelligence has allowed us to introduce a new generation of devices that work in direct contact with people. AI-controlled co-bots move materials, perform tests, and package products, making production much more effective.
The power of artificial intelligence lies not only in analyzing huge amounts of data but also in the ability to learn and draw conclusions. This fact can be used by finding weak points in production, controlling the quality of car bodies, metal or painted surfaces, and also by monitoring machine overload and predicting possible failures. In this way, companies can prevent defective products from leaving the factories and avoid possible production downtime.
In a competitive and oversaturated market, selling vehicles is very difficult. Brands constantly compete in services and technologies that are meant to provide buyers with new experiences and facilitate the purchasing process. Manufacturers use artificial intelligence not only at the stage of prototyping and modeling vehicles, but also at the end of the manufacturing process, when the vehicle is eventually sold. A well-designed configurator based on AI algorithms is often the final argument that convinces the customer to buy their dream vehicle - especially when we are talking about luxury cars.
A dangerous situation on the road, a vehicle in the blind spot, power steering on a slippery surface - all those situations can be supported by artificial intelligence, which will calculate the appropriate driving parameters or correct the way the driver behaves on the road. Instead of relying on human decisions - which are often emotion-imbued or lack experience - brands increasingly hand them over to machines, thus reducing the number of accidents and protecting people's lives.
Car journeys may be exhausting. But not for artificial intelligence. The biggest brands are increasingly equipping vehicles with solutions aimed at monitoring fatigue and driver reaction time. By combining intelligent software with appropriate sensors, the manufacturer can fit the car with features that will significantly reduce the number of accidents on the road and discomfort from driving in difficult conditions.
Cars that we are driving today are already pretty smart. They can alert you whenever something needs your attention, and they can pretty precisely say what they actually need - oil, an engine check, new lights, etc. The Connected Car era, however, combined with the possibilities given by AI, brings a whole lot more: predictive maintenance. In this case, AI monitors all the sensors within the car and is set to detect potential problems even before they occur.
AI can easily spot any changes that may indicate failure, long before they could affect the vehicle’s performance. To take this idea even further: thanks to the Over-The-Air Update feature, after finding a bug that can be fixed by a system patch, such a fix can be sent to the car directly by the manufacturer, without the need for the customer to visit the dealership.
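As a toy illustration of this idea, the sketch below flags sensor readings that deviate sharply from their recent rolling statistics. Real predictive-maintenance systems use learned, per-sensor models; the window size and threshold here are arbitrary assumptions.

```python
def anomaly_flags(readings, window=20, z_threshold=3.0):
    """Flag readings that deviate from the rolling mean of the previous
    `window` samples by more than `z_threshold` standard deviations."""
    flags = []
    for i, x in enumerate(readings):
        history = readings[max(0, i - window):i]
        if len(history) < window:
            flags.append(False)  # not enough context yet
            continue
        mean = sum(history) / len(history)
        var = sum((h - mean) ** 2 for h in history) / len(history)
        std = var ** 0.5 or 1e-9  # avoid division by zero on flat signals
        flags.append(abs(x - mean) / std > z_threshold)
    return flags
```

A flagged reading would trigger a diagnostic check long before the deviation becomes a breakdown.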
Driving a car is not only about operating costs and repairs, but also about the insurance each of us is required to purchase. In this respect, AI can be useful not only for insurance companies (see how AI can improve the claims handling process), but also for drivers themselves. Thanks to the appropriate software, we will be reminded about expiring insurance or even buy it directly from the comfort of our car, without having to visit the insurer's website or a physical branch.
It is estimated that in 2015 only 5-10% of cars had some form of AI installed. The last five years have brought the spread of solutions such as parking assistance, driver assistance, and cruise control. However, the real boom is likely to occur within the next 8-10 years.
From now on, artificial intelligence in the automotive industry will no longer be a novelty or wealthy buyers’ whims. The spread of the Internet of Things, consumer preferences and finding ways of saving money in the manufacturing process will simply force manufacturers to do this - not only in the vehicle cockpits, but also on the production and service lines.
To this end, they will be made to cooperate with manufacturers of sensors and ultrasonic solutions (cooperation between BMW and Mobileye, Daimler and Bosch, or VW and Ford with Aurora) and IT companies providing software for AI. A dependable partner who understands the potential of AI and knows how to use its power to create the car of the future is the key to success for companies in this industry.
Reach out for tailored solutions and expert guidance.