
VP, AI and Cloud R&D at Grape Up, responsible for developing the overall technology vision of the company with focus on artificial intelligence, deep learning and cloud native technologies. With almost 15 years of hands-on experience in the IT industry, he drives the company’s technology strategy and works closely with engineering teams to ensure continuous delivery of innovative software solutions.

In industrial manufacturing, the Data Act obligation applies to data generated by the use of the product: telemetry, logs, performance metrics, or error events produced by an industrial robot operating in a customer's plant.
In practice, this data is handled through the product's own technical stack: controllers, gateways, edge collectors, embedded software, OEM applications, and sometimes manufacturer-operated cloud or service platforms. These components are designed to operate the product, support maintenance, and enable value-added services, not to serve as regulated access points for external data consumers.
According to Latham & Watkins, "The EU Data Act is the most significant overhaul of European data law since the GDPR, with its impact being more disruptive than the EU AI Act." The regulation introduces a fundamentally different access paradigm: data access becomes externally initiated, user-directed, and subject to legal and contractual constraints.
Requests may be episodic or continuous, may involve third parties, and must be handled consistently across products, customers, and jurisdictions. Product runtime and service systems are simply not designed to absorb external variability, enforce regulatory access logic, or act as governed interfaces to broader data ecosystems.
A dedicated Data Act enablement layer reframes the problem entirely. It introduces a buffered, governed boundary between product-generated data and external data consumers.
Product data is collected, normalized, and exposed through this layer, not directly from controllers, gateways, or operational service components. External users never interact with the product runtime itself. They interact with a controlled access surface that enforces policy, security, scope, and contractual constraints by design.
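The controlled access surface described above can be sketched in a few lines. This is a minimal illustration, not a Data Act implementation; all class names, dataset names, and purposes are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class AccessRequest:
    requester: str        # external party making the request
    dataset: str          # e.g. "telemetry", "error_events" (illustrative)
    purpose: str          # declared purpose of use

@dataclass
class AccessPolicy:
    # (requester, dataset) -> set of purposes a contract permits
    grants: dict = field(default_factory=dict)

    def allows(self, req: AccessRequest) -> bool:
        permitted = self.grants.get((req.requester, req.dataset), set())
        return req.purpose in permitted

def serve(req: AccessRequest, policy: AccessPolicy, store: dict):
    """Serve normalized data only if the policy permits.

    The product runtime is never touched: `store` represents data already
    collected and normalized by the enablement layer.
    """
    if not policy.allows(req):
        raise PermissionError(
            f"{req.requester} not authorized for {req.dataset}/{req.purpose}")
    return store.get(req.dataset, [])
```

The key design choice is that authorization lives in one place (the policy object), so compliance logic can evolve without changes to product software.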
As Gibson Dunn notes, "The Data Act will touch companies of all sizes in almost every sector of the European economy, including manufacturers of smart consumer devices, cars, connected industrial machinery, smart fridges and other home appliances."
This decoupling allows manufacturers to evolve compliance logic independently from product software and service architectures, protecting both product integrity and regulatory readiness.
The Data Act does not create a single access event. It creates a continuous expectation of availability. Users and third parties may request data at different times, at different scales, and for different purposes.
Meeting these obligations at scale requires robust data access infrastructure as a regulatory capability, not just a developer convenience.
Rate limiting, throttling, monitoring, and fair-access enforcement are essential controls for meeting obligations without destabilizing product or service operations. By centralizing these mechanisms, a dedicated enablement layer allows manufacturers to respond predictably to demand without redesigning product integrations for each new request.
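As one concrete example of the throttling mechanisms mentioned above, a token-bucket limiter is a common way to cap request rates while still allowing short bursts. This is a generic sketch; the rate and capacity values are illustrative, not values prescribed by the regulation:

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter for an external access endpoint."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate            # tokens refilled per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        """Consume `cost` tokens if available; otherwise reject the request."""
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False
```

Placing such a limiter in the enablement layer means a sudden spike in third-party requests degrades to rejected calls at the boundary rather than load on controllers or service systems.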
Industrial data sharing spans two distinct interaction models: request-based access, where a user or an authorized third party pulls specific data on demand, and event-based distribution, where data is pushed to authorized recipients as it is produced.

A dedicated data access layer supports both models cleanly, enabling controlled, request-based access where appropriate and governed event-based distribution where justified, while insulating product operation from variability.
Many manufacturers initially respond to Data Act requests using familiar mechanisms: spreadsheet exports, manual data pulls, or custom APIs built for specific customers. These approaches may work in isolation, but they do not survive repetition.
Each manual exception introduces inconsistency, draws engineering teams into compliance activities, and weakens auditability.
Critically, the Data Act is not an isolated requirement. Manufacturers are already facing, or will soon face, additional, structurally similar obligations such as Digital Product Passports.

Treating each obligation as a separate exception multiplies complexity. Only standardized, repeatable, and automated mechanisms can support this shift without turning compliance into a permanent operational bottleneck.
Without a shared enablement layer, Data Act logic is implemented repeatedly-product by product, customer by customer, and integration by integration. This fragments behavior across the product portfolio and makes governance increasingly difficult.
A centralized approach allows manufacturers to implement Data Act rules once and apply them consistently across product lines, deployments, and markets.
Compliance becomes an architectural capability rather than a feature of individual products.
The most important requirement remains unchanged: compliance must not interfere with how products operate in the field. Industrial products cannot absorb regulatory experimentation or unstable access patterns.
By decoupling regulated data sharing from product runtime and service systems, manufacturers can meet Data Act obligations while preserving safety, reliability, and performance. A dedicated enablement layer acts as a governed interface between product-generated data and the outside world.
The EU Data Act is not temporary. Expectations around product data access will continue to grow as industrial data ecosystems mature.
The European Commission projects the EU data economy will reach €743–908 billion by 2030, up from €630 billion in 2025. Manufacturers that invest in a dedicated Data Act enablement layer gain predictable compliance, scalable data sharing, and long-term architectural resilience.
Those that rely on tactical fixes will find that each new request increases cost, complexity, and operational risk.
The EU Data Act became enforceable on September 12, 2025. Companies selling connected products in the EU must be compliant by this date. Design requirements for new products apply from September 12, 2026.
Manufacturers must provide access to data generated by the use of connected products, including telemetry, logs, performance metrics, sensor readings, and error events. This applies to both personal and non-personal data that is "readily available" without disproportionate effort.
Penalties can reach up to €20 million or 4% of global annual turnover, whichever is higher. This mirrors the GDPR penalty structure. Additionally, the Data Act allows for collective civil lawsuits similar to US class actions.
Yes. The regulation applies to all connected products sold in the EU, regardless of whether customers are consumers or businesses. Industrial machinery, manufacturing equipment, and B2B IoT devices are all in scope.
Does the Data Act require a specific technical architecture?
No. The Data Act specifies what outcomes must be achieved, not how to achieve them. A dedicated data access layer is one architectural approach that can help meet these requirements, but it is not mandated by the regulation itself.
GDPR focuses on personal data protection and minimization. The Data Act focuses on access rights to product-generated data, including non-personal industrial data. Both regulations can apply simultaneously-where personal data is involved, GDPR requirements also apply.
Digital Product Passports (DPPs) are digital records containing product lifecycle data, materials, and sustainability information. Starting February 2027 for batteries and expanding to other product categories, DPPs represent a parallel data-sharing obligation that will benefit from the same architectural approach as Data Act compliance.
Viewed from a high level, the claims adjustment domain is a complex web of interdependent elements: processes, data, activities, documents, systems, and many others. Many people are involved in the process, and in many cases they struggle with a lot of inefficiency in their daily work. This is exactly where AI can help. AI-based solutions and mechanisms can automate, simplify, and speed up many parts of the claims adjustment process, and eventually reduce overall adjustment costs.
Let's look at the claims adjustment process in more detail. There are multiple steps on the way: when an event that causes a loss for the customer occurs, the customer notifies the insurance company about the loss and files a claim. Then the company needs to gather all the information and documentation to understand the circumstances, assess the situation, and eventually be able to validate their responsibility and estimate the loss value. Finally, the decision needs to be made, and appropriate parties, including the customer, need to be notified about the result of the process.

At each step of this process, AI can not only introduce improvements and optimizations but also enable new possibilities and create additional value for the customer.
Let’s dive into a few examples of potential AI applications to the claims adjustment process in more detail.
Incoming correspondence related to claims is very often wrongly addressed. Statistics show that, on average, 35% of messages are incorrectly addressed. Some are sent to a generic corporate inbox, others to the wrong people, and sometimes even to entirely different departments. That causes a lot of confusion and requires time to reroute each message to the correct place.
AI can be very helpful in this scenario - an algorithm can analyze the subject and the content of the message, look for keywords such as the claim ID, the name of the customer, or the policy number, and automatically reroute the message to the correct recipient. Furthermore, the algorithm can analyze the context and detect whether it is a new claim report or a missing attachment that should be added to an already-filed claim. Such a solution can significantly improve effectiveness and speed up the process.
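A first version of such routing does not even require machine learning; a rule-based sketch along these lines can serve as a baseline before training a classifier. The identifier formats and team names below are invented for illustration:

```python
import re

# Hypothetical identifier formats; a real insurer would use its own schemes.
CLAIM_ID = re.compile(r"\bCLM-\d{6}\b")
POLICY_NO = re.compile(r"\bPOL-\d{8}\b")

def route(subject: str, body: str) -> str:
    """Pick a destination queue based on identifiers found in the message."""
    text = f"{subject}\n{body}"
    if CLAIM_ID.search(text):
        # A claim ID suggests a follow-up or missing attachment for a filed claim.
        return "existing-claims-team"
    if POLICY_NO.search(text):
        # A policy number without a claim ID suggests a new claim report.
        return "new-claims-intake"
    # No identifiers found; route to manual triage.
    return "triage-inbox"
```

A learned classifier could later replace the final fallback branch, while the deterministic identifier checks keep routing predictable for well-formed messages.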
The automation of processing incoming documents and messages could be taken one step further. What if we used an AI algorithm to analyze the content of the message itself? A claim report can be sent using an official form, but also as a plain email message or even as a scanned paper document – the solution could analyze the document and extract the key information about the claim so that it can be automatically added to the claim registry system. Simultaneously, the algorithm could check whether all the needed data, documents, and attachments are provided and, if not, notify the reporter appropriately. In a "traditional" approach, this part is often manual and thus takes a lot of time. Introducing an AI-based mechanism here would drastically reduce the amount of manual work, especially for well-defined and repeatable cases, e.g., car insurance claims.
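Once extraction has produced structured fields, the completeness check described above becomes straightforward. The required fields and attachment types below are assumptions for a hypothetical car insurance claim:

```python
# Illustrative requirements for a car insurance claim; a real system would
# load these per product line and jurisdiction.
REQUIRED_FIELDS = {"claim_id", "policy_number", "incident_date", "description"}
REQUIRED_ATTACHMENTS = {"photo"}

def missing_items(extracted: dict, attachments: set) -> list:
    """Return what the reporter still needs to provide.

    The result can drive an automatic notification instead of an adjuster
    chasing missing information manually.
    """
    missing = [f for f in sorted(REQUIRED_FIELDS) if not extracted.get(f)]
    missing += [f"attachment:{a}" for a in sorted(REQUIRED_ATTACHMENTS - attachments)]
    return missing
```

An empty result means the claim can flow straight into the registry system; a non-empty one becomes the body of the notification to the reporter.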
Appraisal of the filed claim and verification of reported damage is another lengthy step in the claims adjustment process. The adjuster needs to verify whether the reported damage is genuine and whether it includes damage that occurred previously. Computer vision techniques can be used here to automate and speed up the process - e.g., by analyzing pictures of the car taken by the customer after the accident, or analyzing satellite or aerial photos of a house in the case of property insurance.
AI-driven verification can also help identify fraudulent operations and recognize costs that are not related to the filed claim. In some cases, invoices presented for reimbursement include items or services that should not be there, or whose cost is calculated using excessive rates. AI can help compare the presented invoices with estimated costs and indicate inflated rates or excess costs - for example, in the case of medical treatment or a hospital stay. Similarly, the algorithm can verify whether car repair costs are calculated correctly by analyzing the reported damage and comparing the average rate for corresponding repair services with the presented rate.
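A minimal version of that rate comparison might look like the following sketch, which flags invoice lines whose rate exceeds a reference average by a tolerance. The reference rates and the 20% threshold are invented for illustration:

```python
# Hypothetical reference averages per repair service, e.g. from market data.
REFERENCE_RATES = {"bumper_repair": 300.0, "paintwork": 150.0}

def flag_inflated(lines: list, tolerance: float = 0.2) -> list:
    """Return the services on an invoice whose rate exceeds the reference
    average by more than `tolerance` (20% by default)."""
    flagged = []
    for line in lines:   # each line: {"service": str, "rate": float}
        ref = REFERENCE_RATES.get(line["service"])
        # Unknown services are skipped here; a real system would route them
        # to manual review instead.
        if ref is not None and line["rate"] > ref * (1 + tolerance):
            flagged.append(line["service"])
    return flagged
```

Flagged lines would then be surfaced to the adjuster rather than auto-rejected, keeping a human in the loop for the final decision.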
Such automated verification helps flag potentially fraudulent situations and saves adjusters' time, letting them focus only on unclear cases rather than analyzing each one manually.
In the current COVID-19 situation, digital services and products are becoming critical for all the industries. Providing policyholders with the capability to effectively use online channels and virtual services is essential for the insurance industry as well.
One of our customers wanted to speed up the processing of claims reported through their mobile application. The insurer faced a challenging issue, as 8% of claims reported through the mobile application were rejected due to the bad quality of VIN images. Adjusters had problems with deciphering the Vehicle Identification Number and had to request the same information from the customer. The whole process was unnecessarily prolonged and frustrating for the policyholder.
By introducing a custom machine learning model, trained specifically for VIN recognition instead of a generic cloud service, our customer increased VIN extraction accuracy from 60% to 90%, saving on average 1.5 hours per day for each adjuster. Previously rejected claims can now be processed more quickly and without asking policyholders for information they already provided, resulting in increased NPS and overall customer satisfaction.
https://www.youtube.com/watch?v=oACNXmlUgtY
Those are just a few examples of how AI can improve claims adjustment. If you would like to know more about leveraging AI technologies to help your enterprise improve its business, tell us about your challenges and we will jointly work on tackling them.
Artificial Intelligence seems to be a quite overused term in recent years, yet it is hard to argue that it is definitely the greatest technological promise of current times. Every industry strongly believes that AI will empower them to introduce innovations and improvements to drive their businesses, increase sales, and reduce costs.
But even though AI is no longer a new thing, companies struggle with adopting and implementing AI-driven applications and systems. That applies not only to large-scale implementations (which are still very rare) but often to the very first projects and initiatives within an organization. In this article, we will shed some light on how to successfully adopt AI and benefit from it.
How to start then? The answer might sound trivial, but it goes like this: start small and grow incrementally. Just like any other innovation, AI cannot be rolled out throughout the organization at once and then harvested across various business units and departments. The very first step is to start with a pilot AI adoption project in one area, prove its value and then incrementally scale up AI applications to other areas of the organization.
But how to pick the right starting point? A good AI pilot project candidate should have certain characteristics:
At Grape Up, we help our customers choose the initial AI project candidate by following a proven process. The process consists of several steps and eventually leads to implementing a single pilot AI project in production.
We start with identifying possible areas in the organization that might be enhanced with AI, e.g., parts of processes to improve, problems to solve, or tasks to automate. This part of the process is the most essential as it becomes the baseline for all subsequent phases. Therefore it is crucial to execute it together with the customer but also ensure the customer understands what AI can do for their organization. To enable that, we explain the AI landscape, including basic technology, data, and what AI can and cannot do. We also show exemplary AI applications to a customer-specific industry or similar industries.
Having that as a baseline, we move on to the more interactive part of that phase. Together with customer executives and business leaders, we identify major business value drivers as well as current pain points & bottlenecks through collaborative discussion and brainstorming. We try to answer questions such as:
This step results in a list of several (usually 5 to 10) ideas ready for further investigation on where to potentially start applying AI in the organization.
The next step aims at detailing the previously selected ideas. Again, together with the customer, we define detailed business cases describing how problems identified in step 1 could be solved and how these solutions can create business value.
Every idea is broken down into a more detailed description using the Opportunity Canvas approach - a simple model that helps define the idea better and consider its business value. Using filled canvas as the baseline, we analyze each concept and evaluate against the business impact it might deliver, focusing on business benefits and user value but also expected effort and cost.
Eventually, we choose 4-8 ideas with the highest impact and the lowest effort and describe detailed use cases (from business and high-level functional perspective).
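The impact-versus-effort selection described above can be expressed as a simple ranking. Scores would come from the filled Opportunity Canvases; the values in the test are hypothetical:

```python
def prioritize(ideas: list, top_n: int) -> list:
    """Rank ideas by impact per unit of effort and keep the top candidates.

    Each idea: {"name": str, "impact": number, "effort": number > 0},
    with scores taken from the Opportunity Canvas evaluation.
    """
    ranked = sorted(ideas, key=lambda i: i["impact"] / i["effort"], reverse=True)
    return [i["name"] for i in ranked[:top_n]]
```

In practice the scoring is a conversation aid rather than a formula; the point is to make the high-impact, low-effort candidates explicit before detailed use cases are written.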
In this phase, we evaluate the technical feasibility of previously identified business cases – in particular, whether AI can address the problem, what data is needed, whether the data is available, what is the expected cost and timeframe, etc.
This step usually requires technical research to identify AI tools, methods, and algorithms that could best address the given computational problem, data analysis – to verify what data is needed vs. what data is available and often small-scale experiments to better validate the feasibility of concepts.
We finalize this phase with a list of 1-3 PoC candidates that are technically feasible to implement but more importantly – are verified to have a business impact and to create business value.
Implementation of the PoC project is the goal of this phase and involves data preparation (to create data sets whose relationship to the model targets is understood), modeling (to design, train, and evaluate machine learning models), and eventual deployment of the PoC model that best addresses the defined problem.
It results in a working PoC that creates business value and is the foundation for the production-ready implementation.

Once the customer is satisfied with the PoC results, they want to productionize the solution to fully benefit from the AI-driven tool. Moving pilots to production is also a crucial part of scaling up AI adoption. If successful projects remain just experiments and PoCs, it is difficult for a company to move forward and apply AI to other processes within the organization.
To summarize the most important aspects of a successful AI adoption: