Marcin Augustynowicz of ControlExpert Poland, a Solvd group company specializing in automotive insurance claims using AI, data, and expert systems, explores the dark side of artificial intelligence in the insurance industry. As advanced tools become more accessible, fraudsters are weaponizing AI to create convincing fake damage, recycle images, and manipulate claims at scale. What once took expertise now takes an app. Drawing on years of research and real-world investigations, Augustynowicz reveals how this new wave of fraud is evolving, and what must be done to stop it.
In recent years, artificial intelligence (AI) has transformed numerous sectors, including finance, medicine, education, and the automotive insurance industry. Yet, as with all transformative technologies, AI’s rapid adoption comes with both promise and peril. One of the most alarming developments we’re now confronting is the systematic exploitation of AI for fraudulent insurance claims within the automotive sector. It’s a problem that is escalating, difficult to detect, and, for now, largely unpunished.
Example 1: The system has identified and linked images in which the damage partially overlaps. On the corner of the bumper trim, there are characteristic cracks, indicated by arrows. The photo from the more recent claim, however, also shows a dent in the fender.
Example 2: An artificial intelligence algorithm identified identical damage to the rear lamp occurring on two different vehicles.
My team and I at ControlExpert have spent the last three years studying this phenomenon in the Polish market, and our findings are both fascinating and concerning. What began as a curiosity about why damaged car parts, such as headlights or bumpers, are sold in sets online quickly led us to uncover an ecosystem of professional fraud. These damaged parts, often beyond repair, serve a very specific purpose: they’re used to stage insurance claims with “evidence” of collisions that never happened. After the claim is approved and paid out, the original, undamaged parts are reinstalled. The result? A clean car, a closed claim, and a fraudulent payout.
Example 3: Re-reported damage detected by the FOTO system. The same damage was reported under third-party liability insurance more than six months apart.
Example 4: Damage linked by the FOTO system, in which the damage to the trunk lid is identical. In the claim reported almost a year later (below), however, the damage in the bumper area is smaller than in the older claim (photo above).
“The FOTO system was a unique project for us – both in terms of data volume and impact. We are proud to provide a scalable market-wide solution that specifically helps to prevent abuse,” says Grzegorz Czekiel, President of the Management Board of ControlExpert Polska. “The consolidated effort of almost the entire insurance market under the leadership of UFG is a huge step forward.”
This isn’t theoretical. We discovered examples where body shops reused the same damage photos in dozens of claims. In one case, a photo of a damaged bumper was used in over 40 separate claims. We’ve even identified images of cracked windshields appearing in claims for vehicles of entirely different makes, ages, and specifications. And these aren’t isolated incidents. The frequency and scale suggest something more industrial than incidental: a fraud factory fueled by digital manipulation.
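The most blatant pattern described above, one photo filed under dozens of claims, can be caught without any AI at all: a byte-for-byte reuse check on image digests. The sketch below is a minimal illustration under assumed data shapes (the `claims` mapping and its photo byte strings are hypothetical, not ControlExpert's actual pipeline); it flags any digest that appears under more than one claim ID.

```python
import hashlib
from collections import defaultdict

def find_reused_photos(claims):
    """Group claim IDs by the SHA-256 digest of each attached photo.

    `claims` maps a claim ID to a list of photo byte strings; any digest
    shared by more than one claim flags a reused image.
    """
    seen = defaultdict(set)
    for claim_id, photos in claims.items():
        for photo_bytes in photos:
            digest = hashlib.sha256(photo_bytes).hexdigest()
            seen[digest].add(claim_id)
    return {d: ids for d, ids in seen.items() if len(ids) > 1}

# Hypothetical claim data: the same "bumper" photo filed under three claims.
claims = {
    "CL-001": [b"bumper-photo-bytes", b"fender-photo-bytes"],
    "CL-002": [b"bumper-photo-bytes"],
    "CL-003": [b"bumper-photo-bytes", b"windshield-photo-bytes"],
}
reused = find_reused_photos(claims)
assert len(reused) == 1  # exactly one digest is shared across claims
```

Exact hashing only catches verbatim reuse, of course; even a one-pixel edit defeats it, which is why production systems also need the similarity matching discussed later in the article.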
What’s most troubling is that these acts are not committed by amateur fraudsters working in isolation. Instead, they are orchestrated by professional repair shops, entities we trust to fix our vehicles, who are now using AI tools to generate, alter, and replicate images as part of their revenue streams. For them, this is simply an extension of their business model. Repairing cars and gaming the system have become two sides of the same coin.
As AI continues to democratise, so too does the capability to deceive. Modern generative tools can fabricate realistic damage with shocking ease. Tutorials are circulating on social media showing users how to edit images of their undamaged vehicles to simulate crashes, all under the pretence of harmless pranks, though the implications are far more serious. For less than the cost of a dinner out, one can purchase a suite of modified “accident” photos designed to trick insurers. Worse still, many of these low-level frauds go unpunished. If a fraudulent claim is rejected, there are no legal repercussions.
This absence of accountability emboldens others. As the risk diminishes and the ease increases, more and more people, both individuals and businesses, are tempted to test the system. Even worse, insurance providers often remain silent, unwilling to reveal the scale of the issue due to reputational concerns.
AI should have been the silver bullet for efficiency and objectivity in claims processing. Instead, it has created a new loophole, one that’s being exploited faster than we can seal it. That said, we are not without hope. Strategies are underway to mitigate this crisis.
Example 5: Examples of photos used to document the purchase of parts or the performance of repair work. Each of the above photos appeared in at least 30 claims.
On the initiative of the Polish Chamber of Insurers and the Polish Guarantee Fund, we’ve implemented a centralised database that collects millions of claim images. By leveraging AI to scan for patterns and similarities, we can flag suspicious images that have been reused, as well as damaged parts that recur across claims. We are also collaborating with the Polish Guarantee Fund to train detection models that distinguish authentic photos from altered ones. These initiatives offer a glimmer of defence, but they’re only part of the solution.
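Scanning millions of images for "similar" rather than identical pictures typically relies on perceptual hashing, where a lightly edited copy of a photo still produces a nearby fingerprint. The following is a toy sketch of the average-hash idea, not the FOTO system's actual algorithm: it uses tiny hand-written grayscale matrices in place of downscaled claim photos, and compares fingerprints by Hamming distance.

```python
def average_hash(pixels):
    """Simple average-hash: 1 where a pixel is above the image's mean.

    `pixels` is a small grayscale matrix (lists of 0-255 values), standing
    in for a claim photo downscaled to a fixed grid.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return sum(x != y for x, y in zip(a, b))

# Hypothetical downscaled photos: `b` is a lightly re-edited copy of `a`
# (small brightness tweaks), while `c` is an unrelated image.
a = [[10, 200], [220, 30]]
b = [[12, 198], [225, 28]]
c = [[200, 10], [30, 220]]

assert hamming(average_hash(a), average_hash(b)) == 0  # flagged as reuse
assert hamming(average_hash(a), average_hash(c)) == 4  # clearly distinct
```

Because the hash depends only on which pixels sit above the mean, recompression, resizing, and mild exposure edits leave the fingerprint intact, which is exactly the property a reuse detector needs.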
What we truly need is a united front: collaboration across borders, across companies, and across sectors. Insurance companies must be willing to share data, even sensitive data, to build more robust fraud detection frameworks. Governments must consider legal reforms that allow for stricter penalties on claim fraud, even on a small scale. And the tech industry must accelerate the development of tamper-evident image capture standards, such as digital signatures embedded in photos, that make forgery easier to detect.
Until then, the AI fraud arms race will continue. For every advancement we make in detection, those seeking to exploit the system will move two steps ahead. But with vigilance, innovation, and cooperation, we can still close the gap.
We owe it to the vast majority of honest claimants and to the financial sustainability of our insurance systems not to let technology’s promise become its downfall.