
How AI Inpainting Fills In Missing Parts of Photos

Inpainting is the art and science of restoring missing or damaged parts of an image. Whether it is removing a distracting power line from a landscape or erasing a photobomber from a vacation memory, the goal is the same: to fill the gap so seamlessly that a viewer never suspects anything was ever there.


How the human eye and brain do it

Our brains are natural inpainters. Every human has a "blind spot" in their retina where the optic nerve attaches, yet we don't see a black hole in our vision. Our brain constantly "hallucinates" the missing information based on the surrounding colors and patterns.

When we look at a photo with a missing piece, we don't just see a void; we see what should be there. If a fence is partially blocked, we know the rails continue behind the obstruction. AI inpainting attempts to replicate this cognitive leap using mathematical models.

From Photoshop Clone Stamp to AI

The evolution of object removal has moved from manual labor to automated intelligence:

  • Manual Clone Stamp: Early digital editors had to manually copy pixels from one part of an image and "stamp" them over the object. It was tedious and often left visible seams.
  • Content-Aware Fill (PatchMatch): Introduced around 2009, this algorithm searched the image for similar "patches" and stitched them together. It worked well for simple textures like grass but failed on complex structures.
  • Deep Learning (GANs): Starting around 2016, neural networks began "learning" what the world looks like, allowing them to generate entirely new pixels rather than just copying existing ones.
  • LaMa (Large Mask Inpainting): Released in 2021, this model changed the game by using Fourier convolutions to understand global structures, making it possible to remove huge objects with high fidelity.
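The patch-based idea behind Content-Aware Fill can be sketched in a few lines: for each missing pixel, search the known part of the image for the most similar surrounding patch and copy its value in. This is a toy brute-force illustration, not Adobe's actual PatchMatch algorithm, which uses a randomized search to make the lookup fast.

```python
import numpy as np

def patch_fill(image, mask, patch=3):
    """Toy patch-based fill (grayscale). For each masked pixel, find the
    known patch elsewhere in the image that best matches its surroundings
    and copy that patch's centre pixel. Real PatchMatch replaces the
    brute-force inner search with a fast randomized one."""
    h, w = image.shape
    r = patch // 2
    out = image.copy()
    known = ~mask
    for y in range(r, h - r):
        for x in range(r, w - r):
            if not mask[y, x]:
                continue
            # Neighbourhood of the hole, with unknown pixels zeroed out.
            win = known[y-r:y+r+1, x-r:x+r+1]
            target = win * image[y-r:y+r+1, x-r:x+r+1]
            best, best_dist = None, np.inf
            for yy in range(r, h - r):
                for xx in range(r, w - r):
                    # Only compare against fully known candidate patches.
                    if mask[yy-r:yy+r+1, xx-r:xx+r+1].any():
                        continue
                    cand = win * image[yy-r:yy+r+1, xx-r:xx+r+1]
                    dist = np.sum((target - cand) ** 2)
                    if dist < best_dist:
                        best_dist, best = dist, (yy, xx)
            if best is not None:
                out[y, x] = image[best]
    return out
```

On a regular texture such as stripes, this reliably restores the pattern; on complex structures it has nothing better to do than copy the nearest look-alike, which is exactly where patch-based methods break down.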

How LaMa works (simplified)

The LaMa model is currently the state-of-the-art for "object removal" tasks. Unlike older models that only looked at the pixels immediately touching the hole, LaMa looks at the entire image at once.

[Diagram: AI inpainting pipeline — image + mask → LaMa AI model → result]

The secret sauce of LaMa is the Fast Fourier Transform (FFT). Standard neural networks process images like a magnifying glass, looking at small neighborhoods of pixels. LaMa converts the image into the frequency domain, which allows it to see repeating patterns (like the texture of a brick wall or the ripples in water) across the entire frame instantly.
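The intuition is easy to demonstrate with numpy: a repeating texture that touches every pixel in the spatial domain collapses to a handful of sharp peaks in the frequency domain, which is why operating in frequency space gives the network an image-wide view of periodic structure. This is only an illustration of the FFT's behavior; LaMa's actual Fast Fourier Convolution blocks combine a spectral branch with an ordinary spatial one.

```python
import numpy as np

# A 64x64 "striped wall" texture: a pattern repeating every 8 pixels.
x = np.arange(64)
texture = np.tile(np.sin(2 * np.pi * x / 8), (64, 1))

# In the spatial domain the pattern is spread over thousands of pixels...
spatial_nonzero = np.count_nonzero(np.abs(texture) > 1e-9)

# ...but in the frequency domain it collapses to a couple of peaks.
spectrum = np.abs(np.fft.fft2(texture))
spectral_peaks = np.count_nonzero(spectrum > 1e-6 * spectrum.max())

print(spatial_nonzero, "nonzero pixels vs", spectral_peaks, "spectral peaks")
```

A convolution applied in this frequency representation touches every one of those pixels at once, instead of crawling across the image one small neighborhood at a time.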

Why 512×512?

You may notice that many AI tools work best on specific square regions. This is because neural networks have a fixed "input resolution" they were trained on, usually 512×512 or 1024×1024 pixels.

To handle a high-resolution 20-megapixel photo, the software doesn't feed the whole image into the AI at once. Instead, it identifies the area you want to change, crops a 512px patch around it, lets the AI fill the gap, and then carefully "blends" that patch back into the original high-res file.
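That crop-edit-blend loop can be sketched as follows. This is a simplified illustration under stated assumptions: a grayscale image, a hole small enough to fit in one patch, and a hard mask boundary (real editors resize when the mask exceeds the patch and feather the seam so the edit is invisible). The `inpaint_fn` callback stands in for the AI model.

```python
import numpy as np

PATCH = 512  # the model's fixed input resolution

def inpaint_highres(image, mask, inpaint_fn):
    """Crop a PATCH x PATCH window around the masked area, run the model
    on just that window, and merge the result back into the original."""
    ys, xs = np.nonzero(mask)
    cy, cx = int(ys.mean()), int(xs.mean())       # centre of the hole
    h, w = mask.shape
    # Clamp the window so it stays inside the image.
    y0 = min(max(cy - PATCH // 2, 0), max(h - PATCH, 0))
    x0 = min(max(cx - PATCH // 2, 0), max(w - PATCH, 0))
    y1, x1 = min(y0 + PATCH, h), min(x0 + PATCH, w)

    crop = image[y0:y1, x0:x1]
    crop_mask = mask[y0:y1, x0:x1]
    filled = inpaint_fn(crop, crop_mask)          # the AI runs only here

    # Keep original pixels outside the mask, model pixels inside it.
    out = image.copy()
    out[y0:y1, x0:x1] = np.where(crop_mask, filled, crop)
    return out
```

Because the model only ever sees the 512 px window, a 20-megapixel photo costs no more to edit than a thumbnail, as long as the object being removed fits in the patch.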

What works well, what doesn't

AI inpainting is a master of plausibility, not necessarily accuracy. It doesn't know what was actually behind the object; it just knows what "looks right."

  • Success Stories: Natural textures like grass, sand, sky, and water are easy for AI because they follow predictable statistical patterns.
  • The Challenges: Human faces, text, and complex architectural geometry are difficult. If the AI fills a gap in a face, it might create an "uncanny valley" effect, because our eyes are highly sensitive to facial proportions.

Running AI in your browser

Modern web technology like ONNX Runtime Web and WebAssembly (WASM) allows these massive ML models to run directly on your computer's hardware through the browser.

Privacy by Design: Because the model runs locally in your browser, your photos never leave your device. The "processing" happens in your RAM, not on a remote server.

The trade-off is the initial download. A high-quality inpainting model like LaMa is roughly 200 MB. Once downloaded, it is cached by your browser, making subsequent uses near-instant.


AI inpainting doesn't just "erase" an object; it performs a digital hallucination, calculating the most statistically probable reality to fill the void you've created.

Try it yourself

Put what you learned into practice with our Remove Object from Photo tool.