Paper Title

Abstract - Mural paintings are invaluable pieces of historical and cultural heritage, yet, painted on substrates such as plaster and stone, they have been exposed to environmental deterioration over thousands of years. As a result, these priceless artworks often suffer damage, including missing regions, cracks, and color fading. This study puts forth a method to restore worn or missing areas in digital images of murals and comparable artworks. Existing image inpainting techniques use the color and texture information within an image to reconstruct damaged areas. However, because of the wide range of colors in mural paintings and structural features that have deteriorated over time, these methods fail to effectively repair large missing areas. This study addresses the challenges of synthesizing original content that is both structurally and semantically plausible, as well as the visible artifacts arising from color inconsistencies in the generated content. The proposed method resolves color mismatches and completes missing areas more effectively. We improve the inpainting process by modifying the Line Drawing Guided Progressive Inpainting of Mural Damages (MuralNet) approach, which builds on the EdgeConnect: Generative Image Inpainting with Adversarial Edge Learning architecture, known for its strong performance in inpainting studies. The MuralNet architecture consists of structure and color-correction generators, whereas our proposed approach uses only a modified structure generator. By replacing the residual structures in the middle block with coordinate attention and aggregated contextual transformation (AOT) blocks, color preservation is achieved without a color-correction generator.
The AOT blocks integrate input-dependent gated residual connections that encourage the update of features within the missing areas while preserving the surrounding context, thus mitigating the color discrepancy caused by plain residual connections. Experiments conducted on the DhMural1714 dataset with the proposed approach yielded satisfactory visual and numerical results, and comparisons were made with existing state-of-the-art image inpainting methods. While the MuralNet approach achieved a PSNR of 23.8295 dB, our proposed approach increased the PSNR to 25.2779 dB. Keywords - Image Inpainting, MuralNet, EdgeConnect, Aggregated Contextual Transformations, Coordinate Attention
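The gated residual update described for the AOT blocks can be illustrated with a minimal sketch: an input-dependent gate blends the transformed features with the identity path, so regions where the gate saturates toward 1 (e.g. inside the mask) are rewritten while regions where it stays near 0 pass the original context through. Function and variable names here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_residual(x, transformed, gate_logits):
    """Input-dependent gated residual connection (illustrative sketch).

    g is computed from the input via a learned gate (here given directly
    as logits): features with a large gate are replaced by the transformed
    branch, while features with a small gate keep the original context.
    """
    g = sigmoid(gate_logits)
    return g * transformed + (1.0 - g) * x

# Example: strongly positive logits update the feature,
# strongly negative logits preserve the original context.
x = np.array([1.0, 2.0])            # original (context) features
t = np.array([5.0, 6.0])            # transformed features
updated = gated_residual(x, t, np.array([20.0, 20.0]))    # ~= t
preserved = gated_residual(x, t, np.array([-20.0, -20.0]))  # ~= x
```

In contrast, a plain residual connection always adds the transformed branch on top of the identity, which is what allows color statistics from outside the mask to leak into the generated region.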