January 13, 2022

AI Can Identify Painters Based on Their Paintings’ Textures

Image recognition from two-dimensional imagery is nothing new—pull up Google yourself, and it’s easy enough to search by image and find information on that image and many related images. But in the art world, such tools are often insufficient, particularly when attributing a painting of unknown (or unconfirmed) origins. Now, researchers are taking painting recognition into three dimensions to bridge that gap, using the topography of paint application to develop a textural signature that can be used to identify the artist of a painting.

“Many notable artists, including El Greco, Rembrandt, and Peter Paul Rubens, employed workshops, of varying sizes and structures, to meet market demands for their art,” the authors explained in their paper, published in Heritage Science. “In the case of workshops, the various artists attempt to create a complete painting with a singular style, challenging the methods [used to attribute paintings to their painters]. Further, the challenges of such attributions create conflict when the attribution is closely tied to the apparent value of objects in the art market. Hence, there is need for unbiased and quantitative methods to lend insight into disputed attributions of workshop paintings.”

The researchers enlisted a team of nine painting students from the Cleveland Institute of Art, tasking each with creating triplicate paintings of a photograph of a water lily. A team of art historians and a painting conservator then picked the four artists whose work was most stylistically similar. The surface height information of these four artists’ paintings was captured at a spatial resolution of 50 microns—roughly 400 times smaller than the diameter of a penny, and fine enough to capture brushstroke features that often come down to differences of hundreds of microns.

This high-resolution topography—captured over a 12cm by 15cm area on each painting—was then divided into patches of one square centimeter, so that each painting yielded 180 patches. An ensemble convolutional neural network model was trained on most of these patches and then tasked with attributing the held-out patches based solely on the stylistic differences in how the artists applied paint.
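The patch arithmetic above is straightforward: at 50 microns per sample, one square centimeter is a 200×200 tile, and a 12cm by 15cm scan tiles into 12 × 15 = 180 such patches. A minimal sketch of that tiling step, assuming a 2D numpy height map and non-overlapping patches (the paper's own code is not shown here):

```python
import numpy as np

# Assumed sampling: 50 µm per pixel, so 1 cm = 200 samples.
PIXELS_PER_CM = 200

def extract_patches(height_map: np.ndarray, patch_px: int = PIXELS_PER_CM) -> np.ndarray:
    """Split a 2D surface-height map into non-overlapping square patches."""
    rows = height_map.shape[0] // patch_px
    cols = height_map.shape[1] // patch_px
    # Trim any remainder, then reshape into (rows*cols, patch_px, patch_px).
    trimmed = height_map[:rows * patch_px, :cols * patch_px]
    return (trimmed
            .reshape(rows, patch_px, cols, patch_px)
            .swapaxes(1, 2)
            .reshape(rows * cols, patch_px, patch_px))

# A 12 cm x 15 cm scan at 50 µm resolution yields 180 one-cm² patches.
scan = np.random.rand(12 * PIXELS_PER_CM, 15 * PIXELS_PER_CM)
patches = extract_patches(scan)
print(patches.shape)  # (180, 200, 200)
```

Each of these 200×200 height tiles would then serve as one training or evaluation sample for the ensemble network.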

The data preparation and analysis workflow. Image courtesy of the authors.

The researchers found that this method was between 60% and 90% accurate, and more than twice as accurate as models using image recognition in certain conditions. “Remarkably, short length scales, even as small as a bristle diameter, were the key to reliably distinguishing among artists,” the authors concluded. “These results show promise for real-world attribution, particularly in the case of workshop practice.”

To learn more about this research, read the paper here.

BigDATAwire