Art authentication rarely makes mainstream news, but that's exactly what happened last year, when a team of British researchers determined that the centuries-old painting of unknown origin known as the “de Brécy Tondo” was likely a work by Renaissance master Raphael. It was a bold claim with potentially huge economic implications, but it was the technology the researchers used to get there that caught people's attention: AI.
A research group led by two scientists, Christopher Brooke of the University of Nottingham and Hassan Ugail of the University of Bradford, developed a facial recognition model to compare the Madonna in the de Brécy Tondo with other portraits of the same figure. After finding a 97 percent match with Raphael's Sistine Madonna, the researchers concluded, in Ugail's words, “that the same model was used for both paintings, and that they are undoubtedly the work of the same artist.”
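Neither team has published its full pipeline, but the general mechanics of this kind of comparison are simple to describe: a model reduces each face to a numerical “embedding,” and the distance between two embeddings becomes a similarity score. A rough sketch, using the open-source face_recognition library and hypothetical file names rather than anything the researchers actually used, might look like this:

```python
# A rough sketch of face-embedding comparison, not the researchers' actual
# pipeline. Uses the open-source face_recognition library (dlib under the
# hood); the image file names are hypothetical placeholders.
import face_recognition

def face_similarity(path_a: str, path_b: str) -> float:
    """Return a rough 0-1 similarity score between the first face found in each image."""
    enc_a = face_recognition.face_encodings(face_recognition.load_image_file(path_a))[0]
    enc_b = face_recognition.face_encodings(face_recognition.load_image_file(path_b))[0]
    # face_distance returns a Euclidean distance: smaller means more alike.
    distance = face_recognition.face_distance([enc_a], enc_b)[0]
    return max(0.0, 1.0 - float(distance))  # crude conversion to a "match" score

print(f"Match: {face_similarity('tondo_madonna.jpg', 'sistine_madonna.jpg'):.0%}")
```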
But the novelty of this discovery didn't last long. About eight months after Brooke and Ugail's announcement, Swiss AI company Art Recognition used its own model to conclude, with 85 percent certainty, that the de Brécy Tondo was not the work of the Renaissance master. In an op-ed, Art Recognition founder Carina Popovici defended her company's findings, pointing to the art historians on its staff and the sophistication of its model, which was trained on images of genuine and fake Raphael paintings. Notably, she did not discredit Brooke and Ugail. “The most direct explanation for the large discrepancy between the two results is that the models are addressing fundamentally different problems,” Popovici wrote.
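Art Recognition's system, by contrast, is a trained classifier rather than a face matcher. The company has not disclosed its architecture, but the basic shape of such an authenticity classifier, fine-tuning a standard vision model on images labeled genuine or fake, can be sketched as follows (the folder layout, backbone, and training details here are assumptions, not the company's method):

```python
# Hypothetical sketch of an authenticity classifier: fine-tune a standard
# vision backbone on images labeled genuine vs. fake. Art Recognition's
# real architecture and training data are not public; paths are placeholders.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
# ImageFolder expects one subfolder per class, e.g. data/genuine/ and data/fake/.
train_set = datasets.ImageFolder("data", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=16, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # two outputs: genuine / fake

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()

# At inference, a softmax over the two logits yields the kind of
# "85 percent certainty" figure quoted in attribution reports.
```

Two different toolkits answering two different questions, in other words, which is exactly the gap Popovici pointed to.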
This brief spat, which The Guardian dubbed the “Battle of the AIs,” did not convince skeptics that AI can interpret paintings. Rather, it became a microcosm of a larger debate unfolding within the art world as the technology infiltrates its hallowed spaces. AI is already curating museum exhibitions and biennales. Could it also change the way we approach, study, and look at the art of the past?
When it comes to AI and art history, it's important to remember that for many, this isn't just a technological question but a pressing ideological one. In 2013, scholar and theorist Johanna Drucker published a paper called “Is There a Digital Art History?”, which touches on a wide range of complex topics, from data mining to the evolution of critical methodology. Ultimately, though, Drucker's answer to the question posed by her title is surprisingly simple: “No.” Modern computational techniques designed for aggregation and inference have made art history far more accessible and navigable, Drucker noted, but in her words, they haven't changed the field's “fundamental approaches, beliefs, or methods.”
Drucker's paper has generated mixed reactions in the digital humanities, an increasingly important field of academic research that brings advanced computational techniques into the study of nonscientific disciplines like art and literature. So has a related paper published two years later, in which art historian and critic Claire Bishop argued that the limitations of so-called digital art history reflect broader socioeconomic issues, namely neoliberalism's relentless drive toward quantification and optimization.
Digital art history, a latecomer to the digital humanities, “signals a shift in the nature of knowledge and learning,” Bishop writes in her essay, “Against Digital Art History.” Research and knowledge, she argues, are increasingly understood in terms of data and their externalization through computational analysis, which raises the question of whether there is a fundamental incompatibility between the humanities and computational metrics: can positivist, empirical methods enhance the theoretical interpretations that characterize the humanities, or are the two at odds?
Bishop's and Drucker's papers were published almost a decade ago now. In the world of burgeoning AI development, that is an eternity, and much about the technology has changed since. What hasn't changed, however, is the broader context that shaped the positions of these two key thinkers, which emerged in the midst of a disturbing (and ongoing) trend of lawmakers, grant-givers, and educational institutions choosing to fund STEM fields at the expense of the humanities. On a subtextual level, Bishop and Drucker were making the case for the importance of critical discernment in a culture war that was beginning to tilt the other way.
Are art historians becoming obsolete? Some experts don't seem worried. “I don't think there's any AI or machine learning technology that's going to replace art historians,” says Amanda Wasielewski, a professor of digital humanities at Uppsala University in Sweden. Her view on the issue, with the privilege of hindsight, is reassuringly pragmatic, but she leans toward Bishop and Drucker in some key respects.
Wasielewski isn't opposed to incorporating such tools into her field, nor is she convinced their impact will be so dramatic. “Machine learning and AI are already being applied to everyday research purposes by art historians, archivists, museums and galleries,” she says, citing the tools' role in archival databases and collection-management software. “These aren't practical applications that replace humans; they just make our jobs a little more efficient.”
Ultimately, Wasielewski is less concerned that AI will bring about new ways of thinking than that it will revive old ones. Last year, the scholar published Computational Formalism, a book about how machine learning has brought back a rigorous “close reading” methodology of art research that had long fallen out of fashion among critics and historians.
This approach, which emphasizes the physical properties of a work of art (composition, color, scale) over the external context of its production (such as the artist's identity and intentions), was the dominant theoretical mode for much of the early 20th-century modernist period. But as new critical approaches such as feminism, postcolonialism, and structuralism emerged in the 1960s and 1970s in response to a cultural landscape shaped by political violence and burgeoning social movements, the formalists' careful approach came to seem outdated. In a 1974 postmortem of the movement, literary scholar Gerald Graff offered a conclusion eerily similar to the one Bishop would write about digital art history 40 years later. Formalism, Graff wrote, was “just another symptom of the university's capitulation to capitalism and the military-industrial-technological complex.”
Now, Wasielewski worries that machine learning will revive this dogma. With their vast amounts of data and fast algorithmic processing, computer vision systems are fine-tuned for formal analysis and pattern recognition, which is why they were adopted early in art authentication and archive management. But the more we use these systems in our research, the more we risk ignoring the “important frameworks” and “different methodological paradigms” that came before them, Wasielewski suggests. “If you're trying to somehow get something objective out of a formalist methodology, you don't do the extra methodological work,” she continues.
A portrait of Don Diego Messía Felipe de Guzmán, Marqués de Leganés, which two experts, aided by AI analysis, concluded was not painted by Anthony van Dyck himself.
Computer-assisted formalism does not necessarily mean close reading, either. In the early 2000s, literary historian Franco Moretti proposed the notion of “distant reading,” an approach now echoed in the art world as “distant viewing,” which analyzes a field's vast stores of formal data to reveal broad patterns and trends across time, place, and style. A recent project by digital humanities professors Leonardo Impett of Cambridge University and Fabian Offert of the University of California, Santa Barbara, illustrates what this approach can yield.
Last year, the researchers revisited the ideas outlined in Drucker's paper “Is There a Digital Art History?” using Moretti's method and new transformer-based vision models that can “learn” relationships between different kinds of data. “These systems can tell us much more about paintings than traditional computational methods could ever hope to achieve,” Offert said. He cited Diego Velázquez's 1656 painting “Las Meninas,” a contender for the most studied artwork in history, which he and Impett analyzed using their own model, finding striking compositional similarities to two 20th-century photographs by Robert Doisneau and Joel Meyerowitz.
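Impett and Offert's model is their own, but the underlying operation, comparing images in the embedding space of a transformer-based vision model, can be approximated with an off-the-shelf model such as CLIP. The sketch below is an illustration under that assumption, not the researchers' code:

```python
# Illustrative only: compare two images in the embedding space of an
# off-the-shelf transformer-based vision model (CLIP). Impett and Offert
# used their own model; the file names here are placeholders.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed(path: str) -> torch.Tensor:
    """Map an image to a unit-length CLIP embedding vector."""
    inputs = processor(images=Image.open(path), return_tensors="pt")
    with torch.no_grad():
        features = model.get_image_features(**inputs)
    return features / features.norm(dim=-1, keepdim=True)

# Cosine similarity between unit vectors: values near 1.0 mean "seen as alike."
sim = (embed("las_meninas.jpg") @ embed("doisneau_photo.jpg").T).item()
print(f"Embedding similarity: {sim:.2f}")
```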
The title of Impett and Offert's paper, “There Is a Digital Art History,” hints at their answer. But it comes with an asterisk.
“This generation of models brings us closer to actual art historical research using machine learning, provided that we accept whatever these models are trained on,” Offert explained. In other words, these new machine learning models are only as accurate as what they are fed, and what they are fed is only what art historians, with all their human biases, have chosen to digitize. “We can benefit from these new models, but at the same time,” Offert continued, “we must constantly criticize [them] and understand the limitations of that strangely mechanical visual culture.”
“These aren't magic machines; they come from somewhere,” Wasielewski said of these models. “We need to ask not only how these tools are applied, but where they come from, what data they were trained on, and what biases they might contain.”
The brief spat with Brooke and Ugail wasn't the only time Popovici's company's efforts were questioned. Last fall, about a month after Art Recognition published its findings on the de Brécy Tondo, German art history professor Nils Büttner published a paper challenging the company's work and laying out a broader argument about the limitations of AI trained on digital images.
Popovici called Büttner's essay “very aggressive” and said its tone reflects the anxiety many traditional experts feel when confronted with AI: “They feel that the technology is going to displace them, take their jobs.” But Popovici has come to see incidents like this as opportunities for dialogue, not as slander. “We've really made an effort to talk to them, because that's not true,” she said. “You need images to train AI, and those images come from catalogues raisonnés created by experts like Büttner.” Their knowledge is “absolutely crucial,” Popovici added.
Earlier this year, Popovici teamed up with Büttner to write a research paper examining an old painting attributed to the 17th-century painter Anthony van Dyck. The two approached the task in different ways. Büttner, a traditional historian, examined the painting himself and, weighing his observations against previous scholarship, determined that the work was made not by the Baroque master but in his workshop, by one of his apprentices. Popovici, an AI expert with commercial interests, gathered images related to the artist, fed them into her model, and determined that there was a 79 percent chance the painting was not made by the artist.
Two different methods, two similar conclusions: traditionalists and technologists reaching across ideological boundaries in the name of open, productive dialogue. It was a simple gesture, but a meaningful one too. As Wasielewski points out, it is humans who design these systems, not the other way around, and the conversation about how to use them is only beginning. When it comes to the development of AI, she said, “we in the arts and art history fields need to be part of this conversation.”