AI as a Support for Curatorial Practice in Museums
By: Di Liu
Introduction
The merging of artificial intelligence (AI), big data, and human cognitive systems has catalyzed a profound digital transformation of how we live and work. This shift is particularly noticeable in the art field, where the integration of AI has sparked innovative ideas and prompted reassessments of traditional practices. Art museums are investigating the potential of AI in collection management, visitor experience, and the analysis of ticketing and attendance data. Among these areas, art curation is undergoing a particularly notable evolution.
Where does the human end and where does the AI begin in curatorial practice?
On September 9 this year, the Nasher Museum of Art at Duke University unveiled ACT AS IF YOU ARE A CURATOR: AN AI-GENERATED EXHIBITION, an innovative exhibition predominantly created by OpenAI's ChatGPT. This experiment, which probes AI's potential and limits within the nuances of museum curatorial practice, sparked my interest in tracing the evolution of curatorial methods, examining how AI has been incorporated into museums, and considering the implications of these technological integrations.
The Evolution of Curatorial Practice
One of the foundational steps in the curatorial process is contextualization and provenance research. This involves an exhaustive analysis of each selected artwork's history, the artist's background, the artistic genre to which the work belongs, and its historical context. As a curator, guided by the acquisition standards set by the American Alliance of Museums and the Association of Art Museum Directors, verifying the history of potential acquisitions is imperative. Additionally, artworks must be cross-referenced with the stolen art database overseen by the Art Loss Register (Reed 366).
However, in the digital and AI age, the challenge of provenance research is eased by the accessibility of extensive data on museum collections. One notable example is the Carnegie Museum of Art (CMOA) Art Tracks initiative: Standardizing Digital Provenance Documentation for Cultural Objects. The project's goal was to create a tool that people and organizations outside the development effort would willingly adopt. Provenance data is organized so that researchers, software developers, and curators can produce visualizations that address questions that would be challenging or impossible to resolve without computational support (National Endowment for the Humanities).
The team transformed textual provenance information into structured Linked Data. Building on this foundation, they developed a user-interface tool that lets individuals visualize and comprehend the data seamlessly. The Elysa software extends the framework by allowing researchers to update records and incorporate new details. Specifically, Named Entity Recognition, a subfield of Natural Language Processing (NLP) in which machines identify and classify named entities in text, underpins the tool. The program analyzes an artwork's provenance text, the main event it describes, and any mentions of people, places, or URLs. This lets curators and researchers search for artworks under the different names they might be known by and connect them to online sources. The software assimilates provenance data automatically, producing an accurate, enriched timeline visualization for the artwork (Newbury, Art Tracks). It gives researchers and curators a thorough grasp of an artwork's background, journey, and significance; guarantees data accuracy; expedites the curatorial workflow; and expands the scope of art research and interpretation.
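To make the NER step concrete, the sketch below shows how a general-purpose NLP library can pull people, places, and dates out of a free-text provenance line. This is not the Art Tracks or Elysa implementation; it uses spaCy's pretrained English model, and the provenance sentence is invented for illustration.

```python
# Minimal, illustrative sketch of using named entity recognition (NER) to
# structure a free-text provenance line. Not the Art Tracks/Elysa pipeline;
# the provenance sentence below is invented for illustration only.
import spacy

nlp = spacy.load("en_core_web_sm")  # pretrained pipeline with an NER component

provenance = (
    "Purchased from a Paris gallery by a New York collector, 1912; "
    "bequeathed to the Carnegie Museum of Art, Pittsburgh, 1924."
)

doc = nlp(provenance)

# Print each recognized entity with its type, so ownership events can later be
# linked to people, organizations, places, and dates in a structured timeline.
for ent in doc.ents:
    print(f"{ent.label_:>8}  {ent.text}")
# Typical (model-dependent) output mixes ORG/PERSON names, GPE place names,
# and DATE years, which a curator could map onto a provenance timeline.
```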
AI as Curator
Museums have been harnessing the potential of AI creatively and critically for over a decade. As we delve deeper into this technological advancement, we find ourselves transitioning to "AI 2.0." Instead of the initial ambition of emulating broad human intelligence, this new wave of AI is distinguished by its emphasis on "narrow" applications, addressing specific tasks and distinct challenges with precision (Zylinska 26). This shift is evident in the collaboration between the Nasher team and ChatGPT, which together offer a fresh perspective on the curatorial process.
Museums often hold vast amounts of data. AI can be particularly useful for extensive collections, ensuring consistency and efficiency, as CMOA's provenance research demonstrates. AI can also help curators identify patterns or similarities among artworks, artists, and art movements that might be difficult for humans to detect. In the Nasher's case, the collection data shared with the AI included each work's artist, title, date, medium, cultural group, keywords, and description (Behind the Scenes).
To further explore AI's potential in curatorial creativity and logic, the Nasher Museum of Art's curators collaborated with Mark Olson, an Associate Professor of the Practice of Art, Art History and Visual Studies at Duke University, to build a custom ChatGPT interface that, using tools supplied by OpenAI, created statistical representations of "relatedness" among the more than 14,000 pieces in the Nasher's collection. This customized ChatGPT variant was trained on the Nasher collection, enabling it to navigate, select, and interconnect specific artworks (Behind the Scenes). Throughout its operation, ChatGPT effectively identified artworks and recognized overarching themes. The Nasher team also asked the AI for help generating exhibition titles and annotating the selected artworks.
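The exact setup built by the Nasher and Duke team has not been published, but the sketch below shows one plausible way such "relatedness" scores could be computed: embed each catalog record with OpenAI's embeddings API and compare the resulting vectors. The field names and the two sample records are hypothetical.

```python
# A hedged sketch of computing "relatedness" between collection records with
# OpenAI embeddings. Not the Nasher/Duke implementation; the sample records
# and field names are hypothetical.
from openai import OpenAI
import numpy as np

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

records = [
    {"title": "Untitled (Seascape)", "artist": "Unknown", "medium": "oil on canvas",
     "description": "A turbulent sea under a grey sky."},
    {"title": "Harbor Study", "artist": "Unknown", "medium": "watercolor",
     "description": "Small boats moored in a calm harbor at dusk."},
]

# Concatenate catalog fields into one text per artwork, then embed each text.
texts = [f'{r["title"]} | {r["artist"]} | {r["medium"]} | {r["description"]}'
         for r in records]
resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
vectors = np.array([d.embedding for d in resp.data])

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Higher scores suggest thematically closer works; over a full collection these
# pairwise scores form the kind of "relatedness" statistics described above.
print(f"relatedness: {cosine(vectors[0], vectors[1]):.3f}")
```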
AI Curation that Brings Exhibition Possibilities
The ACT AS IF YOU ARE A CURATOR: AN AI-GENERATED EXHIBITION at the Nasher Museum offers a new way of thinking about the selection and presentation of art, and in doing so it challenges and extends traditional curatorial boundaries. The Nasher Museum may be the first to use ChatGPT to stage a physical exhibition. On the other side of the globe, the Helsinki Art Museum in Finland deployed machine learning to rethink the visitor experience and to connect its exhibition collection with the museum's and the city's history through AI curation (Meistere, The Term Curator).
The Helsinki Art Museum's (HAM) second Biennial (12 June to 17 September), themed "New Directions May Emerge," emphasizes environmental and technological consciousness. The highlight was an AI curator offering insights via a dynamic 3D map of HAM's vast collection of over 10,000 pieces, predominantly Finnish contemporary art along with some classical works. Beyond the indoor collection, the museum's holdings include various public artworks scattered throughout Helsinki, offering a diverse overview of the country's rich artistic heritage.
To further immerse visitors, the museum used 360° Google Maps views and machine learning to analyze the data. It created simulated panoramas by placing artworks into fictional locations so that they fit seamlessly into the cityscape. This visual-textual modeling effectively turns the city into a grand canvas, giving visitors an augmented-reality-like experience akin to navigating with Google Maps and deepening their understanding and appreciation of Helsinki's history. Inspired by technologies like CLIP, the integration reimagines Helsinki's urban landscape, urging the public to see and engage with their surroundings in a new way. AR, GPS triangulation, and QR codes enhance this immersive experience, forging deeper emotional and cultural connections. The aim is clear: spotlight local artists, draw global attention, and remain faithful to the artworks' original essence and cultural significance (Schaerf, AI Art).
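The biennial's actual pipeline has not been released, but the CLIP-style image-text matching it draws on can be sketched as follows: score candidate city scenes (described here only as text, for brevity) against an artwork image and keep the most compatible placement. The model choice, file name, and scene descriptions are assumptions.

```python
# A minimal sketch of CLIP-style image-text matching: how well do candidate
# street scenes fit a given artwork image? Not the HAM biennial's pipeline;
# the image path and scene prompts are hypothetical.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

artwork = Image.open("artwork.jpg")  # hypothetical image of a collection piece

# Candidate placements around the city, phrased as scene descriptions.
scenes = [
    "a granite waterfront promenade in Helsinki at dusk",
    "a snowy residential courtyard with birch trees",
    "a glass-and-steel shopping arcade in the city centre",
]

inputs = processor(text=scenes, images=artwork, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Softmax over image-to-text similarity yields a probability-like score per
# scene; the highest-scoring scene is the most compatible placement.
probs = outputs.logits_per_image.softmax(dim=1)[0]
for scene, p in zip(scenes, probs.tolist()):
    print(f"{p:.2f}  {scene}")
```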
Biases Created by AI in Curatorial Practices
Integrating AI into the curation process is reshaping traditional curatorial practices, inviting us to reconsider how exhibitions are planned and presented: it involves crafting datasets, ensuring their quality, supervising them, and training AI models for continuous improvement. Bringing machines and data into this process prompts questions: How do we define abstract concepts in art? How can we bridge visual symbols with abstract notions? It is crucial that these datasets remain accurate and representative. When biased data is fed to AI, those biases may be reinforced or even amplified, perpetuating negative stereotypes.
The Nasher Museum's experience with ChatGPT while creating ACT AS IF YOU ARE A CURATOR: AN AI-GENERATED EXHIBITION illustrates the complexities of using general AI tools in the curatorial process. The museum encountered several challenges with the standard version of ChatGPT, including its inability to generate floor plans for the exhibition, artwork selections that did not align with the intended theme, and misleading titles. For example, ChatGPT could not access the museum's publicly accessible database of artworks because that database had not been fed into the model; instead, ChatGPT searched broadly across websites published before 2021 and selected artworks from other museums' collections. These obstacles highlight the shortcomings of general-purpose AI in art curation (Behind the Scenes).
The Future of AI in Curation
As museums weigh feeding their private data into public AI tools, they face the risks of leaking proprietary information to general AI tools and of receiving unclear or biased results. Whether the inputs are artworks or abstract concepts, "you get out what you put in" is particularly relevant: poor input can jeopardize both integrity and aesthetic appeal.
In August, OpenAI launched ChatGPT Enterprise, offering "enterprise-grade security and privacy, advanced data analysis capabilities, and customization options." According to OpenAI, 80% of Fortune 500 companies registered ChatGPT accounts in the nine months after the original ChatGPT was unveiled. With such significant adoption by major corporations, it is evident that AI as a work assistant is becoming mainstream. In the art world, enterprise AI has been steadily tested. Collaborations such as Salesforce's Veevart with the Institute of Contemporary Art (ICA) in Miami and the Pinacoteca de Sao Paulo Museum's partnership with IBM Watson exemplify this trend. The ICA Miami uses Veevart to document its collection for condition reports and conservation management tasks (Nilsen, How the Institute). The curators of the Pinacoteca de Sao Paulo Museum in Brazil worked with IBM Brazil's team, using the IBM Bluemix cloud platform to train on the museum's collection data alongside sources from newspapers, books, and art criticism on Brazilian art. They created a cognitive chatbot that interacts with museum visitors and answers their questions about the artwork on display in real time (Latam, Brazilian Museum).
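At its core, such a chatbot works by retrieving the catalog or press text most relevant to a visitor's question and answering from it. The sketch below illustrates that retrieval idea only; it is not the Pinacoteca/IBM Watson implementation, and the tiny corpus and TF-IDF approach are stand-ins for illustration.

```python
# Generic retrieval sketch behind a collection Q&A chatbot: match a visitor's
# question to the most relevant catalog entry. Not the Watson implementation;
# the corpus entries are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical catalog entries combining curatorial notes and press excerpts.
corpus = {
    "Work A": "A landscape showing coffee plantations in the countryside near Sao Paulo.",
    "Work B": "A modernist portrait associated with the 1922 Week of Modern Art in Sao Paulo.",
}

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(corpus.values())

def answer(question: str) -> str:
    """Return the catalog text most similar to the visitor's question."""
    q_vec = vectorizer.transform([question])
    scores = cosine_similarity(q_vec, doc_matrix)[0]
    best = int(scores.argmax())
    title = list(corpus.keys())[best]
    return f"{title}: {list(corpus.values())[best]}"

print(answer("Which painting is connected to the 1922 Week of Modern Art?"))
```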
The cases in this research demonstrate that the integration of AI in curatorial practices offers a way to redefine the relationship between art and technology, and between the individual and society. Seen in that light, the development of enterprise AI encourages us to view AI curation beyond the lens of its technological limitations. These developments mark an era of change, and looking ahead, I believe the new standard for curatorial practice will be for each museum to have its own enterprise AI trained explicitly on its specific collections.