For the past several months, the Beall Center has featured an exhibit called “Emergence,” which displays interactive virtual art forms developed by various artists and engineers. The sculptures, installations and projections employ different forms of Artificial Intelligence (AI), including machine vision, semi-autonomous agents, genetic algorithms and cellular automata. Observers of this exhibit will notice that many of the interactive works adapt and change over time, responding in different ways to human behavior. One such work, Karolina Sobecka’s “Sniff,” a digital 3D projection of a resident guard dog, chases and barks at entering visitors. The work runs on a program containing a fixed database of gestures the dog can recognize and respond to. The program’s vision system also tracks each person who interacts with it, building more complex relationships over time and allowing the dog to react on an individualized basis. Like a real dog, it responds to body position and movement, noticing how friendly or aggressive your body language appears. Infrared cameras along the top of the wall tell it when someone gets too close, prompting it to growl, follow you around suspiciously, bark, jump and wait for your next move.
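The two-layer behavior described above, a fixed gesture-to-reaction table combined with a running memory of each visitor, can be sketched in a few lines. The gesture names, scoring rules and class below are illustrative assumptions, not Sobecka’s actual code:

```python
# Hypothetical sketch of "Sniff"-style behavior: a fixed gesture database
# plus per-visitor memory that individualizes the dog's reactions.
# All names and thresholds here are assumptions for illustration.

REACTIONS = {  # fixed database of recognized gestures
    "approach_slow": "wag",
    "approach_fast": "growl",
    "crouch": "play_bow",
    "too_close": "bark",
}

class VirtualDog:
    def __init__(self):
        self.memory = {}  # visitor id -> running friendliness score

    def react(self, visitor_id, gesture):
        score = self.memory.get(visitor_id, 0)
        # friendly gestures build trust; aggressive ones erode it
        score += 1 if gesture in ("approach_slow", "crouch") else -1
        self.memory[visitor_id] = score
        base = REACTIONS.get(gesture, "watch")
        # relationship history overrides the stock reaction
        if base == "growl" and score > 2:
            return "cautious_wag"
        return base

dog = VirtualDog()
print(dog.react("v1", "crouch"))  # play_bow
```

The point of the memory dictionary is that the same gesture can draw different reactions from the dog depending on how a particular visitor has behaved before, which is the “more complex relationships” the piece describes.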
Similarly, “Performative Ecologies” by Ruairi Glynn, an installation consisting of two robotic sculptures hanging from the ceiling, uses the robots’ AI to let them learn and grow from interactions within their environment. The two robots look like small puppies with tails that twirl and change color in response to different external stimuli. The robots use a genetic algorithm to evolve performances, which look like animated dances, and facial recognition software to assess how well they are keeping their audience’s attention. The more positive feedback they receive, the more they learn about how to hold a person’s attention, which is their primary goal. When a robot captures someone’s attention, it spins excitedly and its tail flickers, shifting from vibrant shades of blues and greens to reds and back.
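The loop described above, performances evolved by a genetic algorithm and scored by how much attention they hold, can be sketched as follows. This is a minimal, generic genetic algorithm under stated assumptions: performances are short lists of motion values, and a hypothetical fitness function stands in for the attention measured by the facial recognition software; none of it reflects Glynn’s actual implementation:

```python
import random

# Minimal genetic-algorithm sketch: a "performance" is a genome of motion
# values, and fitness() is a stand-in for measured audience attention.
# TARGET, the population size and the mutation rate are all illustrative.

random.seed(0)

TARGET = [0.8, 0.2, 0.9, 0.5]  # hypothetical "most engaging" performance

def fitness(genome):
    # higher is better; 0 means the audience is fully engaged
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def evolve(pop_size=30, generations=100):
    pop = [[random.random() for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # selection: keep the top half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(TARGET))   # crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.3:                # mutation
                i = random.randrange(len(TARGET))
                child[i] = min(1.0, max(0.0, child[i] + random.gauss(0, 0.1)))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Because the fittest performances survive each generation, the evolved dances drift toward whatever the attention score rewards, which is how the robots “learn” to keep an audience watching.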
With or without human interaction, the artworks are always completely engaged with their surroundings. “Propagations” by Leo Nuñez is a prime example. “Propagations” is a system of cellular automata, made up of about 50 robot sculptures, that shows the interactions that can occur between robots placed in close proximity. Each robot is built from the same simple parts atop a disc that lets it spin freely when hit by a beam of light. Because each robot carries a tiny bright bulb near its top, one spinning robot can set off a chain reaction, forcing its neighbors – eventually all 50 robots – to begin spinning. Any observer can take one of the offered flashlights and shine it on one of the rows of robots to set them spinning.
Exploring the “Emergence” collection is such a pleasurable experience because of the playful and lively energy radiating from each of the works. Every piece is a work of art that can interact and engage with you in many ways. When we engage with the different sculptures and digital images, we rarely act in only one way or move in only one direction, which gives the AI inside each of these pieces the chance to create new and exciting responses to all our various actions.

What these works also share are basic, solid frameworks that have transcended dull simplicity through the addition of Artificial Intelligence. The AI allows the works to come alive, interact with the public and produce unpredictable, unique results that resemble real-life social interaction. These works force us to constantly reinterpret our knowledge of our surroundings and rethink how we use our senses to perceive the world around us, employing advancing technologies to make us question how we think about both art and artificial intelligence.
“Emergence” will be at the Beall Center through May 7.