AI for Art & Music

published on 06 May 2022

Cognitive AI will impact every decision made.

Ginni Rometty

AutoML AI (which they call “AI inception”) is better at creating AI than humans are. 

Sundar Pichai

Art is about human subjectivity and essential human aspects like empathy and mortality. 

AI isn’t going to replace human artists. AI will augment the capabilities of human artists.

Collaboration between AI and human artists will lead to more creative art. So, it’s not AI creativity vs human creativity. AI and humans will be in a symbiotic relationship for creativity. Each pushing the other to achieve what they couldn’t achieve on their own. 

Both art and music follow rules. Without boundaries, without structure, it's simply noise.

The new breed of artist is an artist or a musician rolled into a data scientist. 

Instead of asking whether AI can produce art, the real question is whether we can appreciate the art generated by AI.

The first computer-produced art

In 1965, at Bell Labs, an engineer was producing numeric solutions to complex equations. The output numbers were transmitted to a plotter, which laid them out on a graph. One day the plotter malfunctioned and drew a collection of random lines. The engineer ran down the hall claiming that the computer had produced art.

CANs for artists

Creative adversarial networks (CANs) are a form of GANs. Like GANs, CANs have two parts: a discriminator (D) and a generator (G). 

Imagine D is trained on a wiki art dataset and learns to differentiate between art and non-art.

D is not trained on the images themselves but on the images converted into numbers that it manipulates in the multi-dimensional spaces of its many layers. 

To begin, G is untrained; it starts from pure noise in latent space, a place where the possibilities are endless. G generates random images from this noise and sends them to D.
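As a minimal sketch of this step (numpy only; the latent dimension, image size, and single-layer generator are illustrative assumptions, not a real CAN architecture), sampling latent vectors and pushing them through an untrained generator yields exactly the kind of structureless output described here:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical sizes: a 100-dim latent space, 8x8 grayscale "images"
latent_dim, img_pixels = 100, 64

# An untrained generator: a single layer of random weights
weights = rng.normal(scale=0.1, size=(latent_dim, img_pixels))

z = rng.normal(size=(4, latent_dim))  # 4 random points in latent space
fake_images = np.tanh(z @ weights)    # pixel values squashed into [-1, 1]

print(fake_images.shape)  # (4, 64) -- four noisy, amorphous images
```

Until training adjusts `weights`, every image is just reshaped noise, which is why the discriminator rejects them at first.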

D rejects these initial images because they're blobby and amorphous. D sends back feedback, and G slowly learns what D doesn't like, adjusting its weights to generate images similar to the ones in the wiki art dataset that D was trained on. 

D has an art-style classification function that determines the style of an image. When D notices that an image fits a particular style that's present in its wiki art dataset, it starts a function called style ambiguity. This function pushes the generator to create images that differ from all those in the wiki art dataset. In other words, something altogether new and original. 
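The style-ambiguity pressure can be sketched as a tiny loss term (a minimal numpy sketch; the four-style distribution and the smoothing constant are illustrative assumptions): the generator is scored by the cross-entropy between D's predicted style distribution and the uniform distribution, so it does best when D cannot pin the image to any one known style.

```python
import numpy as np

def style_ambiguity_loss(style_probs):
    """Cross-entropy between D's style prediction and the uniform
    distribution over styles; minimized when every style is equally likely."""
    k = len(style_probs)
    uniform = np.full(k, 1.0 / k)
    return float(-np.sum(uniform * np.log(style_probs + 1e-12)))

confident = np.array([0.97, 0.01, 0.01, 0.01])  # D is sure of the style
ambiguous = np.full(4, 0.25)                    # D cannot decide at all

print(style_ambiguity_loss(confident) > style_ambiguity_loss(ambiguous))  # True
```

Minimizing this term pushes G away from every style in the training set, toward something that sits between the styles D knows.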

You can also, at some point, set the generator free: no longer trying to generate images that satisfy D, it is simply left to daydream. It creates abstract expressions as paintings. 

CANs don’t understand the subject matter of art or music but only styles of art and music. 

Markov chains for music

Markov chains are statistical processes guiding a chain of events. For music, Markov chains are used to predict the next musical notes depending on the previous ones. 
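A first-order note-to-note chain is easy to sketch (the transition table below is a made-up toy, not probabilities learned from real music):

```python
import random

# Toy transition table: probabilities of the next note given the current one
transitions = {
    "C": [("D", 0.5), ("E", 0.3), ("G", 0.2)],
    "D": [("E", 0.6), ("C", 0.4)],
    "E": [("G", 0.5), ("C", 0.5)],
    "G": [("C", 0.7), ("E", 0.3)],
}

def generate_melody(start="C", length=8, seed=0):
    """Walk the Markov chain, sampling each note from the
    distribution conditioned on the previous note."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        notes, weights = zip(*transitions[melody[-1]])
        melody.append(rng.choices(notes, weights=weights)[0])
    return melody

print(generate_melody())
```

A real system would estimate the table by counting note transitions in a corpus; generation itself is just this cheap table lookup, which is why the approach suits real-time use.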

Markov chains are faster than neural networks and can also be more creative, especially when the AI is improvising music in real time.

Pix2Pix--an interface for humans

Phillip Isola built conditional GANs (CGANs). They are conditional because, instead of starting the generator network from noise, they condition it on an actual image.

Pix2Pix is a CGAN that learns how to map an input image to an output image. With Pix2Pix, you can transfer the style of one image onto another. Translating between one image and another is like translating between languages: they are two different representations of the same world. 

Rather than training the discriminator on a large corpus of images, CGANs use pairs of images such as a black and white image of a scene and the same scene in color. 
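Such a (condition, target) pair can be manufactured from any color photo — a minimal numpy sketch of the idea, not the actual Pix2Pix preprocessing pipeline:

```python
import numpy as np

def make_training_pair(color_img):
    """Turn one color image into a (condition, target) pair:
    the grayscale version conditions the generator, and the original
    color image is the output the discriminator expects back."""
    gray = color_img.mean(axis=-1, keepdims=True)  # H x W x 1
    return gray, color_img

rng = np.random.default_rng(0)
img = rng.random((256, 256, 3))       # stand-in for a real photo
condition, target = make_training_pair(img)
print(condition.shape, target.shape)  # (256, 256, 1) (256, 256, 3)
```

Because every photo yields its own training pair automatically, paired datasets like this are cheap to build, which is one reason Pix2Pix needs far less data than a CAN.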

Compared to CANs, Pix2Pix requires a much smaller set of training data. It can transform sketches into photos. It can color sketches drawn by humans. You can also morph one face into another. 

Pix2Pix allows people without the requisite motor skills to express their creativity. It’s also a user-friendly interface for artists. 

Android Lloyd Webber--AI for musicals

Could AI put together a musical? One that's a commercial hit? A group of writers, composers, and AI researchers decided to try. What followed was the world's first AI-composed musical, Beyond the Fence, staged by Android Lloyd Webber and Friends. The software itself was called Android Lloyd Webber.

What makes a musical a hit? The AI analyzed 1,696 musicals and 946 synopses. It identified the four most popular themes: journey, aspiration, love, and a lost king. It found that cast, backdrop, and emotional structure also play a part. The protagonist, the AI concluded, had to be female, and it chose Europe in the 1930s and 1980s as the setting. It also recommended adding death, dancing, and a happy ending. 

Though the actual musical was far removed from what the computer had generated, it's a great example of AI-based creativity. 

Examples of AI art

  • Mike Tyka uses GANs to create portraits of imaginary people: https://www.miketyka.com/?s=faces.
  • Anna Ridler trained a GAN on her own drawings to generate a 12-minute animated movie called “The Fall of the House of Usher.”
  • Jun-Yan Zhu built CycleGAN to transform images of horses into zebras.
  • Ahmed Elgammal built a community for AI artists: https://aiartists.org/.
  • Pindar Van Arman built an AI robot - https://www.cloudpainter.com/.
  • Simon Colton built “The Painting Fool,” which draws a portrait of you; on the back is a commentary that describes your mood.
  • Etsy has built a product recommendation system based on aesthetics. Artists school the AI system on subjective notions of style. 

Examples of AI music

AI art exhibitions

Is the art market ready to embrace art made in collaboration with AI? 

Artists and Machine Intelligence (AMI) is a program at Google with the goal of combining machine intelligence with art. The program includes engineers who want to combine programming with visual art. 

The training sets this group used encompassed the whole of western art. The outputs tended to be abstract. The AI in some sense captured the direction of art history which is toward abstraction. 

AMI exhibited its AI-generated art in a show called “DeepDreams” at a San Francisco gallery and arts foundation.

In 2018, Christie’s in New York auctioned an artwork created by AI. It sold to an anonymous bidder for an amazing $432,500, the first AI-generated artwork to fetch such a high bid at auction.

The AI artworks are not as bizarre as we might imagine them to be. They stand right up there with the artwork of the greatest artists of all time.

Elgammal ran a visual Turing test to see whether viewers could tell if a work of art was made by an AI or a human artist. AI artworks were shown alongside works by abstract expressionists and contemporary artists — the pinnacle of the modern art world — to a panel of 18 viewers. The viewers judged 53% of the AI-generated artworks to have been made by human artists. 

Copyright laws

The U.S. Copyright Office has ruled that AI-generated art can’t be copyrighted because copyright law only offers protection to “the fruits of intellectual labor” that “are founded in the creative powers of the [human] mind.”

The copyrighted work “must be created by a human being” and the office says it won't register works “produced by a machine or mere mechanical process” that lack creative input from a human.

Even artwork generated by a human-AI collaboration can't be copyrighted. The relationship between the AI and the human is more like that of a teacher and a student: the teacher guides the student but can’t take credit for the student’s work. 

On the other hand, creativity isn’t just for humans. Does art have to be made exclusively by a human to qualify as art?
