Artificial Intelligence “Creation” Gives a Glimpse of What May Follow Human Thought
Knowing your Keynote Speakers – Blaise Agüera y Arcas
Nightmarish “Creations” by Google’s Artificial Intelligence
Everyone has had that moment when a particular piece in a museum steals their heart.
Perhaps they felt beauty in the chosen colors, or a connection to the history captured in the artist’s work. And we humans hold the creators of such artwork, the artists, in high regard.
Creativity lies only in human hands. That is common sense. Or rather, it was common sense. Now that belief is changing: artificial intelligence is learning to create.
In June 2015, a team of researchers at Google released a series of strange images titled Inceptionism. As the images spread across the Internet, they reminded people of cubist paintings from the early 20th century. In fact, these images were “painted” by Google’s artificial intelligence.
Photo: https://photos.google.com/share/AF1QipPX0SCl7OzWilt9LnuQliattX4OUCj_8EP65_cTVnBmS1jnYgsGQAieQUc1VQWdgQ/photo/AF1QipOw1KUnLSaeHtoxRcGlnfXszuY4Z5WzPl2YFlH7?key=aVBxWjhwSzg2RjJWLWRuVFBBZEN1d205bUdEMnhB
This is Blaise Aguera y Arcas, head of the machine intelligence research team that forms the core of Google’s artificial intelligence research. His group conducts wide-ranging exploration of machine perception using deep neural networks, research that also involves close investigation of the human brain.
The innovative machine perception algorithms produced by his team now allow the pictures in Google Photos to be searched with words. Representing pictures with words is natural for human brains, but just a few years ago this feat was thought to be impossible for computers.
Google’s artificial intelligence models our brains with “artificial neural networks” that grant perceptive abilities nearly on a par with humans. In other words, it is now possible for a smartphone to take visual information (a beak, feathers covering the body, countless varieties found all over the world, a living thing that flies), form a concept (“bird”), and use it to search for information.
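To make this concrete, here is a minimal sketch of that perceptual step in Python with PyTorch. The pretrained network (ResNet-18), the preprocessing, and the file name are illustrative assumptions, not the actual system behind Google Photos:

```python
# A minimal sketch of machine perception: a pretrained convolutional
# network maps raw pixels to a concept such as "bird". ResNet-18 and
# "photo.jpg" are stand-ins, not Google's actual system.
import torch
from PIL import Image
from torchvision import models, transforms

weights = models.ResNet18_Weights.IMAGENET1K_V1
model = models.resnet18(weights=weights)
model.eval()

# Standard ImageNet preprocessing expected by torchvision models.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("photo.jpg").convert("RGB")  # placeholder input photo
batch = preprocess(image).unsqueeze(0)

with torch.no_grad():
    logits = model(batch)

# Map the strongest activation back to a human-readable concept.
index = logits.argmax(dim=1).item()
print(weights.meta["categories"][index])  # e.g. "robin", a kind of bird
```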
Aguera y Arcas believes that “any creature, any being that is able to do perceptual acts is also able to create.” This goes for both our brains and artificial intelligence. At the start of his TED Talk, “How computers are learning to be creative,” he introduces a quote by Michelangelo:
“Every block of stone has a statue inside it, and it is the task of the sculptor to discover it.”
The work of a sculptor is undoubtedly the creation of statues, but discovering the statue that lies within a block of stone is an act of perception. With this quote, Aguera y Arcas hints that perception and creation are two sides of the same coin. The images from Google’s Inceptionism project, produced with artificial neural networks, demonstrate exactly this.
The artificial neural networks that produced these images were trained to find patterns that could identify things like human faces and animals.
Training is done by “showing” the artificial neural network images to learn from. Given many images, the network “learns” by extracting their essence and discarding the rest. Through this process, artificial neural networks come to recognize the perceptual information of our world.
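As a rough illustration, the following is a minimal sketch of that training loop in Python with PyTorch, using the small CIFAR-10 dataset and a toy network as stand-ins; the networks behind Google’s systems are, of course, vastly larger:

```python
# A minimal sketch of the "showing images" training loop. Each pass
# shows the network a batch of labeled images and nudges its weights
# to reduce its error. CIFAR-10 and this tiny network are stand-ins.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

train_set = datasets.CIFAR10(
    "data", train=True, download=True, transform=transforms.ToTensor()
)
loader = DataLoader(train_set, batch_size=64, shuffle=True)

# A small convolutional network: early layers respond to edges and
# textures, later layers combine them into higher-level patterns.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 10),  # 10 CIFAR-10 classes
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for images, labels in loader:             # "show" a batch of images
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels) # how wrong is the network?
    loss.backward()                       # propagate the error backward
    optimizer.step()                      # keep the essence, discard the rest
```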
By running this perceptual process in reverse, Aguera y Arcas’ research team was able to perform an act of creation.
The strange Inceptionism images were made by feeding arbitrary images to a trained artificial neural network and having it amplify, through a feedback loop, whatever parts it detected as resembling the images it had learned. In other words, these images were drawn by the artificial neural network itself.
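The mechanics of that feedback loop can be sketched in a few dozen lines of PyTorch. This is a bare-bones approximation of the published DeepDream technique, which adds refinements such as multi-scale “octaves” and jitter that are omitted here; the layer choice, step size, and file names are illustrative assumptions:

```python
# A bare-bones DeepDream-style feedback loop: adjust the pixels of an
# input image so that a chosen layer of a trained network responds more
# strongly to whatever it already "sees" there. Layer, step size, and
# file names are assumptions; the published method adds octaves/jitter.
import torch
from PIL import Image
from torchvision import models, transforms

# GoogLeNet is the Inception architecture that gave Inceptionism its name.
model = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
model.eval()

# Capture the activations of an intermediate layer with a forward hook.
activations = {}
model.inception4c.register_forward_hook(
    lambda module, inputs, output: activations.update(target=output)
)

image = transforms.Compose([
    transforms.Resize(224),
    transforms.ToTensor(),
])(Image.open("input.jpg").convert("RGB")).unsqueeze(0)  # placeholder input
image.requires_grad_(True)

for _ in range(20):                      # the feedback loop
    model(image)
    # Gradient *ascent*: push the pixels toward whatever makes the
    # chosen layer respond more strongly.
    loss = activations["target"].norm()
    model.zero_grad()
    loss.backward()
    with torch.no_grad():
        image += 0.01 * image.grad / (image.grad.abs().mean() + 1e-8)
        image.grad.zero_()
        image.clamp_(0, 1)

transforms.ToPILImage()(image.squeeze(0).detach()).save("dream.jpg")
```

Each pass around the loop makes the network “see” more of whatever it already expects in the picture, which is what gives these images their hallucinatory character.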
These drawings have stimulated the imagination of many, earning them the nickname of “nightmares.” This is certainly creativity in the hands of artificial intelligence.
You can try this out yourself using the Deep Dream Generator (http://deepdreamgenerator.com).
Drawing Masterpieces With Masterpieces: Rembrandt’s “New Painting”
Artificial intelligence is now opening up a variety of new creative possibilities.
2016 went down in history as the year Rembrandt’s first “new painting” in nearly three and a half centuries appeared. However, it was painted neither by Rembrandt himself, nor by a descendant, nor by a zombie: it was painted by artificial intelligence.
The painting shows a bearded, middle-aged white man wearing a black hat, black clothes, and a white collar. It looks just like one of Rembrandt’s, but this portrait is not a copy; only one such piece exists in the entire world.
The subject, coloring, and touch are all characteristic of Rembrandt. Is this piece a work of “creation” or a mere fake?
The Mauritshuis art museum and the Rembrandt House Museum in the Netherlands collaborated with the Delft University of Technology and Microsoft to paint this portrait.
The team used “deep learning” with multi-layered artificial neural networks, combined with 3D scans of 364 of Rembrandt’s paintings, to analyze the characteristics of his work. They then selected a “Rembrandt-like” subject to produce.
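The project’s actual analysis pipeline has not been published as code. As an analogy only, one common way to quantify a painter’s characteristics with a deep network is the Gram-matrix feature statistic used in neural style transfer; in the sketch below, the VGG-16 network, layer index, and file name are all assumptions:

```python
# An analogy, not the project's pipeline: Gram-matrix statistics of
# convolutional features give a rough numeric fingerprint of a
# painter's texture and brushwork. VGG-16, the layer index, and the
# file name are illustrative assumptions.
import torch
from PIL import Image
from torchvision import models, transforms

vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.eval()

def style_signature(path: str, layer: int = 10) -> torch.Tensor:
    """Return channel-by-channel feature correlations (a Gram matrix)."""
    image = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])(Image.open(path).convert("RGB")).unsqueeze(0)
    features = image
    with torch.no_grad():
        for i, module in enumerate(vgg):   # run layers up to the target
            features = module(features)
            if i == layer:
                break
    channels = features.shape[1]
    flat = features.reshape(channels, -1)
    return flat @ flat.t() / flat.shape[1]

# Averaging such signatures over many scanned paintings would give a
# profile that a generator could be steered toward.
print(style_signature("rembrandt_scan.jpg").shape)  # placeholder file
```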
The portrait, produced with a 3D printer, reproduces Rembrandt’s composition and even his characteristic brushwork.
This piece may indeed be considered a fake. However, unlike past fakes made by human hands, this artwork was created with cutting-edge artificial intelligence, effectively distilled from a “soup” of Rembrandt data.
What’s more, this painting combines the beauty of superhuman learning with the beauty of computation. Who can deny that it is a creative work?
Computing Is Fulfilling the Promises of Its Pioneers
Aguera y Arcas joined Google in 2013, becoming one of its most important researchers. However, during his career at Microsoft from 2010 to 2013, he also contributed to services that left their mark on the Internet world, most notably the construction of Bing Maps and Bing Mobile.
Using technology from Seadragon (acquired by Microsoft in 2006) and Photosynth, which he built, Aguera y Arcas made it possible to layer 3D representations of past images, current images, and real-time video on top of a map, expanding the horizons of what online maps could do.
Currently, Aguera y Arcas is advancing research and development on machine intelligence for mobile devices. One memorable piece of news was a smartphone announced in early 2016, born of a collaboration between Google and the semiconductor startup Movidius, which aims to implement deep learning functionality directly on a mobile device.
What kind of future will this bring? Aguera y Arcas hints at one possibility in the closing message of his TED Talk:
“Computing began as an exercise in designing intelligent machinery. It was very much modeled after the idea of how could we make machines intelligent. And we finally are starting to fulfill now some of the promises of those early pioneers, of Turing and von Neumann and McCulloch and Pitts. And I think that computing is not just about accounting or playing Candy Crush or something. From the beginning, we modeled them after our minds. And they give us both the ability to understand our own minds better and to extend them.”
Blaise Agüera y Arcas
Principal Scientist, Google
Blaise leads a team at Google focusing on Machine Intelligence for mobile devices—including both basic research and new products. His group works extensively with deep neural nets for machine perception, distributed learning, and agents, as well as collaborating with academic institutions on connectomics research. Until 2014 he was a Distinguished Engineer at Microsoft, where he worked in a variety of roles, from inventor to strategist, and led teams with strengths in interaction design, prototyping, computer vision and machine vision, augmented reality, wearable computing and graphics. Blaise has given TED talks on Seadragon and Photosynth (2007, 2012) and Bing Maps (2010). In 2008, he was awarded MIT’s prestigious TR35 (“35 under 35”).