For AI to Get Creative, It Must Learn the Rules-Then How to Break 'Em - DiscoveryCampus

For AI to Get Creative, It Must Learn the Rules–Then How to Break ‘Em
15:29 25 January

American poet Ralph Waldo Emerson once said, “Every artist was first an amateur.” He likely never thought those words would apply to machines. Yet artificial intelligence has demonstrated a growing aptitude for creativity, whether writing a heavy-metal rock album or producing an original portrait that is strikingly reminiscent of a Rembrandt.

Applying AI to the art world might seem unnecessarily derivative; there are, of course, plenty of humans delivering awe-inspiring work. Proponents say, however, the real beauty of training AI to be creative does not lie in the end product—but rather in the technology’s potential to expand on its own machine-learning education, and to solve problems by thinking outside the box far faster and better than humans can. For example, creative problem-solving AI could someday make snap decisions that save the lives of the passengers in a self-driving car if its sensors fail, or propose unconventional combinations of chemical compounds that lead to new drugs for previously untreatable diseases.

AI with a creative streak will be essential in developing highly automated systems that can respond appropriately to human life, says Mark Riedl, an associate professor at Georgia Institute of Technology’s School of Interactive Computing. “The fact is, we do lots of little bits of creativity every single day; lots of problem-solving goes on,” Riedl says. “If my son gets a toy stuck under the couch, I have to devise a tool out of a hanger [to retrieve it].”

Riedl points out that creativity also matters in social interactions, such as telling a well-timed joke or recognizing a pun. Computers struggle with such subtleties. An incomplete understanding of how humans construct metaphors, for example, was all it took for an experiment in AI-generated literature to compose a new Harry Potter chapter filled with nonsensical sentences such as, “The floor of the castle seemed like a large pile of magic.”

Still, getting machines to accurately mimic human style—whether Rembrandt’s or J. K. Rowling’s—is perhaps a good place to start when developing creative AI, Riedl says. After all, human creators often start off imitating the skills and processes of accomplished artists. The next step, for both people and machines, is to use those skills as part of a strategy to create something original.

AI Art School

Today’s AI programs are not advanced enough to spontaneously compose hit songs or paint masterpieces. To get AI to do those things, humans must first calibrate a program by feeding it large numbers of examples. German AI artist Mario Klingemann, for instance, has designed artificial neural networks to assemble strange and beguiling images based on existing photographs and other visual artwork. An artificial neural network consists of a series of interconnected processing nodes, a system loosely based on the human brain’s neural structure. In an artificial network each electronic “neuron” takes in an array of numbers, performs a simple calculation on those inputs and then sends the result to the next layer of neurons—which in turn performs more complex calculations on the data.
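
The layered computation described above can be sketched in a few lines of NumPy. The shapes, weights, and activation choice here are purely illustrative, not taken from any system mentioned in the article: each "neuron" in a layer takes the incoming array of numbers, computes a weighted sum, applies a simple nonlinearity, and passes the result on to the next layer.

```python
import numpy as np

def dense_layer(inputs, weights, bias):
    """One layer of electronic 'neurons': each takes the input array,
    computes a weighted sum plus bias, then applies a ReLU nonlinearity."""
    return np.maximum(0.0, inputs @ weights + bias)

rng = np.random.default_rng(0)
x = rng.normal(size=4)                 # the array of numbers fed to the first layer

w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)   # first layer: 8 neurons
w2, b2 = rng.normal(size=(8, 3)), np.zeros(3)   # second layer: 3 neurons

hidden = dense_layer(x, w1, b1)        # first layer's results...
output = dense_layer(hidden, w2, b2)   # ...become the next layer's inputs
```

Stacking more such layers is what lets later neurons perform the "more complex calculations" the article describes, since each layer operates on the previous layer's transformed output rather than on the raw input.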

Klingemann’s approach involves feeding source material such as paintings and photographs into generative adversarial networks, or GANs, which combine the power of two neural networks. One network generates images based on a certain theme or set of guidelines; the other evaluates the images based on its knowledge of those guidelines. Thanks to feedback from the second network, the first gradually gets better at making images that more accurately adhere to the chosen theme. “Right now [the networks] are just tools that augment our own creativity,” Klingemann notes. “We as humans still have to recognize the creativity or novelty.” His goal is to build artistic networks that can independently select and even tweet out their own best work based on the given theme.

Today’s GANs are used strictly to create new content or images within a broader creative system, says Alex Champandard, founder of a start-up that aims to develop AI tools for creative people. GANs are able to produce a lot of material quickly but still rely heavily on people to establish their guidelines, he adds.

From the Art World to the Real World

GANs’ content-generating capabilities are a good start when it comes to developing AI that can solve real-world problems, says Ian Goodfellow, a staff research scientist at Google and lead author of the 2014 paper that first described the concept of GANs. Goodfellow has been working on machine-learning models to let computers invent more dynamic narratives, which could go beyond limited scenarios such as planning out a series of chess moves—something computers have done extremely well for decades.

Take a classic example of forward-planning that humans do all the time: When heading to the airport, we often fuzzily map out—purely in our heads—the expected key details of the journey, such as traffic patterns or road repairs. GANs could plan such a trip but they would likely do so in excruciating detail and come up with many possible routes to the destination, Goodfellow says. What we really need, he adds, is a layer of computation that looks at the many options produced by a neural network and intuitively decides which one is best.
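
That missing selection layer can be sketched minimally, using a hypothetical stand-in for the generative network: one function churns out many candidate routes, each summarized here by nothing more than an expected travel time, and a second step scans the options and commits to one rather than elaborating every route in excruciating detail.

```python
import numpy as np

rng = np.random.default_rng(7)

def propose_routes(n):
    # Hypothetical stand-in for a generative network: produce many
    # candidate routes, each reduced to an expected travel time in minutes.
    return rng.uniform(20.0, 60.0, size=n)

def choose_route(candidates):
    # The extra layer of computation Goodfellow describes: look over the
    # many generated options and decide which single one is best.
    best = int(np.argmin(candidates))
    return best, candidates[best]

routes = propose_routes(100)
index, minutes = choose_route(routes)
```

In a real system the scoring would of course be learned and fuzzier than a single argmin over travel times, but the division of labor is the point: generate broadly, then judge.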

Another key component of human creative thinking is the ability to take knowledge from one context and use it within another. George Harrison picks up a sitar and applies his guitar-playing nous to the instrument. Shakespeare reads stories from Greek mythology and writes an English play inspired by those tales. A chief executive uses knowledge of military strategy, or perhaps chess, to plan a business deal.

To that end, experiments are now underway with AI algorithms that can mix and match material. For example, researchers at the University of California, Berkeley, are using their “cycle-consistent adversarial network” (CycleGAN) to transform a video of horses into one of zebras. The AI detects the basic shape of a horse in the first video and can play with the aesthetic on top of that image, immediately and seamlessly swapping a shiny brown coat of hair for one with black-and-white stripes while the image is moving. Such work could be a stepping-stone to AI that can enable a self-driving car to adapt to unfamiliar road conditions, avoiding accidents. “If you’re gathering your [road-] training data mostly in California, you might not have a lot of real data [on] snowy situations,” Goodfellow says. “But you could take all your real data in sunny conditions and use [generative systems] to change it into snowy conditions.”
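
The cycle-consistency idea behind CycleGAN can be illustrated with a toy version of Goodfellow's sunny-to-snowy example, assuming, purely for illustration, that the two "domains" differ by a simple affine transform; the translators G and F and the data here are hypothetical, not the Berkeley model. The constraint is that translating sunny-to-snowy-to-sunny (and the reverse round trip) should return the original data.

```python
import numpy as np

rng = np.random.default_rng(0)
sunny = rng.normal(0.0, 1.0, size=(256, 1))
snowy = sunny * 0.5 + 2.0   # pretend the snowy domain is a shifted, rescaled copy

def G(x, a, b):   # sunny -> snowy translator (toy affine model)
    return a * x + b

def F(y, c, d):   # snowy -> sunny translator (toy affine model)
    return c * y + d

def cycle_loss(a, b, c, d):
    """Cycle consistency: F(G(sunny)) should recover sunny,
    and G(F(snowy)) should recover snowy."""
    forward = np.mean((F(G(sunny, a, b), c, d) - sunny) ** 2)
    backward = np.mean((G(F(snowy, c, d), a, b) - snowy) ** 2)
    return forward + backward
```

With the true inverse pair (a=0.5, b=2.0 and c=2.0, d=-4.0) the loss is exactly zero. Note that identity maps also satisfy the cycle constraint, which is why the real CycleGAN pairs this term with adversarial losses from two discriminators: that feedback is what forces translated samples to actually look like the target domain.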

This suggests teaching AI not only the rules, but also how to throw them out the window when necessary—much like amateurs who grow into artists.

