Google puts art in AI, expanding limits of computer creativity
By Kim Han-joo
SEOUL, June 22 (Yonhap) -- Google Inc. is teaching an artificial intelligence (AI) algorithm to generate music and art as part of its efforts to push the limits of computer creativity toward the level of human beings, a senior company engineer said Thursday.
The ambitious yet experimental "Magenta" project is one of many initiatives at Google Brain, the team behind AI products such as Google Translate and Google Photos. The team is dedicated to experimenting with new and different forms of machine learning to make computers smarter.
"Can we use machine learning to create compelling art and music?" said Douglas Eck, a research scientist at Google Brain. The project uses TensorFlow -- Google's open-source machine learning library -- to train computers to one day create art.
Google released two new software programs in May. One, named SketchRNN, lets users draw lines and sketches while the program predicts what comes next.
"We gathered more than five million user-drawn sketches," said Eck, adding that the program produces drawings by guessing what users want to draw.
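The idea behind this kind of next-stroke prediction can be illustrated, very loosely, as a recurrent step that maps the previous pen movement and a hidden state to a guess at the next movement. The toy sketch below is not Google's SketchRNN code; it uses hypothetical fixed random weights in NumPy purely to show the recurrent prediction loop:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy recurrent cell: hypothetical fixed weights stand in for a trained model.
W_h = rng.normal(scale=0.1, size=(8, 8))   # hidden state -> hidden state
W_x = rng.normal(scale=0.1, size=(8, 2))   # input pen offset (dx, dy) -> hidden
W_o = rng.normal(scale=0.1, size=(2, 8))   # hidden state -> predicted (dx, dy)

def rnn_step(h, stroke):
    """One recurrent step: update the hidden state, predict the next pen offset."""
    h = np.tanh(W_h @ h + W_x @ stroke)
    return h, W_o @ h

# Feed a short sequence of pen offsets; after each one the model "guesses"
# what the user will draw next, which is the core of sketch completion.
h = np.zeros(8)
strokes = [np.array([1.0, 0.0]), np.array([0.5, 0.5]), np.array([0.0, 1.0])]
for s in strokes:
    h, next_guess = rnn_step(h, s)
print(next_guess.shape)  # (2,) -- a predicted (dx, dy) pen offset
```

In the real system, the weights are learned from the millions of user-drawn sketches Eck describes, rather than drawn at random.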
The scientist said the program still has limits, as the computer can currently "output" only 75 shapes, such as cats and yoga poses.
Google forecasts that the software may eventually be able to automatically draw the details of a cat's face, fill in colors and ultimately advance to making full art pieces.
The second program, called NSynth, takes individual audio samples such as the sound of a guitar and a piano segment played by professional musicians and combines them to make a totally different sound, Google said.
Using publicly available recurrent neural network code, the program can analyze and differentiate distinct audio qualities and combine those properties into a brand-new sound that has never existed before, the researcher said.
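One common way such systems blend two sounds is by interpolating between their learned embeddings rather than mixing the raw audio. The NumPy sketch below is a stand-in illustration of that interpolation step only; the embedding vectors are hypothetical, not the output of NSynth's actual encoder:

```python
import numpy as np

# Hypothetical 16-dimensional embeddings for two sounds, standing in for the
# vectors a trained encoder would produce from raw guitar and piano audio.
guitar_z = np.linspace(0.0, 1.0, 16)
piano_z = np.linspace(1.0, 0.0, 16)

def interpolate(z_a, z_b, alpha):
    """Blend two latent vectors; alpha=0 returns z_a, alpha=1 returns z_b."""
    return (1.0 - alpha) * z_a + alpha * z_b

# A 50/50 blend is a new point in the latent space -- once decoded back to
# audio, it would be neither guitar nor piano but something in between.
blend = interpolate(guitar_z, piano_z, 0.5)
print(blend[:4])
```

The distinctive part of NSynth is the trained encoder and decoder on either side of this step; the interpolation itself is this simple.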
"The goal of the program is not to replace artists but to help artists and original songwriters by creating totally different sounds based on a huge data set," Eck said.
The scientist said many obstacles remain before reaching the level of human creativity, as a single neural network still cannot produce the long-term structure that people recognize as compelling music.
"In order to create the actual waveform (of the music), it is necessary to make 16,000 predictions per second," the engineer said, adding that the program is currently doing a poor job at creating music.
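The 16,000 figure corresponds to the audio sample rate: generating a raw waveform one sample at a time at 16 kHz means one prediction per audio sample. A quick back-of-the-envelope check shows why long clips are hard:

```python
SAMPLE_RATE = 16_000  # audio samples (and thus predictions) per second

def predictions_needed(seconds: float) -> int:
    """Sequential sample-level predictions for a clip of the given length."""
    return int(seconds * SAMPLE_RATE)

print(predictions_needed(1))    # 16000 -- matches the figure Eck cites
print(predictions_needed(180))  # 2880000 for a three-minute song
```

Each prediction depends on the ones before it, so the cost of maintaining coherent structure grows with every added second of audio.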
khj@yna.co.kr
(END)