Arts

Algorithm and rhyme

Artificial intelligence takes on songwriting

by / Palo Alto Weekly

Uploaded: Wed, Apr 5, 2017, 9:46 am

Rodgers and Hammerstein. John and Taupin. ALYSIA and MABLE? Perhaps you haven’t heard of those last two yet but, thanks to the work of a local computer scientist and her team, musicians of the near future may be using artificial-intelligence systems like them to help the creative process along.

Dr. Margareta Ackerman, an assistant professor at San Jose State University, will give a free, public lecture on her algorithmic songwriting systems at Stanford University’s Center for Computer Research in Music and Acoustics (CCRMA) on April 13.

"I come from a background of being both a computer scientist and musician, and I had trouble composing," Ackerman said. "When I discovered computational creativity -- the idea of a computer as a collaborator -- I came up with a system that could collaborate with me on writing melodies that I could then sing."

That system is ALYSIA (Automated LYrical SongwrIting Application), which generates and suggests melodies based on human-provided lyrics. A second system she’s developing, MABLE -- "I like to give them girls’ names; they’re my daughters," Ackerman explained with a smile -- develops lyrics in collaboration with humans. (MABLE stands for MexicA’s BaLlad machinE, as it was originally used in conjunction with the storytelling system MEXICA, and is a joint project with Professor Rafael Pérez y Pérez of Mexico’s Universidad Autónoma Metropolitana and Ackerman’s student Divya Singh.)

To work with ALYSIA, the user inputs lyrics one sentence at a time. ALYSIA then generates multiple melody suggestions (played back on a computerized piano and written out in musical notation) and ranks them according to which it judges best. "Best," of course, is in the ear of the beholder.

"This whole topic of valuation is huge in computational creativity," Ackerman said. "In a lot of pop songs, you hear a lot of the same notes repeated over and over again, sometimes even the same rhythm. The hook is usually more interesting," she said. With ALYSIA, "highly repetitive melodies get punished."

While artificial intelligence creating music is not new, Ackerman said what makes ALYSIA special is its understanding of human language. "It’s learned the relationship between words and syllables and melodies and individual notes. That’s what makes our research original, that marriage between natural-language processing and music generation," she said of the system she’s created in collaboration with data scientist David Loker and student Christopher Cassion. Many of ALYSIA’s songs so far have been made with Ackerman’s own lyrics, although she's created one using an Emily Dickinson poem (recordings of their songs can be heard online).

While its knowledge base is mostly informed by contemporary pop songs, Ackerman said it can be used in other genres as well. Recently, the system was trained on Puccini opera, creating a system Ackerman calls ROBOCCINI. Her colleague James Morgan wrote Italian opera lyrics and collaborated with the system to create a new aria, which will be performed as part of San Jose’s Paseo Public Prototyping Festival.

ALYSIA learns by constructing a predictive model. Ackerman likened it to a child learning to classify different types of animals.

"They see a lot of data, mom and dad show them cats and dogs and tell them how they should be called, and then over time the child is able to do the labeling themself," she said. "Machine learning makes this very explicit: you feed it data and then it constructs a model. What it actually tries to do is predict the next note, except then we can agree with it: 'Yes you did it right! What should be the next one?' And it guesses and we agree with it again. Once a model is constructed, it can generate as many melodies as we want."

She said she hopes to have a version of ALYSIA ready for public use by the end of the summer.

"I think it can help musicians on a large scale," she said, particularly new musicians, or those in the electronic-music genre."It’s difficult to compose original melodies."

Purists might scoff that writing melodies via computer seems like cheating, but humans already use computers for music creation in a number of ways. Ackerman sees her systems as another weapon in an artist’s arsenal and a true collaborator, akin to working with a fellow band member or producer.

"There is this mammoth search space of melodies out there. It’s enormous. We could never search all of it and we don’t need to," she said. "For somebody who’s a novice, or even for somebody experienced, they need some help exploring. When we sit by the piano, that’s what we’re doing. It’s like a band member saying, 'why don’t we try something like this?'"

For Ackerman, who was born in Belarus and raised in Israel until moving to Canada at 12, her work with algorithmic songwriting is a perfect way to combine her passion for arts and sciences.

"I think I was born to be an artist, to be honest. I kind of got derailed and fell in love with computer science as a teenager and really missed the arts," she said. She found her way back into music while working toward her doctorate, when her husband began taking opera-singing lessons.

"I was like, 'that’s supposed to be me!' I spent the last two years of my Ph.D learning to sing opera," she said. "I kind of lived parallel lives … I loved what I was doing but suddenly there was something I loved maybe even more."

A conference in San Diego, where she learned about the computational creativity community focused on the intersection of computers and art, changed her life.

"I thought instantly, 'I need to make myself a collaborator, to help me write songs,'" she said.

Ackerman believes ALYSIA could be quite useful to composition students. "It’s sort of like … training wheels," she said. "At first I would get a melody and nothing would come to my mind except the boringest music … Now, I get to a certain measure and I think, 'I know what I want here.'"

ALYSIA and MABLE have collaborated as well. Ackerman and Singh recently created a song with both systems called "A Beautiful Memory." And while the systems themselves may eventually be able to create music fully independently, for Ackerman, it’s the collaboration between human and machine that’s most thrilling.

"Where things click is where a human singer sings computer-composed music. That’s the merging of worlds that I enjoy," she said.

What: Margareta Ackerman’s talk on algorithmic songwriting

Where: CCRMA Classroom, Knoll 217, 660 Lomita Drive, Stanford

When: Thursday, April 13, at 5:30 p.m.

Cost: Free

Info: Go to CCRMA and Ackerman's website.
