Music can touch the hearts of any audience, even listeners with no knowledge of its context. Its transcendent power stems from the timbre of the instruments, the fundamental rhythmic structure and melody, the dynamics, the instrumentation, and much more, all working in harmony to create the final product.
With the recent rise of Artificial Intelligence-Generated Content (AIGC), AI for music is a promising field full of creativity, novel methodologies, and technologies yet to be explored. Current AI-for-music methods concentrate largely on machine learning and deep learning techniques for generating new music. Despite the significant milestones achieved thus far, many of these methods are not yet robust across a wide range of applications.
AI music is a timely topic. This workshop aims to build momentum around this area of growing interest and to encourage interdisciplinary interaction and collaboration across AI, music, Natural Language Processing (NLP), machine learning, multimedia, Human-Computer Interaction (HCI), audio processing, computational bioacoustics, computational linguistics, and neuroscience.
It serves as a forum to bring together active researchers and practitioners from
academia and industry to share their recent advances in this promising area.
Tuesday, Dec. 9, 2025, Online (GMT-8)
Virtually: Please fill out the AIMG 2025 Participant Online Form by Dec. 8 to receive the online meeting link and participate in the workshop for FREE. When joining the IEEE Big Data Workshop - AIMG 2025, please add your paper ID after your name.
| Paper Type | Paper Title | Author(s) |
| Session I: Paper Presentation & AI Music Showcase (11:30-13:00) | ||
| Opening Remarks | ||
| Short | Effects of Tempo and Tonality on Listener Enjoyment of Automated Pop Mashups | Anh-Dung Dinh, Xinyang Wu, Andrew Horner |
| Full | Chord Latent Decoupling for Music Mashups | Yu Foon Darin Chau, Andrew Horner |
| Short | Can Language Models Verify Classical Music Note Sequences for Early Learners? | Radhika Grover, Ankit Maurya, Manikandan Ravikiran, Rohit Saluja |
| Full | Pay (Cross) Attention to the Melody: Curriculum Masking for Single-Encoder Melodic Harmonization | Maximos Kaliakatsos-Papakostas, Dimos Makris, Konstantinos Soiledis, Konstantinos-Theodoros Tsamis, Vassilis Katsouros, Emilios Cambouropoulos |
| Poster | Emovectors: assessing emotional content in jazz improvisations for creativity evaluation | Anna Jordanous |
| Lunch Break/Keynote (13:00-14:00) | ||
| Session II: Paper Presentation & AI Music Showcase (14:00-16:00) | ||
| Short | Story2MIDI: Emotionally Aligned Music Generation from Text | Mohammad Shokri, Alexandra Salem, Gabriel Levine, Johanna Devaney, Sarah Ita Levitan |
| Full | MusicAIR: A Multimodal AI Music Generation Framework Powered by an Algorithm-Driven Core | Callie C. Liao, Duoduo Liao, and Ellie L. Zhang |
| Short | A Modular Approach to Music Generation: Adding Music Controls to Neural Audio Compression Models | Daniel Faronbi, Peter Traver, Juan Bello |
| Full | Neural Motif Recombination: A Transformer-Based Framework for Cross-Genre Music Generation | Sanjay Majumder |
| Short | Diffusion for Room Impulse Response Generation | Rebecca Wroblewski, Julius Smith |
| Poster | Applying Literary Structures to AI Music Models | Jada Polard |
| Full | Dynamic Multi-Species Bird Soundscape Generation with Acoustic Patterning and 3D Spatialization | Ellie L. Zhang, Duoduo Liao, and Callie C. Liao |
| Poster | BiGRU: Bi-Directional GRU-Based Approach for Audio Source Separation | Sanjay Majumder, Karl Reichard |
| Session III: AI Music Competition & Showcase (16:00-17:30) | ||
| Music | "Drifting in Circles": Algorithmic Music based on Symbolic Musical Patterns | Miguel Gomez-Zamalloa Gil |
| Music | two tales from the shadows of the grid | Brian Lindgren |
| Music | Hallucinations for voice and piano | Kyle Vanderburg |
| Music | Conditioned Stochasticity: AI-Assisted Composition with Fine-Tuned Latent Diffusion | Misagh Azimi |
| Music | Trajectories for an Autoencoded Body | Tsubasa Tanaka, Kyohei Uchida |
| Music | Zone 19: An Algorithmic Journey Through the 19-EDO Soundscape | Ali Balighi |
| Closing Remarks & Award Announcements | ||
*The program schedule is subject to change.
This is an open call for papers inviting original contributions on recent findings in theory, applications, and methodologies in the field of AI music generation. The list of topics includes, but is not limited to:
Acceptance notifications and review reports have been sent via email. Please consider all reviewers' comments and address their recommendations with meaningful revisions to your paper before submitting the revised version for final review by the deadline specified in your email.
As required by the main conference, all accepted papers must be accompanied by a video presentation. Authors must upload their presentation videos (.mp4) to the main conference video server by the deadline.
If you are interested in serving on the workshop program committee or in reviewing papers, please contact the Workshop Chair.
This group is dedicated to releasing announcements and notices related to the AI Music Generation (AIMG) community. News, calls for papers, calls for collaborations, datasets, employment-related announcements, and similar items are all welcome. Posts are subject to approval before being released to the community. You are welcome to subscribe to the AIMG group!