"MGU-V: A Deep Learning Approach for Lo-Fi Music Generation Using Varia" by Amit Kumar Bairwa, Siddhanth Bhat et al.
 

MGU-V: A Deep Learning Approach for Lo-Fi Music Generation Using Variational Autoencoders with State-of-the-Art Performance on Combined MIDI Datasets

Document Type

Article

Publication Title

IEEE Access

Abstract

Music generation is a significant challenge in generative AI, with applications in music production, real-time composition, and related fields. This paper introduces MGU-V (Music Generation Using Variational Autoencoders), a deep learning framework for generating Lo-Fi music. MGU-V uses Variational Autoencoders (VAEs) to learn robust latent representations of musical structure and to generate high-quality compositions from them. The framework is evaluated on two curated and merged benchmark MIDI datasets, demonstrating its effectiveness and adaptability across musical genres. In extensive experiments, MGU-V achieves state-of-the-art performance, surpassing existing methods with an accuracy of 96.2% and a loss of 0.19. These results position MGU-V as a practical tool for music producers, composers, and AI researchers. Its ability to generate Lo-Fi music with high fidelity and consistency opens promising avenues for future work in AI-driven music generation and suggests that AI can increasingly contribute to creative processes traditionally dominated by human expertise.
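This record does not include implementation details, so the following is only a minimal sketch of the general technique the abstract names: a VAE trained on MIDI rendered as binarized piano rolls, written in PyTorch. The class name, layer sizes, sequence length, and pitch range are all hypothetical placeholders, not the authors' MGU-V architecture.

# Minimal VAE sketch over piano-roll MIDI segments.
# All dimensions below are assumed for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

SEQ_LEN, N_PITCHES, LATENT = 64, 128, 32  # assumed: 64 time steps, 128 MIDI pitches

class PianoRollVAE(nn.Module):  # hypothetical name, not the paper's model
    def __init__(self):
        super().__init__()
        d_in = SEQ_LEN * N_PITCHES
        self.enc = nn.Sequential(nn.Linear(d_in, 512), nn.ReLU())
        self.mu = nn.Linear(512, LATENT)       # mean of q(z|x)
        self.logvar = nn.Linear(512, LATENT)   # log-variance of q(z|x)
        self.dec = nn.Sequential(nn.Linear(LATENT, 512), nn.ReLU(),
                                 nn.Linear(512, d_in))

    def forward(self, x):
        h = self.enc(x.flatten(1))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample z while keeping gradients.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(z).view(-1, SEQ_LEN, N_PITCHES), mu, logvar

def vae_loss(recon_logits, x, mu, logvar):
    # Bernoulli reconstruction term plus KL divergence to the N(0, I) prior.
    bce = F.binary_cross_entropy_with_logits(recon_logits, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return bce + kld

# Usage: train on binarized piano rolls, then sample z ~ N(0, I) and decode.
model = PianoRollVAE()
x = (torch.rand(8, SEQ_LEN, N_PITCHES) > 0.9).float()  # stand-in batch
recon, mu, logvar = model(x)
vae_loss(recon, x, mu, logvar).backward()
with torch.no_grad():
    sample = torch.sigmoid(model.dec(torch.randn(1, LATENT)))  # new piano roll

Sampling from the prior and decoding, as in the last line, is what makes a VAE generative: the learned latent space can be traversed to produce new material rather than only reconstructing training inputs.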

First Page

143237

Last Page

143251

DOI

10.1109/ACCESS.2024.3471918

Publication Date

1-1-2024

