🎡 Midi Music Generation Neural Network using RNN (LSTM Layer)

🧠 How Does It Work?

The MIDI files are loaded and parsed using the music21 Python library.
All notes are tokenized and stored in two mapping tables, notes_to_int and int_to_notes, which translate between note names and integer tokens.
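The repository's exact loading code isn't reproduced here, but a minimal sketch of this step might look like the following (the midi_songs/ directory name and the chord encoding via normalOrder are assumptions, not taken from the repo):

```python
import glob
from music21 import converter, note, chord

# Parse every MIDI file and collect its notes/chords as string tokens.
# "midi_songs/" is a hypothetical input directory.
notes = []
for file in glob.glob("midi_songs/*.mid"):
    midi = converter.parse(file)
    for element in midi.flatten().notes:
        if isinstance(element, note.Note):
            notes.append(str(element.pitch))  # e.g. "C4"
        elif isinstance(element, chord.Chord):
            # encode a chord as dot-joined pitch classes, e.g. "0.4.7"
            notes.append('.'.join(str(n) for n in element.normalOrder))

# Build the two mapping tables described above.
pitch_names = sorted(set(notes))
notes_to_int = {n: i for i, n in enumerate(pitch_names)}
int_to_notes = {i: n for n, i in notes_to_int.items()}
```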

The training data is created from these notes: the network is trained on sequences of 100 notes as input, with the single following note as the target output.
During training, the LSTM layer learns the recurring patterns in these sequences.
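A sketch of how the "100 notes in, one note out" training setup could be built on top of the mappings above; the layer size, loss, optimizer, epoch count, and batch size are illustrative guesses, not the repo's actual hyperparameters:

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

SEQUENCE_LENGTH = 100  # 100 input notes -> 1 output note, as described above

# Slide a window over the token stream to build (input, target) pairs.
network_input, network_output = [], []
for i in range(len(notes) - SEQUENCE_LENGTH):
    seq_in = notes[i:i + SEQUENCE_LENGTH]
    seq_out = notes[i + SEQUENCE_LENGTH]
    network_input.append([notes_to_int[n] for n in seq_in])
    network_output.append(notes_to_int[seq_out])

n_vocab = len(notes_to_int)
X = np.reshape(network_input, (len(network_input), SEQUENCE_LENGTH, 1)) / float(n_vocab)
y = np.array(network_output)

# An RNN with a single LSTM layer, predicting a distribution over the vocabulary.
model = Sequential([
    LSTM(256, input_shape=(SEQUENCE_LENGTH, 1)),
    Dense(n_vocab, activation="softmax"),
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
model.fit(X, y, epochs=50, batch_size=64)
```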

The model works by predicting the next note after a sequence.
Given a seed sequence, the next note is predicted and appended to the input sequence.
The updated sequence is then fed through the neural network again to produce the following note, and so on.

This process continues for as many steps as specified; for testing, I used 200 steps.
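That generation loop might look like this, reusing the model and mappings from above (the random seed choice and sliding-window update are assumptions; the 200-step count matches the test run described):

```python
import numpy as np

# Start from a random seed sequence taken from the training data,
# then repeatedly predict the next note and slide the window forward.
start = np.random.randint(0, len(network_input) - 1)
pattern = list(network_input[start])
generated_notes = []

for _ in range(200):
    x = np.reshape(pattern, (1, len(pattern), 1)) / float(n_vocab)
    prediction = model.predict(x, verbose=0)
    index = int(np.argmax(prediction))
    generated_notes.append(int_to_notes[index])
    # append the prediction and drop the oldest note to keep the window at 100
    pattern.append(index)
    pattern = pattern[1:]
```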

After that, you have a MIDI file with all the generated notes, which you can open in any DAW or similar software.
I used FL Studio to test the output.
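Writing the generated tokens back to a MIDI file can again be done with music21; in this sketch the chord/note decoding mirrors the tokenization above, while the fixed 0.5 quarter-note spacing, the piano instrument, and the output filename are assumptions:

```python
from music21 import stream, note, chord, instrument

# Convert the predicted tokens back into music21 objects and write a MIDI file.
output_stream = stream.Stream()
offset = 0.0
for token in generated_notes:
    if ('.' in token) or token.isdigit():
        # chord token such as "0.4.7": one Note per pitch class
        chord_notes = [note.Note(int(p)) for p in token.split('.')]
        for n in chord_notes:
            n.storedInstrument = instrument.Piano()
        element = chord.Chord(chord_notes)
    else:
        element = note.Note(token)
        element.storedInstrument = instrument.Piano()
    output_stream.insert(offset, element)
    offset += 0.5  # fixed spacing between notes; an assumption, not the repo's timing

output_stream.write('midi', fp='generated_output.mid')
```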

🧰 Technologies Used

  • Python
  • TensorFlow / Keras (RNN with LSTM Layer)
  • music21

🎢 Example Audio from My Model:

output_1.mp4
