Projects

Rhythms of the Machine: Deep Learning in Music Creation

November 2024

Team: Abhishek Paul

Resources: [Technical report]

Summary:
Project Goal: The goal of this project was to create an application that allows musicians to generate extensions of a melody they provide. I created and trained an LSTM-based recurrent neural network designed to model monophonic music with expressive timing and dynamics.
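
A minimal sketch of this kind of LSTM sequence model in PyTorch; the vocabulary size, layer sizes, and event encoding here are assumptions for illustration, not the project's actual configuration:

    import torch
    import torch.nn as nn

    class MelodyLSTM(nn.Module):
        """Predicts the next musical event from a sequence of event tokens."""
        def __init__(self, vocab_size=128, embed_dim=64, hidden_dim=256, num_layers=2):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.lstm = nn.LSTM(embed_dim, hidden_dim, num_layers, batch_first=True)
            self.head = nn.Linear(hidden_dim, vocab_size)

        def forward(self, tokens, state=None):
            x = self.embed(tokens)            # (batch, seq, embed_dim)
            out, state = self.lstm(x, state)  # (batch, seq, hidden_dim)
            return self.head(out), state      # logits over the next event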

Outcome: The model successfully generated fluid extensions, maintaining the melody and rhythmic structure of the input. The framework I developed allows for future enhancements of neural network models for music generation, and it is adaptable: only minimal code changes are needed to use different datasets or alter model parameters in future projects. The model can take a musical input of any length and generate an extension, or take no input at all and create a fresh composition. A sampling sketch and a few examples generated by the model follow:
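
A minimal sketch of how this kind of autoregressive generation could work, continuing the MelodyLSTM sketch above (same torch import); the optional primer stands in for the variable-length input, and the seed token and temperature sampling are assumptions:

    def generate(model, primer=None, length=200, temperature=1.0):
        """Extend a primer sequence, or compose from scratch if primer is None."""
        model.eval()
        tokens = list(primer) if primer else [60]  # hypothetical seed: middle C
        state = None
        with torch.no_grad():
            for _ in range(length):
                # Feed the whole context once, then one token at a time.
                inp = torch.tensor([tokens if state is None else tokens[-1:]])
                logits, state = model(inp, state)
                probs = torch.softmax(logits[0, -1] / temperature, dim=-1)
                tokens.append(torch.multinomial(probs, 1).item())
        return tokens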

Input:

[audio example]

Generated Extensions:

[audio examples]

The model is also able to generate compositions from scratch without any input. For the outputs below, the model had no prior musical context and had to rely solely on its learned weights and predictions to construct the compositions:

[audio examples]

My contribution: I worked on all aspects of the project.




J.A.R.V.I.S

February 2024

Team: Abhishek Paul

Resources: [Technical report]

Summary: A chatbot that answers factual questions from dense telecom documentation (Ericsson and 3GPP). I created a data pipeline that ingested PDF, JSON, and HTML files to produce text encodings. The chatbot offered two modes: i) document search, which used the multi-qa-mpnet-base-dot-v1 model, and ii) generative, which used the llama-2-7B-chat-hf model. It allows Radio Engineers to quickly look up information without having to go through the Ericsson and 3GPP documentation.
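
A minimal sketch of what the document-search mode could look like, using the sentence-transformers library and the dot-product scoring that multi-qa-mpnet-base-dot-v1 is trained for; the example chunks are hypothetical stand-ins and the PDF/JSON/HTML parsing is elided:

    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("multi-qa-mpnet-base-dot-v1")

    # Hypothetical text chunks produced by the ingestion pipeline.
    chunks = [
        "5G NR supports SSB periodicities of 5, 10, 20, 40, 80 and 160 ms.",
        "The gNB broadcasts system information in SIB1 on the PDSCH.",
    ]
    chunk_emb = model.encode(chunks, convert_to_tensor=True)

    def search(query, top_k=2):
        """Return the best-matching chunks for a factual question."""
        q_emb = model.encode(query, convert_to_tensor=True)
        hits = util.semantic_search(q_emb, chunk_emb, top_k=top_k,
                                    score_function=util.dot_score)[0]
        return [(chunks[h["corpus_id"]], float(h["score"])) for h in hits]

The generative mode would presumably pass the retrieved chunks as context to llama-2-7B-chat-hf in a similar fashion.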




Autonomous agents for multiplayer SuperTuxKart

March 2023

Team: Abhishek Paul

Resources: [Technical report]

Summary:
Project Goal: The goal of this project was to train an AI agent to play SuperTuxKart, competing with other trained agents in a 2v2 hockey game.

Project Description:
1) Approach:
      i) Reinforcement Learning: Initially, we used Q-Learning for training. However, it struggled to learn optimal strategies.
      ii) Imitation Learning: To overcome this, we trained the model using imitation learning on winning strategies derived from over 2000 games (a behavior-cloning sketch follows this list).
      iii) Hybrid Model: We then improved the reinforcement learning model by using the trained imitation model for initialization. This helped further refine the agent’s strategy.

2) Future Work:
      i) Data Augmentation: Incorporate data augmentation during training to randomize initialization and improve robustness.
      ii) Internal State Controller: Explore the design of a hand-crafted internal state controller to compare its effectiveness against AI models.
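
A minimal behavior-cloning sketch for the imitation-learning stage, in PyTorch; the state features, action space, and network sizes are assumptions rather than the project's actual design:

    import torch
    import torch.nn as nn

    class PolicyNet(nn.Module):
        """Maps a game-state vector (e.g. kart/puck positions) to action logits."""
        def __init__(self, state_dim=10, n_actions=6):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(state_dim, 64), nn.ReLU(),
                nn.Linear(64, 64), nn.ReLU(),
                nn.Linear(64, n_actions),
            )

        def forward(self, x):
            return self.net(x)

    def train_bc(policy, loader, epochs=10, lr=1e-3):
        """Fit the policy to (state, expert_action) pairs logged from winning games."""
        opt = torch.optim.Adam(policy.parameters(), lr=lr)
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(epochs):
            for states, expert_actions in loader:
                opt.zero_grad()
                loss = loss_fn(policy(states), expert_actions)
                loss.backward()
                opt.step()
        return policy  # these weights then initialize the RL agent (hybrid model)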

Outcome: Our project successfully demonstrated the application of a hybrid approach, combining imitation learning with reinforcement learning, to train an AI agent for SuperTuxKart. The refined agent showed improved performance in minimizing the player-puck and puck-goal distances, paving the way for future enhancements and comparative studies with hand-crafted controllers.

My contribution: Generated about 2000 games to train the imitation-learning model. Worked on the Q-learning algorithm that was initialized from the imitation model to create the final agent.




Music Generation Using Machine Learning

September 2017

Team: Abhishek Paul, Romal Peccia, Donald Lleshi

Resources: [Technical report]

Summary:
Project Goal: The goal of this project was to create a music composition application that utilizes machine learning to generate music extensions. The application allows users to play a melody on a MIDI keyboard, and the system generates a continuation of that melody for the same length of time.

Project Overview:
1) UI:
      i) Users are prompted to play a melody on a MIDI keyboard.
      ii) The application transforms the input melody using the same transformations applied to the training dataset.
      iii) The transformed melody is fed through a trained model to generate a continuation of the played notes.
      iv) The user’s recording and the model-generated continuation are saved and displayed to the user.
2) Machine Learning Pipeline:
      i) Data Transformation: Developed techniques to convert music into machine-readable embeddings (a sketch follows this list).
      ii) Model Development: Designed and trained a Recurrent Neural Network (RNN) for music generation.
      iii) Integration: Integrated the final model with a user interface and a physical MIDI keyboard.
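
A minimal sketch of the kind of data transformation this describes, assuming the MIDI input is parsed with the mido library; the pitch-only token scheme is an illustration, not the project's actual embedding:

    import mido

    def midi_to_tokens(path):
        """Convert the note_on events of a monophonic MIDI file into pitch tokens."""
        tokens = []
        for msg in mido.MidiFile(path):
            if msg.type == "note_on" and msg.velocity > 0:
                tokens.append(msg.note)  # MIDI pitch (0-127) as the token id
        return tokens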

Outcome: Our team successfully created a music composition application using neural networks. The framework we developed allows for future enhancements of neural network models for music generation. This framework is adaptable, requiring minimal code changes to use different datasets and alter model parameters for future projects.

My contribution: I worked on all aspects of the project.