# multimodal-emotion-recognition

Here are 24 public repositories matching this topic...

Video-Audio-Face-Emotion-Recognition

The repo contains an audio emotion detection model, a facial emotion detection model, and a combined model that fuses both to predict emotions from a video.

  • Updated Sep 13, 2023
  • Jupyter Notebook
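The page does not show how the combined model works; a common approach for pairing an audio model with a facial model is late fusion, where each model's per-emotion probabilities are averaged. The sketch below is a minimal illustration of that idea in plain Python — the label set, probability values, and weighting are assumptions, not taken from the repository:

```python
# Hypothetical late-fusion sketch: combine per-emotion probabilities
# from an audio model and a facial model into one video-level
# prediction by weighted averaging. All labels/values are illustrative.

EMOTIONS = ["angry", "happy", "neutral", "sad"]

def fuse_predictions(audio_probs, face_probs, audio_weight=0.5):
    """Weighted average of two per-emotion probability dicts.

    Returns the top fused label and the full fused distribution.
    """
    fused = {
        e: audio_weight * audio_probs[e] + (1 - audio_weight) * face_probs[e]
        for e in EMOTIONS
    }
    return max(fused, key=fused.get), fused

# Example: both modalities lean toward "happy", so the fusion does too.
audio = {"angry": 0.10, "happy": 0.60, "neutral": 0.20, "sad": 0.10}
face = {"angry": 0.05, "happy": 0.70, "neutral": 0.20, "sad": 0.05}
label, fused = fuse_predictions(audio, face)
print(label)  # → happy
```

Weighted averaging is only one choice; repositories like this one may instead concatenate features before classification (early fusion) or learn the fusion weights.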

This repository provides the code for MMA-DFER, a multimodal (audiovisual) emotion recognition method. It is the official implementation of the paper "MMA-DFER: MultiModal Adaptation of unimodal models for Dynamic Facial Expression Recognition in-the-wild".

  • Updated Sep 16, 2024
  • Python

This API uses a pre-trained model for emotion recognition from audio files: it accepts an audio file as input, runs it through the model, and returns the predicted emotion along with a confidence score. The API is built on the FastAPI framework for easy development and deployment.

  • Updated Apr 23, 2024
  • Python
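The description says the API returns a predicted emotion plus a confidence score. A typical way to produce that pair from raw model outputs is a softmax over the logits; the plain-Python sketch below shows only that post-processing step (the pre-trained model and the FastAPI routing layer are omitted, and the labels and scores are illustrative, not from the repository):

```python
import math

# Hypothetical post-processing for an audio emotion-recognition API:
# turn raw model scores (logits) into {"emotion": ..., "confidence": ...},
# the shape of response the description mentions. Labels are assumptions.

LABELS = ["angry", "happy", "neutral", "sad"]

def softmax(scores):
    """Numerically stable softmax over a list of raw scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def predict(logits):
    """Return the top label and its probability for one model output."""
    probs = softmax(logits)
    best = max(range(len(LABELS)), key=probs.__getitem__)
    return {"emotion": LABELS[best], "confidence": round(probs[best], 4)}

print(predict([0.2, 2.5, 0.1, -0.3]))  # → {'emotion': 'happy', ...}
```

In a FastAPI service this function would sit behind a file-upload endpoint that first decodes the audio and extracts features before scoring.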
