cmhungsteve / README.md

Hi there 👋

My name is Min-Hung (Steve) Chen (陳敏弘 in Chinese). I am a Senior Research Scientist at NVIDIA Research Taiwan, working on Vision+X Multi-Modal AI. I received my Ph.D. from Georgia Tech, advised by Prof. Ghassan AlRegib in collaboration with Prof. Zsolt Kira. Before joining NVIDIA, I worked on biometric research for Cognitive Services as a Research Engineer II at Microsoft Azure AI, and on Edge-AI research as a Senior AI Engineer at MediaTek.

My research interests center on Multi-Modal AI, including Vision-Language, Video Understanding, Cross-Modal Learning, Efficient Tuning, and Transformers. I am also interested in learning without full supervision, including domain adaptation, transfer learning, continual learning, X-supervised learning, etc.

[Update] I released a comprehensive paper list for Vision Transformer & Attention to facilitate related research. Feel free to check it out (I would appreciate it if you could ★STAR it)!

[Personal Website] · [LinkedIn] · [Twitter] · [Google Scholar] · [Resume]

Min-Hung (Steve)'s GitHub stats

Pinned

  1. NVlabs/DoRA (Public)

     [ICML2024 (Oral)] Official PyTorch implementation of DoRA: Weight-Decomposed Low-Rank Adaptation (see the usage sketch after this list)

     Python · 866 stars · 60 forks

  2. Awesome-Transformer-Attention (Public)

     An ultimately comprehensive paper list of Vision Transformer/Attention, including papers, codes, and related websites

     5k stars · 495 forks

  3. SSTDA (Public)

     [CVPR 2020] Action Segmentation with Joint Self-Supervised Temporal Domain Adaptation (PyTorch)

     Python · 155 stars · 23 forks

  4. TA3N (Public)

     [ICCV 2019 (Oral)] Temporal Attentive Alignment for Large-Scale Video Domain Adaptation (PyTorch)

     Python · 262 stars · 40 forks

  5. chihyaoma/Activity-Recognition-with-CNN-and-RNN (Public)

     Temporal Segments LSTM and Temporal-Inception for Activity Recognition

     Lua · 445 stars · 146 forks

  6. MediaTek-NeuroPilot/mai21-learned-smartphone-isp (Public)

     The official codebase for the Learned Smartphone ISP Challenge in MAI @ CVPR 2021

     Jupyter Notebook · 123 stars · 28 forks
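For readers curious how DoRA (pinned above) is applied in practice, here is a minimal sketch using Hugging Face PEFT's `use_dora` flag (available in `peft` ≥ 0.9) rather than the NVlabs codebase itself; the toy model and module names are illustrative assumptions, not taken from this page.

```python
# A minimal sketch, assuming Hugging Face `peft` >= 0.9 (which exposes the
# `use_dora` flag on LoraConfig). The toy model and module names below are
# illustrative, not taken from NVlabs/DoRA.
import torch.nn as nn
from peft import LoraConfig, get_peft_model

class TinyModel(nn.Module):
    """Toy base model with two linear projections to adapt with DoRA."""
    def __init__(self):
        super().__init__()
        self.q_proj = nn.Linear(64, 64)
        self.v_proj = nn.Linear(64, 64)

    def forward(self, x):
        return self.v_proj(self.q_proj(x))

config = LoraConfig(
    r=8,                                  # rank of the low-rank update
    lora_alpha=16,                        # scaling factor for the update
    target_modules=["q_proj", "v_proj"],  # module names to wrap with adapters
    use_dora=True,                        # decompose weights into magnitude + direction (DoRA)
)

model = get_peft_model(TinyModel(), config)
model.print_trainable_parameters()  # only the DoRA adapter parameters are trainable
```

Compared with plain LoRA, the only change needed in this PEFT-based setup is `use_dora=True`, which adds the learned magnitude vector on top of the low-rank directional update.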
