Siwei Li (HPLQAQ)
Tsinghua University, Department of Electronic Engineering
Tsinghua University, Haidian District, Beijing
hplqaq.github.io
Stars
A script for a portable panel that can be used in any VRC world
A simple, easy-to-hack GraphRAG implementation
A fast PostgreSQL Database Client Library for Python/asyncio.
A stopwords collection for all languages
MixTeX: multimodal LaTeX, Chinese-English, and table OCR. It performs efficient CPU-based inference locally and offline on Windows.
MTEB: Massive Text Embedding Benchmark
Display progress as a pretty table in the command line.
[CVPR'24] HallusionBench: You See What You Think? Or You Think What You See? An Image-Context Reasoning Benchmark Challenging for GPT-4V(ision), LLaVA-1.5, and Other Multi-modality Models
Chat Templates for 🤗 HuggingFace Large Language Models
NeurIPS 2024 Paper: A Unified Pixel-level Vision LLM for Understanding, Generating, Segmenting, Editing
[CVPR2024] ModaVerse: Efficiently Transforming Modalities with LLMs
Using SAEs to interpret Reward Models (RMs)
Deep learning library implemented from scratch in numpy. Mixtral, Mamba, LLaMA, GPT, ResNet, and other experiments.
The hub for EleutherAI's work on interpretability and learning dynamics
Sparsify transformers with SAEs and transcoders
Code for reproducing our paper "Not All Language Model Features Are Linear"
Repository to track the progress in Natural Language Processing (NLP), including the datasets and the current state-of-the-art for the most common NLP tasks.
Fine-tuning & Reinforcement Learning for LLMs. 🦥 Train OpenAI gpt-oss, DeepSeek-R1, Qwen3, Gemma 3, TTS 2x faster with 70% less VRAM.
Representation Engineering: A Top-Down Approach to AI Transparency
ViT Prisma is a mechanistic interpretability library for Vision and Video Transformers (ViTs).
Training Sparse Autoencoders on Language Models
[CVPR 2023] Efficient Frequency Domain-based Transformer for High-Quality Image Deblurring
An up-to-date list of works on Multi-Task Learning
A collection of papers and code for CVPR2025/CVPR2024/CVPR2021/CVPR2020 low-level vision
Sparse Autoencoder for Mechanistic Interpretability