waqasm86 (@waqasm86)


Pinned repositories

  1. llcuda/llcuda (Public)

     CUDA 12-first inference backend for Unsloth on Kaggle, optimized for small GGUF models (1B–5B) on dual Tesla T4 GPUs (15 GB each, SM 7.5).

     Jupyter Notebook · 8 stars · 1 fork

  2. llamatelemetry/llamatelemetry (Public)

     CUDA-first OpenTelemetry Python SDK for LLM inference observability and explainability.

     Python · 1 star

  3. Ubuntu-Cuda-Llama.cpp-Executable (Public)

     Pre-built llama.cpp CUDA binary for Ubuntu 22.04. No compilation required: download, extract, and run. Works with the llcuda Python package for JupyterLab integration. Tested on GPUs from the GeForce 940M to the RTX 4090.

     Python · 1 star

  4. cuda-nvidia-systems-engg (Public)

     Production-grade C++20/CUDA distributed LLM inference system with TCP networking, MPI scheduling, and content-addressed storage. Features comprehensive benchmarking (p50/p95/p99 latencies), epoll a...

     C++

  5. llcuda/llcuda.github.io (Public)

     GitHub Pages website for the llcuda Python SDK project.

     Python
