princepride/README.md

# Hi, I'm Wang Zhipeng 👋

I'm a software engineer passionate about building the next generation of AI systems. My work centers on the post-training of large models and on developing robust AI infrastructure.

I am a strong believer in the power of open source to democratize AI and address the public's concerns about its rapid advancement. I believe that transparent, collaborative development is the key to building safe and beneficial AI for everyone.


## 🔭 My Contributions

I am proud to have contributed to a range of impactful open-source projects across both academia and industry.

**Academic & Research Projects:**

  • HKUNLP/Dream: Large language diffusion models trained by the University of Hong Kong NLP Lab.
  • thunlp/ProactiveAgent: An agent from the Tsinghua University NLP Lab that proactively uses tools based on captured computer-operation signals.
  • bytedance/tarsier: A multimodal large language model developed by ByteDance Omni Lab that accurately analyzes images and videos.

**Industry-Leading Open Source Projects:**

  • vllm-project/vllm: A high-throughput and memory-efficient inference and serving engine for LLMs.
  • vllm-project/production-stack: Focused on building a robust, production-ready stack for serving LLMs with vLLM.
  • LMCache/LMCache: "Redis for LLMs" — a project that optimizes LLM serving by caching reusable KV pairs.
  • huggingface/transformers: A library of pretrained text, computer vision, audio, video, and multimodal models for inference and training.
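The KV-caching idea behind LMCache can be illustrated with a toy sketch (this is a conceptual example of prefix caching, not LMCache's actual API): if a request shares a prefix with an earlier one, the server can reuse the cached KV state and only prefill the new suffix.

```python
# Conceptual illustration of prefix KV caching (hypothetical class,
# not LMCache's real interface).
from typing import Optional, Tuple


class PrefixKVCache:
    """Toy cache mapping token prefixes to their (mock) KV state."""

    def __init__(self) -> None:
        self._store: dict = {}

    def put(self, tokens: Tuple[int, ...], kv_state: object) -> None:
        # Store the KV state computed for this exact token prefix.
        self._store[tokens] = kv_state

    def longest_prefix(
        self, tokens: Tuple[int, ...]
    ) -> Tuple[Tuple[int, ...], Optional[object]]:
        # Find the longest cached prefix of the incoming sequence;
        # the model then only needs to prefill the uncached suffix.
        for end in range(len(tokens), 0, -1):
            prefix = tokens[:end]
            if prefix in self._store:
                return prefix, self._store[prefix]
        return (), None


cache = PrefixKVCache()
cache.put((1, 2, 3), "kv-for-[1,2,3]")
hit, kv = cache.longest_prefix((1, 2, 3, 4, 5))
# hit is (1, 2, 3): only tokens (4, 5) would need fresh prefill
```

Real systems like vLLM and LMCache hash token blocks rather than whole prefixes and manage GPU/CPU memory for the cached tensors, but the reuse principle is the same.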

## 📫 How to reach me:
