| | VLLM | EasyLLM |
| --- | --- | --- |
| Launched | | 2024 |
| Pricing Model | Free | Free |
| Starting Price | | |
| Tech used | | |
| Tag | Software Development, Data Science | Developer Tools, Chatbot Builder, Coding Assistants |
| Global Rank | | |
| Country | | |
| Month Visit | | |
Estimated traffic data from Similarweb
LLMLingua - Speeds up LLM inference and improves the model's focus on key information by compressing the prompt and KV-cache, achieving up to 20x compression with minimal performance loss (a usage sketch follows this list).
StreamingLLM - An efficient framework for deploying LLMs in streaming applications. It handles effectively unbounded sequence lengths without sacrificing performance and delivers up to 22.2x speedups, making it well suited to multi-round dialogue and everyday assistants (see the cache-policy sketch after this list).
LazyLLM - A low-code framework for multi-agent LLM applications: build, iterate, and deploy complex AI solutions quickly, from prototype to production, focusing on algorithms rather than engineering.
OneLLM - OneLLM is your end-to-end no-code platform to build and deploy LLMs.
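For reference, here is a minimal usage sketch of LLMLingua's prompt compression, assuming the llmlingua Python package and its documented PromptCompressor interface; the prompt text, instruction, question, and token budget below are illustrative placeholders.

```python
# Hedged sketch: compress a long prompt with LLMLingua before sending it to an LLM.
# Assumes `pip install llmlingua`; the default PromptCompressor downloads a
# compression model from Hugging Face (pass model_name=... to pick a smaller one).
from llmlingua import PromptCompressor

compressor = PromptCompressor()

long_prompt = "Background document text ... " * 500   # stand-in for a long context
result = compressor.compress_prompt(
    long_prompt,
    instruction="Summarize the document.",
    question="What are the key findings?",
    target_token=200,          # rough token budget for the compressed prompt
)

# The result contains the compressed prompt along with token counts and the ratio.
print(result["compressed_prompt"])
```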
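And a toy illustration of the idea behind StreamingLLM's streaming KV cache: keep a few initial "attention sink" tokens plus a sliding window of recent tokens, evicting everything in between so memory stays bounded. This is a conceptual sketch in plain Python, not the streaming-llm repository's actual API; the class and parameter names are invented for illustration.

```python
# Toy sketch of StreamingLLM's attention-sink cache policy (illustrative only):
# retain the first `sink_size` tokens forever plus the most recent `window_size`
# tokens, so the cache stays bounded no matter how long the stream runs.
from collections import deque


class SinkWindowCache:
    def __init__(self, sink_size: int = 4, window_size: int = 2000):
        self.sink_size = sink_size
        self.sinks = []                            # initial tokens, never evicted
        self.window = deque(maxlen=window_size)    # rolling window of recent tokens

    def append(self, kv_entry):
        """Add one token's KV entry; the deque drops the oldest non-sink entry when full."""
        if len(self.sinks) < self.sink_size:
            self.sinks.append(kv_entry)
        else:
            self.window.append(kv_entry)

    def entries(self):
        """The KV entries the model would attend over at the next decoding step."""
        return self.sinks + list(self.window)


cache = SinkWindowCache(sink_size=4, window_size=8)
for t in range(20):
    cache.append(f"kv_{t}")
print(cache.entries())   # kv_0..kv_3 (sinks) plus kv_12..kv_19 (recent window)
```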