Link: devforth.io/lab/chat
No signup needed. Every model available there can also be run on your own hardware with vLLM or a similar tool.
This is our playground chat UI where you can test popular open-source models for output quality, response latency, decoding speed, RAG summarization, and tool calling.
It was created primarily to help our clients evaluate open-source models on their own tasks, but we are sharing it with the community as well.
You can also set different levels of reasoning_effort.
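For those reproducing this locally, here is a minimal sketch of how a `reasoning_effort` hint is typically passed to an OpenAI-compatible chat endpoint (such as one served by vLLM). The model name, prompt, and helper function below are illustrative placeholders, not part of our UI; whether `reasoning_effort` is honored depends on the model and serving stack.

```python
import json

def build_chat_payload(model, prompt, reasoning_effort="medium"):
    """Build a chat-completions request body with a reasoning_effort hint.

    Hypothetical helper for illustration; typical effort values are
    "low", "medium", or "high".
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "reasoning_effort": reasoning_effort,
    }

# Example: request a high-effort answer from a locally served model.
payload = build_chat_payload("my-local-model", "Summarize this document.", "high")
print(json.dumps(payload, indent=2))
```

You would POST this body to the server's `/v1/chat/completions` route; the exact base URL depends on how you launched vLLM.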
Please leave comments if you wish us to add more models or features.