New Conversation
Welcome to BBSRadioTV Chat
Powered by Qwen3-0.6B via vLLM
💡 Explain quantum computing
💻 Write Python code
🧘 Benefits of meditation
📱 App brainstorming
Qwen can make mistakes. Consider checking important information.
Settings
vLLM API URL
The URL where your vLLM server is running
Model Name
The model name as configured in vLLM
Temperature: 0.7
Higher values make the output more random; lower values make it more focused and deterministic
Max Tokens
Maximum number of tokens in the response
System Prompt
You are Qwen, a helpful AI assistant. You provide clear, accurate, and thoughtful responses.
Instructions for how the AI should behave
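
These settings map directly onto an OpenAI-compatible chat completions request, which a vLLM server exposes by default. The following is a minimal sketch of such a request using the openai Python client; the base URL http://localhost:8000/v1, the placeholder API key, the max_tokens value, and the user message are assumptions for illustration, not values taken from this page.

# Minimal sketch: send one chat request to a vLLM server using the settings above.
# Assumptions (not from this page): the server runs locally on port 8000 and does
# not require a real API key, as in a default `vllm serve` deployment.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # vLLM API URL setting (assumed value)
    api_key="not-needed",                 # vLLM accepts any key unless one is configured
)

response = client.chat.completions.create(
    model="Qwen/Qwen3-0.6B",              # Model Name setting, as configured in vLLM
    messages=[
        {
            "role": "system",
            "content": "You are Qwen, a helpful AI assistant. "
                       "You provide clear, accurate, and thoughtful responses.",
        },
        {"role": "user", "content": "Explain quantum computing"},  # one of the suggestion prompts
    ],
    temperature=0.7,                      # Temperature setting
    max_tokens=1024,                      # Max Tokens setting (example value)
)

print(response.choices[0].message.content)

Any other OpenAI-compatible client (or a plain HTTP POST to /v1/chat/completions) would work the same way, since the settings above are simply passed through as request fields.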