v0.1 Open-source feedback platform

Turn community chat into a product backlog.

Feedbot listens to your Telegram groups, classifies every message with an LLM, and serves the structured backlog to Claude Code, Cursor, and Windsurf over MCP. Self-host in 30 seconds.

Try Feedbot Cloud · Self-host guide
~/feedbot — install live
curl -fsSL https://get.feedbot.dev | sh
One command. Real footage, no edits. ~30 seconds to a running stack.
MIT licensed · MCP-native · Postgres + FastAPI + React · Runs anywhere Docker runs
How it works

From a chaotic group chat to a triaged backlog in three steps.

Feedbot lives between your community and your tooling. It picks the signal out of the noise so your engineers can focus on shipping.

Connect a Telegram group

Generate an invite link in the dashboard, drop the bot into a group, and Feedbot starts listening. Replies in chat are tracked back to the original feedback.

Auto-classify with your LLM

Bring your own OpenAI or Anthropic key. Each message becomes a typed feedback row (bug · feature · question) with severity, tags, and a one-line summary. Monthly budgets cap surprises.
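
For illustration, here's roughly what a typed feedback row can look like as a Pydantic model. This is a sketch; the field names and the severity scale are assumptions, not Feedbot's actual schema:

  from enum import Enum
  from pydantic import BaseModel

  class FeedbackType(str, Enum):
      bug = "bug"
      feature = "feature"
      question = "question"

  class FeedbackRow(BaseModel):
      """Hypothetical shape of one classified message."""
      type: FeedbackType
      severity: int          # e.g. 1 (low) to 4 (critical); the scale is an assumption
      tags: list[str]
      summary: str           # one-line summary produced by the LLM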

Query from your editor over MCP

Every Feedbot install ships an MCP server at /mcp/. Drop the URL into Claude Code and ask "what bugs are open in project X?" — the model reads, filters, and triages from your IDE.
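
Prefer scripting over chat? The official MCP Python SDK speaks the same streamable-HTTP protocol. A minimal sketch; the URL, the list_feedback tool name, and its arguments are assumptions for illustration:

  import asyncio

  from mcp import ClientSession
  from mcp.client.streamable_http import streamablehttp_client

  async def main() -> None:
      # Connect to a Feedbot install over streamable HTTP (hypothetical URL).
      async with streamablehttp_client("https://feedbot.example.com/mcp/") as (read, write, _):
          async with ClientSession(read, write) as session:
              await session.initialize()
              tools = await session.list_tools()  # discover what the server exposes
              print([t.name for t in tools.tools])
              # Tool name and arguments are illustrative, not Feedbot's documented API.
              result = await session.call_tool("list_feedback", {"type": "bug", "status": "open"})
              print(result.content)

  asyncio.run(main())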

What's inside

Everything you need to run feedback like a system, not a folder of screenshots.

Self-host in 30 seconds

A single curl … | sh bootstraps a full Docker stack — API, dashboard, bot, Postgres, and TLS termination. No YAML, no .env editing, no Kubernetes.

Configure in the UI

SMTP, Telegram, domain, HTTPS — all in the dashboard. Save once; the orchestrator restarts only what changed.

Telegram-first conversational loop

Replies in Telegram are tracked back to the original issue. The bot acknowledges feedback with reactions and posts updates when the team marks an item done.
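
The threading hinges on Telegram's reply metadata. A rough sketch with python-telegram-bot (not Feedbot's actual handler; the in-memory dict is a stand-in for Postgres):

  from telegram import Update
  from telegram.ext import ApplicationBuilder, ContextTypes, MessageHandler, filters

  feedback_ids: dict[int, int] = {}  # Telegram message id -> feedback row id

  async def on_message(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
      msg = update.message
      if msg is None or msg.text is None:
          return
      replied = msg.reply_to_message
      if replied and replied.message_id in feedback_ids:
          # A reply: attach it to the thread of the original feedback row.
          print(f"reply on feedback #{feedback_ids[replied.message_id]}: {msg.text}")
      else:
          # A fresh message: create a new feedback row and remember the mapping.
          feedback_ids[msg.message_id] = len(feedback_ids) + 1

  app = ApplicationBuilder().token("YOUR_BOT_TOKEN").build()
  app.add_handler(MessageHandler(filters.TEXT, on_message))
  app.run_polling()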

MCP-native

Streamable-HTTP MCP server at /mcp/. Works with Claude Code, Claude Desktop, Cursor, Windsurf, and anywhere else that speaks MCP.

BYOK by default

LLM keys stay yours, Fernet-encrypted at rest. Per-project budget caps prevent runaway bills.
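
Fernet is symmetric, authenticated encryption from Python's cryptography library. A sketch of the at-rest pattern (Feedbot's actual storage code may differ):

  from cryptography.fernet import Fernet

  master_key = Fernet.generate_key()  # in practice, loaded from the server's secret store
  f = Fernet(master_key)

  # Encrypt the user's API key before it touches the database...
  token = f.encrypt(b"sk-your-openai-key")
  # ...and decrypt only in memory, right before calling the LLM.
  assert f.decrypt(token) == b"sk-your-openai-key"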

Open source, openly built

Every feature ships with a public roadmap, ADRs, and a changelog. MIT licensed forever. No "open core" pivot, no feature gating between free and paid — the cloud is ops-as-a-service.

Pricing

Free forever for self-host. Hosted plan when you'd rather not run servers.

Same product, two delivery modes. Migrate in either direction with a Postgres dump.

Self-host

$0 / forever

MIT licensed. Bring your own Docker host.

  • Unlimited projects, members, and feedback items
  • LLM auto-classification (BYOK)
  • MCP server included
  • Community support on GitHub
Read the quickstart
FAQ

Quick answers.

Will self-host stay free?

Yes — forever, MIT. We don't believe in feature-stripping the OSS to push people to a paid tier. The cloud is ops-as-a-service, not a different product.

Which platforms can I deploy on?

Anything that runs Docker. We test on Coolify, Dokploy, plain Docker behind nginx-proxy-manager, and Hetzner / DigitalOcean VMs. See the self-host guide.

Can I migrate self-host → cloud (or back)?

Yes. Postgres dump → restore. Same schema on both sides. A detailed migration guide ships with cloud GA.
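
A sketch of the round trip with standard Postgres tooling; both connection strings are hypothetical:

  import subprocess

  SRC = "postgresql://feedbot@localhost:5432/feedbot"           # self-host (hypothetical)
  DST = "postgresql://feedbot@db.feedbot.example:5432/feedbot"  # cloud (hypothetical)

  # Dump the source database in Postgres custom format...
  subprocess.run(["pg_dump", "--format=custom", "--file=feedbot.dump", SRC], check=True)
  # ...and restore it on the other side; same schema, so no migration step.
  subprocess.run(["pg_restore", "--clean", "--no-owner", f"--dbname={DST}", "feedbot.dump"], check=True)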

What about LLM costs?

You bring your own OpenAI or Anthropic key. Costs are tracked per project in llm_calls; monthly budget caps stop calls once you hit your limit.
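
A sketch of how such a cap can be enforced. The llm_calls table is real per the answer above, but the column names, the psycopg usage, and the $25 figure are assumptions:

  from datetime import datetime, timezone

  import psycopg  # psycopg 3

  MONTHLY_BUDGET_USD = 25.00  # hypothetical per-project cap

  def within_budget(conn: psycopg.Connection, project_id: int) -> bool:
      """True while this month's spend in llm_calls is under the cap.
      Column names (project_id, cost_usd, created_at) are assumptions."""
      month_start = datetime.now(timezone.utc).replace(
          day=1, hour=0, minute=0, second=0, microsecond=0)
      row = conn.execute(
          "SELECT COALESCE(SUM(cost_usd), 0) FROM llm_calls"
          " WHERE project_id = %s AND created_at >= %s",
          (project_id, month_start),
      ).fetchone()
      return float(row[0]) < MONTHLY_BUDGET_USD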

What does the MCP integration look like?

Streamable-HTTP MCP at /mcp/. Drop the URL into Claude Code, Cursor, or Windsurf and the model can list, read, filter, and update feedback rows on your behalf — same auth as the API.

Is there an enterprise tier?

Not at launch. We're focused on shipping the product. If you need a signed DPA, EU data residency, or a custom SLA, email us at [email protected].

Ship feedback like a feature, not a folder of screenshots.

30 seconds to spin up. No card required.