Built by engineers who broke too many pipelines

DataFlow started in 2023 when we got tired of Kafka + schema registry + connector fleets eating half our on-call hours. We wanted a single place to send events and query them back.

What we do

DataFlow Analytics is a hosted event pipeline. You point your apps at one HTTPS endpoint, and we take care of routing, schema evolution, storage, and replay. Query the result via SQL or stream it to your warehouse.
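To make that concrete, here's a minimal sketch of the round trip in Go: ship one event with a plain HTTPS POST, then query it back with SQL over the same API. The endpoint paths, payload shapes, and response format here are assumptions for illustration (the real API lives in the public docs), and authentication is omitted.

```go
// Minimal round trip: ship one event, query it back with SQL.
// Endpoints, payload shapes, and response format are hypothetical;
// authentication is omitted for brevity.
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// 1. Ship an event: one HTTPS POST, no collector in between.
	event := []byte(`{"type": "signup", "user_id": "u_123", "plan": "pro"}`)
	resp, err := http.Post(
		"https://ingest.dataflow.example/v1/events", // hypothetical endpoint
		"application/json",
		bytes.NewReader(event),
	)
	if err != nil {
		panic(err)
	}
	resp.Body.Close()

	// 2. Query it back with SQL over HTTP (hypothetical query endpoint).
	q := []byte(`{"sql": "SELECT user_id, plan FROM events WHERE type = 'signup'"}`)
	resp, err = http.Post(
		"https://api.dataflow.example/v1/query", // hypothetical endpoint
		"application/json",
		bytes.NewReader(q),
	)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // rows as returned by the (assumed) JSON API
}
```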

How we're different

  • No collectors. Native HTTP/2 ingest: no Fluentd, no Vector, no daemons to maintain (a quick protocol check is sketched after this list).
  • No schema registry. Schemas are inferred and versioned; breaking changes are opt-in.
  • No seat pricing. Pay for the GB you ship; invite as many teammates as you want at no extra cost.
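One nice consequence of the native HTTP/2 ingest: a stock client speaks it with zero setup. Go's standard-library HTTP client, for example, negotiates HTTP/2 over TLS via ALPN automatically, so a sketch like the one below can confirm the protocol on the wire with no sidecar involved. Same hypothetical endpoint as above.

```go
// Check which protocol a plain POST negotiates. Go's default HTTP
// client upgrades to HTTP/2 over TLS automatically when the server
// supports it; no extra configuration or local daemon is needed.
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	resp, err := http.Post(
		"https://ingest.dataflow.example/v1/events", // hypothetical endpoint
		"application/json",
		bytes.NewReader([]byte(`{"type": "ping"}`)),
	)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Proto) // prints "HTTP/2.0" when HTTP/2 was negotiated
}
```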

The team

Seven engineers, split between Berlin and Lisbon. Most of us came from infrastructure teams at mid-stage startups. We're backed by a small handful of angel investors and revenue — no VC, no growth targets that force us to ruin the product.

Where we run

Ingest endpoints in Frankfurt, Ashburn, and Singapore. Cold storage on S3-compatible object stores. No single-AZ dependencies, and if an entire region goes down, your events are routed to the next closest one.

Get in touch

Hit our public API docs if you want to try it. Or email hello at dataflow dot example; no SDRs, a real engineer will reply.