Senior Data Engineer

Keyrock · Brussels, Belgium | Europe · Posted today
Remote · Rust · Data & Distributed Systems · Python · SolidJS · ClickHouse · TimescaleDB · Docker · Terraform · Kafka · Web3

Job Description

About Keyrock

Since our beginnings in 2017, we've grown to be a leading change-maker in the digital asset space, renowned for our partnerships and innovation.

Today, we're over 250 team members around the world. Our diverse team hails from 42 nationalities, with backgrounds ranging from self-taught DeFi natives to PhDs. Predominantly remote, we have hubs in London, Brussels, and Singapore, and host regular online and offline hangouts to keep the crew tight.

We trade on more than 80 venues and work with a wide array of asset issuers. As a well-established market maker, we've expanded rapidly on the strength of our distinctive expertise: today, our services span market making, options trading, high-frequency trading, OTC, and DeFi trading desks.

But we’re more than a service provider. We’re an initiator. We were pioneers in adopting the Rust programming language for our algorithmic trading, and we champion its use across the industry. We support the growth of Web3 startups through our Accelerator Program. We upgrade ecosystems by injecting liquidity into promising DeFi, RWA, and NFT protocols. And we push the industry's progress with our research and governance initiatives.

At Keyrock, we're not just envisioning the future of digital assets. We're actively building it.

About the team and why the role exists

The Central Data Team (CDT) is only a few months old, but data has been Keyrock's lifeblood since day one. We're now building the Keyrock Data Platform to give Keyrockers, and the AI agents working alongside them, the data and context they need to act fast and autonomously, within agreed boundaries and aligned with our shared goals. Doing that means taking data from across the company and making sense of it in real time for all the functions that depend on it: trading desks, wealth and asset management, product, risk, finance, compliance, and research, to name a few.

You'd be one of the early hires in CDT, and we expect you to contribute to most of the bigger decisions and much of the build work. The standards we settle on and the way the team ends up working are still open.

What You’ll Do

  • Build streaming and batch pipelines that ingest, normalise, and distribute market, trading, and portfolio data, resilient to feed and exchange failures.

  • Build the self-serve tooling (SDKs, patterns, templates, AI agents) so other teams publish, consume, and build on data products without waiting on us.

  • Own data contracts and schema evolution. Keep schema changes from turning into multi-team coordination events.

  • Design the lakehouse and time-series layer around consumer query patterns.

  • Build and evolve the Data Governance and Data Quality Framework: stale-feed detection, schema validation, range checks, idempotent writes, lineage, ownership, self-healing (a stale-feed sketch follows this list).

  • Build the derived analytics the business runs on: cross-exchange spreads, VWAP at depth, order book microstructure for the desks; portfolio views, exposure, and performance for wealth and asset management (a VWAP-at-depth sketch follows this list).

  • Make observability, cost, and performance first-class from day one.

  • Treat infrastructure as code (Docker, Terraform, CI/CD) alongside our Central Infrastructure Team.

  • Work in the open: write things down, partner closely with Architecture, Infrastructure, Platform, and the rest of the teams.
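
To make the quality-framework bullet above concrete, here is a minimal sketch of stale-feed detection: per-feed freshness thresholds, with a feed flagged once updates stop arriving within its window. The threshold table, feed names, and default are illustrative assumptions, not our actual configuration.

```python
# Minimal sketch: flag a feed as stale when no update has arrived within a
# per-feed threshold. Thresholds and feed names are illustrative only.
import time

STALE_AFTER_S = {
    "binance.btcusdt.trades": 5.0,   # fast-moving market data
    "fx.reference_rates": 300.0,     # slow reference data
}

def stale_feeds(last_seen: dict[str, float], now: float | None = None) -> list[str]:
    """last_seen maps feed name -> unix timestamp of the latest message."""
    now = time.time() if now is None else now
    return [
        feed
        for feed, ts in last_seen.items()
        if now - ts > STALE_AFTER_S.get(feed, 60.0)  # assumed 60 s default
    ]
```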
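And a minimal sketch of one derived analytic from the list, VWAP at depth: the average price paid to fill a target quantity by walking one side of the order book. The function name and book layout are assumptions made for illustration.

```python
# Minimal sketch: volume-weighted average price to fill a target quantity
# against one side of an order book, best price first.

def vwap_at_depth(levels: list[tuple[float, float]], target_qty: float) -> float:
    """levels: (price, size) tuples sorted best-first; returns the average
    fill price for target_qty, walking the book level by level."""
    remaining = target_qty
    notional = 0.0
    for price, size in levels:
        take = min(size, remaining)
        notional += take * price
        remaining -= take
        if remaining <= 0:
            break
    if remaining > 0:
        raise ValueError("order book too shallow for target quantity")
    return notional / target_qty

# Example: buy 2.5 units against the ask side of a book.
asks = [(100.0, 1.0), (100.5, 1.0), (101.0, 2.0)]
print(vwap_at_depth(asks, 2.5))  # (100.0 + 100.5 + 0.5 * 101.0) / 2.5 = 100.4
```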

What We’re Looking For

Tools matter less to us than how you think about problems. That said, here's the shape of the person we have in mind.

Engineering Craft

  • 8+ years of building production data systems that other people rely on.

  • Strong proficiency in Python and SQL: not just being able to write a query, but being able to reason about what the engine is doing with it (see the query-plan sketch after this list).

  • Code that's easy for someone else to read, test, and delete later.

  • Strong understanding of data modelling for both streaming and analytical workloads.

  • You take efficiency, quality, idempotency, and observability seriously by default.
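
The query-plan habit named above can be as simple as this sketch, which uses the stdlib sqlite3 module as a stand-in for whichever engine you're actually on; the table, index, and query are made up for illustration.

```python
# Sketch of "reason about what the engine is doing": check whether a query
# can use an index before shipping it. sqlite3 stands in for the warehouse.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (ts INTEGER, venue TEXT, px REAL, qty REAL)")
conn.execute("CREATE INDEX idx_trades_venue_ts ON trades (venue, ts)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT avg(px) FROM trades WHERE venue = ? AND ts >= ?",
    ("binance", 0),
).fetchall()
for row in plan:
    print(row)  # expect: SEARCH trades USING INDEX idx_trades_venue_ts ...
```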

Systems Design

  • You've designed and operated streaming systems on Kafka, Redpanda, MSK, or Kinesis, and you have opinions about partitioning, consumer groups, offsets, and schema registries.

  • You've used a time-series store in production (ClickHouse ideally; TimescaleDB, QuestDB, or similar are fine too) and can talk about table design as a function of query patterns.

  • You've worked with a lakehouse architecture and reason about table layout, partitioning, and compaction as design choices that shape query performance and storage cost.

  • You build for self-healing and idempotency: reprocessing is safe, retries don't double-write, and the system recovers without a human in the loop (see the consumer sketch after this list).
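
As a simplified illustration of that last bullet, here is a sketch of at-least-once consumption made safe by an idempotent write: offsets are committed only after the row is durable, and the upsert key makes redelivery a no-op. It assumes confluent-kafka and sqlite purely for illustration; the topic, group, and schema names are made up.

```python
# Sketch: at-least-once Kafka consumption made safe by an idempotent sink.
import json
import sqlite3

from confluent_kafka import Consumer

db = sqlite3.connect("trades.db")
db.execute(
    "CREATE TABLE IF NOT EXISTS trades ("
    " trade_id TEXT PRIMARY KEY, ts INTEGER, px REAL, qty REAL)"
)

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "trades-loader",
    "enable.auto.commit": False,      # commit only after a durable write
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["normalised.trades"])

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None or msg.error():
            continue
        t = json.loads(msg.value())
        # Upsert keyed on trade_id: reprocessing the same message is a no-op,
        # so redelivery after a crash cannot double-write.
        db.execute(
            "INSERT INTO trades (trade_id, ts, px, qty) VALUES (?, ?, ?, ?) "
            "ON CONFLICT(trade_id) DO NOTHING",
            (t["trade_id"], t["ts"], t["px"], t["qty"]),
        )
        db.commit()
        consumer.commit(msg)          # offset moves only past persisted rows
finally:
    consumer.close()
```

The design choice in the sketch is to put idempotency in the sink rather than rely on the broker: exactly-once delivery isn't assumed anywhere.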

Operational Readiness

  • Docker, Terraform, and CI/CD are how you work, not a separate "DevOps" thing.

  • You think about cost and performance early.

  • You instrument as you build: logs, metrics, and traces are part of the system from day one.

  • You design for data quality and governance up front, covering contracts, validation, lineage, and ownership (a contract-validation sketch follows this list).
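
A minimal sketch of what "contracts and validation up front" can look like, using pydantic; the field names, bounds, and quarantine handling are assumptions for illustration.

```python
# Sketch of a data contract enforced at the publish boundary: a versioned
# schema with range checks, validated before a record enters the platform.
from pydantic import BaseModel, Field, ValidationError

class TradeV1(BaseModel):
    schema_version: int = 1
    trade_id: str
    venue: str
    px: float = Field(gt=0)          # range check: prices must be positive
    qty: float = Field(gt=0)
    ts_ns: int = Field(ge=0)         # event time, unix nanoseconds

def validate_or_quarantine(raw: dict) -> TradeV1 | None:
    try:
        return TradeV1(**raw)
    except ValidationError as exc:
        # Bad records route to quarantine instead of poisoning downstream
        # consumers; the routing itself is left out of this sketch.
        print(f"quarantined: {exc.errors()}")
        return None

validate_or_quarantine({"trade_id": "t-1", "venue": "binance",
                        "px": 100.5, "qty": -1, "ts_ns": 0})  # fails qty > 0
```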

How You Think and Work

  • You reason from first principles when a problem is new, stay pragmatic when it isn't, and update your view when you learn more.

  • You treat the trading desks, wealth and asset management, product, risk, finance, compliance, and research as customers of what you build, and communicate with them that way.

  • You optimise for outcomes over output. A smaller, simpler thing that ships and works beats a bigger thing that doesn't.

  • You take ownership end-to-end: design, ship, operate, improve.

  • You say what you think, including when it's an unpopular take. You change your mind when the argument is better.

  • You make the people around you better. Reviews are real, juniors grow from working with you, and peers want to work with you again.

  • You're curious about how markets work. Data engineering on its own won't keep you interested here.

  • You're honest about what you know and what you don't, and quick to close the gap.

  • You understand financial market data: order books, trades, reference data, portfolios, exposures. Experience in crypto, TradFi, or both is a strong plus.

Nice to Have

  • Lakehouse experience with Apache Iceberg or Delta Lake.

  • Familiarity with DataHub or similar metadata/lineage platforms.

  • Rust. Some of our performance-critical services are written in it. Interest is welcome; fluency isn't required.

What You’ll Get

  • A from-scratch mandate. You'll be among the first hires in CDT, and the platform, the standards, and the team culture are yours to shape with us.

  • Strong partners. Close working relationships with Architecture, Infrastructure, Platform, and the desks themselves. You won't be building the data platform in a vacuum.

  • Autonomy on how you work. Flexible hours, remote-first, business-hours on-call shared across the team.

  • A competitive salary package, with various benefits.

  • A team that likes each other. Regular online get-togethers and a yearly onsite where everyone's in the same room.

Our Recruitment Process

We’re after people with the right skills who’ve made a conscious choice to join our field. The perfect fit? Someone who’s driven, collaborative, acts with ownership, and delivers solid, scalable outcomes. As an employer, we are committed to building a positive and collaborative work environment. We welcome employees of all backgrounds, and hire, reward, and promote entirely based on merit and performance. Due to the nature of our business and external requirements, we perform background checks on all potential employees; passing them is a prerequisite to joining Keyrock.
