Data Engineer
Coople Poland • Warsaw and surrounding area
We’re looking for a pragmatic and hands-on Full Stack Data Engineer to take end-to-end ownership of our data platform. A key part of this role is data democratisation: bringing trusted data to everyone at Coople in a way that is easy to use, discoverable, and consistent. As a data-driven organisation, we want to evolve to an environment where data is truly available “everywhere” and is a default input into day-to-day and strategic decisions.
Our core stack includes Databricks (Data Lake) and Power BI (reporting). The next step is to evolve our data development environment and data stack into an AI-first setup: we want to use AI wherever it creates leverage (development, testing, documentation, data quality checks, incident response, and stakeholder enablement), and continuously optimize our stack and ways of working for faster, safer delivery.
To make data accessible, we also want to enable conversational access to data (a chat interface for metrics, definitions, and insights) so that non-technical stakeholders can reliably get answers fast, with clear context and trust signals. You will work closely with development teams and stakeholders in Warsaw (partly in person/hybrid) and you’ll often act as the “glue” between product engineering, business users, and clients in other locations too.
In this role you can expect, among other things:
End-to-end data ownership
- Own the full data lifecycle: ingest data from source systems, build and maintain reliable pipelines into the Data Lake, and curate datasets that are trustworthy, traceable, and easy to use for analytics, product teams, and reporting.
- Deliver high-signal dashboards and reporting in Power BI, while partnering with stakeholders to refine requirements, standardise metric definitions, and improve discoverability and self-service through documentation, catalogs, and AI.
Data democratisation & decision enablement
- Make data easy to use for everyone at Coople by reducing friction to get answers and maintaining a clear single source of truth for key metrics, definitions, and ownership.
- Enable a “data in every decision” culture by embedding insights into planning and operations, and by helping shape a chat-based interface that lets stakeholders query data in natural language with reliable, contextualised answers.
AI-first data engineering
- Evolve the data stack and development environment toward AI-optimised workflows, including prompting patterns, automation, and review practices.
- Use AI to improve data quality, testing, lineage documentation, and operational runbooks while continuously improving speed, reliability, and maintainability of the platform.
Collaboration & operational excellence
- Partner closely with Warsaw-based development teams to ensure robust data contracts, event definitions, and integrations across systems. Other teams within Coople are based in London and Zürich (approximately 120 employees in total across all three locations).
- Maintain strong operational hygiene through monitoring, alerting, incident response, and continuous improvement of engineering standards and practices.
Our stack & tools
- Data Lake: Databricks on AWS
- Reporting: Power BI
- Cloud: AWS
- Dev & Ops: GitHub (Actions/PRs), Datadog
- Productivity: Linear, Notion
- AI tooling: AI-assisted development with Cursor
Skills required
- Strong data engineering fundamentals: SQL, data modelling, ETL/ELT patterns, and operating data pipelines in production
- Databricks experience is a must (Spark/Delta, notebooks, jobs/workflows, Delta Lake concepts) and building/operating a Data Lake
- Power BI knowledge is a must (data modelling, measures, performance considerations, and stakeholder-facing reporting)
- Strong engineering standards: CI/CD, testing strategies, observability, performance tuning, and production readiness for data pipelines
- Experience owning data end-to-end: ingestion, transformations, data quality, and delivery to business-facing artifacts
- Experience enabling data democratisation (self-service, documentation, metric definitions, and usability for non-technical stakeholders)
- Interest in conversational data experiences (chat-based access to trusted metrics, definitions, and insights)
- AI-first development mindset: comfortable using AI tools to speed up delivery (e.g., code generation, review, test creation, documentation) and strong judgment on validation, correctness, and safety
- Clear communication with business stakeholders, dev teams, and clients, including translating needs into data products
- Fluent English
Interview process
Our interview process is designed to be transparent, fast, and focused on mutual fit.
- Prescreening: conversation with our internal recruiter
- Technical interview
- Hiring Manager Conversation: discussion with the VP Engineering and VP Product
- Optional: peer-to-peer conversation with the team
- Closing: expect a response within a few days after the final stage
Salary & benefits
- PLN 18,000–25,000 monthly base salary (employment contract; on request we can consider B2B as well)
- 10% target bonus and ESOP eligibility
- Hybrid working model (on average 1–2 days per week on-site)
- 26 days’ annual leave
- Team events and company gatherings
Why join Coople
At Coople, we connect companies and workers through a platform built for flexible work. We help businesses plan smarter and support people in finding jobs that fit their lives. Our values – Agile, Collaborative, Empowering, Fair, and Passionate – shape our team spirit and define the way we work to truly make a difference.
In this role, you’ll help shape how we build and operate a modern data platform in an AI-first engineering environment, with real ownership, close collaboration with Warsaw-based dev teams, and direct impact on business outcomes.