
Data Platform Engineer (Databricks/Python/AI)
Square One Resources • Warszawa, Mokotów
💰 Salary
25,200–28,560 PLN/month
📋 Information
🛠 Required technologies
Azure Databricks, Python, SQL
🌐 Required languages
English (B2/C1)
📝 Your responsibilities
- Develop and implement production-ready code for the Cloud Data Platform, including data structures, transformation workflows, and quality monitoring.
- Optimize SQL and Python scripts for efficient data processing in a cloud environment.
- Collaborate closely with data analysts, DevOps engineers, and architects on platform development and change implementation.
- Deliver both functional and non-functional requirements related to data flow modifications.
- Implement solutions specific to Data Lakehouse and Data Intelligence Platform products.
- Define rules and build tools supporting data quality control processes.
- Develop and implement data management and data governance best practices.
- Work with IT system administrators to design optimal interfaces for data exchange with the Cloud Data Platform.
- Identify, integrate, and configure reference data interfaces between the Data Platform and source systems.
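To give a feel for the "define rules and build tools supporting data quality control" responsibility, here is a minimal, hypothetical sketch of a rule-based quality check in plain Python. The rule names, column names, and thresholds are invented for illustration; on the actual platform such checks would typically run against Databricks tables rather than in-memory rows.

```python
# Hypothetical sketch of rule-based data quality checks.
# Column names ("customer_id", "amount") are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable


@dataclass
class QualityRule:
    name: str
    check: Callable[[dict], bool]  # returns True when a row passes


def run_checks(rows: list[dict], rules: list[QualityRule]) -> dict[str, int]:
    """Count failing rows per rule over a batch."""
    failures = {rule.name: 0 for rule in rules}
    for row in rows:
        for rule in rules:
            if not rule.check(row):
                failures[rule.name] += 1
    return failures


rules = [
    QualityRule("customer_id_present", lambda r: r.get("customer_id") is not None),
    QualityRule("amount_non_negative", lambda r: r.get("amount", 0) >= 0),
]

batch = [
    {"customer_id": 1, "amount": 100.0},
    {"customer_id": None, "amount": 50.0},
    {"customer_id": 2, "amount": -5.0},
]

print(run_checks(batch, rules))
# {'customer_id_present': 1, 'amount_non_negative': 1}
```

In production the per-rule failure counts would feed the platform's quality monitoring rather than a `print` call.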
Our requirements
- Previous roles as a Data Engineer, Data Warehouse Developer, Data Analyst, Business Intelligence Developer, or related positions.
- Hands-on experience with Data Warehouse, Data Platform, or Lakehouse implementation or enhancement projects.
- Experience with documentation such as Source-to-Target matrices, Business Requirements, and a Business Glossary.
- Participation in complex data integration projects involving multiple sources.
- Experience optimizing SQL and Python code using AI models.
- Experience in UAT (User Acceptance Testing) with key organizational stakeholders.
- Teamwork experience in Agile environments.
- Advanced proficiency in ANSI SQL.
- Advanced Python programming skills.
- Excellent communication skills and the ability to build business relationships remotely.
- Knowledge of and practical experience with CI/CD change management standards.
- Accuracy and an entrepreneurial mindset in task execution.
- Practical experience with Azure Databricks Lakehouse technology.
- Experience using Databricks connectors for source system integration.
- Ability to design and orchestrate complex data flows in a medallion architecture for Enterprise Data Platforms.
- Experience applying LLM AI models for code generation in data transformation and ingestion on Databricks.
- Ability to design data models, including Star and Snowflake schemas, for efficient analytics.
- Knowledge of performance optimization techniques for data warehouses, such as indexing, partitioning, and caching.
- Monitoring new technologies and trends in data warehousing and proposing their adoption for platform optimization.
- English proficiency at B2/C1 level.
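The star-schema skill in the list above can be illustrated with a small sketch: a fact table keyed to a dimension table, aggregated by a dimension attribute. All table and column names here are invented for the example, not taken from the posting; in practice this join would be expressed in SQL or PySpark over lakehouse tables.

```python
# Illustrative star-schema aggregation: a sales fact table joined to a
# product dimension, rolled up by category. Names are assumptions.
fact_sales = [
    {"product_id": 10, "amount": 120.0},
    {"product_id": 11, "amount": 80.0},
    {"product_id": 10, "amount": 40.0},
]

dim_product = {
    10: {"name": "Widget", "category": "Hardware"},
    11: {"name": "License", "category": "Software"},
}


def revenue_by_category(facts: list[dict], dim: dict) -> dict[str, float]:
    """Sum fact amounts grouped by a dimension attribute (category)."""
    totals: dict[str, float] = {}
    for row in facts:
        category = dim[row["product_id"]]["category"]
        totals[category] = totals.get(category, 0.0) + row["amount"]
    return totals


print(revenue_by_category(fact_sales, dim_product))
# {'Hardware': 160.0, 'Software': 80.0}
```

The point of the schema is exactly this shape: narrow fact rows carrying keys and measures, with descriptive attributes normalized into dimensions.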
This is how we work: at the client's site.
This is how we work on a project: Continuous Integration, Continuous Deployment.
SQUARE ONE RESOURCES sp. z o.o.
At Square One Poland we connect IT experts with business. With over 25 years of experience, we specialize in recruitment processes on a global scale, yet we have kept our startup DNA, and that is our advantage. Our offices are located in London and Warsaw, but we reach clients all over the world, from start-ups to large global corporations.
About the project
You will be part of a team developing production-ready software for a Cloud Data Platform. Your work will focus on designing and implementing data structures, data transformation workflows, and quality monitoring mechanisms. The project involves optimizing data processing scripts in SQL and Python for cloud environments and collaborating closely with data analysts, DevOps engineers, and solution architects to evolve and maintain the Data Platform.
This role is central to implementing data engineering solutions for Data Lakehouse and Data Intelligence Platforms, ensuring compliance with data management and governance best practices, and integrating multiple data sources efficiently.
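The transformation workflows described above usually follow a medallion-style layering on Databricks. As a minimal, hypothetical sketch of the bronze → silver step (raw records standardized and de-duplicated), in plain Python with invented field names; the real implementation would use PySpark and Delta tables:

```python
# Sketch of a bronze -> silver medallion step: normalize fields and drop
# duplicate ids, keeping the first record seen. Field names are assumed.
bronze = [
    {"id": "1", "email": " Alice@Example.com "},
    {"id": "1", "email": "alice@example.com"},
    {"id": "2", "email": "BOB@example.com"},
]


def to_silver(records: list[dict]) -> list[dict]:
    """Clean raw (bronze) rows into a conformed (silver) set."""
    seen: set[str] = set()
    silver = []
    for rec in records:
        rid = rec["id"]
        if rid in seen:  # de-duplicate on the business key
            continue
        seen.add(rid)
        silver.append({"id": int(rid), "email": rec["email"].strip().lower()})
    return silver


print(to_silver(bronze))
# [{'id': 1, 'email': 'alice@example.com'}, {'id': 2, 'email': 'bob@example.com'}]
```

A subsequent silver → gold step would then aggregate these conformed rows into analytics-ready tables.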