Senior Data Engineer
Find your best career opportunity and #growITwithus!
Job description
We are seeking a talented and motivated Senior Data Engineer to join our team. You will play a key role in designing, building, and optimizing next-generation Lakehouse data platforms. This is a hands-on role focused on transforming complex legacy datasets into structured, high-quality data products that power analytics and operational workflows. We are looking for a proactive, independent problem-solver who excels in a fast-paced rebuild environment, maintains code stability (including AI-generated code), and drives rapid iteration.
Your responsibilities
• Design and implement robust, scalable data ingestion and transformation pipelines using Databricks, PySpark, and distributed processing, and utilize Airflow (or similar) for reliable workflow orchestration
• Implement Delta Lake principles, focusing on CDC and schema evolution, and establish data quality frameworks (e.g., Great Expectations) within CI/CD pipelines for data integrity
• Develop and optimize complex SQL and Python scripts, integrating diverse data sources (APIs, S3 files, etc.) and handling both structured and unstructured data
• Support the implementation of data governance and cataloguing solutions (e.g., Unity Catalog) and proactively investigate and improve inconsistent legacy datasets
• Guide and manage AI agents for code generation, ensuring quality and stability
• Work pragmatically and collaboratively to drive technical solutions
Our requirements
To succeed in this role, you must have:
Technical & Professional Qualifications
• 5+ years of professional experience in data engineering, focused on cloud and distributed processing environments
• Strong experience with Databricks, PySpark, distributed processing, and Delta Lake, with deep knowledge of CDC and schema evolution
• Expert SQL optimization and Python skills, with hands-on experience using Airflow (or similar)
• Familiarity with relevant AWS components and good understanding of CI/CD for data workflows and implementing data quality frameworks
• Knowledge of streaming platforms (Kafka/Kinesis) and exposure to Unity Catalog or similar governance tools
• Proactive problem-solver, comfortable working with complex, inconsistent data without needing explicit specifications
• Able to work independently in a fast-paced rebuild environment, collaborative, pragmatic, and thriving on rapid iteration
• Skilled at directing AI agents to generate code while maintaining high quality standards
Benefits
• Internal Mobility Program – enjoy many opportunities for career growth, job rotations, diversity of projects & technologies
• Referral Program – bring your colleagues on board and get a bonus!
• 5/10 Years NATEK Club – we offer long-term cooperation and celebrate every fifth anniversary with gifts
• NATEK CSR events & team building – live our values of accountability, partnership, and expertise, and #workITwithus
• Education – benefit from our know-how, learning courses, certificates, and IT events; be part of a community of industry leaders and take part as a speaker or an attendee
• Multisport
• Technical and Personal training
• Language courses
Additionally, for employment contracts only:
• Cafeteria
• Financial Contribution for Meals
• Life insurance
• Bonuses for Personal Events
• Day off for volunteering