Big Data Engineer

Job description

This project builds a strategic data-sourcing and analytics hub. The hub will provide data for the bulk of current and future HR integrations and data-analytics needs, supplying additional data for cross-domain HR analytics and for the bank's enterprise systems. The work includes building semantic (web-ontology) domain models. The core project team is based in Poland and delivers new functionality using the Agile (Scrum) methodology.

Your responsibilities

• Work as the Big Data developer in a self-organizing Scrum team
• Implement data sourcing and transformation code, perform semantic data modelling, and build APIs
• Manage the deployment, maintenance, and L3 user support of the reporting / data access tool(s)
• Create technical documentation of all delivered artifacts
• Perform other duties as assigned

Our requirements

• Project experience (min. 2 years) with at least one of the following Big Data platforms is a must: Cloudera or Hortonworks
• Knowledge of Hadoop ETL tools (Sqoop, Impala, Hive, Oozie)
• SQL programming skills
• Bash scripting experience (min. 2 years)
• Working knowledge of at least one of the programming languages: Python, PySpark, R, Scala, Java (min. 2 years)
• Self-motivated team player with good problem-solving skills
• Ability to meet tight deadlines and work under pressure

What we offer

• Professional career growth through matching your skills and plans with suitable projects
• Work in a modern environment with innovative technologies
• Attractive salary reflecting your skills and experience
• Flexible working hours
• Company social events
• Private healthcare
• Multisport card
• For contractors – eligibility for up to 23 additional days
