Data Engineer

Job description

You’ll be joining a team of passionate developers working on projects for local and international companies as well as on academic research. The role covers every phase of end-to-end delivery: direct contact with the client, business analysis of the problem, designing an appropriate solution, implementing it, and moving it to the production environment. You’ll cooperate directly with machine learning team members, either leading a project or assisting at various stages.


Job requirements

  • MS/PhD (or BS plus 3-4 years of industry experience) in Computer Science or a related field

  • Minimum of 3 years of experience working on data-intensive projects

  • Independent worker: as the company's only Data Engineer, you must be able to independently drive data priorities forward

  • Expertise with relational databases and SQL querying and scripting

  • Expertise in Python

  • Demonstrated ability to write high-quality, production-ready code (readable, well-tested, with well-designed APIs)

  • Familiarity with DevOps-related concepts and tools (e.g. Docker, Kubernetes, Terraform)

  • Passion for architecting large distributed systems with elegant interfaces that can scale easily

  • Experience in areas relevant to data engineering, including data management, custom ETL design, and data modeling

  • Ability to communicate effectively and collaborate with people of diverse backgrounds and job functions

  • Familiarity with cloud computing services (AWS or GCP)

  • Familiarity with web services and application frameworks (Django, Flask, FastAPI)

  • Proficiency in a Linux environment (including shell scripting), and experience with version control practices and tools

  • Experience working with machine learning and/or data science stakeholders to accelerate their workflows

  • Experience with large data sets and associated technologies such as Spark, BigQuery, Hive, Flink, Kafka, etc.

Nice to Have

  • Expertise in additional general-purpose programming languages (such as Java, Scala, C/C++, or Go)

  • Hands-on experience with cloud data warehouses (Snowflake or Redshift) and big data technologies (e.g. S3, Hadoop, Hive, Spark, Flink, Kafka)

  • Working knowledge of statistics and various flavors of statistical modeling techniques

  • Experience with deploying ML models

  • Experience with MLOps-related concepts and tools (e.g. MLflow, Neptune, Kubeflow, W&B)

Salary: 11 000 - 17 000 PLN + VAT (B2B)


We offer you:

  • working with the newest machine learning technologies
  • 1,500 PLN self-development budget per year
  • opportunities to contribute to a variety of interesting projects
  • seniority level check
  • internal workshops
  • personal branding opportunities (writing articles, speaking at conferences, leading internal workshops)
  • flexible work hours
  • remote work possibility
  • chillout room / free beverages / team & company events
  • MultiSport Plus
  • LuxMed VIP
  • friendly atmosphere