Senior Data Engineer

  • Permanent
  • Full time
  • Remote
  • Tech

About Us

Divbrands is a global, remote-working e-commerce company. We focus on data-driven decision-making to launch unique D2C apparel brands to specific audiences across the globe. Because of our intense research and targeting process, we only launch when we know a brand will succeed. With divbranders from different countries, we embody a company culture that is vibrant, diverse, playful, and performance-driven.

We research, develop, and build our own brands online. We specialize in creative production with regional creative hubs that make agile, data-backed creatives that convert sales. Each brand, product, and piece of creative content is "glocalised" for our various local, regional, and nationally diverse customers around the globe. We believe that online shopping behaviour never stays the same.

Consumers face a rapidly changing social, economic, and retail environment, demanding that brands become versatile, data-based, and driven by actual user behaviour. Divbrands has been doing this for over seven years - our mission is to achieve the most satisfying e-commerce experience for the modern shopper, wherever they might be.

In a nutshell:

We are looking for a Senior Data Engineer to:

  • Lead the design, development, and optimization of scalable data pipelines, data warehouses, and lakes, ensuring data quality, governance, and alignment with business needs.
  • Act as the technical reference for the team, offering expertise in data architecture, pipeline optimization, and scalability challenges, driving data quality best practices.
  • Bring hands-on experience with orchestration and data ingestion tools (e.g., Apache Airflow, Prefect) and proficiency in data transformation using DBT. Additionally, knowledge of distributed data processing frameworks like Apache Spark and experience with Databricks is a plus.


What will you do?

As a Senior Data Engineer, you will lead the architecture, development, and optimization of scalable data pipelines, data warehouses, and data lakes that form the backbone of our analytics infrastructure. You'll act as a key technical reference, solving complex data challenges and ensuring that our systems are both robust and scalable. You will play a critical role in maintaining the integrity and reliability of our datasets, enabling the business to make data-driven decisions confidently.

In this role, you will collaborate closely with cross-functional teams to translate business needs into actionable analytical datasets. You'll drive the adoption of cutting-edge tools and frameworks, improving efficiency and maintainability across the data engineering function.

What's the stack?

  • Programming languages: Python and SQL.
  • Big data processing and storage: powered by Google Cloud and Databricks.
  • Data ingestion and orchestration: Apache Airflow, Airbyte, and Fivetran.
  • Data transformation: DBT.
  • Infrastructure: hosted on Google Cloud Platform (GCP).

Desired skills and qualifications:

For this role, we expect someone with at least four years of hands-on experience in data engineering or a similar role.

Technical Skills:

  • Advanced skills in Python and SQL, with the ability to write efficient queries and scripts for data manipulation.
  • Expertise with DBT for data transformation.
  • Experience with pipeline orchestration tools such as Apache Airflow or Prefect.
  • Strong knowledge of database design principles and modern data architectures.
  • Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and data warehouse services such as Redshift, BigQuery, Snowflake, or ClickHouse.
  • Experience with Databricks and distributed data processing frameworks such as Apache Spark, Dask, or Flink is a plus.
  • Experience with Docker and Terraform is a plus.

Soft Skills:

  • Good communication skills to effectively interact with both technical and non-technical stakeholders.
  • Problem-solving ability, with a focus on finding innovative and scalable solutions.


What's in it for me?

  • Fully remote
  • Competitive salary with regular performance evaluations
  • High-growth e-commerce career and development path
  • Highly skilled colleagues to learn from
  • 26 days of paid leave
  • Parental leave
  • Supportive and uplifting peers
  • Training and development towards cloud certification (professional level)


What's next?

Apply for the position here. Please include a CV and a link to your GitHub profile; we may ask for other materials during the process.


FAQ

  • How long is the contract for?
    This is a full-time permanent contract with a six-month trial period.
  • What does the process look like?
    Initially, you'll have a screening interview with our HR team, followed by an interview with the Head of Data. Next, you'll complete a live test, which should take no more than 40 minutes. Finally, you will have a concluding interview with the CTO.
    We expect the process to take 2-3 weeks. We try to keep it lean and straightforward.
  • Can I expect feedback if not considered?
    Unfortunately, at least during the initial stages, we can't give feedback to everyone who applies.
  • Do you work with external recruiters or outsourcing agencies?
    No. Please refrain from sending us an offer for your services.