Data Engineer, Analytics Data Engineering

Dropbox
US
Remote

Job Description

Role Description

In this role you will build large, scalable analytics pipelines using modern data technologies. This is not a “maintain existing platform” or “make minor tweaks to current code base” kind of role. We are effectively building from the ground up and plan to leverage the most recent Big Data technologies. If you enjoy building new things without being constrained by technical debt, this is the job for you!

Our Engineering Career Framework is publicly viewable and describes what’s expected of our engineers at each career level. Check out our blog post on this topic and more here.

Responsibilities

Help define company data assets (data models) and the Spark and SparkSQL jobs that populate them

Help define and design data integrations and data quality frameworks, and evaluate open-source and vendor tools for data lineage

Work closely with Dropbox business units and engineering teams to develop a long-term Data Platform architecture strategy that is efficient, reliable, and scalable

Conceptualize and own the data architecture for multiple large-scale projects, while evaluating design and operational cost-benefit tradeoffs within systems

Collaborate with engineers, product managers, and data scientists to understand data needs, representing key data insights in a meaningful way

Design, build, and launch collections of sophisticated data models and visualizations that support multiple use cases across different products or domains

Optimize pipelines, dashboards, frameworks, and systems to facilitate easier development of data artifacts

Many teams at Dropbox run services with on-call rotations, which entails being available for calls during both core and non-core business hours. If a team has an on-call rotation, all engineers on the team are expected to participate in the rotation as part of their employment. Applicants are encouraged to ask for details about the rotation for the team to which they are applying.

Requirements

5+ years of Spark, Python, Java, C++, or Scala development experience

5+ years of SQL experience

5+ years of experience with schema design, dimensional data modeling, and medallion architectures

Experience with the Databricks platform and data lake architectures for large-scale data processing and analytics

Excellent strategic product thinking and communication skills, with the ability to influence product and cross-functional teams by identifying data opportunities that drive impact

BS degree in Computer Science or related technical field involving coding (e.g., physics or mathematics), or equivalent technical experience

Experience designing, building and maintaining data processing systems
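As an illustration of the medallion-style modeling the requirements above refer to, here is a minimal, hypothetical sketch (plain Python standing in for Spark/SparkSQL; all table and field names are invented for illustration) of promoting raw "bronze" records to a cleaned "silver" layer and a business-level "gold" aggregate:

```python
from collections import defaultdict

# Bronze layer: raw events as ingested, possibly duplicated or malformed.
bronze = [
    {"event_id": "e1", "user": "alice", "bytes": "1024"},
    {"event_id": "e1", "user": "alice", "bytes": "1024"},  # duplicate ingest
    {"event_id": "e2", "user": "bob", "bytes": "2048"},
    {"event_id": "e3", "user": None, "bytes": "512"},      # missing user
]

def to_silver(rows):
    """Silver layer: deduplicate on event_id, drop invalid rows, cast types."""
    seen, out = set(), []
    for r in rows:
        if r["event_id"] in seen or r["user"] is None:
            continue
        seen.add(r["event_id"])
        out.append({"event_id": r["event_id"],
                    "user": r["user"],
                    "bytes": int(r["bytes"])})
    return out

def to_gold(rows):
    """Gold layer: business-facing aggregate, total bytes per user."""
    totals = defaultdict(int)
    for r in rows:
        totals[r["user"]] += r["bytes"]
    return dict(totals)

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)  # {'alice': 1024, 'bob': 2048}
```

In a production pipeline each layer would be a Spark job writing to its own set of tables, with data quality checks gating promotion between layers.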

Preferred Qualifications

7+ years of SQL experience 

7+ years of experience with schema design, dimensional data modeling, and medallion architectures

Experience with Airflow or other similar orchestration frameworks

Experience building data quality monitoring using Monte Carlo or similar tools

Compensation

US Zone 1

This role is not available in Zone 1

US Zone 2

$149,200 - $201,800 USD

US Zone 3

$132,600 - $179,400 USD

Skills & Requirements

Technical Skills

Spark, Python, Java, C++, Scala, SQL, schema design, dimensional data modeling, medallion architectures, Databricks platform, data lake architectures, Airflow, Monte Carlo, product strategic thinking, communications, Big Data, Data Engineering, Analytics

Salary

$149,200 - $201,800 per year

Level

senior

Posted

3/26/2026
