Data Engineer - Databricks

Prophecy Technologies
Chicago, US
On-site

Job Description

Job Summary

We are seeking an experienced Databricks Engineer with deep expertise in the Databricks ecosystem and Apache Spark technologies. The ideal candidate will design, develop, and optimize large-scale data processing pipelines while ensuring high performance and reliability in production environments. The role requires strong experience with Spark, Databricks, Terraform, and data engineering best practices, along with the ability to make and defend architectural decisions in a highly technical and performance-driven environment.

Key Responsibilities

• Design, develop, and optimize large-scale data processing pipelines using Apache Spark and Databricks.

• Optimize Spark workloads through performance tuning techniques such as partitioning, caching, and join strategy selection to improve efficiency and scalability.

• Build and maintain Spark Structured Streaming pipelines for real-time data processing.

• Develop and optimize queries using Spark SQL to ensure high performance and efficient data processing.

• Implement infrastructure and deployment automation using Terraform and Databricks Asset Bundles.

• Write production-grade, scalable code suitable for high-performance data environments.

• Collaborate with engineering and data teams to design robust and scalable data architectures.

• Participate in architectural discussions and provide technical recommendations for data platform improvements.

• Work independently to prioritize tasks and identify opportunities for optimization and automation.

Required Skills & Experience

• Strong experience with Apache Spark, including optimization techniques for large-scale workloads.

• Hands-on experience with Spark Structured Streaming for real-time data processing.

• Strong expertise in Spark SQL and query performance optimization.

• Experience working with Databricks platform and ecosystem.

• Experience with Terraform and Databricks Asset Bundles for infrastructure and deployment management.

• Strong programming skills and ability to write production-quality code in a high-performance environment.

• Experience working with large-scale distributed data systems.

Competencies

• Data Engineering & Distributed Data Processing

• Spark Performance Optimization

• Real-Time Data Processing

• Data Platform Architecture

• Problem Solving & Analytical Thinking

• Independent Work and Technical Leadership

Preferred Skills

• Experience with Databricks application development and Databricks Apps.

• Comfortable working in Linux environments.

• Background in financial markets such as trading, market making, or related domains.

• Experience working in high-performance engineering environments.

Skills & Requirements

Technical Skills

• Apache Spark

• Databricks

• Terraform

• Spark Structured Streaming

• Spark SQL

Level

Mid

Posted

4/9/2026
