Requirements
• Experience in at least one modern scripting or programming language, such as Python, Java, Scala, or NodeJS
• Experience with big data technologies such as Hadoop, Hive, Spark, or EMR
• Experience building and operating highly available, distributed systems for extracting, ingesting, and processing large data sets
• Experience with data modeling, data warehousing, and building ETL pipelines
• Knowledge of distributed systems as they pertain to data storage and computing
• Knowledge of professional software engineering best practices across the full software development life cycle, including coding standards, software architectures, code reviews, source control management, continuous deployment, testing, and operational excellence
• Experience as a Data Engineer or in a similar role
• Experience with SQL
• (Desirable) Experience with AWS technologies such as Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
• (Desirable) Experience working on and delivering end-to-end projects independently
What the job involves
• The team is embedded within finance alongside content and business finance partners, and owns the single-source-of-truth metrics layer that serves both finance and business leadership
• Working across AWS technologies, you'll deliver self-service analytics, infrastructure-as-code, and reliable ETL/ELT workflows
• You'll work closely with global business partners and technical teams on many non-standard and unique business problems, using creative problem solving to deliver data products that underpin Prime Video's strategic decision making, from content selection to on-platform customer experience
• Build and optimize data pipelines to ingest and transform data from various sources, including traditional ETL pipelines and event data streams
• Utilize data from disparate sources to build meaningful datasets for analytics and reporting, focusing on consolidating data from various Prime Video systems
• Implement big data technologies (e.g., Redshift, EMR, Spark, SNS, SQS, Kinesis) to optimize processing of large datasets
• Develop and maintain the team's data platform, including infrastructure-as-code using AWS CDK
• Work closely with business stakeholders to understand their needs and translate them into technical solutions
• Analyze business processes, logical data models, and relational database implementations
• Write high-performing SQL queries
• Collaborate with software engineers to support the data needs of products