Responsible for data engineering and analysis across the Sales, Manufacturing, and Logistics domains
Work with stakeholders to discover potential needs and translate requirements into data-driven solutions
Design, build, and optimize scalable pipelines for ingesting, transforming, and integrating large-volume datasets
Work closely with machine learning engineers and application developers to build the complete data analytics system, including end-to-end analytical pipelines and machine learning operations
Ensure data quality, consistency, and real-time monitoring using tools such as dbt and third-party libraries that facilitate data validation
Work closely with the data team to promote digital transformation
Requirements
Academic degree in Data Science, Statistics, Computer Science, or a related field
At least 2 years of IT experience in data migration or data pipeline projects
Exposure to containerization/orchestration (Docker, Kubernetes)
Experience with statistical analysis, Linux systems, PySpark, Git, and SQL
Knowledge of ETL tools such as Apache Airflow and dbt
Familiarity with a major cloud platform, particularly Azure (DevOps, Databricks, or Data Factory), is a plus
Strong sense of responsibility and strong teamwork skills
Good communication skills, including spoken and written English and Chinese