Job Responsibilities
- Build and maintain data pipelines from third-party and trading applications (e.g. Enfusion) into the data warehouse (e.g. Snowflake).
- Use SQL and Python to clean, transform, and analyze data for reporting, risk, and operations, including implementing data quality checks.
- Support the creation of dashboards and reports using tools such as Power BI, and help automate simple workflows and documentation.
- Work with daily data feeds, and help monitor and support production runs of data pipelines.
- Contribute to improving data models and ETL/ELT workflows and participate in code reviews following modern engineering practices.
Requirements
1) Education & Qualifications
- Final-year student or recent graduate in Computer Science, Data Science, Engineering, Mathematics, or a related quantitative field.
- Solid foundation in programming and problem solving, including basic data structures and algorithms.
- Evidence of applied learning through coursework, personal projects, internships, or open-source work involving data or backend development.
- Proficient in SQL and comfortable using Python for data ingestion, transformation, analysis, and debugging.
- Familiarity with cloud data platforms and data warehousing (e.g. Snowflake, Azure), ETL/ELT tools (e.g. Azure Data Factory), and visualization tools (e.g. Power BI / Power Platform).
- Understanding of code and data quality practices: use of Git and modern workflows (pull requests, basic CI), simple testing, and validation, reconciliation, and monitoring of data feeds and production runs.
- Excellent command of written and spoken English and Chinese (Cantonese & Mandarin).
- Curious, proactive, and willing to take ownership while remaining open to feedback.
- Comfortable learning independently, dealing with ambiguity, and collaborating with product, operations, and senior engineers.
- Detail-oriented and reliable, treating data quality and platform stability as shared responsibilities.