Sarmaya is seeking a skilled and detail-oriented Mid-Level Data Engineer to join our growing team. This role is suited for professionals who have a strong foundation in data engineering and are capable of designing, building, and maintaining scalable data systems that support business operations and decision-making.
Key Responsibilities
- Design, develop, and maintain robust data pipelines for processing and transforming large datasets.
- Build and manage data architectures, including data warehouses and data lakes.
- Ensure data quality, integrity, and consistency across all systems.
- Optimize database performance and manage data storage solutions.
- Collaborate with cross-functional teams to understand data requirements and deliver reliable data solutions.
- Implement data validation, monitoring, and alerting systems to ensure smooth operations.
- Work with structured and unstructured data from multiple sources and integrate it into unified systems.
- Support data accessibility and reporting by enabling efficient querying and retrieval mechanisms.
- Document data processes, workflows, and system architecture for long-term scalability and maintenance.
Requirements
- Bachelor’s degree in Computer Science, Software Engineering, Data Science, or a related field.
- 2–4 years of experience in data engineering or a related role.
- Strong proficiency in SQL and experience with relational and non-relational databases.
- Experience with programming languages such as Python, Java, or Scala.
- Hands-on experience with ETL/ELT processes and tools.
- Familiarity with data warehousing solutions (e.g., BigQuery, Redshift, Snowflake).
- Understanding of data modeling concepts and best practices.
- Experience working with APIs and integrating external data sources.
- Strong problem-solving skills and attention to detail.
Preferred Qualifications
- Experience with cloud platforms such as AWS, Google Cloud, or Azure.
- Familiarity with workflow orchestration tools (e.g., Airflow).
- Knowledge of big data technologies such as Spark or Hadoop.
- Experience with real-time data streaming tools (e.g., Kafka).
- Understanding of data governance and security best practices.
What We Offer
- A structured and performance-driven work environment.
- Opportunities to work on complex data systems and scalable architectures.
- Exposure to cross-functional collaboration and real-world data challenges.
- Professional growth and development within a dynamic team.
Application Process
Interested candidates are encouraged to apply by submitting their updated resume along with any relevant project work or portfolio demonstrating their experience in data engineering.
Only shortlisted candidates will be contacted.
We appreciate your interest and look forward to reviewing your application.