Job Summary
We are seeking a dynamic and detail-oriented Snowflake Engineer to join our data engineering team. In this role, you will be responsible for designing, developing, and maintaining scalable data solutions using Snowflake, a cloud-based data warehousing platform. Your expertise will enable efficient data storage, processing, and analysis across diverse big data environments, empowering our organization to derive actionable insights. This position offers an exciting opportunity to work with cutting-edge technologies and collaborate with cross-functional teams to optimize our data architecture and workflows.
Responsibilities
- Design, develop, and optimize scalable data pipelines and architectures within Snowflake to support business intelligence and analytics initiatives.
- Collaborate with data scientists, analysts, and stakeholders to understand data requirements and translate them into effective technical solutions.
- Implement ETL (Extract, Transform, Load) processes using tools such as Informatica, Talend, or custom scripting in Python or Bash to ensure seamless data integration from various sources including AWS, Azure Data Lake, Hadoop, and Oracle databases.
- Develop and maintain SQL queries, stored procedures, and database objects to facilitate efficient data retrieval and manipulation.
- Manage cloud infrastructure components on AWS and Azure platforms to support Snowflake deployments and related big data tools like Apache Hive, Spark, and Hadoop.
- Ensure data security, compliance, and governance standards are maintained across all systems.
- Monitor system performance, troubleshoot issues promptly, and implement improvements for reliability and scalability.
- Participate in Agile development cycles by contributing to sprint planning, stand-ups, and retrospectives while adhering to best practices in software development.
- Document architecture designs, workflows, and technical specifications for ongoing maintenance and knowledge sharing.
Skills
- Extensive experience with Snowflake cloud data platform including architecture design and performance tuning.
- Strong proficiency in SQL programming along with knowledge of database design principles.
- Hands-on experience with ETL tools such as Talend or Informatica; scripting skills in Python or Bash are highly desirable.
- Familiarity with big data technologies, including Hadoop ecosystem components (HDFS, Hive) and Spark, as well as Looker for analytics visualization.
- Knowledge of cloud services such as AWS (Amazon Web Services) including S3 storage; experience with Azure Data Lake is a plus.
- Understanding of RESTful APIs for integrating various applications and services within the data ecosystem.
- Experience working with Oracle databases and Microsoft SQL Server for diverse data management needs.
- Ability to analyze complex datasets using analytics tools; strong problem-solving skills are essential.
- Knowledge of model training techniques for machine learning applications is advantageous.
- Familiarity with Agile methodologies to facilitate collaborative project execution in fast-paced environments.
- Proficiency in shell scripting (Bash/Unix shell) for automating routine data management tasks.
- Excellent communication skills to clearly articulate technical concepts across teams.

Join us if you're passionate about transforming raw data into strategic assets through innovative engineering!
Pay: Up to $80.00 per hour
Work Location: In person