Dear Applicant,

Please find the details of the opportunity below, and let me know if you are interested.
Job Title: Data Architect (Life & Annuity Insurance experience mandatory)
Work Location: Remote (working hours aligned with EST)
Hire Type: Full-time
Total Experience: 15+ years, with a minimum of 5 years in the Insurance domain (mandatory)
Must-Have Skills
- Insurance Domain Experience: 5+ years
- ETL Development
- Python
- PySpark
- AWS Cloud
Job Summary
The Data Architect will lead the design and maintenance of the data infrastructure that enables secure data storage, access, and analytics for business and client needs. This role defines how data is collected, structured, integrated, and governed to ensure scalability, security, and performance.
The position plays a critical role in supporting business intelligence, regulatory, and compliance initiatives within the Life & Annuity Insurance domain.
Years of Experience Required
- 15+ years in Data Engineering
- A minimum of 5 years exclusively in the Insurance domain
Key Responsibilities
- Define data architecture strategies, frameworks, and models
- Design and optimize databases, data warehouses, and data lakes
- Ensure data structures align with business requirements and compliance standards
- Collaborate with data engineers, analysts, and client teams on pipeline architecture
- Oversee metadata management, data catalogs, and lineage tracking
- Ensure data integrity, scalability, and performance
- Select appropriate storage and cloud platforms (Snowflake, AWS, Azure, BigQuery, Redshift)
- Support data governance and access control policies
- Review existing systems for enhancements, optimization, or migration
- Document technical standards, architectural designs, and decisions
- Lead system design with strong strategic alignment and governance
Technical Skills
- Experience designing, implementing, and managing data analytics solutions using Databricks in the Insurance domain
- Proven experience building ETL pipelines from multiple data sources using Databricks on AWS
- Design and implement scalable ETL pipelines to process and transform large datasets
- Collaborate with data scientists, analysts, and stakeholders to deliver high-quality data solutions
- Optimize data workflows while ensuring data quality and integrity
- Monitor and troubleshoot data pipeline performance, implementing improvements as needed
- Leverage AWS cloud services for efficient data storage and processing
- Document data engineering processes, architecture, and best practices
- Stay up to date with emerging data engineering and cloud technologies
- Strong proficiency in Python, PySpark, and SQL with hands-on data pipeline development
- Experience in Data Modeling, Data Lineage, and Canonical Data Model implementation
- Experience implementing Medallion Architecture
- Strong experience in the Insurance domain
- Experience working with JIRA
Mandatory Skills
- Strong expertise in Databricks, including ETL pipeline development and optimization
- Proven experience with AWS cloud services
- Solid understanding of data modeling, data warehousing, and data integration
- Proficiency in Python or Scala for data transformation
- Strong hands-on experience with Python, PySpark, and SQL
- Familiarity with data governance and data quality frameworks
- Experience with Power BI
- Experience using GitLab
- Experience working in Agile methodologies
Education & Certifications
- MCA or bachelor's degree in engineering from a reputed institution
- Databricks Certification (preferred)
- LOMA Certification (Insurance; preferred)