Job Title: Platform Engineer – IT Data Engineering
Location: Houston, TX (Dairy Ashford)
Type: Full-Time
Schedule: Hybrid
About the Role
We are seeking a Platform Engineer with strong expertise in Databricks administration, data governance, and platform engineering standards. This role supports multiple analytics and AI teams by enabling secure, scalable, and efficient use of a shared Databricks platform.
You will own platform guardrails, operational stability, access patterns, and cost controls to ensure the platform scales reliably across teams.
Key Responsibilities
Platform Administration & Governance
- Manage Databricks workspaces, clusters, jobs, Unity Catalog, and compute policies
- Implement role-based and attribute-based access controls (RBAC/ABAC) to secure data access
- Define and enforce data standards, naming conventions, and schema rules
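As an illustration of the kind of guardrail this role defines, here is a minimal sketch of a table-naming check. The three-part `catalog.schema.table` shape matches Unity Catalog, but the specific convention (lowercase snake_case, bronze/silver/gold layer prefixes) is a hypothetical example, not the team's actual standard:

```python
import re

# Hypothetical convention: catalog.schema.table, each segment lowercase
# snake_case, schema prefixed with an approved medallion layer.
NAME_PATTERN = re.compile(r"^[a-z][a-z0-9_]*$")
APPROVED_LAYERS = {"bronze", "silver", "gold"}

def validate_table_name(full_name: str) -> list:
    """Return a list of convention violations for a three-part table name."""
    parts = full_name.split(".")
    if len(parts) != 3:
        return [f"expected catalog.schema.table, got {full_name!r}"]
    catalog, schema, table = parts
    errors = []
    for label, part in (("catalog", catalog), ("schema", schema), ("table", table)):
        if not NAME_PATTERN.match(part):
            errors.append(f"{label} {part!r} is not lowercase snake_case")
    if schema.split("_")[0] not in APPROVED_LAYERS:
        errors.append(f"schema {schema!r} lacks an approved layer prefix")
    return errors
```

A check like this would typically run in CI against pipeline code or as a pre-deployment review step, so violations are caught before objects land in the catalog.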
Data Quality & Ingestion
- Establish standards for ingestion pipelines, Delta architecture, and validation
- Review and approve pipelines for compliance
- Partner with engineering teams to enforce consistency
Security & Compliance
- Manage permissions, row/column-level security, and workspace isolation
- Ensure compliance with data protection standards
Cost Management
- Monitor usage, implement cost controls, and optimize spend
- Build dashboards for cost and usage visibility
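A simple sketch of the cost-control side: flagging teams approaching their monthly budget. The record shape and the 80% threshold are illustrative assumptions; in practice the spend figures would be aggregated from the platform's billing/usage system tables:

```python
from collections import defaultdict

def teams_over_budget(usage_records, budgets, threshold=0.8):
    """Flag teams whose month-to-date spend meets or exceeds a fraction of budget.

    usage_records: iterable of (team, cost_usd) tuples -- hypothetical shape,
    assumed to be pre-aggregated from billing/usage data.
    budgets: dict mapping team -> monthly budget in USD.
    Returns dict of team -> (spend, budget) for teams at or past the threshold.
    """
    spend = defaultdict(float)
    for team, cost in usage_records:
        spend[team] += cost
    return {
        team: (total, budgets[team])
        for team, total in spend.items()
        if team in budgets and total >= threshold * budgets[team]
    }
```

The same aggregation would feed the cost and usage dashboards mentioned above, with the threshold check wired to an alert rather than a chart.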
Operational Stability
- Ensure platform reliability via CI/CD and automation
- Monitor pipeline health, schema drift, and performance
- Troubleshoot and resolve platform issues
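Schema drift monitoring can be reduced to comparing an expected schema against what a table actually exposes. The dict-of-column-types shape below is illustrative; a real check would read the observed schema from the live Delta table:

```python
def detect_schema_drift(expected: dict, observed: dict) -> dict:
    """Compare an expected schema (column -> type) with an observed one.

    Returns columns that were added, removed, or changed type. The dict
    inputs are a simplified stand-in for real table metadata.
    """
    added = sorted(set(observed) - set(expected))
    removed = sorted(set(expected) - set(observed))
    changed = sorted(
        col for col in set(expected) & set(observed)
        if expected[col] != observed[col]
    )
    return {"added": added, "removed": removed, "changed": changed}
```

Run on a schedule, a check like this turns silent upstream changes into actionable alerts before downstream pipelines fail.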
Best Practices & Enablement
- Define platform standards and development guidelines
- Support onboarding and guide teams on best practices
- Maintain documentation, templates, and frameworks
Required Skills
- 5+ years in Data Engineering / Platform Engineering
- At least 2–3 years of Databricks administration experience
- Strong knowledge of Unity Catalog, Delta Lake, Spark, and cluster policies
- Experience with data governance, ingestion frameworks, and schema management
- Hands-on experience with RBAC/ABAC and data security
- Experience in cost optimization and monitoring
- Strong Python/PySpark and SQL skills
- Familiarity with Airflow, Delta Live Tables (DLT), or Databricks Workflows
Preferred Skills
- Experience with Azure, AWS, or Google Cloud Platform
- Experience with CI/CD tools (GitHub Actions, Azure DevOps)
- Experience with enterprise-scale data platforms
- Experience building dashboards for governance, cost, and monitoring