Sr. AI Engineer

Teradata
Denver, US

Job Description

**Our Company**

At Teradata, we believe that people thrive when empowered with better information. That's why we built the most complete cloud analytics and data platform for AI. By delivering harmonized data, trusted AI, and faster innovation, we uplift and empower our customers, and our customers' customers, to make better, more confident decisions. The world's top companies across every major industry trust Teradata to improve business performance, enrich customer experiences, and fully integrate data across the enterprise.

**What You'll Do**

We are building a new service to collect and normalize data catalogs from diverse data sources - including relational databases, data lakes, data warehouses, and streaming systems - and expose them to an AI agent that dynamically constructs and routes queries to the appropriate source. This is a greenfield initiative that requires strong engineering judgment, a systems-thinking mindset, and experience shipping production-grade services.

You will be a core contributor on this project, working from architecture to implementation - designing ingestion pipelines, building the catalog API layer, and collaborating with the AI/ML team to surface the right metadata signals for intelligent query generation.

_Responsibilities_

+ Design, build, and operate a highly available data catalog collection service that ingests schema and metadata from heterogeneous data sources (RDBMS, data lakes, streaming platforms, APIs)

+ Develop robust data pipelines for catalog extraction, normalization, lineage tracking, and semantic tagging to power AI-driven query routing

+ Build and maintain RESTful and/or gRPC APIs that expose catalog data to an AI query agent

+ Deploy and manage services on Kubernetes (K8s), including helm chart authoring, autoscaling configuration, and multi-cluster operations

+ Ensure service reliability through SLO definition, circuit breakers, retry logic, and distributed tracing

+ Integrate with open-source and cloud-native technologies including Apache Kafka, Spark, dbt, Apache Atlas, or OpenMetadata

+ Collaborate with AI/ML engineers to design and iterate on the metadata schema and query routing interface

+ Participate in on-call rotations and contribute to incident response, postmortems, and reliability improvements

+ Contribute to CI/CD pipelines, infrastructure-as-code (Terraform/Helm), and automated testing frameworks

**Who You'll Work With**

This position sits within the Data Intelligence Platform team, a group focused on building next-generation AI-assisted data services on top of Teradata's Vantage Cloud Lake platform. Our team operates at the intersection of cloud infrastructure, data engineering, and applied AI - shipping highly available, multi-tenant services that power intelligent query routing and data discovery at scale.

Our platform responsibilities include:

+ Designing and operating highly available microservices for data catalog ingestion and serving

+ Building AI-assisted query generation and routing services across heterogeneous data sources

+ Deployment and lifecycle management of services on Kubernetes (K8s) across AWS, Azure, and GCP

+ Data pipeline development for catalog extraction, normalization, and semantic enrichment

+ Centralized observability: monitoring, alerting, and distributed tracing for all platform services

+ Providing DevOps tooling and CI/CD pipelines to support continuous delivery

**What Makes You a Qualified Candidate**

+ 3+ years of software engineering experience building and operating production services

+ Proficiency in one or more of Go, Rust, Java, or Python, with a preference for Rust or Python for backend services

+ Hands-on experience with data pipeline development: ingestion, transformation, and metadata management at scale

+ Solid understanding of RESTful API design principles and service-to-service communication patterns

+ Experience deploying and operating services on Kubernetes (K8s) in production cloud environments

+ Familiarity with at least one major public cloud platform: AWS, Azure, or GCP

+ Strong knowledge of relational and non-relational database systems and their schema/catalog semantics

+ Experience with distributed messaging systems such as Apache Kafka or Amazon Kinesis

+ Proficiency with Git, code review workflows, and agile development practices

+ Excellent troubleshooting skills and comfort operating in Linux environments

**What You'll Bring**

+ Experience with data catalog or metadata management tools such as Apache Atlas, OpenMetadata, DataHub, or Collibra

+ Familiarity with semantic search, vector databases, or LLM-based query generation systems

+ Experience designing or integrating AI/ML model APIs into production backend services

+ Knowledge of data governance, lineage tracking, and schema registry patterns

+ Experience with infrastructure-as-code tools: Terraform, Pulumi, or AWS CDK

+ Background in multi-tenant SaaS platform engineering

+ Contributions to open-source data or infrastructure

Skills & Requirements

Technical Skills

Apache Kafka, Spark, dbt, Apache Atlas, OpenMetadata, semantic search, vector databases, LLM-based query generation systems, AI/ML model APIs, data governance, lineage tracking, schema registry patterns, infrastructure-as-code tools, Terraform, Pulumi, AWS CDK, multi-tenant SaaS platform engineering, AI, data engineering, cloud infrastructure, data discovery, data catalog, metadata, query routing, Kubernetes, DevOps, CI/CD pipelines

Level

mid

Posted

4/7/2026
