Introduction
At IBM Global Sales, we bring together innovation, collaboration, and expertise to help solve complex business challenges and drive meaningful outcomes. Working across industries and geographies, you will partner with colleagues, Independent Software Vendors (ISVs), Business Partners, and service providers to develop solutions that enable digital transformation and lasting impact.
A Build Engineering AI & Data Engineer is more than a developer: you are a hands-on builder responsible for turning data and AI concepts into real, working solutions that deliver measurable business value. Success in this role requires curiosity, strong technical depth, and the ability to collaborate effectively across ecosystem partners to translate ideas into scalable outcomes.
Working alongside Solution Architects, ISVs, Business Partners, and service providers, you will leverage the watsonx platform and modern data and AI technologies, as well as automation, observability, and FinOps platforms, to prototype, implement, and scale solutions. This role sits at the intersection of data engineering, AI development, ecosystem collaboration, and partner engagement. It carries a strong focus on execution across both pre-sales activities and post-sales implementation, supporting Build Engineering initiatives across the Americas GEO.
A key part of this role is supporting IBM’s Build motion, where we co-create with ISVs, Business Partners, and service providers to validate, embed, and scale IBM technology across AI, Data, and Automation within the solutions they bring to market for their end customers. This work drives repeatability, accelerates adoption, and strengthens joint go-to-market outcomes.
Your Role and Responsibilities
AI & Data Solution Development & Prototyping
- Build demos, Proof of Concepts (POCs), and Minimum Viable Products (MVPs) to validate use cases and demonstrate business value.
- Develop data- and AI-driven applications using foundation models, large language models (LLMs), and related technologies, including NLP and text-based solutions, along with tooling such as Project Bob or comparable accelerators for pipeline creation, code generation, and deployment.
- Rapidly iterate on prototypes based on partner and stakeholder feedback.
Implementation & Integration
- Translate solution designs into production-ready code and deployable architectures.
- Integrate AI and data capabilities into enterprise systems, APIs, business workflows, and partner platforms.
- Work across structured and unstructured data sources, ensuring data is prepared and optimized for AI and analytics use cases.
Automation & Observability Integration
- Integrate AI and data capabilities with enterprise automation, observability, and FinOps platforms to enable end-to-end workflows and outcomes.
- Work with event streaming, infrastructure automation, secrets management, and cost/operations tooling to operationalize AI-driven use cases.
- Build integrations across APIs and event-driven architectures to connect AI solutions with enterprise systems and partner platforms.
- Support use cases such as incident detection, workflow automation, cost optimization, and performance monitoring.
Build Motion, Pre-Sales & Post-Sales Delivery
- Support both pre-sales activities and post-sales implementations as part of IBM’s Build motion.
- In pre-sales, co-create with ISVs and Business Partners, alongside Solution Architects, to validate IBM technology through discovery, demos, prototypes, POCs, and MVPs.
- In post-sales, co-create with partners to implement, integrate, optimize, and scale solutions in production environments to drive adoption and measurable outcomes.
- Help embed IBM technology into partner platforms and offerings that are sold to their end customers.
- Contribute reusable engineering patterns, accelerators, and assets that improve repeatability and scalability of joint solutions.
Data Engineering & Pipeline Development
- Design, build, and optimize data pipelines to support AI models and analytics use cases.
- Work with structured and unstructured data across batch and streaming architectures.
- Implement data ingestion, transformation, and feature engineering processes.
- Support modern data architectures including lakehouse, vector databases, and event streaming frameworks (e.g., Kafka/Confluent).
- Enable data readiness for AI, including integration with retrieval-augmented generation (RAG) and orchestration pipelines.
Model Utilization & Optimization
- Implement and optimize foundation models and LLMs for performance, scalability, and cost efficiency.
- Apply prompt engineering, fine-tuning, and evaluation techniques.
- Monitor outputs and continuously improve accuracy and reliability.
Delivery Execution & Collaboration
- Partner with Solution Architects, Data Scientists, and ecosystem stakeholders to deliver high-quality outcomes.
- Operate within agile delivery models, contributing to sprint execution and milestone delivery.