McKesson is an impact-driven, Fortune 10 company that touches virtually every aspect of healthcare. We are known for delivering insights, products, and services that make quality care more accessible and affordable. Here, we focus on the health, happiness, and well-being of you and those we serve – we care.
What you do at McKesson matters. We foster a culture where you can grow, make an impact, and are empowered to bring new ideas. Together, we thrive as we shape the future of health for patients, our communities, and our people. If you want to be part of tomorrow’s health today, we want to hear from you.
Ontada is a leading oncology real‑world data and evidence, clinical education, and provider technology business dedicated to transforming the fight against cancer. Part of McKesson Corporation, we support science through our data, technology, and channels, accelerating innovation for life science companies, empowering community oncology providers, and advancing patient care. Together with our partners, we improve the lives of cancer patients.
Position Summary
The Software Automation Engineer (P3) is a senior individual contributor responsible for designing, developing, and maintaining automated test solutions across Ontada’s product ecosystem, with a strong focus on AI enabled and GenAI powered systems. Reporting to the QA Lead, this role drives high quality automation for UI, API, backend, and data layers, while ensuring AI/ML features meet expectations for correctness, reliability, safety, and compliance.
This role partners closely with Product, Engineering, and ML Ops teams to validate both traditional software workflows and AI driven behaviors, including prompt based systems, retrieval augmented generation (RAG), and model integrated services.
Key Responsibilities
Quality Engineering Ownership & Execution
- Own and execute test strategy, planning, and execution for assigned features, services, or product areas under the guidance of the QA Lead.
- Identify functional, integration, and nonfunctional quality risks early; communicate risks, impacts, and recommendations clearly.
- Author comprehensive test strategies, test plans, and test cases aligned with product requirements and acceptance criteria.
- Perform exploratory testing to uncover complex, edge case, and systemic defects.
- Coordinate end to end validation across multiple environments to ensure release readiness.
Test Automation & Framework Development
- Design, develop, and maintain automated test suites across UI, API, service, and data layers.
- Contribute to the enhancement and maintainability of automation frameworks using tools such as Selenium, Playwright, Cypress, TOSCA, or similar.
- Develop robust API automation using RestAssured, Postman, or equivalent frameworks.
- Implement effective test data strategies, including synthetic data generation and environment setup.
- Integrate automated tests into CI/CD pipelines to support fast and reliable feedback cycles.
- Leverage AI assisted development tools (e.g., GitHub Copilot, Claude Code, or similar) to accelerate test automation development, refactoring, and debugging while maintaining code quality and security standards.
- Use AI tooling to assist with test case generation, edge case identification, and data driven scenario expansion, validating all outputs through engineering judgment and established QA practices.
AI / GenAI Quality Engineering
- Design and execute test strategies for AI/ML and GenAI powered features, including LLM based workflows.
- Validate prompt behavior, prompt templates, and prompt chaining across different scenarios and data contexts.
- Perform negative testing for AI systems, including prompt injection, jailbreak attempts, hallucination risks, and unsafe outputs.
- Test Retrieval Augmented Generation (RAG) pipelines, including:
- Embedding quality validation
- Retrieval accuracy, recall, and relevance
- Chunking and indexing strategies
- Validate AI outputs for accuracy, consistency, explain ability, and compliance in regulated environments.
- Collaborate with Engineering and ML Ops teams to test model integrations, configuration changes, and inference pipelines.
- Utilize AI powered tools to support prompt analysis, test scenario exploration, and hypothesis generation when validating LLM based features and AI workflows.
- Critically evaluate AI generated suggestions and outputs to ensure accuracy, safety, reproducibility, and regulatory compliance.
Backend & Data Testing
- Perform advanced backend testing across SQL and NoSQL data systems.
- Validate data ingestion, transformations, persistence, and integrity across services and environments.
- Coordinate testing of asynchronous workflows and integrations (e.g., message queues, APIs, batch processes).
Agile Collaboration
- Work closely with Product Owners and Business Analysts to refine user stories, define acceptance criteria, and ensure testability.