Our client, a global financial company, is looking for a QA Software Engineer to join their team.
This role is critical to strengthening the company's technology delivery and quality assurance capabilities, with an expanded focus on providing testing support for AI‑related and digital engagement projects.
QA Software Engineer – QA, Automation & Performance Testing
We are seeking a QA Software Engineer to support the IT Quality Assurance and Testing Team in delivering high‑quality, reliable, and scalable systems.
The role focuses on functional testing support, test automation development, performance testing design and execution, and continuous improvement of testing practices across projects.
The successful candidate will work closely with QA leads, developers, DevOps, and project teams to ensure solutions meet performance, stability, and quality standards before production release.
Responsibilities
- Testing Support & Quality Assurance
- Support end‑to‑end system testing, integration testing, regression testing, and UAT
- Review functional and technical specifications to derive effective test scenarios
- Prepare and maintain test cases, test data, and execution results
- Log, track, verify, and retest defects using defect management tools
- Ensure testing compliance with SDLC, QA standards, and governance requirements
- Test Automation Development
- Design, develop, enhance, and maintain test automation frameworks for UI, API, and backend testing
- Develop reusable automated test scripts aligned with project standards
- Integrate automation scripts into CI/CD pipelines where applicable
- Analyze automation results and report quality metrics
- Continuously improve automation coverage, stability, and efficiency
- Performance Testing
- Participate in performance test planning, including workload modeling and test strategy definition
- Design and develop performance test scripts (e.g. load, stress, volume, endurance tests)
- Execute performance tests and monitor system behavior
- Analyze test results to identify performance bottlenecks and risks
- Work with development, infrastructure, and architecture teams to resolve performance issues
- Prepare performance test reports with conclusions and recommendations
- Test Environment & Data Management
- Support test environment setup, configuration, and troubleshooting
- Coordinate with infrastructure and DevOps teams on performance test environments
- Prepare and manage test data for automation and performance testing
- AI Testing Strategy & Execution
- Establish comprehensive AI testing frameworks, including:
- Data quality and data drift testing
- Model performance and stress testing
- Bias, fairness, and explainability testing
- Integration, system, regression, and user acceptance testing for AI‑enabled systems
- Oversee or perform test planning, test case design, test execution, defect management, and test reporting for AI and GenAI solutions
- Vendor & Cross‑Functional Collaboration
- Assess vendor‑provided AI solutions, models, and tools through structured testing, validation, and proof‑of‑concept activities
- Act as a bridge between QA, IT, data, risk, and business teams to ensure AI solutions are production‑ready and meet enterprise quality standards
- Continuous Improvement & Collaboration
- Contribute to testing best practices, standards, and frameworks
- Support QA transformation initiatives (e.g. shift‑left testing, automation‑first approach)
- Mentor junior QA members when required
- Participate in project meetings, walkthroughs, and quality reviews
Qualifications
Technical Skills
- Solid knowledge of software testing methodologies and SDLC
- Hands‑on experience in test automation development
- UI automation (e.g. Selenium, Playwright, Cypress, etc.)
- API automation (e.g. REST‑based testing)
- Hands‑on experience with performance testing tools (e.g. JMeter, LoadRunner, Gatling, or equivalent)
- Programming / scripting experience (e.g. Java, Python, JavaScript, or similar)
- Experience with defect tracking tools