AI Security Engineer

SentraAI
AE
On-site

Job Description

As an AI Security Engineer at SentraAI, you will operate at the intersection of AI architecture, application security, and offensive security, helping enterprise organisations design, deploy, and operate AI systems that are secure by design and defensible in production.

You will work closely with AI engineers, platform teams, and security stakeholders to embed runtime guardrails, security observability, and continuous AI red-teaming into real production systems. This role is accountable for translating AI threat models into concrete engineering controls and for ensuring AI systems remain secure, auditable, and resilient as they evolve.

This is a hands-on role for practitioners who understand that AI security is an operational discipline, not a policy exercise.

About SentraAI

SentraAI is a specialist enterprise AI firm focused on helping large regulated organisations move AI and data platforms from experimentation into production safely and sustainably.

We work inside enterprise run-states where governance, operational risk, change control, and long-term ownership are integral to delivery. Our teams are trusted to design and deliver systems, platforms, and operating models that can be run, audited, and evolved, not just launched.

We prioritise engineering discipline, architectural clarity, and delivery quality over speed theatre or hype.

Requirements

AI Threat Modelling and Security Architecture

  • Guide application and platform teams on threat modelling for AI and LLM-based systems across the full lifecycle
  • Develop and maintain AI-specific threat models aligned to recognised standards and regulatory expectations
  • Translate threat models into explicit architectural controls, security requirements, and acceptance criteria
  • Advise on secure AI design patterns, including least privilege, isolation, and human-in-the-loop safeguards

Secure Implementation and Runtime Enforcement

  • Work closely with AI and ML engineers to ensure secure implementation of AI guardrails within application codebases
  • Ensure robust input sanitisation, validation, and prompt hardening for text, document, and multimodal inputs
  • Ensure output validation, redaction, and data-exfiltration prevention mechanisms are correctly implemented
  • Evaluate, test, and support deployment of LLM security frameworks and detection mechanisms
  • Ensure security-relevant telemetry and logs are captured in line with regulatory and audit requirements
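As an illustration of the runtime controls listed above, the sketch below shows a minimal input-sanitisation and output-redaction guardrail. All patterns, function names, and markers here are hypothetical examples, not SentraAI tooling; a production deployment would use tuned classifiers or a dedicated guardrail framework rather than keyword lists.

```python
import re

# Hypothetical denylist of prompt-injection markers (illustrative only; a
# real guardrail would rely on a trained classifier, not fixed patterns).
INJECTION_PATTERNS = [
    r"ignore (all|any|previous) instructions",
    r"system prompt",
    r"disregard .* guardrails",
]

# Simple e-mail pattern used to demonstrate output redaction (not exhaustive PII coverage).
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")


def sanitise_input(user_text: str) -> str:
    """Reject inputs that match known prompt-injection markers."""
    lowered = user_text.lower()
    for pattern in INJECTION_PATTERNS:
        if re.search(pattern, lowered):
            raise ValueError("potential prompt injection detected")
    return user_text


def redact_output(model_text: str) -> str:
    """Mask e-mail addresses before the response leaves the trust boundary."""
    return EMAIL_RE.sub("[REDACTED]", model_text)
```

The same shape applies to document and multimodal inputs: sanitise at the trust boundary on the way in, validate and redact on the way out.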

AI Security Observability and SOC Integration

  • Define and publish AI-specific security indicators for operational monitoring and alerting
  • Enable real-time visibility into AI security signals such as anomalous behaviour, prompt abuse, or tool misuse
  • Support downstream security operations and incident response teams with actionable AI security context
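The telemetry work described above often boils down to emitting structured, machine-readable security events that a SOC can ingest. A minimal sketch follows; the field names and the `ai_security` logger name are assumptions for illustration, and a real deployment would align the schema with the organisation's SIEM (e.g. OCSF or ECS).

```python
import datetime
import json
import logging

logger = logging.getLogger("ai_security")  # hypothetical logger name


def emit_security_event(event_type: str, session_id: str, detail: dict) -> str:
    """Emit one structured AI-security event as a JSON line for SOC ingestion."""
    event = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "event_type": event_type,   # e.g. "prompt_injection_blocked"
        "session_id": session_id,
        "detail": detail,
    }
    line = json.dumps(event, sort_keys=True)
    logger.warning(line)            # a downstream shipper forwards this to the SOC
    return line
```

Keeping events structured from the start is what makes the downstream alerting and incident-response support actionable.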

AI Red Teaming and Offensive Security Integration

  • Embed automated AI security testing into CI/CD pipelines including prompt fuzzing and regression testing
  • Support and guide offensive security teams on LLM-specific attack scenarios
  • Operationalise AI red-teaming tools and custom adversarial test cases
  • Ensure findings feed back into guardrail tuning, detection logic, and adaptive defence mechanisms

Required Qualifications

Core Experience

  • Strong background in application development, security engineering, or platform engineering
  • Practical experience working with AI-enabled applications, LLMs, or ML pipelines
  • Solid grounding in application security concepts and secure software design
  • Hands-on experience implementing or integrating AI guardrails, sanitisation, and runtime security controls

AI and Security Capability

  • Practical understanding of AI and LLM threat vectors such as prompt injection, data poisoning, tool abuse, and agent escalation
  • Experience collaborating closely with AI engineers, platform teams, and offensive security practitioners
  • Ability to translate security intent into concrete, testable engineering controls

Advantageous but Not Mandatory

  • Experience with AI red-teaming tools or adversarial testing frameworks
  • Familiarity with secure CI/CD and DevSecOps practices
  • Experience operating in regulated or highly governed enterprise environments
  • Exposure to SOC integration, detection engineering, or security observability

Benefits

Why Work for SentraAI

  • Enterprise AI done properly.

We exist to take AI and data out of experimentation and into production environments that are regulated, scrutinised, and expected to work every day.

  • Quality is not optional.

SentraAI is built on the belief that engineering discipline, governance by design, and delivery rigour are competitive advantages, not overhead.

  • Clear ownership and accountability.

You will be trusted with real responsibility, clear mandates, and meaningful outcomes, not diluted roles or performative activity.

  • Work that survives contact with reality.

We design systems, operating models, and decisions that still stand up months and years after go-live, not just at demo time.

  • Run-state matters.

Skills & Requirements

Technical Skills

  • AI threat modelling and security architecture
  • Secure implementation and runtime enforcement
  • AI security observability and SOC integration
  • AI red teaming and offensive security integration
  • Collaboration
  • Problem solving
  • Attention to detail
  • AI security
  • Application security
  • Machine learning

Level

Mid-Level

Posted

5/6/2026

Apply Now

You will be redirected to SentraAI's application portal.
