Responsibilities
The Technology department at PJT is responsible for creating and continuously improving a robust and secure technology foundation that supports the firm's business activities. As artificial intelligence becomes deeply embedded in both internal operations and the broader vendor ecosystem, the firm faces a new and rapidly evolving risk surface. The AI Security & Risk Manager will be PJT's dedicated subject matter expert at the intersection of AI and security, helping the firm navigate this landscape with rigor and clarity.
We are seeking a high-performing AI Security & Risk professional to join the Cybersecurity team. Reporting to the Head of Technology Risk, this individual will own the firm's approach to identifying, assessing, and managing risk introduced by AI — both through internal AI deployments and through vendors increasingly embedding AI into their platforms. The role requires a practitioner who can operate at both a strategic and technical level: fluent in AI architecture and threat modeling while equally capable of communicating risk clearly to senior leadership and regulators. The candidate must build strong relationships across Technology, Legal, Compliance, and the business to ensure AI risk is managed as an enterprise priority rather than in a silo.
Additional responsibilities include:
AI Risk Governance & Strategy
- Own and maintain the firm's AI risk framework, covering model risk, data privacy, adversarial threats, third-party AI, and regulatory compliance.
- Develop and enforce AI usage policies in collaboration with Legal and Compliance, including acceptable use, data classification requirements, and prompt handling standards.
- Maintain an inventory of AI tools deployed firm-wide — both sanctioned and shadow — and assess associated risk profiles.
- Provide regular AI risk reporting to the Head of Technology Risk and senior leadership, including emerging threat trends, vendor posture changes, and control gaps.
- Monitor the evolving regulatory environment for AI (EU AI Act, SEC guidance, DORA, NY DFS) and advise on compliance obligations and required controls.
Vendor AI Evaluation & Third-Party Risk
- Lead security and risk assessments of vendors introducing AI capabilities into existing or new platforms, including evaluating model transparency, data handling practices, and auditability.
- Develop and maintain a structured AI vendor evaluation framework, incorporating criteria for model governance, output reliability, data residency, and incident response obligations.
- Partner with Procurement and Legal to ensure AI-specific provisions are reflected in vendor contracts, including data usage restrictions, model change notifications, and liability terms.
- Maintain a tiered risk register of third-party AI integrations, with ongoing monitoring for material changes to vendor AI functionality, architecture, or ownership.
- Engage directly with vendor security and product teams to assess AI-related controls and drive remediation of identified gaps.
AI Threat Modeling & Security Architecture
- Conduct threat modeling for AI systems and integrations, including risks from prompt injection, model inversion, training data poisoning, and adversarial inputs.
- Evaluate AI-specific attack surfaces introduced by LLM integrations, agentic workflows, and MCP-connected data sources.
- Collaborate with infrastructure and application teams to embed AI security controls into deployment pipelines and system design reviews.
- Assess risks associated with AI-generated content, including deepfake vectors, synthetic phishing, and automated social engineering in the context of financial services.
- Contribute to the firm's broader security architecture by ensuring AI components are assessed within the existing control framework.
Internal AI Program Oversight
- Serve as the security and risk point of contact for the firm's internal AI deployments, including Claude Enterprise and any future platform integrations.
- Evaluate data retention, access control, and logging practices for AI platforms to ensure alignment with the firm's compliance and eDiscovery obligations.
- Provide risk assessments for proposed AI use cases across the firm, including a structured framework for approving, conditionally approving, or declining adoption.
- Support audit and compliance reviews related to AI, including evidence collection and engagement with regulators or external assessors as required.
- Develop and deliver AI security awareness content for technology staff and end users.
Qualifications
PJT Partners seeks to hire individuals who are highly motivated and intelligent, and who have demonstrated excellence in prior endeavors. In addition, qualified candidates will possess the following:
- Bachelor's degree in Computer Science, Information Security, Data Science, or a related field; advanced degree a plus.
- At least 7–10 years of experience in information security, technology risk, or a related field, with a minimum of 3