Staff AI VFX Engineer

Adobe
Culver City, US
On-site

Job Description

The Opportunity

As AI rapidly transforms creative industries, professional production workflows must evolve alongside it. At Firefly Foundry, we’re leading an industry-first initiative to define how generative AI integrates into high-end visual effects for feature films and episodic content. A major franchise holder doesn’t get a generic model; they get a model that knows their characters, environments, and visual style, integrated directly into production-grade tools.

Generative AI has so far seen limited practical application in VFX pipelines that face real-world production challenges. The true breakthrough will come from achieving production-grade quality, deeply integrating with existing DCC tools, and establishing AI as a new, fully realized field within visual effects. That’s exactly what we’re building. At Foundry, we approach this work with a deep respect for the artistry and craft of VFX. Our mission is grounded in empowering artists while ensuring the responsible and commercially safe use of AI in production environments.

We’re looking for pioneers who want to help define what AI-driven VFX looks like at the highest level. You’ll work directly with studios, VFX houses, and creative leadership to take Foundry from pilot to production-scale deployment.

What You’ll Do

  • Build and validate AI-driven VFX workflows: Design end-to-end pipelines that integrate Foundry’s custom-trained diffusion and video models into compositing, look-dev, previs, and virtual production. You’ll write working prototypes, not slide decks, to prove out new approaches with real shot data.
  • Solve hard production problems: Tackle the issues that block adoption: temporal coherence across shot sequences, maintaining art-directable control over generated elements, matching on-set lighting and lens characteristics, and hitting the fidelity bar that supervisors demand.
  • Own the integration surface: Define how Foundry models plug into Nuke, Houdini, Maya, After Effects, Premiere Pro, and Substance 3D. Design the APIs, node graphs, and plugin architectures that make AI-generated assets first-class citizens in existing pipelines, including USD/OpenEXR/ACES-compliant outputs.
  • Shape the product from the production floor: Translate what you learn from studio engagements into concrete product requirements for the Firefly and Foundry engineering teams. You’re the bridge between what a VFX supervisor needs at 2 AM during a color session and what our model architecture can deliver.
  • Implement and prototype multi-modal model orchestration: Foundry doesn’t ship a single model. It ships a coordinated stack of image, video, animation, and 3D generation models that need to work together. You’ll design the orchestration layer: how a character generated in the image model maintains identity when animated by the video model; how texture maps generated for Substance 3D stay consistent with hero shots generated in the image pipeline; how style transfer models constrain the output space to a franchise’s visual language across all modalities.
  • Engage studio and VFX leadership: Present to CTOs, VFX supervisors, and heads of production. Run technical deep-dives and creative workshops. You’ll need to be as credible talking to a Nuke compositor as you are in a boardroom with studio executives.
  • Codify repeatable playbooks: Document reference architectures, prompt engineering strategies for VFX use cases, quality evaluation pipelines, and deployment patterns so the next studio engagement doesn’t start from scratch.

Required

  • 5–10+ years in VFX engineering, pipeline TD, or tools development, with shipped credits in film, episodic, or AAA gaming.
  • Deep fluency in production VFX workflows: compositing (Nuke), 3D (Maya/Houdini), rendering, look-dev, previs/postvis, editorial handoff, and review (Shotgrid, Frame.io, or equivalent).
  • Working knowledge of generative AI fundamentals: diffusion models, LoRA/fine-tuning, ControlNet-style conditioning, prompt engineering, and evaluation metrics (FID, CLIP, perceptual loss). You don’t need to have trained a model from scratch, but you need to understand what’s happening under the hood well enough to debug workflow failures.
  • Proficiency in Python and at least one of C++, Rust, or TypeScript. Comfortable writing production-quality code, not just scripts.
  • Familiarity with VFX data standards: OpenEXR, ACES, USD, Alembic, OpenColorIO.
  • Ability to communicate technical concepts to non-technical studio leadership. Strong written communication: you can write a clear one-pager or technical design doc.

Preferred

  • Credits on major feature films or high-profile episodic VFX (think tentpole-scale, not just indie shorts).
  • Experience with real-time rendering (Unreal Engine, virtual production stages, LED volumes).
  • Hands-on experience fine-tuning or deploying generative models (Stable Diffusion, Runway, ComfyUI, or similar).
  • Background in computer vision or image processing (optical flow, segmentation, depth estimation, …).

Skills & Requirements

Technical Skills

AI, VFX, Nuke, Houdini, Maya, After Effects, Premiere Pro, Substance 3D, diffusion models, video models, image models, animation models, 3D generation models, communication, visual effects, generative AI, production tools

Employment Type

Full-time

Level

Senior

Posted

5/2/2026
