
Applied AI & Optimization Engineer

Firestorm

Remote (San Diego, CA) · Senior Level
Posted 1 week ago

Perks

  • Remote OK

Skills

Python · Applied Optimization · Operations Research · Machine Learning Systems · MILP Solvers · Constraint Solvers · OR-Tools · LLMs · Combinatorial Optimization · Statistical Modeling · Scheduling · Resource Allocation · Discrete-event Simulation · Agent-based Modeling · Data Analytics · Software Engineering

About the Role

Firestorm is building the next generation of uncrewed aircraft and the advanced manufacturing systems that deliver them at speed. The Software Integration & Operations department owns the software layer that spans factory floor to cloud - the applications, automation, edge systems, and intelligence that make it possible to iterate product designs, automate advanced manufacturing, and scale production with uncompromising quality and rigor. 

Firestorm's manufacturing platform has an intelligence layer - a planning workbench, a simulation engine, an analytics surface, and eventually an AI assistant - that turns operational data into better decisions. Building this layer requires applied math, pragmatic ML systems engineering, and a deep understanding of the manufacturing domain it serves. As an Applied AI & Optimization Engineer, you will own the algorithms and systems behind this intelligence layer. Near-term, your focus is the planning workbench and simulation engine: optimization algorithms for work order scheduling, resource allocation, and conflict resolution. Longer-term, you will lead integration of open-source and custom LLMs into the platform's AI assistant - including air-gapped and on-edge deployments for DoD contexts. 
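To give a flavor of the work order scheduling and resource allocation problems this role centers on, here is a deliberately tiny sketch: assigning work orders to machines to minimize makespan. Everything in it is illustrative (the durations and machine count are made up), and it brute-forces the search with the standard library only; the production systems described above would formulate this for a MILP or constraint solver such as OR-Tools instead.

```python
from itertools import product

def best_assignment(durations, n_machines):
    """Assign each work order (given by its duration) to a machine,
    minimizing the makespan (the busiest machine's total load).

    Exhaustive search is fine for toy instances like this one; at
    factory scale this becomes a MILP or CP-SAT formulation.
    """
    best_makespan, best_plan = float("inf"), None
    for plan in product(range(n_machines), repeat=len(durations)):
        loads = [0] * n_machines
        for duration, machine in zip(durations, plan):
            loads[machine] += duration
        makespan = max(loads)
        if makespan < best_makespan:
            best_makespan, best_plan = makespan, plan
    return best_makespan, best_plan

# Hypothetical instance: five work orders, two machines.
makespan, assignment = best_assignment([4, 3, 7, 2, 5], 2)
# Total work is 21 hours, so 11 is the best achievable makespan.
```

The interesting engineering is everything this sketch omits: precedence constraints between operations, shared tooling, shift calendars, and conflict resolution when plans change mid-execution.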

What You'll Do 
  • Own the optimization algorithms behind the planning workbench and simulation engine - scheduling, resource allocation, constraint satisfaction, conflict detection. 
  • Design and implement the analytics layer of the platform: defect trends, yield analytics, throughput modeling, and operational intelligence. 
  • Lead the platform's AI assistant integration: selecting, evaluating, deploying, and fine-tuning open-source or custom LLMs for cloud, air-gapped, and on-edge contexts. 
  • Productionize optimization and ML systems in partnership with full-stack and infrastructure engineers - reliable services the platform depends on, not prototypes. 
  • Partner with domain experts in manufacturing engineering, quality, and planning to ground models and algorithms in real operational constraints. 
  • Evaluate and advocate for build-vs-buy decisions across optimization libraries, ML tooling, and model vendors. 

Required Qualifications 
  • 5+ years of engineering experience with substantial applied optimization, operations research, or ML systems work. 
  • Deep proficiency in Python; fluency with at least one optimization framework - MILP solvers, constraint solvers, OR-Tools, or equivalent. 
  • Track record of productionizing algorithmic systems - you have shipped optimization or ML into real users' hands, not just research artifacts. 
  • Strong applied math foundation: combinatorial optimization, heuristics, or statistical modeling relevant to scheduling and resource allocation problems. 
  • Demonstrated ability to partner with domain experts and translate operational constraints into model formulations. 
  • Demonstrated history of holding yourself and your teammates to a high standard, even when it creates discomfort. 

Preferred Qualifications 
  • Prior experience building scheduling, planning, or resource allocation systems for manufacturing, logistics, or similar domains. 
  • Hands-on experience deploying or fine-tuning open-source LLMs (Llama, Mistral, or similar) for constrained environments. 
  • Background with air-gapped or on-edge model deployment. 
  • Familiarity with discrete-event simulation or agent-based modeling. 
  • Prior experience in defense, aerospace, or regulated industry applications. 
