Head of AI Engineering
Loft Orbital
Software Engineering, Data Science
Toulouse, France
AI for Space is Loft's newest business unit. We are building the Loft Mission Store: an open platform where AI companies publish their models, satellite operators deploy them to orbit, and the whole system produces structured intelligence instead of raw imagery. The team is small, the ambition is large, and the satellites are real.
This is the most senior technical hire on the AI for Space team. You own technical strategy and execution for the AI Marketplace and Loft Labs ML tooling: you define what gets built and how, bridge the gap between research (Loft Labs) and production (implementation engineers), and set the technical bar for every AI service that ships on the platform. You report to the General Manager, AI for Space, and work closely with the broader Loft engineering and Solutions & Architecture (CTO platform) organizations to ensure AI services integrate cleanly with Cockpit (the satellite operations platform) and the partner ecosystem. A critical part of this role is drawing clear boundaries between what the AI team builds (marketplace-specific, ML-specific capabilities) and what lives with existing platform teams.
About this role:
Define and own the technical roadmap for the AI Marketplace and all AI services, working closely with the Head of Product Management.
Bridge Loft Labs research outputs and production-grade service delivery.
Hire and lead a team of AI services engineers (initially 2-3, growing to 5-7) responsible for the AI-specific services that partners and solution engineers need to build end-to-end solutions on the marketplace (data and model sandboxes, test environments, simulation, super resolution, multimodal orchestration).
Collaborate closely with the rest of engineering to set architecture standards for ML model deployment, inference pipelines, and edge compute (Hub Compute).
Work with partners (Helsing, Mistral, NVIDIA, Kayrros, and others) to integrate third-party AI models into the platform.
Define the sandbox, testing, and certification pipeline for AI services, leveraging existing TestBench/SEAL infrastructure where possible.
Collaborate with Cockpit and ground software teams on tasking interfaces, orchestration, and the regulatory compliance layer (export control, sensitive areas, data retention).
Must Haves:
7-15 years of hands-on engineering experience, with meaningful time building ML/AI systems.
Track record of shipping production ML systems, not just research prototypes.
Experience building and managing engineering teams (3-10 people).
Strong opinions on ML infrastructure, model serving, and deployment pipelines, held with intellectual honesty.
Comfortable across the stack: model optimization, API design, infrastructure, and team leadership.
Direct, low-ego communication. You explain your reasoning and change your mind when evidence warrants it.
Nice to Haves:
Experience with edge compute, embedded ML, or on-device inference (NVIDIA Jetson, TensorRT, ONNX).
Background in space, defense, or geospatial systems.
Familiarity with satellite operations, Earth observation data, or RF signal processing.
Some of our Awesome Benefits:
- Equity, we want you to have an active role in our success
- Up to 35 days of Paid Time Off (vacations & RTT) and flexible working hours, we want you to be at your best
- Health and life insurance, we care about your health
- Lunch Vouchers, because let’s be honest, we love food! (we even have a Slack channel about it: #loft-gourmand)
- Cross-office travel opportunities between San Francisco, Colorado, and Toulouse to learn from our differences
- Company and team off-sites and many other events to work & celebrate together
- Relocation assistance to Toulouse when applicable