Senior Software Engineer, Machine Learning
Los Angeles, CA (in-person)
Engineering
About us
Moonware builds products to modernize airfield operations, providing the digital infrastructure to coordinate, optimize, and automate aircraft ground handling.
HALO, our flagship product, is used by airfield teams to optimize flight turnarounds. It serves as a centralized operating layer to manage and oversee tasks, communications, and performance. By enhancing operational visibility and control, HALO enables faster, more reliable, and standardized ground operations.
Our vision is to enable fully autonomous ground operations. HALO serves as the digital infrastructure to support that transition, connecting data, people, and machines to build toward automated airfields.
Moonware’s team combines aviation operations domain knowledge with software and engineering expertise from top Silicon Valley tech companies. As we scale, we’re expanding the Moonware ecosystem to support the next generation of air transportation.
About the role
Moonware is seeking a Senior Software Engineer with deep expertise in Computer Vision to help build the next generation of AI-driven capabilities powering HALO, our Ground Traffic Control platform. In this role, you’ll design, train, and deploy machine learning models that interpret real-world airfield environments (vehicles, equipment, aircraft, and operations) to enable safer, more autonomous, and more efficient airfield coordination.
While computer vision will be your primary focus, this is a cross-functional ML role, touching applied AI, data science, multimodal inference, and ML infrastructure. You’ll take ownership of models end-to-end: from dataset creation and labeling pipelines, to model training and evaluation, to building robust, real-time inference systems running at airfields across the world.
Responsibilities
Lead the development of computer vision models for tasks such as object detection, tracking, segmentation, activity recognition, and scene understanding across airfield environments
Own the ML lifecycle end-to-end: dataset creation, training, experimentation, optimization, deployment, monitoring, and iteration
Build real-time perception systems capable of running at the edge or in the cloud with strict performance, accuracy, and latency requirements
Collaborate cross-functionally with product, infrastructure, and field teams to translate operational constraints into model and system design
Develop internal ML tooling, including data pipelines, evaluation frameworks, annotation workflows, and automated testing
Extend beyond CV to support other ML/AI initiatives at Moonware, such as predictive modeling, routing/optimization, time-series forecasting, and agent-based simulation
Implement scalable training infrastructure and MLOps best practices to accelerate model iteration
Continuously improve model reliability and robustness, especially under real-world noise, sensor variability, and environmental conditions
Requirements
4+ years of experience as an ML, CV, or applied AI engineer working on production systems
Strong expertise in computer vision, including one or more of: detection, tracking, segmentation, 3D geometry, multimodal fusion, or video understanding
Proficiency with modern deep learning frameworks (PyTorch, TensorFlow) and associated tooling
Experience building and deploying ML models in real-world production environments (edge devices, cloud APIs, low-latency systems, etc.)
Strong software engineering fundamentals and proficiency in Python; familiarity with Go or another backend language is a plus
Experience with MLOps and model deployment pipelines (containerization, CI/CD, inference optimization, GPU workflows)
Solid understanding of data pipelines, labeling strategies, dataset quality, and model evaluation
Excellent cross-functional collaboration and communication skills
Comfortable with ambiguity and rapid iteration in a startup environment
This role might be for you if
You’re energized by applying computer vision to noisy, real-world environments
You enjoy designing models that directly interact with physical operations and constrained edge systems
You thrive in roles where you own models end-to-end and move quickly from experiments to production
You want to help build the perception layer that enables autonomy and intelligent coordination at airfields worldwide
You’re excited by aviation, mobility, robotics, or autonomy
Nice-to-haves
Experience with multimodal ML (vision + GPS, telemetry, or sensor fusion)
Background in robotics, autonomous vehicles, or spatial computing
Experience optimizing models for edge inference (TensorRT, ONNX, quantization, pruning)
Familiarity with simulation environments or synthetic data generation
Previous experience at an early-stage startup
Understanding of geospatial data, fleet telemetry, or time-series prediction
Apply now
Email careers@moonware.com to apply.