Perception Lead
Kinisi Robotics
About the Role
We’re building next-generation robotic manipulation systems that do real work, have real impact, and operate robustly in warehouse environments. As Perception Lead, you’ll own the design, development, and deployment of our real-time perception stack: from object segmentation, detection, and tracking, through semantic 3D scene understanding, to actionable grasp and placement proposals. You’ll collaborate with stakeholders to understand requirements, and you’ll prototype, benchmark, and optimise cutting-edge methods while steering technical direction and mentoring a small team.
This is an excellent opportunity to define and deliver high-performance perception systems at a robotics company that ships its own full-stack platforms, and to see your work deployed in the real world.
What You’ll Do
- Own the perception architecture: sensor calibration and fusion, 3D scene understanding, object pose estimation, and grasp proposal.
- Prototype and deploy modern perception algorithms, including transformer-based models, across RGB-D, point cloud, and tactile modalities.
- Deliver real-time inference pipelines using PyTorch, TensorRT, and CUDA on embedded accelerators (e.g., Jetson).
- Integrate with ROS 2: clean modular nodes, lifecycle management, deterministic scheduling, robust fallback behavior.
- Collaborate tightly with control, planning, and hardware to ensure robust closed-loop performance in real-world robot tasks.
- Guide and mentor junior perception engineers, reviewing code and helping shape research and implementation direction.
- Own performance benchmarks and real-world evaluation loops, continuously improving speed, accuracy, and reliability.
Minimum Qualifications
- PhD or equivalent industry experience in Computer Vision, Robotics, or Machine Learning.
- 5+ years’ experience delivering perception systems for real-time robotics (e.g., manipulation, SLAM, autonomous navigation).
- Strong proficiency in modern C++ (17/20) and Python for high-performance robotics software.
- Deep experience with PyTorch (training and deployment) and GPU optimisation (CUDA/TensorRT).
- Strong working knowledge of ROS 2 (rclcpp, lifecycle nodes, real-time QoS, DDS).
- Hands-on experience with transformer-based models (e.g., DETR, SAM, DINOv2, ViT, CLIP) for visual understanding.
- Track record of high-impact real-world deployments in robotics.
Preferred Qualifications
- Experience leading perception efforts for manipulation platforms (e.g., bin picking, mobile manipulation, robot arms).
- Familiarity with multi-modal perception (vision + depth + tactile/force).
- Strong grasp of 3D geometry, calibration, and SLAM.
- Published in top-tier venues (e.g., CVPR, RSS, ICRA, CoRL).
- Contributions to open-source vision/robotics projects.
- Comfortable operating in a fast-paced, research-driven, product-oriented environment.
What We Offer
- Competitive salary and equity
- Comprehensive health and dental cover
- Conference opportunities
- Excellent office space
- A deeply technical and collaborative team
If you’re excited to lead perception at a company where research meets real-world deployment, we’d love to hear from you.