What are the responsibilities and job description for the Research Engineer — Foundation World Models for Robotics position at Gigascale Capital?
Location
Palo Alto
Employment Type
Full time
Department
Research
Overview
At Rhoda AI, we're building the full-stack foundation for the next generation of humanoid robots — from high-performance, software-defined hardware to the foundation models and video world models that control it. Our robots are designed to be generalists, capable of operating in complex, real-world environments and handling scenarios unseen in training. We work at the intersection of large-scale learning, robotics, and systems, with a research team that includes researchers from Stanford, Berkeley, Harvard, and beyond. We're not building a feature; we're building a new computing platform for physical work — and with over $400M raised, we're investing aggressively in the R&D, hardware development, and manufacturing scale-up to make that a reality.
We're looking for Research Engineers to work closely with this team on end-to-end model development. This is a hands-on role spanning the full stack: data, infrastructure, model training, and deployment. You'll help turn research ideas into scalable, working systems — including learning and leveraging world models for planning, prediction, and control.
What You'll Do
- Design and implement foundation models and world models for large-scale robotic learning
- Build and maintain data pipelines (collection, curation, filtering, augmentation) for multimodal robotic data (vision, proprioception, actions, language, video)
- Work on pre-training and post-training (fine-tuning, alignment, evaluation) of large models and world models
- Implement and experiment with different model architectures
- Develop training and evaluation frameworks for world models, including rollout quality, long-horizon prediction, and downstream task performance
- Optimize training infrastructure and workflows (distributed training, efficiency, debugging)
- Collaborate closely with researchers to translate ideas into robust, scalable implementations
- Support experiments, ablations, and real-world deployment on robotic systems
What We're Looking For
- Strong software engineering skills with a research mindset
- Experience implementing ML models end-to-end, not just running existing code
- Familiarity with the full ML pipeline: data → pre-training → post-training → evaluation → deployment
- Solid foundation in deep learning and modern ML frameworks (e.g., PyTorch, JAX)
- Ability to reason about and debug complex learning systems, including world model training and usage
- Comfortable working in an ambiguous, fast-moving startup environment
Nice to Have
- Publications at top ML/robotics conferences (e.g., NeurIPS, ICML, ICLR, CoRL, RSS, ICRA)
- PhD/Masters or equivalent research experience
- Experience with world models or generative models for control
- Experience working with large models (LLMs, vision-language models, video models, large-scale policy models)
- Experience with large-scale training infrastructure (distributed training, clusters, cloud or on-prem systems)
Why Join Us
- Work with an elite research team from Stanford, Berkeley, Harvard, and beyond
- Work on foundational models and world models for real-world robotics — not toy environments
- Tight collaboration between research and engineering (no silos)
- Direct connection between research ideas and real robotic behavior
- High ownership and impact in a small, ambitious team