Research Scientist — Foundation World Models for Robotics

Gigascale Capital
Palo Alto, CA · Full Time
POSTED ON 5/7/2026
AVAILABLE BEFORE 6/5/2026
Location: Palo Alto
Employment Type: Full time
Location Type: On-site
Department: Research

Overview

At Rhoda AI, we're building the full-stack foundation for the next generation of humanoid robots — from high-performance, software-defined hardware to the foundational models and video world models that control it. Our robots are designed to be generalists capable of operating in complex, real-world environments and handling scenarios unseen in training. We work at the intersection of large-scale learning, robotics, and systems, with a research team that includes researchers from Stanford, Berkeley, Harvard, and beyond. We're not building a feature; we're building a new computing platform for physical work — and with over $400M raised, we're investing aggressively in the R&D, hardware development, and manufacturing scale-up to make that a reality.

What You'll Do

  • Drive research on foundational models and world models for robotics (representation learning, dynamics/prediction, planning, control)
  • Formulate research problems and hypotheses grounded in real robotic autonomy needs
  • Design and run rigorous experiments at scale, including ablations, benchmarking, and evaluation methodology
  • Develop and evaluate model architectures for long-horizon prediction, rollout quality, and downstream robotic task performance
  • Explore and advance pre-training and post-training (fine-tuning, alignment, evaluation) of large multimodal models
  • Collaborate closely with Research Engineers to translate new ideas into scalable training pipelines and reliable systems
  • Communicate results clearly through internal writeups, talks, and research reviews
  • Publish and present work at top-tier venues

What We're Looking For (Required)

  • PhD in a relevant field (e.g., ML, Robotics, Computer Science, Electrical Engineering, Applied Math, Computer Vision, or closely related)
  • Strong publication record demonstrating high-quality research output (e.g., NeurIPS, ICML, ICLR, CoRL, RSS, ICRA, CVPR, etc.)
  • Deep understanding of modern machine learning, with expertise in at least several of the following:
    • Deep learning and representation learning
    • Sequence modeling / transformers
    • Generative modeling (e.g., diffusion, autoregressive, latent-variable models)
    • Model-based learning, planning, and/or control
    • RL / imitation learning for robotics
  • Strong research taste and independence: ability to define problems, execute, interpret results, and iterate quickly
  • Proficiency with at least one modern ML stack (e.g., PyTorch or JAX) and the ability to implement research ideas in code
  • Clear written and verbal communication skills
  • Comfort operating in ambiguity in a fast-moving startup environment

Nice To Have (But Not Required)

  • Prior work specifically on world models (latent dynamics, predictive models, model-based RL/planning, long-horizon rollouts)
  • Experience with large-scale multimodal training (VLMs, video models, action-conditioned models, large policy models)
  • Experience working with robotic learning data (real-world logs, teleop, simulation-to-real, multimodal sensor streams)
  • Hands-on experience deploying learning-based components on real robots
  • Familiarity with distributed training and performance debugging (multi-GPU / multi-node)

Why This Role

  • Work with an elite research team from Stanford, Berkeley, Harvard, etc.
  • Research that directly connects to real-world robotic autonomy — not toy benchmarks
  • Tight collaboration between research and engineering (no silos)
  • High ownership and ability to shape the research agenda
  • Opportunity to publish meaningful work while seeing it come alive on real robotic systems

Salary.com Estimation for Research Scientist — Foundation World Models for Robotics in Palo Alto, CA
$128,122 to $162,704