What are the responsibilities and job description for the Multimodal Generative Modeling Research Engineer - SIML, ISE position at Apple?
Summary
Posted:
Weekly Hours: 40
Role Number: 200591480
Do you believe generative models can transform creative workflows and smart assistants used by billions? Do you believe they can fundamentally shift how people interact with devices and communicate? Our Scene Understanding team strives to turn cutting-edge research into compelling user experiences that realize these goals and more, working on Apple Intelligence technologies such as Image Playground, Genmoji, Generative Memories, Semantic Search, and many more.
We are looking for senior technical leaders experienced in architecting and deploying production-scale multimodal ML. An ideal candidate can lead diverse cross-functional efforts spanning ML modeling, prototyping, validation, and private learning. Solid ML fundamentals and the ability to place research contributions in the context of the state of the art are essential to the role, as is experience training and adapting large language models. We are the Intelligence System Experience (ISE) team within Apple’s software organization.
The team works at the intersection of multimodal machine learning and system experiences. Experiences such as Spotlight Search, Photos Memories, Generative Playgrounds, Stickers, and Smart Wallpapers are all areas the team has had a significant part in delivering through core ML technologies. These user-facing experiences are backed by production ML workflows, which our team scales through distributed training. The team also focuses on optimizing and adapting LLMs to best suit on-device user experiences.
SELECTED REFERENCES TO OUR TEAM’S WORK:
- https://machinelearning.apple.com/research/introducing-apple-foundation-models
- https://machinelearning.apple.com/research/stable-diffusion-coreml-apple-silicon
- https://machinelearning.apple.com/research/on-device-scene-analysis
- https://machinelearning.apple.com/research/panoptic-segmentation
Description
We are looking for a candidate with a proven track record in applied ML research. Responsibilities include training large-scale multimodal (2D/3D vision-language) models on distributed backends, deploying compact neural architectures efficiently on device, and learning policies that can be personalized to the user in a privacy-preserving manner. Ensuring quality in the wild, with an emphasis on fairness and model robustness, is an important part of the role. You will work closely with ML researchers, software engineers, and hardware and design teams across functions. The primary responsibilities center on enriching the multimodal capabilities of large language models, with a user experience initiative focused on aligning image and video content to the language model's representation space for visual actions and multi-turn interactions.
Minimum Qualifications
* M.S. or PhD in Computer Science or a related field such as Electrical Engineering, Robotics, Statistics, Applied Mathematics, or equivalent experience.
* Hands-on experience training LLMs and adapting pre-trained LLMs for downstream tasks and alignment
* Modeling experience at the intersection of NLP and vision
* Proficiency in ML toolkit of choice, e.g., PyTorch
* Strong programming skills in Python
Preferred Qualifications
* Familiarity with distributed training
* Strong programming skills in C/C++ or Objective-C
Pay & Benefits
At Apple, base pay is one part of our total compensation package and is determined within a range. This provides the opportunity to progress as you grow and develop within a role. The base pay range for this role is between $175,800 and $312,200, and your base pay will depend on your skills, qualifications, experience, and location.
Apple employees also have the opportunity to become an Apple shareholder through participation in Apple’s discretionary employee stock programs. Apple employees are eligible for discretionary restricted stock unit awards, and can purchase Apple stock at a discount if voluntarily participating in Apple’s Employee Stock Purchase Plan. You’ll also receive benefits including: Comprehensive medical and dental coverage, retirement benefits, a range of discounted products and free services, and for formal education related to advancing your career at Apple, reimbursement for certain educational expenses — including tuition. Additionally, this role might be eligible for discretionary bonuses or commission payments as well as relocation. Learn more about Apple Benefits.
Note: Apple benefit, compensation and employee stock programs are subject to eligibility requirements and other terms of the applicable plan or program.
Apple is an equal opportunity employer that is committed to inclusion and diversity. We take affirmative action to ensure equal opportunity for all applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, Veteran status, or other legally protected characteristics. Learn more about your EEO rights as an applicant.