Summary
Posted: Feb 10, 2025
Weekly Hours: 40
Role Number: 200590698
The System Intelligence and Machine Learning (SIML) organization at Apple is looking for visual generative modeling research engineers to build the next generation of experiences and capabilities in Apple Intelligence. This is an opportunity to join a dedicated core group in the Intelligent System Experience group at Apple, which built the foundational image generation technologies behind the Image Playground, Genmoji, Image Wand, and Photos Clean Up experiences that shipped as part of Apple Intelligence with iOS 18.2. Join us as we transform the way billions of users express themselves, create, and communicate on Apple platforms!

We are looking for world-class technology leaders who can translate ideas into action and who have the hands-on expertise to train, deploy, and optimize large-scale, generative ML-based features and workflows on device. In this role, your focus will be on optimizing large visual generative models to enhance the performance of applications such as image/video generation, editing, personalized avatars, stylization, and more. You will work in a highly cross-functional setting, provide critical technical expertise and leadership, and be responsible for delivering ML solutions that serve the intended experiences while respecting practical constraints such as memory, latency, and power. Your innovations will significantly impact the entire ML model lifecycle of Apple Intelligence.

Selected references to our team's work:
- https://www.apple.com/newsroom/2024/12/apple-intelligence-now-features-image-playground-genmoji-and-more/
- https://support.apple.com/guide/iphone/create-genmoji-with-apple-intelligence-iph4e76f5667/ios
- https://support.apple.com/guide/iphone/create-original-images-with-image-playground-iph0063238b5/ios
Description
We're looking for strong ML software engineers and leaders to drive the development of on-device Apple Intelligence visual generation models. This includes defining and leading the execution of model compression and distillation, and integrating models into the full Apple Intelligence user experience. We expect the candidate to have strong experience in efficient ML model development and a passion for shipping machine learning models on device. The responsibilities associated with this position range from algorithm design and implementation to integrating research into production frameworks, in close collaboration with ML researchers, software engineers, and hardware and design teams across functions.

Your primary responsibilities will include:
- Designing, implementing, and deploying innovative conditional visual generative models
- Implementing novel and powerful model compression and distillation algorithms, and efficient model architectures
- Performing research in emerging areas of efficient neural network development, including quantization, pruning, and compression algorithms, with a focus on visual generation models
- Staying abreast of the latest trends, technologies, and best practices in machine learning, visual generative models, multi-modal foundation models, computer vision, and natural language understanding; we encourage publishing novel research at top ML conferences
- Effectively communicating results and insights in a highly cross-functional team
Minimum Qualifications
- Master's or Ph.D. in Computer Science, Computer Engineering, or a similarly related field, or comparable professional experience
- Strong programming skills in Python and C++
- Proficiency with PyTorch or other deep learning frameworks
- Hands-on experience training or leveraging large-scale visual generative models (e.g., diffusion models) for real-world user experiences and computer vision applications
Preferred Qualifications
- Strong background in research and innovation, demonstrated through publications in top-tier journals or conferences, patents, or impactful industry experience. Proven leadership in both applied research and development
- Excellent written and verbal communication skills, and the ability to work hands-on in cross-functional teams
- Familiarity with model compression algorithms, including quantization, pruning, and distillation, and experience optimizing large diffusion models or language models
- Experience with hardware architecture, software & hardware co-design
Pay & Benefits
- At Apple, base pay is one part of our total compensation package and is determined within a range. This provides the opportunity to progress as you grow and develop within a role. The base pay range for this role is between $143,100 and $264,200, and your base pay will depend on your skills, qualifications, experience, and location.
Apple employees also have the opportunity to become an Apple shareholder through participation in Apple's discretionary employee stock programs. Apple employees are eligible for discretionary restricted stock unit awards, and can purchase Apple stock at a discount if voluntarily participating in Apple's Employee Stock Purchase Plan. You'll also receive benefits including: Comprehensive medical and dental coverage, retirement benefits, a range of discounted products and free services, and for formal education related to advancing your career at Apple, reimbursement for certain educational expenses - including tuition. Additionally, this role might be eligible for discretionary bonuses or commission payments as well as relocation. Learn more about Apple Benefits.
Note: Apple benefit, compensation and employee stock programs are subject to eligibility requirements and other terms of the applicable plan or program.
More
- Apple is an equal opportunity employer that is committed to inclusion and diversity. We take affirmative action to ensure equal opportunity for all applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, Veteran status, or other legally protected characteristics. Learn more about your EEO rights as an applicant.