
Machine Learning Engineer - Platform, Monetization Generative AI

TikTok

San Jose, CA

Responsibilities

About the Generative AI Production Team
The Post-Training pod under the Generative AI Production Team is at the forefront of refining and enhancing generative AI models for advertising, content creation, and beyond. Our mission is to take pre-trained models and fine-tune them to achieve state-of-the-art (SOTA) performance in vertical ad categories and multi-modal applications. We optimize models through fine-tuning, reinforcement learning, and domain adaptation, ensuring that AI-generated content meets the highest quality and relevance standards.

We work closely with pre-training teams, application teams, and multi-modal model developers (T2V, I2V, T2I) to bridge foundational AI advancements with real-world, high-performance applications. If you are passionate about pushing cognitive boundaries, optimizing AI models, and elevating AI-generated content to new heights, this is the team for you.

As a Machine Learning Platform Engineer, you will drive the development of our AI platform, ensuring scalability, efficiency, and robustness for training and serving large-scale diffusion models and multimodal generative AI systems. You will work closely with model researchers, infrastructure engineers, and data teams to optimize distributed training, inference efficiency, and production reliability.

Responsibilities
1) Architect and develop scalable and efficient AI infrastructure to support large-scale diffusion models and multi-modal generative AI workloads.
2) Optimize large model training and inference using PyTorch, Triton, TensorRT, and distributed training libraries (DeepSpeed, FSDP, vLLM); an illustrative FSDP sketch appears after this list.
3) Implement and optimize models using sequence parallelism, pipeline parallelism, and tensor parallelism to improve performance on high-throughput training clusters.
4) Scale and productionize generative AI models, ensuring efficient deployment on heterogeneous hardware environments (H100, A100, etc.).
5) Develop and integrate model distillation techniques to enhance the efficiency and performance of generative models, reducing computation costs while maintaining quality.
6) Design and maintain an automated model production pipeline for training/inference at scale, integrating distributed data processing frameworks (Ray, Spark, or custom solutions).
7) Enhance platform stability and efficiency by refining model orchestration, checkpointing, and retrieval strategies.
8) Collaborate with cross-functional teams (ML researchers, software engineers, infrastructure engineers) to ensure seamless model iteration cycles and deployments. Stay ahead of emerging trends in deep learning architectures, distributed training techniques, and AI infrastructure optimization, integrating best practices from academia and industry.
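For illustration only (this is not part of the posting): a minimal sketch of the kind of distributed training setup named in items 2 and 3 above, wrapping a stand-in PyTorch model with FSDP so that parameters, gradients, and optimizer state are sharded across GPUs. The model, dimensions, and script name are hypothetical; a production diffusion or transformer workload would differ substantially.

# Hypothetical FSDP sketch -- illustrative of the PyTorch + FSDP stack named above,
# not actual production code.
import os
import torch
import torch.nn as nn
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

def main():
    # Assumes a launch such as: torchrun --nproc_per_node=<num_gpus> train_sketch.py
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Stand-in model; a real workload would be a diffusion or transformer model.
    model = nn.Sequential(
        nn.Linear(1024, 4096), nn.GELU(), nn.Linear(4096, 1024)
    ).cuda()

    # FSDP shards parameters, gradients, and optimizer state across ranks.
    model = FSDP(model)
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for _ in range(10):
        x = torch.randn(8, 1024, device="cuda")
        loss = model(x).pow(2).mean()  # placeholder objective
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()

In practice, this pattern would be combined with the sequence, pipeline, and tensor parallelism strategies listed above, typically via libraries such as DeepSpeed.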

Qualifications

Minimum Qualifications:
1) B.S., M.S., or Ph.D. in Computer Science, Electrical Engineering, or a related field. 3+ years of hands-on experience in large-scale machine learning infrastructure and distributed AI model training.
2) Deep expertise in PyTorch, CUDA optimization, and ML frameworks such as DeepSpeed, FSDP, and vLLM. Proven experience in optimizing diffusion models, sequence parallelism, and large-scale transformer-based architectures.
3) Strong understanding of high-performance computing, low-latency inference, and GPU acceleration techniques.
4) Hands-on experience in scaling AI infrastructure, leveraging Kubernetes, Docker, Ray, and Triton inference servers. Deep understanding of AI model orchestration, scheduling, and optimization across large clusters. Proficiency in profiling and debugging large-scale model training and inference bottlenecks.
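Purely as an illustration of the profiling proficiency mentioned in item 4 above (again, not part of the posting), a short, hypothetical PyTorch sketch that uses torch.profiler to surface the operators dominating GPU time in a training step:

# Hypothetical profiling sketch: locate the operators that dominate GPU time
# during a training step, as one might when debugging training bottlenecks.
import torch
from torch.profiler import profile, ProfilerActivity

model = torch.nn.Linear(4096, 4096).cuda()  # stand-in model
x = torch.randn(64, 4096, device="cuda")

with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA]) as prof:
    for _ in range(5):
        model(x).sum().backward()

# Report the ops that account for the most CUDA time.
print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=10))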

Preferred Qualifications:
1) Experience deploying multi-modal generative AI models in production.
2) Expertise in compiler-level optimizations, TensorRT, and hardware-aware model tuning.
3) Familiarity with large-scale AI workloads in cloud environments (AWS, GCP, Azure).
4) Strong software engineering background, with a focus on scalability, efficiency, and reliability.

Job Information

[For Pay Transparency] Compensation Description (annually)

The base salary range for this position in the selected city is $145,000 - $250,000 annually.

Compensation may vary outside of this range depending on a number of factors, including a candidate's qualifications, skills, competencies and experience, and location. Base pay is one part of the Total Package that is provided to compensate and recognize employees for their work, and this role may be eligible for additional discretionary bonuses/incentives, and restricted stock units.

Benefits may vary depending on the nature of employment and the country work location. Employees have day one access to medical, dental, and vision insurance, a 401(k) savings plan with company match, paid parental leave, short-term and long-term disability coverage, life insurance, wellbeing benefits, among others. Employees also receive 10 paid holidays per year, 10 paid sick days per year and 17 days of Paid Personal Time (prorated upon hire with increasing accruals by tenure).

The Company reserves the right to modify or change these benefits programs at any time, with or without notice.

For Los Angeles County (unincorporated) Candidates:

Qualified applicants with arrest or conviction records will be considered for employment in accordance with all federal, state, and local laws including the Los Angeles County Fair Chance Ordinance for Employers and the California Fair Chance Act. Our company believes that criminal history may have a direct, adverse, and negative relationship with the following job duties, potentially resulting in the withdrawal of the conditional offer of employment:

1. Interacting and occasionally having unsupervised contact with internal/external clients and/or colleagues;

2. Appropriately handling and managing confidential information including proprietary and trade secret information and access to information technology systems; and

3. Exercising sound judgment.

Client-provided location(s): San Jose, CA, USA
Job ID: TikTok-7477732590494664967
Employment Type: Other

Perks and Benefits

  • Health and Wellness

    • Health Insurance
    • Dental Insurance
    • Vision Insurance
    • HSA
    • Life Insurance
    • Fitness Subsidies
    • Short-Term Disability
    • Long-Term Disability
    • On-Site Gym
    • Mental Health Benefits
    • Virtual Fitness Classes
  • Parental Benefits

    • Fertility Benefits
    • Adoption Assistance Program
    • Family Support Resources
  • Work Flexibility

    • Flexible Work Hours
    • Hybrid Work Opportunities
  • Office Life and Perks

    • Casual Dress
    • Snacks
    • Pet-friendly Office
    • Happy Hours
    • Some Meals Provided
    • Company Outings
    • On-Site Cafeteria
    • Holiday Events
  • Vacation and Time Off

    • Paid Vacation
    • Paid Holidays
    • Personal/Sick Days
    • Leave of Absence
  • Financial and Retirement

    • 401(K) With Company Matching
    • Performance Bonus
    • Company Equity
  • Professional Development

    • Promote From Within
    • Access to Online Courses
    • Leadership Training Program
    • Associate or Rotational Training Program
    • Mentor Program
  • Diversity and Inclusion

    • Diversity, Equity, and Inclusion Program
    • Employee Resource Groups (ERG)
