Minimum qualifications:
- Bachelor's degree in Computer Science, Electrical Engineering, or equivalent practical experience.
- 15 years of experience building software and distributed systems.
- 10 years of experience with machine learning algorithms and tools (e.g., PyTorch, TensorFlow, JAX), artificial intelligence, and deep learning models and techniques (e.g., LLMs, NLP).
- 10 years of experience with hardware and software design, data structures and algorithms, machine learning, and with customer-facing products.
- 10 years of experience with private and public cloud design considerations and limitations in the areas of virtualization, global infrastructure, distributed ML and HPC systems, load balancing, networking, massive data storage, and security.
Preferred qualifications:
- Master's degree in Computer Science, Electrical Engineering, or related field.
- Experience technically leading and successfully delivering large-scale, cloud-based ML solutions for Training or Serving large models.
- Experience effectively bringing innovative software solutions to market.
- Ability to work cross-functionally, partnering with groups such as Sales, Engineering, Product Management, Product Marketing, UX, and UI, brokering trade-offs with stakeholders and understanding their needs.
- Excellent organization, problem-solving, and prioritization skills.
- Outstanding teamwork and communication skills.
About the job
Cloud ML Compute Services (CMCS) focuses on leveraging Google's leadership and expertise in AI/ML and cloud to build the best Cloud ML platform, capable of meeting the needs of the most demanding, innovative, and cutting-edge ML workloads.
As a Principal Engineer, you will be responsible for driving the Google CMCS technical strategy for ML Frameworks and Models, enabling massive-scale ML services powered by both GPUs and TPUs. You will provide technical leadership in this critical, emerging AI/ML cloud use case, serving the highly dynamic needs of AI/ML compute by accelerating Training and Serving of cutting-edge models (e.g., LLMs, MoE, Diffusion, Ranking/Recommendation) atop exceptional AI hardware (e.g., Google TPUs and NVIDIA GPUs), over the most popular ML Frameworks (e.g., PyTorch, JAX, TensorFlow). The ML stack features real-time scalability via model/data parallelization of massively distributed training, development of the latest ML models, performance tuning, accuracy optimization, and scalable, low-latency serving of both first-party and third-party models.
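As a rough illustration of the model/data parallelism mentioned above, the sketch below shows a minimal data-parallel training step in JAX using jax.pmap. The toy linear model, loss, learning rate, and synthetic data are assumptions made purely for this example; they are not drawn from Google's CMCS stack.

```python
# Minimal, illustrative data-parallel training sketch in JAX.
# Everything here (model, loss, hyperparameters, data) is a placeholder.
import functools

import jax
import jax.numpy as jnp


def init_params(key):
    """Initialize a toy linear model y = x @ w + b."""
    k_w, _ = jax.random.split(key)
    return {"w": jax.random.normal(k_w, (128, 10)) * 0.01,
            "b": jnp.zeros((10,))}


def loss_fn(params, x, y):
    preds = x @ params["w"] + params["b"]
    return jnp.mean((preds - y) ** 2)


@functools.partial(jax.pmap, axis_name="batch")
def train_step(params, x, y):
    """One SGD step, replicated across all local TPU/GPU/CPU devices."""
    grads = jax.grad(loss_fn)(params, x, y)
    # Average gradients across replicas so every device stays in sync.
    grads = jax.lax.pmean(grads, axis_name="batch")
    return jax.tree_util.tree_map(lambda p, g: p - 0.1 * g, params, grads)


if __name__ == "__main__":
    n_dev = jax.local_device_count()
    # Replicate parameters onto every device; shard the batch across devices.
    params = jax.device_put_replicated(init_params(jax.random.PRNGKey(0)),
                                       jax.local_devices())
    x = jnp.ones((n_dev, 32, 128))  # per-device batch of 32 examples
    y = jnp.ones((n_dev, 32, 10))
    params = train_step(params, x, y)
    print(jax.tree_util.tree_map(lambda p: p.shape, params))
```

The same pattern scales from a single host to large accelerator pods by sharding the global batch across devices and averaging gradients with a collective, which is the core idea behind the data-parallel half of model/data parallelization.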
In this role, you will drive, execute, and deliver the technical strategy for very large-scale training and inference services on GCP, and define how these services integrate with other Google services and product areas such as Core ML, GDM, Storage, GKE, and Vertex AI.
Google Cloud accelerates every organization's ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google's cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems.
The US base salary range for this full-time position is $278,000-$399,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position across all US locations. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process.
Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google.
Responsibilities
- Design, build, and deploy compelling solutions for GPU/TPU ML workloads that leverage GPUs, TPUs, and highly scalable hardware and software infrastructure.
- Build strategic alignment with major organizations across Google that contribute to the ML landscape, creating mutually beneficial joint goals and executing on them.
- Work across Engineering teams that design, build, and implement both hardware and software, spanning infrastructure areas including platforms, chip development, compute, storage, networking, and data analytics.
- Provide leadership for cloud developer technology inside Google and manage collaboration with cross-functional Engineering teams to streamline and improve adoption of Google Cloud Platform capabilities, both within Google and for the cloud industry at large.
- Optimize the latest emerging ML model types and benchmarks, as well as common ML frameworks such as PyTorch, TensorFlow, and JAX, on GCP.