Minimum qualifications:
- Master's degree in Statistics, Data Science, Mathematics, Physics, Economics, Operations Research, Engineering, or a related quantitative field or equivalent practical experience.
- 8 years of experience solving product or business problems, coding (e.g., Python, R, SQL), querying databases, or performing statistical analysis; or 6 years of work experience with a PhD degree.
- Experience in scientific methodologies, data analysis, and operations research with machine learning and artificial intelligence.
- Experience supervising team members, from junior to executive levels, across multiple office locations.
- Experience collaborating cross-functionally with diverse teams, managing stakeholders, and working towards common goals.
- Experience with fraud, security, and threat analysis in the context of Internet-related products or activities, especially with Generative AI.
- Excellent verbal and written communication skills with the ability to articulate concepts to technical and non-technical stakeholders.
- Excellent problem-solving and critical thinking skills with attention to detail.
About the job
Trust and Safety is Google's team of abuse-fighting and user trust experts working to make the internet a safer place. A diverse team of Analysts, Policy Specialists, Technical Experts, and Program Managers, we work to reduce risk and fight abuse across all of Google's products, protecting our users, advertisers, and publishers across the globe. Within the Trust and Safety organization, Data Science and Analytics is part of the Insights and UX teams, which leverage the power of data and research to deliver insights that inform decision-making, drive operational excellence, and foster user trust in Google products.
In this role, you will work with a global team of data scientists and analysts, drive roadmaps and partnerships across the team's pillars, and foster the growth and happiness of a high-performing team. You will thrive in a cross-product environment, enjoy problem solving, and understand how to use the power of data science to quantify and optimize operations. You will protect users, do the right thing, and ensure compliance requirements are met. You will collaborate and communicate with a multi-disciplinary team of engineers and abuse analysts on a wide range of problems. You will use your communication skills to represent the team to a global audience, translating the team's technical work and connecting it to business impact.
The US base salary range for this full-time position is $177,000-$266,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. The range displayed on each job posting reflects the minimum and maximum target salaries for the position across all US locations. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process.
Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google.
Responsibilities
- Lead a team of high-performing data scientists and analysts across the Americas (AMER) and Europe, Middle East, and Africa (EMEA) regions. Foster team growth, culture, innovation, and collaboration for a team that consistently partners with other teams to drive impact.
- Define and drive the team roadmap and partnerships, identifying areas where data science can multiply Trust and Safety's impact in keeping users safe, and prioritizing the team's engagements across numerous opportunities.
- Partner with teams of diverse disciplines and focus areas.
- Articulate the team's work and deliverables to a diverse audience, translating technical concepts to business impact.
- Identify and support the team in solving for both existing and emerging data science opportunities (e.g., in AI testing standards, responsible AI and risk identification and quantification).