Minimum qualifications:
- Bachelor's degree or equivalent practical experience.
- 7 years of experience in adversarial testing, red teaming, GenAI/AI safety, GenAI/AI ethics or responsibility, or similar fields.
Preferred qualifications:
- Master's degree in Computer Science, Information Security, Artificial Intelligence, or a related field.
- Understanding of content moderation policies and best practices.
- Proficiency in multiple languages, especially those relevant to Google's global user base.
- Ability to think strategically and identify emerging threats and vulnerabilities.
- Excellent leadership skills and ability to influence and inspire others.
About the job
Trust & Safety team members are tasked with identifying and taking on the biggest problems that challenge the safety and integrity of our products. They use technical know-how, excellent problem-solving skills, user insights, and proactive communication to protect users and our partners from abuse across Google products like Search, Maps, Gmail, and Google Ads. On this team, you're a big-picture thinker and strategic team player with a passion for doing what's right. You work globally and cross-functionally with Google engineers and product managers to identify and fight abuse and fraud cases at Google speed, with urgency. And you take pride in knowing that every day you are working hard to promote trust in Google and ensure the highest levels of user safety.
At Google we work hard to earn our users' trust every day. Trust & Safety is Google's team of abuse-fighting and user-trust experts working daily to make the internet a safer place. We partner with teams across Google to deliver bold solutions in abuse areas such as malware, spam, and account hijacking. A diverse team of Analysts, Policy Specialists, Engineers, and Program Managers, we work to reduce risk and fight abuse across all of Google's products, protecting our users, advertisers, and publishers across the globe in over 40 languages.
Responsibilities
- Identify and mitigate high-complexity content risks through innovative red teaming strategies.
- Partner with diverse teams (product, engineering, research) to understand vulnerabilities and develop actionable solutions.
- Guide and support the development of analysts, sharing knowledge and expertise to build a high-performing team.
- Conduct analysis of complex issues, providing clear recommendations for decision-making.
- Lead the development and implementation of AI safety programs, advocating for best practices and AI safety initiatives.