Minimum qualifications:
- Bachelor's degree or equivalent practical experience.
- 10 years of experience in Trust and Safety, risk mitigation, cybersecurity, or related fields.
- 6 years of experience in adversarial testing, red teaming, jailbreaking for trust and safety, or a related field, with a focus on AI safety.
Preferred qualifications:
- Master's degree or PhD in a relevant field (e.g., computer science, information security, artificial intelligence).
- Experience in an individual contributor role within a technology company, focused on product safety or risk management.
- Experience working closely with both technical and non-technical teams on complex, dynamic solutions or automations to improve user safety.
- Understanding of AI systems and architecture, including their specific vulnerabilities, machine learning, and AI responsibility.
- Ability to effectively articulate complex concepts to both technical and non-technical stakeholders.
- Exceptional written and verbal communication skills.
About the job
Trust & Safety team members are tasked with identifying and taking on the biggest problems that challenge the safety and integrity of our products. They use technical know-how, excellent problem-solving skills, user insights, and proactive communication to protect users and our partners from abuse across Google products like Search, Maps, Gmail, and Google Ads. On this team, you're a big-picture thinker and strategic team player with a passion for doing what’s right. You work globally and cross-functionally with Google engineers and product managers to identify and fight abuse and fraud cases at Google speed, with urgency. And you take pride in knowing that every day you are working hard to promote trust in Google and ensure the highest levels of user safety.
In this pivotal role, you will draw on your direct experience in adversarial testing and red teaming, particularly of Generative AI, to design and direct complex red teaming operations and create innovative methodologies that uncover novel content abuse risks. You will act as a key advisor to executive leadership, leveraging your influence across Product, Engineering, and Policy teams to drive strategic safety initiatives.
As a senior member of the team, you will mentor analysts, fostering a culture of continuous learning and sharing your deep expertise in adversarial techniques. You will also represent Google's AI safety efforts in external forums, collaborating with industry partners to develop best practices for responsible AI and solidifying our position as a thought leader in the field.
Responsibilities
- Design, develop, and oversee the execution of innovative and highly complex red teaming strategies to uncover content abuse risks. Create and refine new red teaming methodologies, strategies, and tactics.
- Influence across Product, Engineering, Research, and Policy teams to drive the implementation of strategic safety initiatives. Serve as a key advisor to executive leadership on complex content safety issues, providing actionable insights and recommendations.
- Mentor and guide junior and senior analysts, fostering excellence and continuous learning within the team. Act as a subject matter expert, sharing deep knowledge of adversarial and red teaming techniques, and strategic risk mitigation.
- Represent Google's AI safety efforts in external forums and conferences. Contribute to the development of industry-wide best practices for responsible AI development.