Minimum qualifications:
- Bachelor's degree or equivalent practical experience.
- 4 years of experience in data analytics, Trust and Safety, policy, cybersecurity, or a related field.
Preferred qualifications:
- Bachelor's degree in a related field, or equivalent practical experience.
- Experience using statistical analysis and hypothesis testing to analyze Machine Learning (ML) model performance, or experience working on Large Language Models (LLMs).
- Excellent investigative, communication, and problem-solving skills, with knowledge of innovation, technology, and Google products.
- Excellent presentation skills with the ability to collaborate cross-functionally at multiple levels.
- Excellent critical thinking skills with attention to detail in a changing environment.
About the job
The Trust and Safety team's mission is to protect and respect Google's users by ensuring online safety.

In this role, you will collaborate with teams within and outside of Trust and Safety. You will be a thought leader and partner with the Engineering, Product, Legal, Policy, and Scaled Operations teams to set strategy, enable integration across teams, and solve ecosystem and first-party abuse. You will drive intent and execution excellence to deliver cross-functional initiatives.

You will be responsible for reducing policy-violating activity across all Generative AI products for Search and Google Assistant. You will also enable the deployment of defenses to stop abuse, and lead process improvement efforts to increase the speed and quality of response to abuse. You will identify platform needs and influence enforcement capability design.

At Google we work hard to earn our users’ trust every day. Trust & Safety is Google’s team of abuse-fighting and user trust experts working daily to make the internet a safer place. We partner with teams across Google to deliver bold solutions in abuse areas such as malware, spam, and account hijacking. A team of Analysts, Policy Specialists, Engineers, and Program Managers, we work to reduce risk and fight abuse across all of Google’s products, protecting our users, advertisers, and publishers across the globe in over 40 languages.
Responsibilities
- Initiate and lead projects to protect Gemini users from abuse, inappropriate content, and fraud through investigation, prevention, and removal of safety issues. Analyze and identify new abuse trends.
- Work across multiple Gemini features in the multimodal input space to find abuse patterns. Enhance operational workflows through process improvements. Identify automation and efficiency opportunities.
- Develop and communicate strategy and goals focused on user trust and safety issues in Gemini. Manage responsibilities across multiple product areas. Build strong cross-functional partnerships with Product, Policy, Legal, and Engineering teams, as well as other teams within Trust and Safety, to execute solutions.
- Work with other members of the team to prevent abuse issues. Apply and share best practices on product and system knowledge. Work with graphic, controversial, or upsetting content.