Added: 2025-08-29 14:30.43
Updated: 2025-09-01 19:05.03

Principal Analyst, Content Adversarial Red Team

Dublin, Ireland

Type: FULL_TIME

Category: BUSINESS_STRATEGY

Advertisement
Is hybrid: No
Is remote: No
Employer: Google

Minimum qualifications:


Preferred qualifications:

About the job

Trust & Safety team members are tasked with identifying and taking on the biggest problems that challenge the safety and integrity of our products. They use technical know-how, excellent problem-solving skills, user insights, and proactive communication to protect users and our partners from abuse across Google products like Search, Maps, Gmail, and Google Ads. On this team, you're a big-picture thinker and strategic team-player with a passion for doing what’s right. You work globally and cross-functionally with Google engineers and product managers to identify and fight abuse and fraud cases at Google speed - with urgency. And you take pride in knowing that every day you are working hard to promote trust in Google and ensuring the highest levels of user safety.

In this pivotal role, you will use your direct experience in adversarial testing and red teaming, particularly of Generative AI, so that you design and direct complex red teaming operations, creating innovative methodologies to uncover novel content abuse risks. You will act as a key advisor to executive leadership, leveraging your influence across Product, Engineering, and Policy teams to drive strategic safety initiatives.

As a senior member of the team, you will mentor analysts, fostering a culture of continuous learning and sharing your deep expertise in adversarial techniques. You will also represent Google's AI safety efforts in external forums, collaborating with industry partners to develop best practices for responsible AI and solidifying our position as a thought leader in the field.

This role may be exposed to graphic, controversial, and/or upsetting content.

Responsibilities

  • Design, develop, and oversee the execution of innovative and highly complex red teaming strategies to uncover content abuse risks. Create and refine new red teaming methodologies, strategies and tactics.
  • Influence across Product, Engineering, Research and Policy to drive the implementation of strategic safety initiatives. Be a key advisor to executive leadership on complex content safety issues, providing actionable insights and recommendations.
  • Mentor and guide junior and senior analysts, fostering excellence and continuous learning within the team. Act as a subject matter expert, sharing deep knowledge of adversarial and red teaming techniques, and strategic risk mitigation.
  • Represent Google's AI safety efforts in external forums and conferences. Contribute to the development of industry-wide best practices for responsible AI development.
Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.
Advertisement
Click here to apply and get more details about this job!
It will open in a new tab.
Terms and Conditions - Webmaster - Privacy Policy - Pro coding!