Is hybrid: No
Is remote: No
Employer: Google
Minimum qualifications:
- Bachelor's degree or equivalent practical experience.
- 2 years of experience in data analysis, including identifying trends, generating summary statistics, and drawing insights from quantitative and qualitative data.
- 2 years of experience managing projects and defining project scope, goals, and deliverables.
Preferred qualifications:
- Experience working with abuse, spam, fraud, or malware.
- Experience working on product policy analysis and identifying policy risks.
- Experience using statistical analysis and hypothesis testing.
- Experience analyzing ML model performance or working on LLMs.
- Excellent problem-solving and critical thinking skills with attention to detail in an ever-changing environment.
About the job
At Google, we work hard to earn our users' trust every day. Gaining and retaining this trust is critical to Google's success. We defend Google's integrity by fighting spam, fraud, and abuse, and by developing and communicating product policies. The Trust and Safety team reduces risk and protects the experience of our users and business partners across Google's full range of products. We work with a variety of teams, from Engineering to Legal and Public Policy, to set policies and combat fraud and abuse.
As an engineering analyst, you will discover, measure, and mitigate user trust risks in Search products. You will build relationships and partner with engineers, product managers, data scientists, and other functions. You will work with a team of analysts creating metrics, templates, and datasets to improve trust and safety protections. You will learn about product design details, product policies, and quality signals. You will analyze existing product protections, evaluate content, and help improve policy definitions. You will also enable the deployment of defenses to stop abuse, and manage process improvements that strengthen the response to abuse. You will identify platform needs, influence enforcement capability design, and enable professional and career success for your team. We resolve problems either by working with engineers on automated product protections or through vendor support. This role will be exposed to graphic, controversial, or upsetting content.
Responsibilities
- Design and implement product metrics to benchmark user trust risks and track improvements over time; create datasets for engineers to evaluate and improve sensitive content classifiers; and deliver leadership and impact for the broader Trust and Safety Search, Assistant, and Geo team.
- Review or be exposed to sensitive or violative content as part of the core role.
- Perform on-call responsibilities on a rotating basis, including weekend and holiday coverage.
- Partner with Search teams on project scoping, risk assessments, and prioritization, and manage projects and cross-functional initiatives within Google, interacting with executive stakeholders from engineering, legal, and product teams.
- Build large language model (LLM)-based models that evaluate content safety according to our product policies, while becoming a specialist in Search infrastructure, ranking signals, and Search features.
Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.