Is hybrid: No
Is remote: No
Employer: Google
Minimum qualifications:
- Bachelor's degree or equivalent practical experience.
- 2 years of experience in data analysis, including identifying trends, generating summary statistics, and drawing insights from quantitative and qualitative data.
- 2 years of experience managing projects and defining project scope, goals, and deliverables.
Preferred qualifications:
- Experience working with abuse, spam, fraud, or malware.
- Experience working on product policy analysis and identifying policy risks.
- Experience using statistical analysis and hypothesis testing.
- Experience analyzing ML model performance or working on large language models (LLMs).
- Excellent problem-solving and critical thinking skills with attention to detail in an ever-changing environment.
About the job
At Google we work hard to earn our users' trust every day. Gaining and retaining this trust is critical to Google's success. We defend Google's integrity by fighting spam, fraud, and abuse, and by developing and communicating product policies. The Trust and Safety team reduces risk and protects the experience of our users and business partners across Google's expanding base of products. We work with teams ranging from Engineering to Legal and Public Policy to set policies and combat fraud and abuse.
As an Engineering Analyst, you will discover, measure, and mitigate user trust risks in Search products through scalable solutions. You will build relationships and partner closely with Engineers, Product Managers (PMs), Data Scientists, and other functions, working alongside a team of high-performing analysts to create metrics, templates, and datasets that improve trust and safety protections. The work requires learning product design details, product policies, and relevant quality signals. You will analyze existing product protections, evaluate content, and help improve policy definitions. You will also enable the deployment of key defenses to stop abuse, and lead process improvement efforts to increase the speed and quality of response to abuse. You will identify platform needs, influence enforcement capability design, and enable professional and career success for your team. Our aim is to resolve problems at scale, either by working with engineers on automated product protections or through vendor support. This role will be exposed to graphic, controversial, or upsetting content.
Responsibilities
- Partner with Search teams on project scoping, risk assessments, and prioritization. Lead projects and cross-functional initiatives within Google, interacting with executive stakeholders from Engineering, Legal, Product, and other teams, while becoming an expert in Search infrastructure, ranking signals, and Search features in order to build Large Language Model (LLM)-based models that evaluate content safety according to our product policies.
- Design and implement product metrics to benchmark user trust risks and track improvements over time. Create datasets for engineers to evaluate and improve sensitive content classifiers, delivering leadership and impact for the broader Trust and Safety Search, Assistant, and Geo team.
- Review or be exposed to sensitive or violative content as part of the core role.
- Perform on-call responsibilities on a rotating basis, including weekend and holiday coverage.
Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.