Research Scientist

Robust Intelligence, San Francisco Bay Area

Robust Intelligence's mission is to eliminate AI risk. As the world increasingly adopts AI into automated decision processes, we inherit significant risk.

Our flagship product integrates with existing AI systems to enumerate and eliminate risks caused by both unintentional and intentional (adversarial) failure modes. As generative AI becomes increasingly popular, new vulnerabilities and attacks present a significant threat to AI companies and their consumers. Our Generative AI Firewall provides a safety net against these failure modes.

At Robust Intelligence, we have built a multidisciplinary team of ML engineers, AI security experts, and software engineers to advance the state of AI security. Together, we're building the future of secure, trustworthy AI.

About The Role

Are you passionate about the future of AI and committed to ensuring its safety and security? We're seeking Research Scientists to pioneer our efforts in tracking and mitigating emerging threats to AI systems.

Robust Intelligence (RI) is a people-first company (this role can be remote). We offer an array of perks and benefits that ensure our employees' health and well-being. As part of RI's in-office culture, we offer free daily lunches and dinners (if you are working late), free snacks and beverages, commuter benefits, and an office gym.

Our leaders recognize that all of our employees are humans first. Our employees are parents, pet owners, siblings, and much more. RI therefore ensures that employees have the benefits and resources they need to spend time with their families and live fulfilling lives outside of work. Benefits include a flexible time off policy, paid parental/family leave, a child care allowance, a 401(k) retirement plan, and market-leading health, dental, and vision insurance for employees and dependents. We also have an education reimbursement program for individual learning.

What we offer:

We offer the opportunity to significantly contribute to shaping AI's future, working alongside exceptionally talented individuals who share a passion for this mission. Inclusion is our biggest asset, as building a diverse community is key to succeeding in our mission. In addition to that mission and environment, we offer:
• Competitive salary and company ownership through equity
• Market-leading health, dental, and vision insurance for employees and dependents
• Flexible time off policy and paid parental/family leave
• Education reimbursement program for individual learning
• 401(k) retirement plan
• Commuter benefits
• Immigration sponsorship (H-1B and Green Card)
• Company lunches, dinners, and kitchens stocked with snacks and drinks
• On-site gym and wellness program

What you'll do:

• Track and Analyze Threats: Monitor and analyze emerging threats to AI/ML models, applications, and environments.
• Maintain Deep Expertise: Stay current on foundation models such as large language models and diffusion models.
• Build the AI Firewall: Develop and implement strategies to detect and mitigate identified threats, including prototyping innovative approaches.
• Lead Red-Teaming Exercises: Run red-teaming exercises and vulnerability assessments for generative AI technologies, addressing both safety and security vulnerabilities.
• Publish Insights: Author blog posts, white papers, and research papers on emerging threats in AI safety and security.
• Collaborate and Innovate: Work with cross-functional teams to translate research into product features and shape our machine learning culture.

Minimum qualifications:

• Undergraduate degree in EECS, Math, or Physics.
• Strong programming skills in Python and deep knowledge of machine learning frameworks such as PyTorch.
• Strong algorithmic and problem-solving skills.
• Fluency in reading academic papers on AI/ML.

Preferred qualifications:

• Graduate degree in ML or a related field with an established research record.
• Experience with AI/ML safety and security risks (e.g., data poisoning, adversarial attacks).
• Experience developing proof-of-concept exploits for new attacks.

EEOC: Robust Intelligence is an equal opportunity employer. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status. We believe diversity enriches our team, so we hire people with a wide range of identities, backgrounds, and experiences. Even if you don't meet 100% of the qualifications for this job, we strongly encourage you to apply!

Apply