AI Security Researcher

Robust Intelligence · Remote

Our mission at Robust Intelligence is to enable every organization on the planet to adopt AI securely. As the world increasingly adopts AI into automated decision processes, we inherit a great deal of risk.

Our flagship product is built to be integrated with existing AI systems to enumerate and eliminate risks caused by both unintentional and intentional (adversarial) failure modes. With Generative AI becoming increasingly popular, new vulnerabilities and attacks present a significant threat to AI companies and their consumers. Our Generative AI Firewall provides a safety net against these failure modes.

At Robust Intelligence we have built a multidisciplinary team of ML engineers, AI security experts, and software engineers to advance the state of AI security. Together, we're building the future of secure, trustworthy AI.
Robust Intelligence (RI) is a people-first company (this role can be remote). We offer an array of perks and benefits that ensure our employees’ health and well-being. For employees who join us in the office, we offer free daily lunches and dinners (if you are working late), free snacks and beverages, commuter benefits, and an office gym.

Our leaders recognize that all of our employees are humans first. Employees at the company are parents, pet owners, siblings, and more. Hence, RI ensures that employees have the benefits and resources needed to spend time with their families and live a fulfilling life outside of work. Benefits include a flexible time off policy, paid parental/family leave, child care allowance, 401(k) retirement plan, and market-leading health, dental, and vision insurance for employees and dependents. We also have an education reimbursement program for individual learning.

What we offer:

We offer the opportunity to significantly contribute to shaping AI's future, working alongside exceptionally talented individuals who share a passion for this mission. Inclusion is our biggest asset: building a diverse community is key to succeeding in our mission. In addition to the mission and environment, we offer:

- Competitive salary and company ownership through equity
- Market-leading health, dental, and vision insurance for employees and dependents
- Flexible Time Off Policy and Paid Parental/Family Leave
- Education reimbursement program for individual learning
- 401(k) Retirement plan
- Commuter benefits
- Immigration sponsorship - H-1B and Green Card
- Company lunches, dinners, and kitchens stocked with snacks and drinks
- On-site gym and wellness program

EEOC - Robust Intelligence is an equal opportunity employer. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status. We believe diversity enriches our team so we hire people with a wide range of identities, backgrounds, and experiences. Even if you don't meet 100% of the qualifications for this job, we strongly encourage you to apply!
What you'll do:

- Track and analyze emerging threats to AI systems, focusing on AI/ML models, applications, and environments.
- Develop and implement detection and mitigation strategies for identified threats, including prototyping new approaches.
- Lead comprehensive red-teaming exercises and vulnerability assessments for generative AI technologies, identifying and addressing potential security vulnerabilities.
- Develop and maintain security tools and frameworks using Python or Golang.
- Curate and generate robust datasets for training ML models.
- Author blog posts, white papers, or research papers related to emerging threats in AI security.
- Collaborate with cross-functional teams of researchers and engineers to translate research ideas into product features. You'll also have the opportunity to contribute to our overall machine learning culture as an early member of the team.
What we're looking for:

- 3+ years of proven professional experience
- Experience in applied red-team and/or blue-team roles, such as threat intelligence, threat hunting, or red teaming.
- Strong understanding of common application security vulnerabilities and mitigations.
- Strong programming skills in general-purpose programming languages such as Python or Golang.
- Excellent written and verbal communication skills; strong analytical and problem-solving skills.
- Ability to quickly learn new technologies and concepts and to engage with a wide variety of technical challenges.
- Experience with AI/ML security risks such as data poisoning, privacy attacks, adversarial inputs, etc.
- Fluency in reading academic papers on AI/ML and security and translating them into prototype systems.
- Experience with modern application stacks, infrastructure, and security tools.
- Experience developing proof-of-concept exploits for new or theoretical attacks.
Apply