Senior Software Security Engineer, Cloud Security

Anthropic in San Francisco, CA

You want to protect Anthropic's most valuable assets from both opportunists and nation-states. You bring a Secure by Design mindset and know how to communicate that vision to software engineers and leaders. You understand how developers work, and how to safeguard source code and deployed assets without making their jobs frustrating or unappealing. You can think outside the box and implement meaningful security rather than being limited to the compliance playbook.

Responsibilities:
- Partner with software engineers and researchers to implement a vision for a Secure Development Environment.
- Design and implement controls to secure a primarily open source software supply chain.
- Design and test software security architecture recommendations for existing and future systems.
- Perform internal penetration testing.
- Manage the software vulnerability remediation program.
- Build internal security tooling.
- Design control and sandboxing systems for AI research.

You may be a good fit if you:
- Have a strong grasp of cloud attack surface area, gaps, and best practices.
- Have experience maintaining and/or contributing to bug bounty and responsible disclosure programs.
- Have experience training developers in software security topics.
- Have experience supporting fast-paced startup engineering teams.
- Care about AI safety risk scenarios.

Candidates need not have:
- 100% of the skills needed to perform the job.
- Formal certifications or education credentials.
- Machine learning experience or knowledge.

Annual salary:
The expected salary range for this position is $270k - $445k.

Hybrid policy: For this role, we prefer candidates who are able to be in our office more than 25% of the time, though we encourage you to apply even if you don’t think you will be able to do that.

Compensation and Benefits*
Anthropic’s compensation package consists of three elements: salary, equity, and benefits. We are committed to pay fairness and aim for these three elements collectively to be highly competitive with market rates.

Equity - On top of this position's salary (listed above), equity will be a major component of the total compensation. We aim to offer higher-than-average equity compensation for a company of our size, and communicate equity amounts at the time of offer issuance.

Benefits - Benefits we offer include:
- Optional equity donation matching at a 3:1 ratio, up to 50% of your equity grant.
- Comprehensive health, dental, and vision insurance for you and all your dependents.
- 401(k) plan with 4% matching.
- 21 weeks of paid parental leave.
- Unlimited PTO - most staff take 4-6 weeks each year, sometimes more!
- Stipends for education, home office improvements, commuting, and wellness.
- Fertility benefits via Carrot.
- Daily lunches and snacks in our office.
- Relocation support for those moving to the Bay Area.

* This compensation and benefits information is based on Anthropic’s good faith estimate for this position, in San Francisco, CA, as of the date of publication and may be modified in the future. The level of pay within the range will depend on a variety of job-related factors, including where you place on our internal performance ladders, which reflects past work experience, relevant education, and performance on our interviews or in a work trial.

How we're different
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts, and we value impact, advancing our long-term goals of steerable, trustworthy AI, rather than smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.

The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.

---

Company-wide hybrid policy: Currently, we expect all staff to be in our office at least 25% of the time. However, different roles may have different requirements - if this role has a different preference, it will be noted above.

Deadline to apply: None. Applications will be reviewed on a rolling basis.

We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.

Come work with us! Anthropic is a public benefit corporation based in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.