Software Engineer, Model Inference

OpenAI in San Francisco, California, United States

$200,000 - $370,000

About the Team

The Applied team safely brings OpenAI's technology to the world. We released the GPT-3 API and Codex, which powers GitHub's Copilot. There's a lot more on the immediate horizon. Our customers build fast-growing businesses around our APIs, which power product features that were never before possible. We simultaneously ensure that our powerful tools are used responsibly. Safe deployment is more important to us than unfettered growth.

The Applied Engineering team wraps a massive fleet of GPUs in scalable, robust infrastructure powered by Kubernetes, Go, Python, Terraform, Redis, Kafka, Postgres, and Snowflake. Our APIs are powered by Python Flask and OpenAPI, with a React frontend.

About the Role

We are looking for an engineer who wants to take the world's largest and most capable AI models and optimize them for use in a high-volume, low-latency, and high-availability production environment.

In this role, you will:
- Work alongside machine learning researchers, engineers, and product managers to bring our latest technologies into production.
- Introduce new techniques, tools, and architecture that improve the performance, latency, throughput, and efficiency of our deployed models.
- Build tools that give us visibility into our bottlenecks and sources of instability, then design and implement solutions to address the highest-priority issues.
- Optimize our code and fleet of Azure VMs to utilize every FLOP and every GB of GPU RAM of our hardware.

You might thrive in this role if you:
- Have an understanding of modern ML architectures and an intuition for how to optimize their performance, particularly for inference.
- Own problems end-to-end, and are willing to pick up whatever knowledge you're missing to get the job done.
- Have at least 3 years of professional software engineering experience.
- Have, or can quickly gain, familiarity with PyTorch, NVIDIA GPUs, and the software stacks that optimize them (e.g., NCCL, CUDA), as well as HPC technologies such as InfiniBand and MPI.
- Have experience architecting, observing, and debugging production distributed systems.
- Have a humble attitude, an eagerness to help your colleagues, and a desire to do whatever it takes to make the team succeed.
- Have needed to rebuild or substantially refactor production systems several times over due to rapidly increasing scale.
- Are self-directed and enjoy figuring out the most important problem to work on.
- Have a good intuition for when off-the-shelf solutions will work, and quickly build tools to accelerate your own workflow when they won't.

About OpenAI

OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.

At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.

Compensation, Benefits and Perks

The annual salary range for this role is $200,000 – $370,000. Total compensation also includes generous equity and benefits.
- Medical, dental, and vision insurance for you and your family
- Mental health and wellness support
- 401(k) plan with 4% matching
- Unlimited time off and 18+ company holidays per year
- Paid parental leave (20 weeks) and family-planning support
- Annual learning & development stipend ($1,500 per year)

We are an equal opportunity employer and do not discriminate on the basis of race, religion, national origin, gender, sexual orientation, age, veteran status, disability, or any other legally protected status. Pursuant to the San Francisco Fair Chance Ordinance, we will consider qualified applicants with arrest and conviction records. We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.

OpenAI US Applicant Privacy Policy
