Machine Learning Engineer, Evaluation

Cohere · Remote

Who are we?
We’re a small, diverse team working at the cutting edge of machine learning. At Cohere, our mission is to build machines that understand the world and to make them safely accessible to all. Language is at the crux of this, but it can be difficult and expensive to parse the syntax, semantics, and context that all work together to give words meaning. The Cohere platform provides API access to Large Language Models that have read billions of web pages and learned to understand the meaning, sentiment, and intent of the words we use with a richness never seen before.

We've raised our Series B, signed a multi-year partnership with Google Cloud, and we are focused on bringing our technology to market. We will partner with customers so they can build natural language understanding and generation into their products with just a few lines of code.

We’re ambitious — we believe our technology will fundamentally transform how industries interact with natural language. And we have the technical chops to back it up: Cohere’s CEO, Aidan Gomez, is a co-author of the groundbreaking paper “Attention Is All You Need” (over 53k citations) and was previously part of Google Brain. Our entire technical team is world-class.

We are focused on creating a diverse and inclusive work environment so that all of our team members can thrive. We welcome kind and brilliant people to our team, from wherever they come.

Why this role?
At Cohere, we strive to continually improve our large language models. Thorough model evaluation is therefore core to our modeling workflow: it verifies that our language models are broadly improving and identifies their relative strengths and weaknesses. Furthermore, as model capabilities continue to evolve, we need to continually refresh our evaluation strategies so that they remain informative in guiding modeling and product development.

We are looking for a Member of Technical Staff to support our model evaluation infrastructure. This role would be part of the Data and Evaluation team, which broadly provides data for training models at Cohere and evaluation protocols for measuring the abilities of these models. The main responsibility of this role is to improve our internal evaluation infrastructure, which includes an evaluation harness for large language models, support for a broad range of NLP datasets and evaluation metrics, and tooling to clearly communicate evaluation results. This role would also work closely with different teams at Cohere to support their evaluation needs, as well as engage in more experimental work to develop highly informative evaluation signals.

The ideal candidate for this role would have experience in both machine learning and software engineering.

Please note: We have offices in Toronto, Palo Alto, and London but embrace being remote-first! There are no restrictions on where you can be located for this role.
If some of the above doesn’t line up perfectly with your experience, we still encourage you to apply! If you consider yourself a thoughtful worker, a lifelong learner, and a kind and playful team member, Cohere is the place for you.

We welcome applicants of all kinds and are committed to providing both an equal opportunity process and work environment. We value and celebrate diversity and strive to create an inclusive work environment for all.

Our Perks:
🤝 An open and inclusive culture and work environment
🧑‍💻 Work closely with a team on the cutting edge of AI research
🍽 Free daily lunch
🦷 Full health and dental benefits, including a separate budget to take care of your mental health
🐣 100% Parental Leave top-up for 6 months for employees based in Canada, the US, and the UK
🎨 Personal enrichment benefits towards arts and culture, fitness and well-being, quality time, and workspace improvement
🏙 Remote-flexible, with offices in Toronto, Palo Alto, and London, plus a coworking stipend
✈️ 6 weeks of vacation and shared Canada/US/UK holidays

What you will be doing:
    • Maintain and improve a large-scale, multi-user machine learning codebase for evaluating large language models.
    • Work closely with research and product teams to implement new datasets, evaluation metrics, and evaluation settings.
    • Experiment with internal data sources and experimental evaluation metrics to develop high-quality evaluation signals.

You may be a good fit if you have experience:
    • Maintaining a large machine learning codebase with many stakeholders.
    • Working with common NLP tasks, datasets, and evaluation metrics.
    • Working with NLP models and tokenizers, especially large language models.
    • Visualizing data and creating reports to communicate results.
    • Benchmarking multiple machine learning models under standardized settings.
    • Thinking critically about the quality of datasets and the utility of evaluation metrics.
    • Communicating results to different teams and writing clear documentation on supported evaluation settings.

#LI-Remote