About Anyscale:
Anyscale provides a development platform intended to simplify distributed computing, enabling software developers of all skill levels to build applications that run at any scale, from a laptop to the data center.
We're commercializing Ray, a popular open source framework for distributed computing with an ecosystem of libraries for scalable machine learning.
Anyscale is based in San Francisco, CA.
About the role:
Ray aims to provide a universal API for building distributed applications (e.g. a machine learning pipeline of feature engineering, model training, and evaluation). Data is usually a core element connecting these different stages, and therefore plays a critical role in Ray’s usability, performance, and stability. We are looking for strong engineers to build, optimize, and scale Ray’s Datasets library and data processing capabilities in general.
About the Ray Data team:
The Ray Data team currently develops and maintains the Ray Datasets library, which is already powering critical production use cases (e.g. large-scale data compaction at Amazon and ML pipelines at Alibaba). Ray Datasets is a Python library built on top of Apache Arrow and Ray Core (Ray’s C++ backend), and the Ray Data team interacts closely with Ray Core components including the scheduler and the memory & I/O subsystems. The Ray Data team also works closely with Ray’s ML libraries, including Train, RLlib, and Serve.
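For context, a minimal sketch of the kind of pipeline Ray Datasets supports, assuming a recent Ray release; the bucket path, column names, and training step below are hypothetical placeholders, not part of the role:

```python
import ray

ray.init()  # start a local Ray instance (or connect to an existing cluster)

# Feature engineering: read raw records and derive a feature in parallel.
# The path and column names are hypothetical placeholders.
raw = ray.data.read_parquet("s3://example-bucket/events/")

def add_features(batch):
    # With batch_format="numpy", batches arrive as dicts of NumPy arrays.
    batch["click_rate"] = batch["clicks"] / batch["impressions"]
    return batch

features = raw.map_batches(add_features, batch_format="numpy")

# Training and evaluation consume the same distributed dataset, e.g. by
# iterating over batches on each training worker.
for batch in features.iter_batches(batch_size=1024, batch_format="numpy"):
    pass  # feed the batch to a model trainer (placeholder)
```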
A snapshot of projects you will work on:
- Performance of Ray Datasets at large scale (leveraging Arrow primitives, optimizing the Ray object manager, etc.)
- Integration with ML training and data sources
- Stability and stress testing infrastructure
- Leading future work to integrate streaming workloads into Ray, such as Beam on Ray
- Differentiating Ray Data operations in the Anyscale-hosted Ray service