Software Engineer, Data Infrastructure (SF, NYC, Remote)

Tecton in San Francisco, CA

At Tecton, we are on a mission to bring world-class Machine Learning to every product and customer experience. Tecton’s founders developed the first Feature Platform when they created Uber’s Michelangelo ML platform. In pursuit of bringing ML to every production application, we have since brought the leading commercial feature store to market and built the most popular open-source feature platform.

We are funded by Sequoia Capital and Andreessen Horowitz and have a fast-growing team that works out of SF, NYC, and remotely. Our team has years of experience building and operating business-critical machine learning systems at leading tech companies like Uber, Google, Facebook, Airbnb, Twitter, and Quora, and we’re now bringing those same capabilities to every organization in the world.

Tecton's ability to scale and process high volumes of data while being performant and resilient to failures is a key component of the product and central to design decisions. Our team’s data culture is driven by engineers who have worked on major projects such as Google Search and Indexing, Apache Airflow, and Instagram's ML platform.

As an early member of Tecton's Data Infrastructure team, you will help lay the foundation for scaling Tecton. We are looking for exceptional software engineers with a systematic problem-solving approach who are driven to find simple solutions to complex challenges.

This position is open to candidates based anywhere in the United States. You can work from one of our hub offices in San Francisco or New York City, or fully remote from anywhere else in the US.
#LI-Remote

Tecton values diversity and is an equal opportunity employer committed to creating an inclusive environment for all employees and applicants without regard to race, color, religion, national origin, gender, sexual orientation, age, marital status, veteran status, disability status, or other applicable legally protected characteristics. If you would like to request any accommodations from application through to interview, please contact us at recruitingteam@tecton.ai.
Responsibilities:
    • Designing, building, and maintaining our real-time streaming and batch data pipelines
    • Optimizing the end-to-end performance of our distributed systems
    • Improving real-time stream-compute capabilities
    • Building on the Spark/Kafka/Flink ecosystem
    • Pioneering new approaches to data pipelines and workflow orchestration
    • Building and maintaining scalable, reliable storage and compute services to serve our growing customer list
    • Automating capacity management and tracking
Qualifications:
    • 4+ years of professional software engineering experience
    • Experience building large-scale, distributed data pipelines and data applications
    • Experience building batch or streaming machine learning inference pipelines
    • Experience with Spark, Kafka, Flink, or similar tools
    • Experience with cloud technologies, e.g., AWS, GCP, Kubernetes
    • Experience with open-source and commercial products in the data, MLOps, and cloud infrastructure space
Apply