Forward is a consulting firm that works with independent consultants from around the world to help startups build their engineering organizations.
Working exclusively with independent consultants aligns our interests with those of our clients and the individuals we collaborate with in a way the traditional consulting model can't match. It also gives us uncommon range and flexibility.
About the Work
Company Stage: Seed
Keywords: PyTorch, Ops
Term: 3-6 months
Hours per week: 20
Timezone Constraints: None
Geographic Constraints: None
We’re working with a company that’s building an open source framework for developing and self-hosting LLMs.
They’re looking for an engineer who can help them iterate on the platform, specifically with an eye towards optimizing the prediction performance and resource consumption of the models it produces.
You’d report directly to the CTO (who is also the CEO) and work alongside four other globally distributed engineers in a heavily async environment.
Roughly in order of importance:
- You have deep experience optimizing the performance of ML models
- You have experience with one or more deep learning accelerator frameworks
- You have experience working with multiple cloud hosting providers
- You have 10+ years of experience
If this sounds interesting to you, we’d love to hear from you here.
Research shows that some people are less likely to apply for work when they don’t meet 100% of the outlined qualifications. If this role excites you, we want to hear from you. If not, and you’re looking for contract work, feel free to reach out to us or take a look at our other roles.