About Forward
Forward is a consulting firm that works exclusively with independent consultants to help startups build their engineering organizations.
About the Work
Company Stage: Seed
Keywords: PyTorch, Ops
Term: 3-6 months
Hours per week: 20
Timeline: Immediate
Timezone Constraints: None
Geographic Constraints: None
Context:
We’re working with a company that’s building an open source framework for developing and self-hosting LLMs.
They’re looking for an engineer who can help them iterate on the platform, particularly by optimizing the prediction performance and resource consumption of the models it produces.
You’d report directly to the CTO (who is also the CEO) and work alongside four other globally distributed engineers in a heavily async environment.
About You
Roughly in order of importance:
- You have deep experience optimizing the performance of ML models
- You have experience with one or more deep learning accelerator frameworks
- You have experience working with multiple cloud hosting providers
- You have 10+ years of experience
If this sounds interesting to you, we’d love to hear from you here.
Research shows that certain folks are less likely to apply for work when they don’t meet 100% of the outlined qualifications. If this role excites you, we want to hear from you. If not, and you’re looking for contract work, feel free to reach out to us or take a look at our other roles. You can also view our privacy policy here.