Member of Technical Staff (Fine-tuning)

Reference: 18266
  • Salary: $350,000
  • Bay Area
  • Permanent

What We're Creating


As we enter a new phase of expansion, we are focused on partnering with commercial clients to adapt our advanced models to their distinct business needs. Our track record of developing, aligning, and deploying cutting-edge models in our high-EQ consumer-facing chatbot has laid a solid foundation for this work. With strong financial backing and substantial H100 resources, we have built resilient infrastructure and streamlined processes to support top-tier finetuning. By joining our team, you'll contribute your skills to an innovative organization that values creativity and teamwork.



About Us


We are a small, interdisciplinary AI studio that has trained several state-of-the-art language models and developed a personal assistant. Currently, our studio is dedicated to finetuning and deploying models for specific applications for our commercial partners.

We believe that artificial intelligence signifies the onset of a period of exponential change. Our name reflects this moment of transformation, and our status as a public benefit corporation gives us the legal mandate to prioritize the well-being and happiness of our partners, users, and broader stakeholders above all else.


About the Position: Research Engineer, Member of Technical Staff (Finetuning)


We have trained state-of-the-art models, but to tailor these models for deployment to our enterprise partners, they require further finetuning and alignment. Research engineers gather and refine datasets, explore innovative finetuning techniques, and assess the resulting models to ensure they meet our criteria for safety, usefulness, and reliability in corporate settings.


This role is ideal for you if you:


  • Have experience finetuning and evaluating large language models, whether working directly with model weights or via an API.
  • Thrive in a fast-paced environment and can adapt to rapidly changing technical requirements.
  • Are skilled in managing large compute clusters using tools like Slurm.
  • Have a strong grasp of modern machine learning methodologies (transformer architectures, RLHF, DPO, etc.) and are proficient with PyTorch.
  • Have knowledge of SQL and data tools like Snowflake, Dagster, and Airbyte (a plus, but not mandatory).


We do not require a specific educational background or a set number of years of experience. We are eager to see what you have been building. Please send us examples of your best work, including but not limited to links to open-source contributions, personal projects, or a cover letter describing past projects that you are proud of.




Derek Gemski, Recruitment Consultant

Apply for this role