Research Engineer (Inference)

BBBH17963_1724768640
  • US$200,000 - US$400,000 per annum
  • Palo Alto, California

Member of Technical Staff, Research Engineer (Inference) - Palo Alto, CA

Join a team at the forefront of AI innovation, where your expertise in model inference can make a tangible impact. This role is ideal for engineers who thrive in a focused, high-tech environment, solving complex challenges related to large-scale AI deployments. As a Member of Technical Staff, Research Engineer (Inference), you'll play a pivotal role in optimizing and deploying state-of-the-art models for real-world applications.

About the Company

This AI studio, recognized for its groundbreaking work in developing and deploying highly effective language models, is now focused on scaling its technology for enterprise use cases. With a strong foundation in model alignment and fine-tuning, the team is well-funded and equipped with cutting-edge resources, offering a unique environment for those passionate about pushing AI boundaries. Their culture is centered on collaboration, technical excellence, and a pragmatic approach to AI advancements.

About the Role

As a Member of Technical Staff, Research Engineer (Inference), you'll be involved in optimizing AI models for enterprise deployment, ensuring they perform efficiently under varying load and hardware conditions. Your work will focus on reducing latency, improving throughput, and maintaining model quality during inference. Engineers in this role should have a deep understanding of the trade-offs in model inference, including balancing hardware constraints with real-time processing demands.
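
The latency/throughput trade-off mentioned above is easy to see with a small benchmark: batching requests raises throughput but also raises the latency each request observes. The sketch below is illustrative only, using a toy PyTorch model rather than any part of the team's actual serving stack:

```python
# Illustrative sketch only (toy model, CPU timing): how batch size trades
# per-request latency against throughput during inference.
import time
import torch

model = torch.nn.Sequential(
    torch.nn.Linear(1024, 4096),
    torch.nn.ReLU(),
    torch.nn.Linear(4096, 1024),
).eval()

with torch.no_grad():
    for batch_size in (1, 8, 32, 128):
        x = torch.randn(batch_size, 1024)
        model(x)  # warm-up run so one-time costs don't skew the timing
        n_iters = 20
        start = time.perf_counter()
        for _ in range(n_iters):
            model(x)
        elapsed = time.perf_counter() - start
        latency_ms = 1000 * elapsed / n_iters          # time per batch
        throughput = batch_size * n_iters / elapsed    # requests per second
        print(f"batch={batch_size:4d}  latency={latency_ms:7.2f} ms  "
              f"throughput={throughput:8.1f} req/s")
```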

What We Can Offer You:

  • Competitive compensation aligned with your experience and contributions.
  • Unlimited paid time off and flexible parental leave.
  • Comprehensive medical, dental, and vision coverage.
  • Visa sponsorship for qualified hires.
  • Professional growth opportunities through coaching, conferences, and training.

Key Responsibilities:

  • Optimize and deploy large language models (LLMs) for inference across cloud and on-prem environments.
  • Utilize frameworks like ONNX, TensorRT, and TVM to accelerate model performance (see the minimal export sketch after this list).
  • Troubleshoot complex issues related to model scaling and performance.
  • Collaborate with cross-functional teams to refine and deploy inference pipelines using PyTorch, Docker, and Kubernetes.
  • Balance competing demands, such as model accuracy and inference speed, in enterprise settings.

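For candidates less familiar with the export path named above, here is a minimal sketch of taking a PyTorch module to ONNX and running it with ONNX Runtime. The model, file name, and tensor shapes are placeholders for illustration, not the company's pipeline:

```python
# Minimal sketch (assumes torch and onnxruntime are installed);
# the TinyModel and "tiny_model.onnx" names are placeholders.
import numpy as np
import torch
import onnxruntime as ort

class TinyModel(torch.nn.Module):
    """Stand-in for a real model; used only to demonstrate the export path."""
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(128, 64)

    def forward(self, x):
        return torch.relu(self.linear(x))

model = TinyModel().eval()
dummy_input = torch.randn(1, 128)

# Export to ONNX with a dynamic batch dimension so the same artifact
# serves variable batch sizes at inference time.
torch.onnx.export(
    model,
    dummy_input,
    "tiny_model.onnx",
    input_names=["input"],
    output_names=["output"],
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
)

# Run the exported graph with ONNX Runtime and compare against PyTorch.
session = ort.InferenceSession("tiny_model.onnx")
onnx_out = session.run(None, {"input": dummy_input.numpy()})[0]
torch_out = model(dummy_input).detach().numpy()
print("max abs diff:", np.abs(onnx_out - torch_out).max())
```
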
If you have experience with LLM inference, model optimization tools, and infrastructure management, this role could be an excellent match for your skills.

Tyler Long, Recruitment Consultant
