Senior Product Manager - Enterprise Inference

Company: NVIDIA
Location: US, CA, Santa Clara
Commitment: Full time
Posted on: 2023-09-08 05:58
To realize value with AI, neural networks need to be deployed for inference, powering applications running in the cloud, the datacenter, or at the edge. Common services that invoke AI inference include recommender systems, virtual assistants, large language models, and generative AI. NVIDIA is at the forefront of advancing the latest research and optimizations to make cost-efficient inferencing of customized models a reality for everybody. To keep pace with this multidimensional field, we seek a passionate product manager who understands inference and its ecosystem. We need a self-starter to continue to grow this area and work with customers to define the future of inference. We're looking for a rare blend of technical and product skills and a passion for groundbreaking technology. If this sounds like you, we would love to learn more about you!

What You'll Be Doing:
- Develop NVIDIA's enterprise inference strategy in alignment with NVIDIA's portfolio of AI products and services
- Distill insights from strategic customer engagements and define, prioritize, and drive execution of the product roadmap
- Collaborate across the organization with machine learning engineers and product teams to introduce new techniques and tools that improve performance, latency, and throughput while optimizing for cost
- Build an outstanding developer experience with inference APIs, providing seamless integration with the modern software development stack and relevant ecosystem partners
- Ensure operational excellence and reliability of distributed inference serving systems; build processes around a robust set of analytics and alerting tooling focused on uptime SLAs and overall QoS
- Develop an industry- and workload-focused GTM strategy and playbook with marketing and sales, in partnership with NVIDIA's ecosystem of partners, to drive enterprise adoption and establish leadership in inference

What We Need to See:
- BS or MS degree in Computer Science, Computer Engineering, or a similar field, or equivalent experience
- 6+ years of product management, or similar, experience at a technology company
- 3+ years of experience building inference software
- Solid understanding of Kubernetes and DevOps
- Strong communication and interpersonal skills

Ways to Stand Out from the Crowd:
- Understanding of modern ML architectures and an intuition for how to optimize their TCO, particularly for inference
- Advanced knowledge of NVIDIA Triton Inference Server, TensorRT, or other inference acceleration libraries, such as Ray and DeepSpeed
- Familiarity with the MLOps ecosystem and experience building integrations with popular MLOps tooling such as MLflow and Weights & Biases

The base salary range is $156,000 - $310,500. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. You will also be eligible for equity and benefits.

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status, or any other characteristic protected by law.