We are looking for an ML Engineer!

NVIDIA's invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics, and revolutionized parallel computing. More recently, GPU deep learning ignited modern AI, the next era of computing, with the GPU acting as the brain of computers, robots, and self-driving cars that can perceive and understand the world. Today, we are increasingly known as "the AI computing company." Are you willing to challenge yourself and build phenomenal software alongside some of the smartest people in the world? Join us at the forefront of technological advancement.

We are accelerating enterprise AI efforts and creating innovative solutions by harnessing machine and deep learning techniques to address the challenges posed by high-volume, high-velocity data. The team is looking for candidates with a strong understanding of machine learning, particularly time-series analysis, NLP, deep learning (e.g., transformers), and reinforcement learning. Candidates should ideally have hands-on experience with deep learning frameworks such as PyTorch, TensorFlow, Keras, or similar tools.
Experience designing and deploying complex ML pipelines in production environments is a plus.

What you'll be doing:
- Develop large-scale anomaly detection models aimed at detecting critical security alerts.
- Collaborate with cross-functional teams to identify, design, and build new machine learning solutions that drive business growth.
- Design and implement machine learning architectures to support business requirements.
- Build a real AI product by taking advantage of NVIDIA AI solutions and GPUs.
- Explore state-of-the-art deep learning and NLP techniques.

What we need to see:
- Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience).
- 8+ years of experience building and deploying machine learning models and architectures in production environments.
- Knowledge of NLP and deep learning techniques.
- Proven Python and deep learning programming skills using PyTorch, TensorFlow, Keras, or similar.
- Highly motivated, innovative, and curious about new technologies.
- Ability to think independently and drive your own research and development efforts.

Ways to stand out from the crowd:
- Familiarity with large-scale, log-based time-series models for anomaly detection and forecasting.
- Experience with big data technologies such as Spark.
- Proven ability with CI/CD, MLOps, and Docker.
- Master's degree in Computer Science, Engineering, or a related field.
- Experience in AIOps, cybersecurity, or IT operations management domains.

NVIDIA is widely considered to be one of the technology world's most desirable employers. We have some of the most forward-thinking and hardworking people in the world working for us. If you're creative and autonomous, we want to hear from you!

The base salary range is $168,000 - $264,500.
Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. You will also be eligible for equity and benefits.

NVIDIA is committed to fostering a diverse work environment and is proud to be an equal opportunity employer. Because we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status, or any other characteristic protected by law.