NVIDIA – Test and Tools Development Engineer

May 1, 2026
₹6 – ₹10 LPA

Job Description

NVIDIA pioneered accelerated computing. Today, our AI infrastructure powers global intelligence, transforming every industry.

Since its founding in 1993, NVIDIA (NASDAQ: NVDA) has been a pioneer in accelerated computing. The company’s invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined computer graphics, ignited the era of modern AI and is fueling the creation of the metaverse. NVIDIA is now a full-stack computing company with data-center-scale offerings that are reshaping industry.


Responsibilities

NVIDIA has been transforming computer graphics, PC gaming, and accelerated computing for more than 25 years. It’s a unique legacy of innovation that’s fueled by great technology—and amazing people. Today, we’re tapping into the unlimited potential of AI to define the next era of computing. An era in which our GPU acts as the brains of computers, robots, and self-driving cars that can understand the world. Doing what’s never been done before takes vision, innovation, and the world’s best talent. As an NVIDIAN, you’ll be immersed in a diverse, supportive environment where everyone is inspired to do their best work. Come join the team and see how you can make a lasting impact on the world.

What does it look like to build infrastructure that thinks? It triages failures, files bugs, and finds root causes without waiting for humans. As a new graduate, you’ll help build the agentic infrastructure powering test automation and quality workflows for the NVIDIA Omniverse platform. This is a rare chance to start your career at the intersection of AI agents and production software quality. You will learn to build the tests and tools other engineers depend on to ship quickly and confidently.
What you’ll be doing:
Build multi-agent pipelines for automated test generation, log analysis, failure triage, and bug-filing workflows, working alongside senior engineers on well-scoped pieces of the system
Contribute to evaluation systems that measure agent output quality — writing test cases, analyzing failure patterns, and extending eval frameworks under senior mentorship
Add instrumentation, logging, and monitoring to agentic workflows so failures are visible and debuggable — learning the systems-thinking that makes infrastructure trustworthy
Grow your judgment on where LLMs help and where they fail, and learn, with mentorship, how to build solutions around both
What we need to see:
Pursuing or recently completed a Bachelor’s Degree in Computer Science or equivalent
Strong Python fundamentals — able to write clean, testable code and reason about structure beyond single scripts
Hands-on exposure to AI-native development workflows — Claude Code, Cursor, Codex, or prompt engineering through coursework, internships, hackathons, or personal projects
At least one project, open-source contribution, or coursework example where you integrated an LLM into a working system end-to-end
Foundational understanding of software testing, CI/CD concepts, or quality engineering principles
Awareness of common LLM failure modes — hallucination, context limits, tool misuse — and curiosity about how to mitigate them
Ways to stand out from the crowd:
Built a side project, hackathon entry, or open-source contribution involving multi-agent systems, MCP servers, or custom LLM tool integrations that you can walk through end-to-end
Experimented with evaluating LLM outputs — even a small eval harness or scoring script for a personal project demonstrates the right instinct
Shipped something others actually used (a tool, script, or bot adopted by a club, lab, or open-source community), with documentation that let people use it without you
Show intellectual integrity about where your projects break, building in recovery paths rather than hiding failures
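To make the "small eval harness or scoring script" idea above concrete, here is a minimal sketch in Python. All names and the keyword-based scoring are hypothetical illustrations, not an NVIDIA tool; a real harness would call a live model where this one uses canned answers, but the surrounding loop is the same.

```python
# Minimal sketch of an LLM-output eval harness (all names hypothetical).
# Canned candidate answers stand in for real model calls; each case is
# scored by the fraction of expected keywords present in the output.
from dataclasses import dataclass


@dataclass
class EvalCase:
    prompt: str
    expected_keywords: list[str]  # keywords we want to see in the answer


def score(output: str, case: EvalCase) -> float:
    """Fraction of expected keywords found in the output (case-insensitive)."""
    text = output.lower()
    hits = sum(1 for kw in case.expected_keywords if kw.lower() in text)
    return hits / len(case.expected_keywords)


def run_eval(outputs: dict[str, str], cases: list[EvalCase]) -> float:
    """Average score across all cases; `outputs` maps prompt -> model answer."""
    scores = [score(outputs.get(c.prompt, ""), c) for c in cases]
    return sum(scores) / len(scores)


if __name__ == "__main__":
    cases = [
        EvalCase("Summarize the test failure", ["timeout", "retry"]),
        EvalCase("Classify the log line", ["error"]),
    ]
    fake_outputs = {
        "Summarize the test failure": "The job hit a timeout; a retry succeeded.",
        "Classify the log line": "This is an ERROR-level message.",
    }
    print(f"mean score: {run_eval(fake_outputs, cases):.2f}")  # → mean score: 1.00
```

Even something this small demonstrates the instinct the posting asks for: defining expected behavior up front and measuring model output against it instead of eyeballing responses.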