Nvidia
Based on Wikipedia: Nvidia
In late 1992, three engineers met at a Denny's diner on Berryessa Road in East San Jose to discuss quitting their jobs and starting a company. They had no name, no product, and forty thousand dollars between them. Within three decades, their startup would become the most valuable company on Earth.
This is the story of Nvidia, pronounced "en-VID-ee-uh," and how a graphics card company became the backbone of the artificial intelligence revolution.
The Denny's Founders
Jensen Huang was running his own division at LSI Logic, a semiconductor company, when he started having conversations with two engineers from Sun Microsystems: Chris Malachowsky and Curtis Priem. Both were frustrated with Sun's management and looking to leave, but Huang was in a more comfortable position. He didn't need to jump ship.
What changed his mind was the vision they developed together.
The three founders believed that graphics-based processing could solve computational challenges that had stumped conventional approaches. They also noticed something unusual about video games: they were simultaneously one of the most demanding computational problems and a market with enormous sales volume. Those conditions rarely occur together. Video games, Huang later explained, would be their "killer app"—a way to fund massive research and development while reaching millions of customers.
But before any of that could happen, someone had to quit first.
Huang's wife, Lori, refused to let him resign from LSI unless Malachowsky resigned from Sun at the same time. Malachowsky's wife, Melody, felt exactly the same way about Huang. Neither would let their husband jump into uncertainty alone while the other played it safe.
Curtis Priem broke the deadlock. He resigned from Sun effective December 31, 1992. This put pressure on the other two not to leave him, in Priem's words, to "flail alone." Huang gave notice and officially joined Priem on February 17—his thirtieth birthday. Malachowsky followed in early March.
They started working out of Priem's townhouse in Fremont, California.
The Name That Almost Wasn't
For months, the company had no name. Priem's first suggestion was "Primal Graphics," combining syllables from Priem and Malachowsky. But that left out Huang. They tried working all three names together—"Huaprimal," "Prihuamal," "Malluapri"—and quickly gave up.
The breakthrough came from a different direction entirely. Priem wanted to call their first product the "GXNV," meaning the next version of the GX graphics chips he'd worked on at Sun. Huang told him to drop the GX. That left "NV."
Priem made a list of words containing those letters. At one point, both he and Malachowsky wanted to call the company NVision. There was just one problem: that name was already taken by a toilet paper manufacturer.
Both Priem and Huang have claimed credit for the final name, derived from "invidia"—the Latin word for envy.
Thirty Days from Failure
In the late 1990s, seventy companies were chasing the same idea: that graphics acceleration for video games represented the future of computing. Only two would survive—Nvidia and ATI Technologies, which was later acquired by AMD.
Nvidia very nearly wasn't one of them.
Their first product, the NV1, made a bold technical bet. Most competitors processed graphics using triangles as their basic building block. The NV1 used quadrilaterals instead. This might sound like an obscure technical detail, and it was—until Microsoft released DirectX, the software platform that would define PC gaming for decades. Microsoft's Direct3D interface supported only triangles.
The NV1 was effectively obsolete almost as soon as it shipped.
Around the same time, Nvidia had partnered with Sega to supply the graphics chip for the Dreamcast video game console. They worked on the project for about a year. But Nvidia's technology was falling behind competitors, and the company faced an impossible choice: keep working on a chip that would probably fail, or abandon the project and risk financial collapse.
Then Sega's president, Shoichiro Irimajiri, flew to California to deliver the news in person: Sega had chosen another vendor. The Dreamcast contract was gone.
But Irimajiri did something unexpected. He believed in Nvidia's potential, and he convinced Sega's management to invest five million dollars in the company anyway. Huang later said this funding was all that kept Nvidia alive. Irimajiri's "understanding and generosity," he reflected, "gave us six months to live."
Huang used that time ruthlessly. He laid off more than half the company, reducing headcount from one hundred employees to forty. Every remaining resource went into developing a new graphics accelerator—one designed for triangles this time. They called it the RIVA 128.
By the time RIVA 128 was ready for release in August 1997, Nvidia had exactly one month's payroll left in the bank. The sense of impending doom became so pervasive that it gave rise to the company's unofficial motto: "Our company is thirty days from going out of business." For years afterward, Huang opened internal presentations with those words.
The RIVA 128 sold a million units in four months.
The Graphics Processing Unit
Nvidia went public on January 22, 1999. That investment Sega made after canceling the Dreamcast contract? It turned out to be the best decision Irimajiri ever made as president. After he left the company in 2000, Sega sold its Nvidia shares for fifteen million dollars—triple the original investment.
Later that year, Nvidia released a product that would reshape the industry: the GeForce 256.
The GeForce 256 was the first product Nvidia explicitly marketed as a "GPU"—a graphics processing unit. The term might seem obvious now, but it marked a conceptual shift. This wasn't just a specialized chip for drawing pictures on a screen. It was a new category of processor.
What made it special was a feature called transformation and lighting, usually abbreviated as T&L. To understand why this mattered, think about how 3D graphics work. When a game renders a scene, it needs to figure out where every object sits in three-dimensional space and how light bounces off each surface. Traditionally, the computer's main processor—the CPU, or central processing unit—handled these calculations, then passed the results to the graphics hardware for final rendering.
The GeForce 256 moved transformation and lighting calculations onto the graphics card itself. This freed the CPU to do other work, and the GeForce was specifically designed to handle these mathematical operations extremely fast. The result outperformed existing products by a wide margin.
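To make the idea concrete, here is a minimal sketch of the per-vertex math that T&L hardware takes over from the CPU: multiplying each vertex by a combined transform matrix and computing a simple diffuse lighting term. It is written as a modern CUDA-style kernel purely for illustration—the kernel name, the Vec4 struct, and the simplified lighting model are inventions of this sketch, and the GeForce 256 actually did this work in dedicated fixed-function circuitry, years before CUDA existed.

```cuda
#include <cuda_runtime.h>

struct Vec4 { float x, y, z, w; };

// One thread per vertex: apply a combined 4x4 transform (row-major) and a
// simple diffuse lighting term. Hypothetical names; illustration only.
__global__ void transformAndLight(const Vec4* in, Vec4* out, float* brightness,
                                  const float* m, float lx, float ly, float lz, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    Vec4 v = in[i];
    // Transformation: push the vertex through the combined world/view/projection matrix.
    out[i].x = m[0]*v.x  + m[1]*v.y  + m[2]*v.z  + m[3]*v.w;
    out[i].y = m[4]*v.x  + m[5]*v.y  + m[6]*v.z  + m[7]*v.w;
    out[i].z = m[8]*v.x  + m[9]*v.y  + m[10]*v.z + m[11]*v.w;
    out[i].w = m[12]*v.x + m[13]*v.y + m[14]*v.z + m[15]*v.w;

    // Lighting: Lambert diffuse term, max(0, normal . light direction),
    // treating the vertex position as its normal to keep the sketch short.
    float d = v.x*lx + v.y*ly + v.z*lz;
    brightness[i] = d > 0.0f ? d : 0.0f;
}
```

Every vertex in the scene needs this same handful of multiply-adds, and none of them depends on any other—exactly the kind of work that rewards dedicated parallel hardware.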
The GeForce 256's success brought a major contract: Microsoft hired Nvidia to develop the graphics hardware for a new video game console called the Xbox. The deal included a two hundred million dollar advance.
The CUDA Gambit
Graphics processors have a fundamentally different architecture from the CPUs that run most computer software. A CPU is designed to execute complex instructions very quickly, one after another. It's like having one extraordinarily capable worker who can handle any task, but only one task at a time.
A GPU is designed for a different kind of problem. It contains thousands of simpler processing units that can all work simultaneously. It's like having an army of workers, each one less versatile individually, but together capable of enormous throughput—as long as the task can be broken into many parallel pieces.
Rendering graphics is exactly this kind of problem. When you draw a scene, you need to calculate the color of millions of pixels, and each pixel's calculation is largely independent of the others. Perfect for parallel processing.
In the early 2000s, researchers began noticing that GPUs might be useful for more than just graphics. Any problem that could be broken into thousands of parallel calculations might run faster on a GPU than a CPU. The challenge was that programming GPUs required specialized graphics knowledge. You had to trick the hardware into treating your scientific calculation as if it were a rendering problem.
Nvidia made a bet. They invested over a billion dollars developing CUDA—Compute Unified Device Architecture—a software platform that let programmers write ordinary code that would run on Nvidia's GPUs. No graphics tricks required. Introduced in 2006, CUDA transformed GPUs from specialized graphics hardware into general-purpose parallel processors.
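As a rough illustration of what "ordinary code" means here, below is a small CUDA program in the style the platform made possible: a plain C-like kernel that scales and adds two arrays across thousands of GPU threads, with no triangles, textures, or graphics state anywhere in sight. The array sizes and values are arbitrary, and the example uses present-day CUDA conveniences (unified memory) rather than the 2006-era API.

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// SAXPY: y = a*x + y, one GPU thread per array element.
__global__ void saxpy(int n, float a, const float* x, float* y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;                      // about a million elements
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));   // unified memory, visible to CPU and GPU
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover every element in parallel.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", y[0]);                // expect 4.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```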
CUDA was a massive gamble. Nvidia was spending heavily on technology that wouldn't immediately sell more gaming cards. Many investors were skeptical.
Then came deep learning.
The AI Revolution
Deep learning is a branch of artificial intelligence where software learns patterns from data rather than following explicit rules. The technique had existed for decades, but it was considered impractical for most applications. Training a deep learning model requires performing astronomical numbers of mathematical operations, and traditional computers simply weren't fast enough to make it work at scale.
GPUs changed that equation entirely.
The mathematical operations at the heart of deep learning—matrix multiplications and similar calculations—are exactly the kind of parallel workload that GPUs excel at. Researchers discovered they could train models on Nvidia's GPUs using CUDA, and what had taken weeks on CPUs now took hours or days. Problems that had been theoretically interesting but practically impossible became solvable.
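A hand-written sketch of that core workload might look like the kernel below: a dense matrix multiply with one GPU thread per output element, every one of them independent. Real deep learning frameworks call heavily tuned Nvidia libraries such as cuBLAS and cuDNN rather than naive kernels like this; the point is only the shape of the parallelism.

```cuda
// Naive dense matrix multiply C = A * B (M x K times K x N), one thread per output element.
__global__ void matmul(const float* A, const float* B, float* C,
                       int M, int N, int K)
{
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row >= M || col >= N) return;

    float sum = 0.0f;
    for (int k = 0; k < K; ++k)
        sum += A[row * K + k] * B[k * N + col];  // each output needs K multiply-adds
    C[row * N + col] = sum;                      // all M*N outputs can be computed at once
}
```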
The breakthrough moment came in 2012, when a deep learning model called AlexNet, trained on Nvidia GPUs, won the ImageNet image recognition competition by a dramatic margin. The technology that would eventually power ChatGPT, self-driving cars, and countless other applications had found its hardware platform.
Nvidia was positioned perfectly. They hadn't predicted deep learning specifically—nobody had—but their billion-dollar bet on CUDA meant they had the only ecosystem for running these new AI systems at scale. By 2025, Nvidia controlled more than eighty percent of the market for GPUs used in training and deploying AI models.
Triangles All the Way Down
In 2013, Nvidia announced plans for a new headquarters: two giant triangle-shaped buildings across the highway from their existing campus in Santa Clara. The company chose triangles as its design theme. As Huang explained, the triangle is "the fundamental building block of computer graphics."
It was also, perhaps unintentionally, a monument to the moment that nearly destroyed them—when they bet wrong on quadrilaterals and almost ran out of money.
The company that emerged from those near-death experiences kept diversifying. They developed Tegra, a line of mobile processors for smartphones, tablets, and automotive systems. They created the Shield line of gaming devices and launched GeForce Now, a cloud gaming service that lets players stream games to any device. They partnered with Toyota and Baidu on autonomous vehicle technology.
But it was the data center business—selling GPUs for AI training and high-performance computing—that transformed Nvidia from a successful chipmaker into something unprecedented.
The Four Trillion Dollar Company
In 2023, Nvidia became the seventh American company to reach a one trillion dollar market valuation. This alone was remarkable for a graphics card company.
Then things accelerated.
The release of ChatGPT in late 2022 triggered what journalists began calling an "AI boom." Suddenly every technology company wanted to train and deploy AI models, and almost all of them needed Nvidia's hardware to do it. Demand for data center GPUs exploded.
In 2025, Nvidia became the first company in history to surpass four trillion dollars in market capitalization. Then it passed five trillion. It had become, by some measures, the most valuable company on Earth.
The numbers are difficult to contextualize. As of early 2025, Nvidia held ninety-two percent of the discrete GPU market for desktops and laptops. Their chips powered over seventy-five percent of the world's top five hundred supercomputers. They were counted among the "Magnificent Seven"—the handful of technology giants that dominate the U.S. stock market.
Thirty years earlier, the company had forty employees and one month's payroll in the bank.
The Diner on Berryessa Road
There's something fitting about Nvidia's origin at a Denny's. The chain is famous for being open twenty-four hours, serving anyone who walks in, at any time of night. It's where you go when you need a place to think and plan without pretension.
Jensen Huang, the Taiwanese-American engineer who gave up a comfortable corporate job to chase a vision discussed over coffee and pancakes, remains Nvidia's CEO more than three decades later. This kind of founder longevity is extremely rare in technology companies, which typically cycle through executives every few years.
Huang's management philosophy reflects those early near-death experiences. He's known for being intensely demanding, for maintaining what he calls a "flat" organization where information flows freely, and for never quite shaking the sense that the company is always thirty days from going out of business.
That paranoia might seem irrational for a five trillion dollar company. But Nvidia has watched dozens of competitors rise and fall. They've seen technologies they dominated become obsolete overnight. They remember being days from running out of money while their product strategy collapsed around them.
The Denny's where they founded the company still exists, by the way. It looks like any other Denny's—unremarkable, functional, open all night. There's no plaque commemorating what happened there. Just a booth where three engineers decided to bet everything on a future they could barely imagine, and where the most valuable company in the world began with forty thousand dollars and a shared dream about the power of graphics.
What Comes Next
Nvidia's dominance isn't guaranteed to last forever. Competitors are investing billions in alternative AI chips. Major customers like Google, Amazon, and Microsoft are developing their own processors to reduce dependence on a single supplier. The Chinese market, which represents enormous potential demand, faces restrictions on accessing Nvidia's most advanced chips due to American export controls.
But for now, the company occupies a position unlike anything in technology history. They don't just sell the shovels during a gold rush—they invented the only shovel that works for this particular kind of digging, and they've been refining it for two decades while everyone else was focused on other things.
The CUDA ecosystem they built, that billion-dollar gamble that seemed so risky at the time, created what economists call a moat: software developers learned to write programs for Nvidia's platform, which meant researchers preferred Nvidia hardware, which meant companies standardized on Nvidia systems, which meant the next generation of developers learned CUDA. Each cycle reinforced the others.
Whether this advantage proves permanent or temporary, Nvidia's journey from a diner booth to the pinnacle of global capitalism remains one of the most improbable stories in business history. Three engineers who couldn't agree on a name, who nearly failed multiple times, who bet wrong on fundamental technology choices and somehow survived anyway—they built something that powers the artificial intelligence systems reshaping the world.
Not bad for a company that started with forty thousand dollars and no name.