In November 2024, NVIDIA's market capitalization soared to a staggering $3.64 trillion, setting a record that saw it briefly surpass giants like Apple and Microsoft. The transformative power of NVIDIA's AI technology has permeated every corner of modern life, rendering it impossible to overlook. The company commands over 95% of the AI training chip market, and nearly all significant AI models worldwide are trained on NVIDIA's graphics cards. Advances in its GPU technology have catalyzed breakthroughs in deep learning, autonomous driving, and intelligent healthcare.
Driving this monumental change is Jensen Huang, NVIDIA's visionary co-founder and CEO. In a newly released work by the trend observer and renowned tech journalist Stephen Witt, titled "Jensen Huang: The Heart of NVIDIA," Huang emerges as an almost prophetic figure, accurately anticipating each pivotal technological milestone.
As I often say, it seems like they’re simply handling data.
Since 2018, NVIDIA's graphics cards have supported ray tracing, a technique that convincingly simulates how light reflects off surfaces, producing extraordinarily lifelike visuals. For years, ray tracing was a coveted goal in computer graphics; NVIDIA has now refined it to the point of real-time rendering, allowing instant visual gratification.
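The core idea behind ray tracing is simple to state: follow a ray of light from the camera until it strikes a surface, then shade that point according to how the surface faces the light. As a purely illustrative aside (a toy sketch, not NVIDIA's implementation), the two basic steps can be written in a few lines:

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Distance along a (normalized) ray to the nearest sphere hit, or None on a miss."""
    # Vector from the ray origin to the sphere center
    oc = tuple(o - c for o, c in zip(origin, center))
    # Quadratic coefficients for |origin + t*direction - center|^2 = radius^2
    # (the t^2 coefficient is 1 because direction is assumed unit-length)
    b = 2.0 * sum(d * x for d, x in zip(direction, oc))
    c = sum(x * x for x in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None  # the ray never touches the sphere
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

def lambert(normal, light_dir):
    """Diffuse brightness: cosine of the angle between surface normal and light."""
    return max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
```

A real-time ray tracer repeats this intersection test billions of times per second, for every pixel and every bounce, which is why it long remained out of reach without dedicated GPU hardware.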
Inside the executive suite at NVIDIA, a product demonstration expert showed me a stunning three-dimensional rendering of a Japanese ramen shop. The scene was breathtaking: light glinted off the metallic counter while steam wafted from a simmering pot of broth. To the untrained eye, there was no indication that this was anything but real.
Next, the expert introduced me to Diane, a hyper-realistic digital avatar fluent in five languages. This composite entity was born of a powerful generative AI trained on millions of real-life videos.
While Diane's beauty was certainly striking, what left an even greater impression was the detail: the tiny blackheads on her nose, the delicate peach fuzz on her upper lip. Only a subtle glimmer in the whites of her eyes hinted at her non-human nature. "We're working on improving that," the product expert said.
Jensen Huang's grand vision is to merge NVIDIA's decades of research in computer graphics with generative AI. He envisions a future where image-generating AI can seamlessly render complex, habitable three-dimensional worlds populated with lifelike characters, while language-processing AI interprets voice commands in real time. Once these technologies converge, users will be able to create entire universes simply by speaking.
As I exited the product demonstration area, I felt a rush of exhilaration mixed with dizzying awe. My imagination wandered into realms reminiscent of science fiction and biblical creation narratives. Seated on a meticulously trimmed triangular couch in the corner, I strained to envision the future world my daughter would inhabit. NVIDIA's executives were deeply engrossed in momentous advances in computer science, yet I couldn't help asking them whether wielding such immense power was truly wise. The condescending looks I received suggested I had questioned the utility of a washing machine.
I voiced my concern that artificial intelligence could cause human deaths. Bryan Catanzaro responded, "Well, electricity kills people every year." Pressing further, I asked whether such technology could mean the death of art. "It will elevate art!" Dwight Diercks asserted, adding, "It will make you better at your job."
I probed further: could AI gain self-awareness in the near future? "To be a sentient being, you must have consciousness," Huang replied. "You need a certain level of self-recognition, right? So, no. I don't know when that might happen."
Throughout my interviews with Huang, I pressed him on this issue, and his answers seldom changed. I referenced Geoffrey Hinton's concerns. "Humans are merely a transitional phase in the evolution of intelligence," Hinton had remarked in an interview with PBS. Huang scoffed: "A lot of researchers don't understand why he says that. Maybe he just wants attention for his work."
This flippant dismissal caught me by surprise. Context matters: Hinton is one of the most forward-thinking researchers in AI history, and NVIDIA's financial success is intricately tied to his lab's work, a fact Huang has acknowledged multiple times. Hinton is no casual street protester but a leading mind in the field, a descendant of George Boole, warning us to be deeply concerned.
Yet Huang remained dismissive. "You see, you bought a hot dog, and then the machine recommended adding ketchup and mustard," he explained. "Does that mean the end of humankind?" He cited society's rapid adaptation to inventions like cars, alarm clocks, and smartphones, claiming we would grow just as accustomed to robotic vacuum cleaners.
"Robots aren't doing anything extraordinary," he stated. "As I said, they're just processing data. Once you understand how they operate, the world seems far less strange." With continued prodding, however, Huang eventually grew irritated. "I'm tired of all these assumptions with no evidence behind them," he retorted.
Regardless of how advanced artificial intelligence may become, it should serve us.
The globally renowned AI scientist Hinton once put the probability of AI causing catastrophic consequences for humanity as high as fifty percent. When I spoke with him in 2024, he had lowered that estimate to 10 to 20 percent, explaining that many intelligent people he respects vehemently disagreed with his views. "The most notable among them is certainly Jensen Huang," he remarked.
Huang puts the probability of AI causing disastrous consequences at zero; indeed, although a probability cannot technically fall below zero, Huang somehow seems to manage it. He publicly dismisses the entire evaluative framework as foolish, accusing those who discuss the topic of hindering human progress. At a restaurant, he even insinuated that Hinton's indulgence in such speculation damages his academic credibility.
This pressure seemed to make Hinton retreat somewhat, while Yoshua Bengio held his position steadfastly. Notably, Bengio is the only leading AI scientist who has never accepted funding from Silicon Valley. Both Hinton and Bengio supported a California bill that would regulate AI models with training costs exceeding $100 million. The initiative, the "Safe and Secure Innovation for Frontier Artificial Intelligence Models Act" (SB 1047), faced fierce opposition in Silicon Valley: venture capitalists, tech-industry coalitions, and Sacramento lobbyists fought it collectively. Some politicians spoke out against it, and scholars such as Stanford AI professor Fei-Fei Li argued that the bill would stifle innovation without effectively reducing risk.
Andrew Ng, who rose to fame at Google for teaching neural networks to recognize cat images, likened public worries about an AI takeover to worrying about hypothetical overpopulation on Mars.
Polls indicated that nearly 80% of the public supported SB 1047, yet in September 2024 California Governor Gavin Newsom vetoed the measure.
Huang has made no public comment on SB 1047, but he continues to stress that, for now, there is no data to support the wilder speculation about AI risk. When I relayed Huang's objection to Bengio, he grew heated. "Of course there's no data!" he exclaimed. "Humanity hasn't been wiped out yet! Are we supposed to wait until humans have perished several times before we say, 'Oh, now we have data'?" His argument was compelling. No data anywhere foresaw the breakthroughs of AlexNet or the Transformer architecture; in just a decade, AI has made two unforeseen, transformative leaps in capability. Bengio maintains that current models pose no immediate threat to human life, but who can predict the next breakthrough? No one can say what it will bring or when it will arrive.
Even if superintelligent AI does not emerge within the next decade, its emergence within twenty or even a hundred years seems inevitable. That timeline lies beyond any investment-return forecast but well within the span of human history. Within a generation or two, Homo sapiens may no longer be the dominant species on Earth; venture capitalists do not plan on such horizons.
The mismatch between short-term profits and long-term potential risks has ignited turmoil within AI startups. Take OpenAI: in November 2023, the company's board staged an extraordinary coup. Ilya Sutskever initially voted to oust Sam Altman, then soon asked for his reinstatement. Ultimately Altman prevailed, and the other nonprofit board members were replaced.
After Altman reclaimed his position, one thing became clear: despite OpenAI's nonprofit mission statement, the "capped-profit" arrangement with Microsoft is pushing the organization to develop the most complex AI models in history.
Following these events, Sutskever shied away from the media, but when I had spoken with him back in September 2023, he indicated a shift in focus from building ever-larger language models to ensuring that superintelligent AI aligns with human interests. "I can't comment on specific models," Sutskever explained, "but I'm conceptualizing something I believe can address concerns about AI going rogue and committing highly undesirable acts."
As of the end of 2024, however, despite OpenAI's increasingly advanced products, that solution remains elusive. In May 2024, OpenAI launched GPT-4o (the "o" stands for "omni"), a multimodal AI that accepts any combination of text, audio, images, and video and returns any mix of text, audio, and image outputs. The model achieves near-instant response times thanks to the lightning-fast inference of NVIDIA's next-generation chips. The day after GPT-4o's launch, Sutskever resigned.
OpenAI quickly wrapped a dialogue module around GPT-4o, producing responses so instantaneous that conversing with it felt like engaging a superintelligent entity; many compared it to the AI in the film "Her." Not coincidentally, just as OpenAI released GPT-4o, Google showcased Astra, an augmented-reality AI assistant capable of instantly answering any question, recalling any detail, and describing any environment. Meanwhile another startup, Anthropic, launched its Claude models, matching or even surpassing GPT-4o on numerous benchmarks.
National governments, wary of entrusting sensitive data to the cloud, are now building large-scale sovereign AI training centers.
Elon Musk's xAI has struck a monumental deal with Oracle, worth up to $10 billion, to lease GPU servers; Musk seems to have tempered the existential concerns about AI he voiced back in 2015. Mark Zuckerberg's Meta has announced the largest such investment in history: $30 billion to procure one million NVIDIA chips, with a dedicated nuclear reactor to supply the power.
To imagine that any of these initiatives might be halted by a bill in the California legislature reflects an overly optimistic faith in state authority. As NVIDIA's GPUs grow faster, so do expectations for economic growth. Silicon Valley's titans have all placed their bets, and any constraint on their ascent would surely trigger a stock-market crash. No politician commands sufficient influence to make that happen.
Yet despite the marginalization of Bengio, Hinton, and Sutskever by capital, their perspectives remain paramount.