QCOM AI Accelerators, NVIDIA in 6G, X-Ray Litho as EUV Killer?, Skyworks-Qorvo Merger
The last few weeks have been action-packed in semis! Here is a summary of the content published in this newsletter over the past month. If you’re new, start here!
October has been a crazy month in the semis! Nvidia is now a $5T company, after hitting the $4T mark just 3 months ago, and according to BBC News:
Nvidia’s value now exceeds the GDP of every country except the US and China, according to data from the World Bank, and is higher than entire sectors of the S&P 500.
A quick check indicates that this is indeed true! There have also been so many circular AI financing deals that I’ve lost track, so I won’t go through them. I just get the feeling that everyone is betting on everyone else’s horse at the same time. In other news, Amazon laid off tens of thousands of workers, seemingly all from the corporate side and not warehousing. While not strictly semi news, I mention it only because there are theories online that the layoffs are meant to fund GPU purchases; aka “trading talent for GPUs.” Whatever the reason, that’s a lot of people out of a job and I feel for them.
Now let’s get to four stories that really got my attention this month.
Qualcomm enters AI datacenter market for inference
This month Qualcomm announced their AI200 and AI250 rack-scale accelerators for AI inference, a bid to enter the AI datacenter market. Given that Qualcomm has high-end Arm CPUs at its disposal and Hexagon NPU cores for AI workloads, it seems entirely reasonable to try to carve out a portion of the datacenter market. Wall Street seemingly liked the news, with the stock jumping over 15% on the announcement. Interestingly, they also have a customer lined up: Humain AI from the Kingdom of Saudi Arabia is targeting 200MW of AI inference racks from Qualcomm. At an estimated 160kW per AI200 rack, we are looking at roughly 1,250 racks to be delivered to Humain AI over the next few years.
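As a sanity check on that figure, here is a minimal back-of-envelope sketch. The 160kW-per-rack number is an estimate, not an official Qualcomm spec, so treat the result as an order-of-magnitude guess:

```python
# Back-of-envelope rack count for the Humain deployment.
# Assumes ~160 kW per AI200 rack (estimated, not confirmed by Qualcomm).

TOTAL_DEPLOYMENT_MW = 200    # Humain's stated 200 MW inference target
POWER_PER_RACK_KW = 160      # assumed draw of one AI200 rack

racks = (TOTAL_DEPLOYMENT_MW * 1_000) / POWER_PER_RACK_KW
print(f"Approximate racks required: {racks:,.0f}")  # -> 1,250
```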
What was missing from the announcement was any sort of technical specs on what these racks are capable of. It is estimated that the system uses LPDDR memory rather than HBM. The mention of near-memory compute and a claimed 10x memory bandwidth is reminiscent of what d-Matrix is doing in their Pavehawk platform. But The Next Platform has pieced together what they think are the specifications based on ...