RockstarMarkets
Markets · Narrative · Updated 2h ago
Part of: AI Capex

Memory Chip Stocks Soar on AI Demand But Valuations Compress; NVDA, AVGO, MU Defy P/E Logic

Insatiable demand for AI training and inference memory chips has lifted semiconductor stocks to record highs, yet they are becoming cheaper by traditional metrics. NVDA trades at stretched valuations but memory suppliers (MU, SK Hynix) offer valuation relief as demand lifts all boats.

Rocky · RockstarMarkets desk
Synthesised from 8 wires · 53 mentions in the last 24h
Sentiment +70 · Momentum 85 · Mentions (24h) 53 · Articles (24h) 83

Key facts

  • NVDA jumped $22 in one week; POET surged 20-25 percent on short squeeze
  • $100k invested in NVDA in early 2023 now worth $1.5 million
  • Memory chip stocks are cheaper by traditional metrics despite soaring prices, with upward earnings revisions outpacing valuations
  • Broadcom citing capacity constraints in switches and optics; data-center capex could extend timelines
  • JPMorgan noted AI buildout expanding beyond training into inference and networking infrastructure

What's happening

The memory chip rally represents one of the sharpest disconnects between price momentum and valuation expansion in the current cycle. AI data centers are burning through every H100, H200, and next-gen GPU available, which in turn demands massive DRAM and NAND inventories. NVDA posted a $22 jump in a single week; POET (optical interconnect) surged 20-25 percent on the week amid a short squeeze; ARM surged post-earnings on AI momentum. Yet Bloomberg's analysis reveals that despite soaring share prices, memory stocks are becoming *cheaper* to own: price-to-book and price-to-earnings ratios are compressing as upward earnings revisions outpace share-price appreciation.
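The compression mechanic is simple arithmetic: if forward earnings estimates rise faster than the share price, the forward P/E falls even as the stock makes new highs. A minimal sketch with hypothetical numbers (illustrative only, not actual MU or NVDA quotes or estimates):

```python
# Hypothetical illustration of multiple compression: the price rises,
# but forward EPS estimates rise faster, so the forward P/E falls.

def forward_pe(price: float, eps_forward: float) -> float:
    """Forward price-to-earnings ratio."""
    return price / eps_forward

# Illustrative figures only -- not real quotes or analyst estimates.
price_before, eps_before = 100.0, 5.0   # forward P/E = 20.0x
price_after, eps_after = 130.0, 8.0     # price +30%, EPS estimates +60%

pe_before = forward_pe(price_before, eps_before)  # 20.0
pe_after = forward_pe(price_after, eps_after)     # 16.25

print(f"Forward P/E before: {pe_before:.2f}x, after: {pe_after:.2f}x")
# The stock is up 30% yet roughly 19% cheaper on a forward P/E basis.
```

The same arithmetic explains how a rallying stock can "get cheaper": the denominator is moving faster than the numerator.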

This creates a bifurcation in market narratives. On one hand, retail and some institutional traders are euphoric: if you bought $100k of NVDA in early 2023, it would be worth $1.5 million today. On the other hand, sophisticated allocators are hedging: Broadcom's latest commentary suggests chip capacity constraints are widening ("may present a constraint," per AVGO), and supply-chain bottlenecks in optics, switches, and cooling could force data-center operators to stretch capex timelines. JPMorgan and others flagged that AI infrastructure demand is broadening beyond GPU training into long-term inference capacity and networking, which is less margin-accretive than H200 sales. The key risk: if AI training capex reaches "peak efficiency" (spending per FLOP plateaus), memory demand could normalize and growth rates decelerate sharply.

The data reflects cautious optimism with execution risk. Cisco's latest guidance showed AI networking demand is strong but not blockbuster; POET's short squeeze was driven by extreme short interest, not fundamental visibility; and NVDA's denial of chip export restrictions to China, paired with Nvidia CEO's casual Beijing noodle moment, signals both confidence in market access and anxiety about geopolitical chokepoints. Foreign investors fear that Japan's corporate governance reforms could face rollback, threatening the ARM and semiconductor supply-chain anchor.

The narrative holds if: (1) AI capex accelerates through 2027; (2) memory margins stay fat despite increased competition; (3) geopolitical restrictions don't narrow China access. It breaks if AI training capex peaks sooner, memory prices compress, or US-China tension snaps semiconductor supply chains.

What to watch next

  • NVDA Q2 2026 earnings and China revenue guidance: late July
  • Broadcom Q2 guidance on capacity and backlog: June
  • TSMC capacity utilization and pricing commentary: Q2 earnings, May