RockstarMarkets
Markets · Narrative · Updated just now
Part of: AI Capex

Memory Chips (MU, AVGO) Defy Gravity: Stock Prices Soar While Valuations Compress

Insatiable AI demand for DRAM and NAND has driven memory chip stocks MU and AVGO to record highs, yet forward P/E ratios have compressed sharply. The disconnect signals either a looming correction or margin expansion that justifies the prices; earnings will be the arbiter.

Rocky · RockstarMarkets desk
Synthesised from 8 wires · 34 mentions in the last 24h
Sentiment +35 · Momentum 65 · Mentions (24h) 34 · Articles (24h) 77

Key facts

  • MU and AVGO stock prices hit record highs; forward P/E ratios compressed despite gains
  • Memory chip demand driven by AI hyperscaler capex for training and inference workloads
  • Valuation compression of this kind typically precedes either an earnings beat (rally) or a miss (correction)
  • DRAM and NAND now genuine supply constraints for data center buildout
  • Q2 2026 earnings season critical: management commentary on demand sustainability

What's happening

One of the most counterintuitive trades on Wall Street right now is the sharp compression of memory chip valuations even as their stock prices hit all-time highs. Micron Technology (MU) and Broadcom (AVGO) have been bid up aggressively on the back of insatiable demand for DRAM and NAND flash memory needed to support large language model training and inference. Yet traditional metrics suggest these stocks should be trading at a discount, not a premium, to historical averages.

The math is straightforward: MU's stock is up roughly 25-30% since the start of May, yet its forward P/E ratio has actually fallen. This happens when analyst earnings estimates rise even faster than the stock price. In this case, the market is pricing in such large upward earnings revisions that even higher stock prices look cheap on a forward basis. Bloomberg's analysis notes that this pattern typically precedes either (a) a violent correction if earnings disappoint, or (b) a sustained rally if management raises guidance and proves the skeptics wrong.
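The mechanics of that compression are easy to verify with a back-of-the-envelope calculation. The figures below are hypothetical round numbers for illustration, not Micron's actual price or consensus estimates:

```python
# Forward P/E = current price / next-12-months consensus EPS estimate.
# All numbers below are hypothetical, chosen only to illustrate the mechanic.

price_may, price_now = 100.0, 127.0    # stock up 27% since the start of May
eps_est_may, eps_est_now = 8.0, 12.0   # forward EPS estimates revised up 50%

pe_may = price_may / eps_est_may       # forward P/E at the start of May
pe_now = price_now / eps_est_now       # forward P/E today

print(f"Price change:  {price_now / price_may - 1:+.0%}")
print(f"Forward P/E:   {pe_may:.1f}x -> {pe_now:.1f}x")
# The multiple compresses because estimates rose faster than the price.
```

With these inputs the stock gains 27% while the forward multiple falls from 12.5x to roughly 10.6x; the multiple can only compress while the price rises if the denominator (estimated earnings) grows faster than the numerator.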

The narrative is driven by the AI capex super-cycle. Every major cloud hyperscaler (AWS, Google Cloud, Azure) is in an arms race to build data center clusters for generative AI workloads. These require enormous amounts of memory bandwidth; the most advanced GPUs (H100s, H200s) are bandwidth-limited, not compute-limited. This means DRAM and NAND are genuine bottlenecks. Micron is a primary supplier of that memory, while Broadcom supplies the networking and custom silicon that ties these clusters together, giving both companies pricing power they have not enjoyed since the 2016-2018 cycle.

The risks are threefold. First, if AI capex moderates even slightly, memory demand craters and inventory builds up. Second, if rivals Samsung and SK Hynix ramp competing supply, pricing power erodes fast. Third, the Fed's shift toward Warsh and a hold-steady stance on rates could extend the high-rate environment, pressuring multiples across growth stocks, semis included. For now, the disconnect is real, and prudent traders are watching Q2 earnings for any sign of demand slowdown.

What to watch next

  • Micron (MU) earnings: late May 2026 (guidance on memory demand)
  • Broadcom (AVGO) earnings: June 2026 (networking and memory exposure)
  • Samsung or SK Hynix memory pricing announcements: mid-June