SuperX Launches XN9160-B300 AI Server with NVIDIA for Next-Gen Compute


SuperX AI Technology Limited (NASDAQ: SUPX) has announced its latest flagship, the SuperX XN9160-B300 AI Server, equipped with eight NVIDIA Blackwell B300 GPUs to deliver peak performance for AI training, inference, and high-performance computing (HPC) workloads.

Designed for scalability, efficiency, and modularity, the XN9160-B300 is built to address the demands of data centers, AI factories, and scientific research environments.

Key Specifications & Architecture

  • Hardware & Chassis: The AI server is housed in an 8U form factor and packs dual Intel Xeon 6 CPUs, 32 DDR5 memory slots, and high-speed networking.

  • GPU & Memory: It integrates NVIDIA’s HGX B300 module with eight B300 GPUs. The system deploys 2,304 GB of unified HBM3E memory (288 GB per GPU), eliminating the need for memory offloading and enabling large model training and inference.

  • Interconnect & Networking: Connectivity includes 8 × 800 Gb/s InfiniBand or dual 400 Gb/s Ethernet, plus 5th-generation NVLink, ensuring ultra-low latency and high throughput for distributed workloads.

  • Performance Gains: NVIDIA’s Blackwell Ultra architecture delivers roughly 50% more NVFP4 compute throughput and 50% more HBM capacity than the preceding Blackwell (B200) generation.
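To put the pooled memory figure in context, here is an illustrative back-of-envelope check (not vendor-supplied sizing guidance) of which model weight footprints fit inside the server's 2,304 GB of HBM3E, assuming the stated 288 GB per GPU across eight GPUs and ignoring KV cache and activation memory:

```python
# Illustrative sketch: does a given model's weights fit in the
# XN9160-B300's pooled HBM3E? Figures below are assumptions for
# demonstration, not official benchmarks.

HBM_PER_GPU_GB = 288
NUM_GPUS = 8
TOTAL_HBM_GB = HBM_PER_GPU_GB * NUM_GPUS  # 2,304 GB, as stated in the specs

def weights_footprint_gb(params_billions: float, bytes_per_param: float) -> float:
    """Memory for model weights only (KV cache and activations excluded)."""
    return params_billions * 1e9 * bytes_per_param / 1e9

# Hypothetical model sizes at common low-precision formats
for params_b, precision, bpp in [(70, "FP8", 1.0), (405, "FP8", 1.0), (405, "FP4", 0.5)]:
    need = weights_footprint_gb(params_b, bpp)
    verdict = "fits in HBM" if need <= TOTAL_HBM_GB else "needs offloading"
    print(f"{params_b}B params @ {precision}: {need:.0f} GB -> {verdict}")
```

Even a 405B-parameter model at FP8 (~405 GB of weights) sits comfortably inside the 2,304 GB pool, which is the sense in which the announcement says memory offloading is unnecessary for large-model inference.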

Use Cases & Target Markets

SuperX positions the XN9160-B300 server for a wide range of high-demand applications, such as:

  • AI model training & inference: Particularly for foundation models, multimodal systems, and long-context models

  • Scientific & HPC work: Climate modeling, genomics, physics simulations, and large-scale research

  • Enterprise & financial analytics: Real-time risk modeling, quantitative simulations, and data-intensive workflows

  • Edge & data center transformation: Building AI “superpods” or next-gen compute clusters

Why This Matters

  • Pushing AI infrastructure forward: This server marks a step toward hardware that can support ever-larger models with fewer bottlenecks.

  • Efficiency under load: With memory pooled across GPUs and high-bandwidth interconnects, scaling a workload across all eight GPUs incurs less communication overhead.

  • Modular & future proof: The design supports upgrades, maintainability, and integration into evolving AI data center ecosystems.

Discover IT Tech News for the latest updates on IT advancements and AI innovations.

Read related news: https://ittech-news.com/supabase-raises-100m-at-5b-valuation-co-led-by-accel-and-peak-xv/
