Insights from the AI Lab

Sponsored by Dell Technologies

Fresh insights from our datacenter in Colorado, delivering real-world performance analysis through hands-on testing of enterprise technologies.


Dell AMD Instinct Series GPU Cluster with Dell Networking

Accelerate innovation. With AMD Instinct GPUs and Broadcom Ethernet, the Dell PowerEdge XE9680 powers faster model training, efficient scaling, and simplified operations for AI at scale.

AI Networking

GPU Acceleration

Server Platforms


Dell PowerEdge XE9680 H200 Cluster with Dell 400GbE Networking

Engineered for efficiency and speed. The Dell PowerEdge XE9680 with NVIDIA H200 GPUs and Broadcom Ethernet achieves near-perfect scaling and low-latency throughput for AI training and inference.

AI Networking

GPU Acceleration

Server Platforms


AI On-Premises: A Look at OpenAI GPT-OSS-120B

Bring AI in-house without compromise. OpenAI’s GPT-OSS-120B model, tested on Dell PowerEdge servers with NVIDIA and AMD GPUs, proves enterprises can achieve cloud-class performance on-prem while keeping full control of their data.

Emerging Tech

GPU Acceleration


Network Topology Analysis: Scaling Considerations for Training and Inference

Rethink your AI fabric. Discover how rail-based architectures outperform traditional networks, reducing cost and complexity while enabling massive scale for LLMs and MoE models.

AI Networking

GPU Acceleration


Enterprise Digital Twins Transforming Modern Data Center Development and Operations

Reimagine your data center lifecycle. Enterprise digital twins accelerate design, cut costs, and optimize operations with AI-driven insights and real-time visualization.

Emerging Tech



About the Lab

Introducing our AI Performance and PoC lab in Colorado:

  • Built for benchmarking, iteration, and real-world results
  • 3MW+ capacity with air-cooled systems today, liquid-cooled tomorrow
  • Supports LLM training and inference, RAG pipelines, and computer vision
  • Ready for next-gen accelerators with 400/800G fabrics, high-memory nodes, and scalable NVMe/object storage
  • Engineers get secure remote access, reproducible testbeds, and deep observability
  • Optimized for altitude efficiency and deployment flexibility
  • Mission: Accelerating innovation into real-world AI impact