Insights from the AI Lab

Sponsored by Dell Technologies

Fresh insights from our datacenter in Colorado, bringing real-world performance analysis through hands-on testing of enterprise technologies.


Network Topology Optimization for AI Workloads

From bottleneck to breakthrough. Dell PowerEdge XE9680 with Broadcom networking delivers congestion-aware fabrics that improve efficiency, streamline training, and turn network optimization into a competitive advantage.

Tags: AI Networking, Server Platforms


Scaling AI with Dell PowerEdge XE9680

Choice built in. Dell PowerEdge XE9680 with Broadcom networking empowers enterprises to scale AI their way and balance performance, cost, and ecosystem flexibility on one unified platform.

Tags: GPU Acceleration, Server Platforms


Dell AMD Instinct Series GPU Cluster with Dell Networking

Accelerate innovation. With AMD Instinct GPUs and Broadcom Ethernet, the Dell PowerEdge XE9680 powers faster model training, efficient scaling, and simplified operations for AI at scale.

Tags: AI Networking, GPU Acceleration, Server Platforms


Dell PowerEdge XE9680 H200 Cluster with Dell 400GbE Networking

Engineered for efficiency and speed. The Dell PowerEdge XE9680 with NVIDIA H200 GPUs and Broadcom Ethernet achieves near-perfect scaling and low-latency throughput for AI training and inference.

Tags: AI Networking, GPU Acceleration, Server Platforms


AI On-Premises: A Look at OpenAI GPT-OSS-120B

Bring AI in-house without compromise. OpenAI’s GPT-OSS-120B model, tested on Dell PowerEdge servers with NVIDIA and AMD GPUs, proves enterprises can achieve cloud-class performance on-prem while keeping full control of their data.

Tags: Emerging Tech, GPU Acceleration



About the Lab

Introducing our AI Performance and PoC lab in Colorado:

  • Built for benchmarking, iteration, and real-world results
  • 3MW+ capacity with air-cooled systems today, liquid-cooled tomorrow
  • Supports LLM training and inference, RAG pipelines, and computer vision
  • Ready for next-gen accelerators with 400/800G fabrics, high-memory nodes, and scalable NVMe/object storage
  • Engineers get secure remote access, reproducible testbeds, and deep observability
  • Optimized for altitude efficiency and deployment flexibility
  • Mission: Accelerating innovation into real-world AI impact