Insights from the AI Lab

Sponsored by Dell Technologies

Fresh insights from our datacenter in Colorado, bringing real-world performance analysis and hands-on testing of enterprise technologies.


AI On-Premises: A Look at OpenAI GPT-OSS-120B

Bring AI in-house without compromise. OpenAI’s GPT-OSS-120B model, tested on Dell PowerEdge servers with NVIDIA and AMD GPUs, shows that enterprises can achieve cloud-class performance on-prem while keeping full control of their data.

Emerging Tech

GPU Acceleration


Network Topology Analysis: Scaling Considerations for Training and Inference

Rethink your AI fabric. Discover how rail-based architectures outperform traditional network designs, reducing cost and complexity while enabling massive scale for LLMs and MoE models.

AI Networking

GPU Acceleration


Enterprise Digital Twins Transforming Modern Data Center Development and Operations

Reimagine your data center lifecycle. Enterprise digital twins accelerate design, cut costs, and optimize operations with AI-driven insights and real-time visualization.

Emerging Tech


Optimizing AI Workloads with Dell Ethernet Infrastructure

Unlock 400G networking at scale. Dell PowerSwitch Z series plus 400G NICs deliver lower-latency, higher-throughput fabrics that keep GPUs busy and jobs moving.

AI Networking

GPU Acceleration


AI Storage Pipeline Acceleration with Dell PERC H975i (PERC13)

Meet the RAID controller built for AI. Dell PERC13 delivers higher IOPS, fast rebuilds, and balanced throughput so data keeps up with accelerated training and inference.

Server Platforms

Storage Systems



About the Lab

Introducing our AI Performance and PoC lab in Colorado:

  • Built for benchmarking, iteration, and real-world results
  • 3MW+ capacity with air-cooled systems today, liquid-cooled tomorrow
  • Supports LLM training and inference, RAG pipelines, and computer vision
  • Ready for next-gen accelerators with 400/800G fabrics, high-memory nodes, and scalable NVMe/object storage
  • Engineers get secure remote access, reproducible testbeds, and deep observability
  • Optimized for altitude efficiency and deployment flexibility
  • Mission: Accelerating innovation into real-world AI impact