AI Storage Pipeline Acceleration with Dell PERC H975i (PERC13)

AI Storage Challenges

As organizations increasingly adopt AI technologies, particularly large language models and generative AI, they face unprecedented demands on their storage infrastructure. AI workloads such as fine-tuning, inference, and vector operations require large datasets to be transferred, processed, and stored with minimal latency and consistently high bandwidth.

The scale of these demands is substantial. A typical enterprise AI deployment now requires storage systems capable of delivering sustained bandwidth exceeding 50 GB/s for efficient training, along with tens of millions of small-block random read operations per second for inference serving. Additionally, retrieval-augmented generation (RAG) solutions demand storage systems that can handle both high-velocity vector database operations and rapid document retrieval.
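
As a rough illustration of what these figures imply, the short Python sketch below converts the cited bandwidth and IOPS targets into a load time and a random-read bandwidth figure. The corpus size, block size, and exact request rate are illustrative assumptions, not figures from the research.

    # Back-of-the-envelope sizing sketch for the requirements above.
    # The corpus size, block size, and exact request rate are
    # illustrative assumptions, not figures from the research.

    TRAINING_BANDWIDTH_GBPS = 50        # sustained read bandwidth cited above (GB/s)
    INFERENCE_READ_IOPS = 20_000_000    # "tens of millions" of small-block reads/s (assumed 20M)
    BLOCK_SIZE_KB = 4                   # assumed small-block size for inference reads

    dataset_tb = 100                    # hypothetical training corpus size
    load_seconds = dataset_tb * 1_000 / TRAINING_BANDWIDTH_GBPS
    print(f"Streaming a {dataset_tb} TB corpus at {TRAINING_BANDWIDTH_GBPS} GB/s "
          f"takes ~{load_seconds / 60:.0f} minutes per full pass.")

    # Small-block random reads also consume bandwidth; check they fit.
    inference_bandwidth_gbps = INFERENCE_READ_IOPS * BLOCK_SIZE_KB / 1_000_000
    print(f"{INFERENCE_READ_IOPS / 1e6:.0f}M x {BLOCK_SIZE_KB} KB reads/s "
          f"≈ {inference_bandwidth_gbps:.0f} GB/s of random-read bandwidth.")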

The Dell PERC13 addresses these challenges through an innovative architecture designed specifically for evolving AI workloads. Key RAID 5 performance improvements include the following, with a worked conversion of the figures after the list:

  • Breakthrough IOPS – More than 12.9M random read IOPS and 5M random write IOPS with an average 8 µs response time.
  • Optimized Bandwidth – 56 GB/s sequential read bandwidth and 50 GB/s sequential write bandwidth.
  • Exceptional Rebuild – More than 10M random read IOPS and 2M random write IOPS sustained during rebuild, at a rebuild rate of 22 minutes per TB.
  • Generational Improvement – Up to 20 times more write IOPS than PERC11 and 5.5 times more than PERC12.
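
As a quick sanity check on these figures, the Python sketch below converts the listed IOPS into equivalent bandwidth and the per-TB rebuild rate into a rebuild time. The 4 KB I/O size and 8 TB drive capacity are assumptions chosen for illustration; only the IOPS and minutes-per-TB values come from the list above.

    # Convert the PERC13 RAID 5 figures above into bandwidth and
    # rebuild-time terms. The 4 KB I/O size and 8 TB drive capacity
    # are assumptions for illustration.

    READ_IOPS = 12_900_000       # random read IOPS (from the list)
    WRITE_IOPS = 5_000_000       # random write IOPS (from the list)
    BLOCK_KB = 4                 # assumed I/O size for the random workloads
    REBUILD_MIN_PER_TB = 22      # rebuild rate from the list

    read_gbps = READ_IOPS * BLOCK_KB / 1_000_000
    write_gbps = WRITE_IOPS * BLOCK_KB / 1_000_000
    print(f"12.9M x {BLOCK_KB} KB random reads ≈ {read_gbps:.1f} GB/s")
    print(f"5M x {BLOCK_KB} KB random writes ≈ {write_gbps:.1f} GB/s")

    drive_tb = 8                 # hypothetical failed-drive capacity
    rebuild_hours = drive_tb * REBUILD_MIN_PER_TB / 60
    print(f"Rebuilding an {drive_tb} TB drive at {REBUILD_MIN_PER_TB} min/TB "
          f"takes ~{rebuild_hours:.1f} hours.")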

Research commissioned by Dell Technologies