Meeting Enterprise AI Challenges and Building AI Factories – Evaluating Lenovo’s Hybrid AI Server Platforms
Russ Fellows
Technologists and business executives continue to discover new uses for AI every day. AI's ability to augment company processes, people, and productivity is becoming clear. In the latest Futurum Group CIO Insights survey, 89% of CIOs report leveraging AI for strategic improvements, and 71% are re-evaluating their cloud workloads.
While awareness and adoption of casual AI use among individuals have been astoundingly rapid compared to other new technologies, the integration rate within companies is still low. In market adoption terms, AI is in the early adopter phase: many companies have started AI projects, but most have not yet implemented these solutions in a widespread or structured manner. Stated another way, we expect the use of AI within organizations to increase dramatically over the next three to five years.
For many organizations, the most important consideration is not whether they will use AI, but how they can do so while protecting valuable corporate data, maintaining security, and adhering to regulations governing privacy and sovereignty. This challenge explains why many firms began with cloud-based AI and are now implementing AI solutions at scale with hybrid and private AI factories, options that help companies maintain control of their private data.
Lenovo asked Signal65 to evaluate its latest hybrid AI platform in the context of running typical enterprise AI workloads and building AI factories. We used the Signal65 AI inferencing test suite, developed in conjunction with Kamiwaza.ai, to run a variety of workloads. Our focus was on scenarios for enterprises looking to deploy AI tools on-premises to boost productivity while maintaining control of corporate data.
Through our testing, we found that the Lenovo ThinkSystem SR680a V3 with NVIDIA H200 GPUs provides an excellent foundation for AI deployments, including support for the largest language models, such as Llama-405B and DeepSeek-R1, as well as retrieval-augmented generation (RAG) and fine-tuning to improve model accuracy and relevance. In short, this system is a flexible and powerful entry point into private AI for companies of any size and across industry verticals.
Research commissioned by: Lenovo