Lenovo AI server achieves industry-first local deployment of the full-parameter DeepSeek model in under 1TB of memory, supporting 100 concurrent users.

On March 3, Jinshi Data reported that Lenovo Group recently announced the industry's first single-machine deployment of the full DeepSeek-R1/V3 671B model on its Lenovo Wentian WA7780 G3 server, delivering a smooth experience for 100 concurrent users with less memory than the commonly assumed 1TB requirement (768GB in practice). According to Lenovo's test data, in a standard test environment with 512-token outputs, the system sustains a stable 10 tokens per second for each of the 100 concurrent users, with first-token response time kept within 30 seconds.
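For context, the kind of test Lenovo describes can be approximated with a simple concurrency benchmark. The sketch below assumes the deployment exposes an OpenAI-compatible streaming endpoint (as common serving stacks such as vLLM do); the base URL, model identifier, and prompt are placeholders, not details from Lenovo's test setup.

```python
# A minimal sketch of a 100-user concurrency benchmark, assuming an
# OpenAI-compatible endpoint. The base_url, model id, and prompt below are
# illustrative placeholders, not Lenovo's actual test harness.
import time
from concurrent.futures import ThreadPoolExecutor
from openai import OpenAI

client = OpenAI(base_url="http://<server-ip>:8000/v1", api_key="EMPTY")  # placeholder endpoint

def one_user(user_id: int):
    """Stream a 512-token completion; record first-token latency and decode rate."""
    start = time.time()
    first_token_at = None
    tokens = 0
    stream = client.chat.completions.create(
        model="deepseek-r1",  # placeholder model id
        messages=[{"role": "user", "content": "Summarize the benefits of MoE models."}],
        max_tokens=512,
        stream=True,
    )
    for chunk in stream:
        if chunk.choices and chunk.choices[0].delta.content:
            if first_token_at is None:
                first_token_at = time.time() - start  # time to first token
            tokens += 1  # roughly one token per streamed chunk
    elapsed = time.time() - start
    return first_token_at, tokens / elapsed  # (TTFT seconds, tokens per second)

# Launch 100 simulated users in parallel and summarize the results.
with ThreadPoolExecutor(max_workers=100) as pool:
    results = list(pool.map(one_user, range(100)))

ttfts = [r[0] for r in results if r[0] is not None]
rates = [r[1] for r in results]
print(f"worst first-token latency: {max(ttfts):.1f}s")
print(f"median per-user decode rate: {sorted(rates)[len(rates) // 2]:.1f} tok/s")
```

Under the reported figures, 100 users each receiving 10 tokens per second implies an aggregate decode throughput of roughly 1,000 tokens per second from the single machine.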
