🔥 Gate.io Launchpool $1 Million Airdrop: Stake #ETH# to Earn Rewards Hourly
【 #1# Mainnet - #OM# 】
🎁 Total Reward: 92,330 #OM#
⏰ Subscription: 02:00 AM, February 25th — March 18th (UTC)
🏆 Stake Now: https://www.gate.io/launchpool/OM?pid=221
More: https://www.gate.io/announcements/article/43515
Lenovo AI server achieves industry-first local deployment of the full DeepSeek large model in under 1TB of memory, supporting 100 concurrent users.
On March 3, Jinshi Data reported that Lenovo Group recently announced that, based on the Lenovo Wentian WA7780 G3 server, it has achieved the industry's first single-machine deployment of the DeepSeek-R1/V3 671B large model, delivering a smooth experience for 100 concurrent users with less than the industry-recognized 1TB of memory (768GB in practice). According to Lenovo's test data, in a standard test environment with 512 tokens, the system can sustain a stable output of 10 tokens per second for each of 100 concurrent users, while keeping the first-token response time within 30 seconds.
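As a rough sanity check on the reported figures, here is a minimal back-of-envelope sketch in Python. It only multiplies and adds the numbers quoted above; it assumes the 512-token figure refers to response length and the 30-second figure is worst-case first-token latency, which the report does not state explicitly.

# Back-of-envelope arithmetic on the figures quoted in the report.
# Inputs are the reported numbers; derived values are simple arithmetic,
# not additional claims from Lenovo or Jinshi Data.
concurrent_users = 100         # reported concurrency
tokens_per_user_per_sec = 10   # reported sustained per-user output rate
first_token_latency_s = 30     # reported first-token response time (assumed worst case)
response_tokens = 512          # token count used in the standard test (assumed response length)

aggregate_tokens_per_sec = concurrent_users * tokens_per_user_per_sec
time_per_response_s = first_token_latency_s + response_tokens / tokens_per_user_per_sec

print(f"Aggregate throughput: {aggregate_tokens_per_sec} tokens/s")            # 1000 tokens/s
print(f"Approx. time for one 512-token reply: {time_per_response_s:.0f} s")    # ~81 s

Under these assumptions, the single machine would be emitting on the order of 1,000 tokens per second in aggregate, and a user would wait roughly 80 seconds for a complete 512-token response.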