AI technology innovation: a new application-side narrative beyond DeepSeek
Image source: Generated by Unbounded AI
The Spring Festival holiday in 2025 has just passed, but the shock wave caused by DeepSeek is still lingering.
Through techniques such as FP8 training, multi-token prediction, an improved MoE architecture, multi-head latent attention (MLA), and reinforcement learning without supervised fine-tuning, DeepSeek-V3 achieved, at extremely low training cost, performance surpassing top open-source models such as Qwen2.5-72B and Llama-3.1-405B, as well as some closed-source models. DeepSeek-R1 has demonstrated reasoning performance surpassing OpenAI's o1.
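To make one of those terms concrete, here is a minimal, self-contained sketch of the idea behind multi-head latent attention: keys and values are routed through a small shared latent, so a cache would store far fewer numbers per token. The dimensions and the single down-projection are illustrative assumptions, not DeepSeek's actual implementation.

```python
# Illustrative sketch of multi-head latent attention (MLA).
# Not DeepSeek's implementation; dimensions are arbitrary examples.
import torch
import torch.nn as nn

class LatentAttention(nn.Module):
    def __init__(self, d_model=512, n_heads=8, d_latent=64):
        super().__init__()
        self.n_heads, self.d_head = n_heads, d_model // n_heads
        self.q_proj = nn.Linear(d_model, d_model)
        # Keys/values are compressed into a small shared latent, so a KV cache
        # would hold d_latent numbers per token instead of 2 * d_model.
        self.kv_down = nn.Linear(d_model, d_latent)
        self.k_up = nn.Linear(d_latent, d_model)
        self.v_up = nn.Linear(d_latent, d_model)
        self.out = nn.Linear(d_model, d_model)

    def forward(self, x):                      # x: (batch, seq, d_model)
        b, t, _ = x.shape
        q = self.q_proj(x).view(b, t, self.n_heads, self.d_head).transpose(1, 2)
        latent = self.kv_down(x)               # what the cache would store
        k = self.k_up(latent).view(b, t, self.n_heads, self.d_head).transpose(1, 2)
        v = self.v_up(latent).view(b, t, self.n_heads, self.d_head).transpose(1, 2)
        attn = torch.softmax(q @ k.transpose(-2, -1) / self.d_head ** 0.5, dim=-1)
        return self.out((attn @ v).transpose(1, 2).reshape(b, t, -1))

x = torch.randn(2, 16, 512)
print(LatentAttention()(x).shape)              # torch.Size([2, 16, 512])
```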
The success of the DeepSeek series has opened a new path for a large-model industry whose core logic was once driven by computing power, and has lifted the world's foundation models to a new level.
However, beyond foundation models such as DeepSeek that center on a "technical narrative," another type of large model deserves attention: application-oriented large models that innovate AI technology around core products and core scenarios.
China has always been a powerhouse of applications.
In 2024, against the backdrop of computing power supply gradually catching up and a significant drop in inference prices, domestic AI applications emerged across many fields. Whether it was Jimeng AI, Miaoya Camera, and Kuaishou's Keling (Kling) in text-to-image and text-to-video, Nano Search (formerly 360 AI Search) and Tiangong AI Search in AI search, Xingye and Maoxiang in AI companionship, or Doubao, Quark, Kimi, and Tongyi among AI assistants, all of them saw their user bases surge in 2024.
These AI applications depend on the underlying model capabilities that support them. For AI applications, the competition among application-oriented large models is not about parameter counts but about application results.
For example, Kimi's ability to attract so much attention in a short time is closely tied to its large model's long-text reading and parsing capabilities; Quark's 200 million users and 70 million monthly active users benefit from the "user-friendly" Quark large model behind it; and Keling AI's powerful text-to-video and image-to-video features rely on the Keling large model.
The evolution of foundation models is far from over, but as more and more companies begin deploying AI applications in 2025, the development of application-oriented large models will be a necessary precondition for a full-scale explosion of AI applications.
With the maturation and breakthroughs of large-model technology, steadily improving computing infrastructure, continually strengthening national policies, the emergence of killer applications such as Sora and Suno, and strong growth in investment and financing in fields such as AI agents, embodied intelligence, AI toys, and AI glasses, it has become a widely accepted consensus in the tech industry that 2025 will be the year AI applications explode.
This consensus has been accelerated by DeepSeek's popularity: by raising the baseline capability of the industry's foundation models, DeepSeek creates a better development environment for AI applications.
According to observations by "Jiazi Light Years," since the second half of 2024, well-known investment institutions such as Hillhouse Capital, Sequoia Capital, Baidu Ventures, and InnoVenture have stepped up their investments in AI applications, particularly targeting early-stage projects; some investors say that by the end of 2024, the actual number of AI application projects financed in the primary market was at least twice the number publicly disclosed.
Sensor Tower data also shows that in 2024, global mobile phone users spent $1.27 billion on AI applications, with AI-related apps being downloaded from the iOS and Google Play stores a staggering 17 billion times.
However, a cruel reality is that there are millions of AI applications, but only a few can sustain long-term operation, and even fewer can become popular.
"Jiazi Light Years" once reported a website called "AI Graveyard," which contains 738 dead or discontinued AI applications, including some former star projects: such as OpenAI's AI speech recognition product Whisper.ai, Stable Diffusion's well-known shell website FreewayML, StockAI, and the AI search engine Neeva, which was once seen as a "Google competitor" (see "AI Graveyard, and 738 dead AI projects | Jiazi Light Years").
So, what kind of AI applications can run for a long time and have vitality?
"Jiazi Lightyear" believes that the first is to take the model as the core and give full play to the capabilities of the model; The second is to have a strong enough insight into user needs.
Microsoft CEO Satya Nadella once said when looking at the AI industry trends for 2025, "Applications with AI models at their core will redefine various application fields in 2025." In other words, applications with fewer shell levels, closer proximity to the model, and maximum utilization of the model's capabilities will attract more user usage and retention.
Looking at the list of AI products on the new list in January 2025, it is not difficult to find that among the top 10 on the domestic list, 8 are AI assistant applications directly built on models.
Image source: NewRank
Deep insight into user needs, in turn, requires a huge user base: only with enough users can user data and tags accumulate to sufficient depth, allowing companies to uncover users' most genuine pain points.
These two points also mean that big tech companies hold the advantage in AI applications.
Big tech companies have enough computing power and talent to develop their own models, so they can build AI applications directly on top of them without layer upon layer of wrappers. They also have huge user bases and mature traffic entry points, which make user data richer and demand easier to mine, and give their AI applications a natural edge in distribution. In addition, their strong ecosystem-integration capabilities help enrich product features and increase the stickiness of AI applications.
The product rankings mentioned earlier bear this out: six of the top ten applications come from big tech companies.
In a recent interview with Tencent Technology, Zhu Xiaohu likewise argued that startups' data moats are not high enough to justify building foundation models; they are better off capturing customers more tightly on top of those models. This indirectly confirms the advantage tech giants hold in developing AI applications.
Overall, big tech companies' models and applications reinforce each other and together form a growth flywheel:
The data accumulated from a large user base provides high-quality input for model R&D, which helps improve the model's capabilities and fit specific scenarios and user needs; in turn, growing model capabilities strengthen the applications, giving products more appeal and attracting more users.
This kind of model, backed by a large user base, with its R&D direction driven by user demand, and performing better in segmented scenarios, might be called an "application large model." The more AI applications are built on such "application large models," in theory, the better their chances of success.
Quark, ranked just below DeepSeek on that list, is a typical example.
"Jiazi Light Years" has observed that in the recent AI free-for-all among heavyweights, Quark, rarely mentioned before, has quietly taken the lead. According to the latest data from Analysys, at the end of 2024 Quark topped the list of mobile AI applications with 71.02 million monthly active users, surpassing the better-known Doubao and Kimi.
Source: Analysys Analytics
What deserves even more attention is the metric of user stickiness.
According to third-party statistics, Quark's three-day retention rate exceeds 40%, versus roughly 25% for the high-profile Doubao and Kimi assistants over the same period. According to the "2024 Powerful AI Product List" released by Qimai Data, Quark ranks first on both the "Annual Strength AI Product App List" and the "Annual Product Download List," with cumulative downloads of more than 370 million in 2024, putting it in a league of its own among AI products.
Among the many AI products on the list, Quark was not the first to release a large model, yet it has quietly taken the lead in traffic, downloads, and user stickiness. Why has Quark been able to stand out in such a fiercely competitive market?
All of this stems from Quark's "application-first" product and model strategy.
Quark has focused on "smart, precise search" since the day it entered the search business. It quickly carved out a place in the market with a clean, ad-free interface and more accurate results, and then, building on search, spun off vertical products such as Quark Cloud Drive, Quark Scan King, Quark Docs, and Quark Learning around students and office workers, gradually segmenting its scenarios into learning and work.
Take learning as an example: in 2020, Quark introduced a "photo-based question search" feature. During the pandemic, when many students were stuck at home taking online classes and struggling to learn effectively, the Quark learning team upgraded this feature multiple times.
In the office domain, starting from the vertical scenario of scanning, Quark has launched a series of related functions such as text extraction, table extraction, handwriting removal, document scanning, and document format conversion.
A simple tool foundation, increasingly rich scenario-based applications, and an early user ecosystem free of ads and fees let Quark's user numbers skyrocket from the millions to the tens of millions, with a cumulative user base served exceeding one hundred million.
In November 2023, Quark released the "Quark Large Model," with on the order of 100 billion parameters.
The Quark Large Model is a multimodal model independently developed by Quark on the Transformer architecture. It is trained and fine-tuned on hundreds of millions of image-text data points every day, and is characterized by low cost, fast response, and strong overall capability. Oriented toward user needs and the vertical scenarios of Quark's products, it emphasizes practical application and has spawned vertical models for general knowledge, healthcare, education, and other domains to deliver more professional, more accurate capabilities.
Alongside the launch of the Quark Large Model, Quark upgraded the AI recognition of its scanning products and the AI search capability of its cloud drive products.
The first scenario where the Quark model landed was health and medical care.
In December 2023, Quark announced a comprehensive upgrade of its health search and launched the "Quark Health Assistant" AI application, which combines a medical knowledge graph with generative dialogue to give users more comprehensive and accurate health information, and supports multi-turn questioning about health issues.
In January 2024, Quark rolled out features such as "AI Learning Assistant," "AI Dictation," and "AI PPT" in quick succession; in July 2024 it launched a one-stop AI service centered on AI search on mobile; and in August 2024 it released a new Quark PC client with "system-level, all-scenario AI" capabilities.
For example, when a user searches for "Which scenic spots in Shanxi inspired Black Myth: Wukong?", Quark's super search box brings together the AI answer, the original sources, and historical searches in one place: it not only generates an intelligent summary like other AI search products, but also displays the sources in a sidebar and keeps traditional search-engine-style web results below the AI answer. This improves information retrieval efficiency and strengthens the credibility of the AI answer.
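As a rough illustration of how such an answer-plus-sources layout could be assembled, here is a hedged sketch of a retrieval-augmented pipeline. The `retrieve` and `llm` callables and the result fields are hypothetical stand-ins, not Quark's actual system.

```python
# Hedged sketch: generate an AI summary grounded in retrieved web results,
# returning the summary, a source list for a sidebar, and the raw results.
from dataclasses import dataclass

@dataclass
class Doc:
    title: str
    url: str
    snippet: str

def answer_with_sources(query: str, retrieve, llm) -> dict:
    docs = retrieve(query, top_k=5)            # classic web retrieval first
    context = "\n".join(f"[{i+1}] {d.title}: {d.snippet}" for i, d in enumerate(docs))
    prompt = (
        "Answer the question using only the numbered sources. Cite them as [n].\n\n"
        f"Sources:\n{context}\n\nQuestion: {query}\nAnswer:"
    )
    summary = llm(prompt)
    return {
        "ai_answer": summary,                                        # top-of-page summary
        "sources": [{"title": d.title, "url": d.url} for d in docs], # sidebar attribution
        "web_results": docs,                                         # traditional results below
    }
```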
In addition, Quark has built a one-stop information service system around the super search box, including cloud drive, scanning, document processing, the health assistant, and other intelligent tools, covering the full workflow from retrieval to creation, summarization, editing, storage, and sharing, and giving users a seamless information-service experience.
Unlike the many big companies imitating ChatGPT with "All in One" chatbot-style AI assistants, Quark's strategy is "AI in All": integrating AI capabilities into every aspect of the product and landing them in specific application scenarios.
From the early photo-based question search, to college entrance exam application consulting, to intelligent office assistance, Quark's product evolution has always revolved around specific user needs in particular scenarios. Quark has since launched and updated features such as AI question search, AI academic search, and AI tips, building differentiated AI applications around learning and office scenarios.
Quark AI's development over the past year. Chart: Jiazi Light Years
Among them, the "AI Search" function, which will be upgraded in November 2024, is a typical representative of Quark's AI capabilities.
In fact, as early as December 2023, Quark had launched the AI topic assistant. At that time, the AI topic assistant mostly relied on the question bank as a "knowledge base", and AI could only teach users how to solve the questions in the question bank. The upgraded AI search product now has a stronger "intelligence", not only can it answer the original questions in the question bank, but it also handles new and difficult questions with ease. The use of the large model "Chain of Thought (CoT)" allows Quark AI search to present the problem-solving ideas and steps one by one, providing users with more detailed content analysis and learning guidance.
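For readers unfamiliar with the term, here is a minimal sketch of chain-of-thought-style prompting as described above: the model is asked to lay out knowledge points and reasoning steps before the final answer. The `call_model` client and the prompt wording are hypothetical, not Quark's production setup.

```python
# Hedged sketch of step-by-step (CoT-style) question solving.
def solve_with_cot(question: str, call_model) -> dict:
    prompt = (
        "You are a tutor. Solve the problem step by step.\n"
        "First list the knowledge points involved, then the reasoning steps, "
        "and only then give the final answer on a line starting with 'Answer:'.\n\n"
        f"Problem: {question}"
    )
    reply = call_model(prompt)
    # Split the explanation from the final answer if the model followed the format.
    steps, answer = reply.rsplit("Answer:", 1) if "Answer:" in reply else (reply, "")
    return {"steps": steps.strip(), "answer": answer.strip()}
```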
Compared with similar question-search products, which mostly rely on question banks and only cover the K12 range, Quark's AI question search can answer not only new K12 questions but also professional questions from postgraduate entrance exams, civil-service exams, and various qualification exams. A user only needs to take a photo or a screenshot, and Quark retrieves the corresponding question and presents professional content step by step through images, videos, and AI answers. For questions in specialized fields such as law and medicine, Quark's AI question search can also provide answers.
Quark's answers to real bar exam questions
At the same time, Quark's AI question search can use AI to explain in depth the knowledge points and exam points behind a question and pinpoint the key steps, so that users not only learn the question at hand but also, by analogy, master that whole class of questions.
The strength of Quark's AI question search rests not only on Quark's years of accumulated search experience, with abundant high-quality information and user demand gathered in learning scenarios, but also on the "Lingzhi" learning model that Quark launched in the same period.
The "Lingzhi" model was trained by Quark's technology team on top of the Quark Large Model, using years of high-quality data accumulated in the education field. It not only has the chain-of-thought ability found in many top models, but can also turn its reasoning process into language students understand, in a way that matches how they learn.
In other words, when it comes to explaining a problem to students, the "Lingzhi" model knows which knowledge points to explain and how to construct the solution path.
Take a math question from the 2024 Beijing college entrance examination as an example. Entering it into DeepSeek and Quark respectively yields the following answers:
The answer given by DeepSeek
Quark's answer
As can be seen, compared with DeepSeek's long-winded chain-of-thought narration and formal, exhaustive answer, Quark's answer is more concise and reads more like a teacher explaining the question.
Because education involves so many scenarios of knowledge explanation and popular science, the industry places high demands on models' multimodal abilities. Yet existing multimodal models recognize formulas and handwritten notes poorly, and their understanding of fine-grained figures is especially weak.
To address this, Quark's "Lingzhi" model builds a large-scale, domain-specific training corpus on top of a large multimodal pre-training base, while adjustments to the model structure ensure better understanding.
In the latest evaluations, the accuracy and scoring rate of Quark's "Lingzhi" learning model on postgraduate-level math problems are comparable to OpenAI's o1 and far exceed other domestic models. In a number of important tests, including domestic math competitions and college entrance exams, Quark's accuracy and scoring rate also lead by a wide margin.
Math evaluation results of the "Lingzhi" model
Image source: Quark
Unlike companies such as DeepSeek that focus on pure foundation-model capability, Quark develops its models around users. Take AI writing: to meet young users' needs for long-form writing such as reports and papers, Quark's technical team built a creative-writing model that uses multi-stage CoT and retrieval augmentation to generate articles of more than 8,000 words while keeping the word count on target. Even DeepSeek, at present, can only generate articles of up to 3,000 words.
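As a rough illustration of the multi-stage, retrieval-augmented approach described above, here is a hedged sketch: plan an outline first, then expand each section with retrieved material while tracking the word budget. The `llm` and `retrieve` callables are hypothetical stand-ins, not Quark's creative-writing model.

```python
# Hedged sketch of multi-stage long-form generation with retrieval augmentation.
def write_long_article(topic: str, target_words: int, llm, retrieve) -> str:
    # Stage 1: plan an outline so the word budget can be split across sections.
    outline = llm(f"Write a numbered outline of 8-12 sections for an article on: {topic}")
    sections = [line for line in outline.splitlines() if line.strip()]
    per_section = max(target_words // max(len(sections), 1), 300)

    # Stage 2: expand each section, grounding it in retrieved references and
    # conditioning on the tail of what has been written so far for coherence.
    article = []
    for heading in sections:
        refs = retrieve(f"{topic} {heading}", top_k=3)
        article.append(llm(
            f"Topic: {topic}\nSection: {heading}\n"
            f"Reference material:\n{refs}\n"
            f"Write this section in about {per_section} words, consistent with "
            f"the text so far:\n{''.join(article)[-2000:]}"
        ))
    return "\n\n".join(article)
```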
In addition, Quark's AI writing feature doubles as an online text editor: users can delete, polish, expand, and perform other complex edits on the generated articles, which likewise depends on the capabilities of Quark's creative model.
It is fair to say that while the industry races to scale up model parameters, Quark has put its emphasis on practical application scenarios, upgrading and optimizing model capabilities in the directions user needs point to. By now, Quark has formed system-level AI capabilities across all scenarios.
Source: Quark
As one of Alibaba's four strategic innovation businesses, Quark's every move represents not just itself but the direction of Alibaba's entire AI To C business.
On January 15, Quark updated its brand slogan to "the all-around AI assistant for 200 million people," signaling an accelerating push into AI To C applications. Recently, Alibaba founder Jack Ma made a surprise appearance at Alibaba's Hangzhou campus and visited the office area housing Quark and other AI To C businesses.
Alibaba has been making frequent moves in AI To C: first, Wu Jia, one of the company's younger-generation executives, returned to Alibaba Group to explore the AI To C business; then Alibaba's AI application "Tongyi" was formally spun off from Alibaba Cloud and merged into the Alibaba Intelligent Information Business Group; and, according to media reports, the Tmall Genie hardware team has been working on integration with the Quark product team, focusing on planning and defining a new generation of AI products and integrating with Quark's AI capabilities. Once the teams are merged, the new team will also explore new hardware directions, including AI glasses.
Going forward, Quark, the Tongyi app, and Tmall Genie will respectively take the forms of productivity tool, chatbot, and AI hardware, providing differentiated services to users.
On February 6, Alibaba's To C business welcomed a heavyweight figure: Professor Steven Hoi, a world-class artificial intelligence scientist, formally joined as a Vice President of Alibaba Group, reporting to Wu Jia and responsible for fundamental research and application solutions around multimodal foundation models and agents for the AI To C business.
According to insiders, Professor Hoi's focus on multimodal foundation models and agent-related research and applications will greatly advance the end-to-end, closed-loop capability of Alibaba's consumer AI products in combining models with applications. Once multimodal foundation models achieve breakthroughs, C-end applications such as Quark will gain new room for business exploration.
At the same time, Alibaba's AI To C business is assembling a top AI algorithm research and engineering team, attracting many outstanding people from across the industry. Industry observers see the arrival of world-class scientists at the start of 2025 as an important signal that Alibaba is ramping up investment in talent and resources for AI To C. A top-tier large-model team will support deep exploration of multimodal agents and other directions, and open up room for imagination in building a user-facing AI application platform in the next stage.
Today, ByteDance is investing heavily in AI applications, restarting its "app factory" strategy with aggressive spending on user acquisition, internal competition between teams, and an active push overseas. Tencent has launched "Yuanbao" and "Yuanqi" in the directions of AI assistants and AI agents, and has regained public attention with its newly launched personal knowledge-management tool ima.copilot. Baidu has rolled out an AI product matrix including Wenxin Yiyan, Wenxin Yige, Orange AI, and Super Canvas, mounting a "saturation attack" on rivals with an everything-included approach. On top of that, startups such as the large-model "six little tigers" and DeepSeek are also pushing into AI applications. Alibaba's AI To C business is surrounded by strong rivals, and the pressure is easy to imagine.
But every problem has its answer. Through its "AI in All" strategy and a precise grasp of user needs, Quark has shown that strong product power does not require competing on parameters; relying on an "application large model" and an accurate read of user needs is another version of "low cost, high efficiency." Its more than 200 million users and top monthly-active ranking likewise attest to the soundness of Quark's playbook and the promise of Alibaba's AI To C business.
Now that AI technology has entered the deep waters of application, Quark's innovation paradigm offers a key insight: real technological progress lies not only in how many technical peaks are scaled, but in how many achievements are turned into value users can touch with their fingertips. Only when users genuinely choose and vote for AI applications with their actions will this battle over the practical use of AI reach the real contest that determines the industry's future landscape.