IBM Sheds $40 Billion, Block Cuts Half Its Staff and Its Stock Rises: In the AI Era, Which Assets Deserve Tokenization?
On February 23, 2026, a seemingly calm Monday, IBM’s stock suffered its worst single-day decline since October 2000, closing down 13.2% with roughly $40 billion in market value wiped out within hours. The trigger was not a disastrous earnings report or a regulatory crackdown but a product announcement: AI startup Anthropic said its Claude Code tool can modernize COBOL programs running on IBM systems, and maintaining COBOL is precisely the profitable “moat” business IBM has long relied on.
Three days later, the same force produced the opposite outcome. On February 26, Jack Dorsey’s fintech company Block announced layoffs of about 4,000 employees, nearly 50% of its workforce, citing AI-driven efficiency gains. The market’s reaction could not have been more different: Block’s stock surged over 24% in after-hours trading. Dorsey wrote in a letter to shareholders, “I believe that most companies will reach the same conclusion within the next year and make similar structural adjustments.”
Two events, both driven by AI, drew opposite market responses: one stock plummeted, the other soared. What is really happening? The answer points to a deeper shift: AI is redefining what counts as a valuable asset. For corporate executives, investors, and traditional decision-makers, understanding this revaluation logic is no longer a matter of strategic foresight but an urgent matter of survival.
To understand the stark contrast between these two cases, we must first examine their respective asset structures.
On the surface, IBM’s plunge was a reaction to the technical threat posed by Claude Code; in essence, it was the market re-pricing IBM’s core asset model. COBOL, a programming language developed in the late 1950s, still supports about 95% of global ATM transactions and numerous critical systems in finance, aviation, government, and other sectors. Anthropic noted in a blog post that “trillions of lines of COBOL code run in production daily, powering vital systems. Yet, the number of people who understand COBOL is decreasing year by year.”
Modernizing COBOL systems has long been a complex and costly endeavor, and that complexity is exactly what forms IBM’s profitable “moat.” Anthropic, however, claims: “With AI, teams can modernize COBOL codebases in just a few quarters without spending years.” The message the market took from this was that IBM’s labor-intensive maintenance revenue and mainframe service income are being eroded by AI.
Interestingly, IBM’s stock rebounded 2.68% the next day. Wall Street analysts at Wedbush and Evercore ISI quickly came to IBM’s defense, calling the plunge an “unfounded overreaction.” Their reasoning gets at the heart of the matter: enterprise clients are unlikely to abandon their mainframe systems just because a new AI tool can translate legacy code. There is a huge gap between translating code syntax and the deep hardware-software integration that system modernization actually requires.
IBM responded the same day, emphasizing a key point: the real challenge of modernization is not the COBOL language itself but the IBM Z platform, whose value derives from decades of hardware-software integration that code translation alone cannot capture or transfer.
Turn now to Block. The same driver, large-scale AI-motivated layoffs, produced a 24% rise instead. The key difference is that Block’s asset structure is changing. Since 2024, Block has been restructuring its business model and personnel, investing heavily in AI tools such as its proprietary Goose platform to improve operational efficiency.
Block CFO Amrita Ahuja explained the layoffs by saying, “We are taking bold and decisive actions, but they are built on strength.” This “strength” is supported by data: in 2025, gross profit reached $10.36 billion, up 17% year-over-year. Strong financials provided a buffer for the company’s large-scale restructuring.
The market’s interpretation is clear: Block is not passively shrinking under AI pressure but proactively optimizing its asset structure, using fewer “human assets” while extracting more output from its “technology assets.” Cutting roughly 50% of staff while raising full-year guidance implies that AI is amplifying output per employee: if output holds steady while headcount halves, output per employee roughly doubles.
These cases reveal an emerging trend: AI is becoming a “re-pricer” of asset value. Different asset types exhibit sharply different value trajectories under AI evaluation frameworks.
The first category is labor-intensive assets. The value of IBM’s COBOL maintenance teams, traditional analysts, programmers, and other “information processors” is being diluted by AI. Anthropic mentioned that Claude Code can identify “risks that would take human analysts months to find.” This does not mean humans are no longer important, but that jobs relying on information asymmetry and procedural knowledge are being compressed in value by technology.
However, caution is needed: AI replaces “information processing,” not “value creation.” Futurum Group analyst Mitch Ashley pointed out that successful COBOL modernization requires multiple dimensions—business scope definition, technical assessment, data migration planning, behavioral equivalence verification, observability, and organizational change management—of which code translation is only one part. The ability of humans to handle complex systems, understand business essence, and make strategic judgments remains scarce.
The second category is data assets, which are becoming high-value in the AI era. With the rapid development of generative AI, the value attributes of data are being reshaped. A study published in PLOS One by Tang and colleagues states that generative AI changes how data is acquired, processed, and utilized. The value of data assets depends not only on intrinsic quality and relevance but also on their application scenarios, transformation capabilities, and market demand within generative AI frameworks.
This means that the uniqueness, continuity, and governability of data are becoming core value dimensions. A dataset may be highly valuable in one scenario but useless in another. Companies capable of providing exclusive, high-quality, continuous data for AI model training are gaining new pricing power.
The third category is algorithm and model assets. OpenAI and Paradigm’s collaboration on EVMbench, which evaluates AI’s ability to detect, repair, and exploit smart contract vulnerabilities, illustrates that algorithms are becoming quantifiable assets. Model weights, algorithm frameworks, and training methodologies are becoming intangible assets that can be identified, controlled, and monetized.
The fourth category is traditional tangible assets, which are diverging. Assets that rely on information asymmetry and human intermediaries face depreciation pressure, while physical assets with “AI-resistant” properties, such as energy facilities, scarce resources, and critical infrastructure, retain relatively stable value. The reason is simple: AI can analyze and optimize their operations but cannot replace their physical existence and intrinsic value.
Based on the above analysis, companies need a systematic framework to determine whether their assets are appreciating or depreciating in the AI era. The RWA Research Institute proposes an “AI Immunity” asset identification framework built on three core features.
The first feature is non-encodability: value elements that AI finds difficult to fully learn or replicate. COBOL code can be translated by AI, but the chip-level transaction processing capabilities of IBM’s Z-series mainframes, their quantum-safe encryption, and their nine-nines reliability are beyond AI’s ability to replicate. Futurum Group’s research notes that “code translation cannot capture actual complexity; platform value comes from decades of hardware-software integration.” Similarly, control of offline, physical operations, tacit industry knowledge, and complex relationship networks, all elements that are difficult to encode, constitute the first immune barrier.
The second feature is a data moat. Does the enterprise possess exclusive, continuous, and governable data assets? Is it merely using public data, or can it generate data that others cannot access? CITIC Bank has begun exploring the use of large models to evaluate data asset value and is attempting to “bring data assets onto the balance sheet.” The logic is that in the AI era, data is not only a raw material of production but an asset in its own right. Not all data constitute a moat, however: public web data can be quickly consumed by AI models, whereas enterprises with exclusive data sources can command a premium under AI valuation frameworks.
The third feature is AI-enabled resilience: can the asset be enhanced rather than replaced by AI? This is what separates IBM-style disruption from Block-style transformation. IBM’s core business of maintaining COBOL legacy systems is exposed to AI replacement, whereas Block’s business model of payments and financial services can be AI-enabled. Notably, IBM has developed watsonx Code Assistant for Z, a dedicated tool that lets clients refactor and modernize legacy code directly on the platform while maintaining enterprise-grade security. When assets can work in synergy with AI rather than stand in opposition to it, their value increases.
Conversely, AI-vulnerable assets exhibit three features: core value that rests on information processing, susceptibility to standardization and automation, and an inability to generate and accumulate data. Enterprises can perform a “stress test” of their asset portfolios against these features, as sketched below.
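To make the exercise concrete, the stress test can be read as a simple scoring rubric. The following is a minimal sketch for illustration only: the 0-5 scales, the classification thresholds, and the example portfolio are hypothetical assumptions, not part of the RWA Research Institute’s published methodology.

```python
from dataclasses import dataclass

# Hypothetical sketch of an "AI Immunity" stress test.
# Scores (0-5) and thresholds are assumptions for illustration,
# not the RWA Research Institute's published framework.

@dataclass
class AssetUnit:
    name: str
    non_encodability: int        # value elements AI cannot easily learn or replicate (0-5)
    data_moat: int               # exclusive, continuous, governable data (0-5)
    ai_enabled_resilience: int   # can the asset be amplified rather than replaced by AI? (0-5)

def immunity_score(unit: AssetUnit) -> float:
    """Average the three 'AI Immunity' features into a single 0-5 score."""
    return (unit.non_encodability + unit.data_moat + unit.ai_enabled_resilience) / 3

def classify(unit: AssetUnit) -> str:
    """Flag each business unit for divestment review, transformation, or AI amplification."""
    score = immunity_score(unit)
    if score < 2:
        return "AI-vulnerable: plan transformation or divestment"
    if score < 3.5:
        return "Mixed: pair with AI enablement to raise resilience"
    return "AI-immune: candidate for increased allocation"

if __name__ == "__main__":
    # Illustrative portfolio; the units and scores are invented for the example.
    portfolio = [
        AssetUnit("Legacy code maintenance services", 1, 1, 1),
        AssetUnit("Payments platform with proprietary transaction data", 3, 5, 5),
        AssetUnit("Mainframe hardware-software platform", 5, 3, 4),
    ]
    for unit in portfolio:
        print(f"{unit.name}: {immunity_score(unit):.1f} -> {classify(unit)}")
```

The point of such a rubric is not the precise numbers but the discipline: every core business unit gets scored on the same three dimensions, so vulnerable and resilient assets can be compared side by side.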
Extending this framework to RWA (real-world asset) tokenization leads to a clear conclusion: the question is not which assets can be put on-chain, but which hard assets can withstand the AI revaluation cycle.
By March 2026, total on-chain RWA value exceeded $25 billion, nearly quadrupling from a year earlier. Yet the RWA industry white paper published by the Hong Kong Web3.0 Standardization Association in August 2025 states explicitly: “The notion that everything can be RWA is a fallacy.” Successful large-scale implementation must clear three thresholds: value stability, clear legal rights, and verifiable off-chain data.
Applying the “AI Immunity” framework, we can be more specific: the assets suited to tokenization are those whose value remains stable under AI revaluation.
The first category is physical assets with “AI immunity” features, including energy assets, infrastructure, and scarce resources. Their value does not depend on information processing but on physical existence and utility. The white paper mentions new energy RWA (such as charging stations, photovoltaic assets) and computing assets like GPUs, which are ideal anchors for RWA due to their tangible demand and credible “digital DNA.” GPU computing assets, driven by AI industry demand, are becoming prime candidates for RWA.
The second category is programmable data assets. Assets built on exclusive data sources that can be automatically monetized via smart contracts combine a “data moat” with “AI-enabled resilience.” The white paper classifies data, intellectual property, and carbon credits as intangible assets. Not all data are suitable for tokenization, however; only data that are continuously generated, clearly rights-established, and independently verifiable form a solid foundation.
The third category is hybrid assets, which combine “non-encodable” physical control rights with “programmable” digital rights. For example, the property rights of commercial real estate can be tokenized, while actual management, maintenance, and leasing, the control of offline operations, remain with specialized institutions. This dual-layer “physical + digital” structure captures blockchain’s liquidity benefits while anchoring value in offline operations that AI cannot replace.
Conversely, two asset types call for caution when considering tokenization in the AI era: assets highly reliant on human intermediaries, whose value can be compressed by AI, and standardized assets without data moats, which have limited pricing power under AI valuation.
The $40 billion wiped from IBM’s market cap signals one side of the era: assets that rely on information asymmetry and accumulated human expertise are being re-priced by AI. Block’s countertrend rally signals the other: companies that embrace AI and optimize their asset structures are being revalued upward by the market.
For decision-makers in listed and traditional companies, this is not just technological anxiety but a fundamental restructuring of asset valuation systems. CEOs must answer an unavoidable question: How much are my assets worth in the eyes of AI?
Based on this analysis, three actionable recommendations are proposed:
First, immediately conduct an “AI stress test” on assets. Using the “AI Immunity” framework’s three features—non-encodability, data moat, and AI-enabled resilience—assess core business units. Identify which are most vulnerable to value erosion under AI impact and which may benefit from AI amplification.
Second, establish a dynamic asset portfolio management mechanism. Under AI revaluation, asset allocation should shift away from a static “buy-and-hold” posture: deliberately increase the proportion of “AI-immune” assets and develop transformation or divestment plans for AI-vulnerable ones. This requires coordination across strategy, technology, and business departments.
Third, revisit RWA strategies. Before tokenizing assets, use the “AI Immunity” framework to screen the underlying assets. The core value of RWA lies not in putting assets on-chain for its own sake but in achieving better liquidity and pricing efficiency through tokenization. If the underlying assets are being devalued in the AI era, tokenization merely accelerates the loss of value.
Finally, it is important to note that under Document No. 42, jointly issued by eight Chinese government departments, token issuance and tokenized transactions are strictly prohibited within mainland China. The discussion of RWA tokenization here refers only to compliant offshore digital practices. Enterprises exploring related businesses must strictly observe the regulatory red line of “strictly prohibited domestically, registered offshore.”
When AI begins to price assets, the only true safety lies in things AI cannot price—not code, not data, but human judgment of value itself.
(This article is based on publicly available information and data, including sources such as Nasdaq, Tencent News, Futurum Group, PLOS One, 21st Century Business Herald, and Industrial and Commercial Times. The viewpoints expressed do not constitute investment advice.)