
Self-recovery metric: researchers created an ASI criterion for evaluating AI

A new idea has been published on Habr: the ASI criterion for a deeper evaluation of AI models. The metric is based on a system's ability to recover and develop

Source: Habr AI. Collage: Hamidun News.

Researchers have proposed a new approach to evaluating AI models: the ASI criterion, based on the concept of informational autonomy. It is an attempt to go beyond traditional performance metrics and measure deeper qualities of artificial intelligence, ones that show up in a system's ability to adapt and recover.

The Essence of the ASI Criterion

The ASI criterion evaluates a model's ability to recover and develop from a minimal informational "seed": the smallest set of data the system needs to function and evolve. The name alludes to Artificial Super Intelligence, but it refers less to superintelligence than to a system's capacity for autonomous development and informational density. The idea is borrowed from biology, where organisms can regenerate from a minimal unit such as a cell or DNA.

The authors propose applying a similar approach to evaluating AI: what is the minimal complexity of the information set from which a model can restore its capabilities? This reflects the idea of "technotropic AI", a system that can sustain itself on a minimal amount of information and develop from that foundation.
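The article does not specify how such a minimal seed would be found in practice. As a toy illustration only, with every name and the "restore" rule invented here, the search could look like scanning ever-larger candidate seeds until the restored knowledge covers most of the original:

```python
def smallest_recovering_seed(knowledge, restore_fn, target_fraction=0.9):
    """Return the smallest prefix of `knowledge` (the candidate seed) from
    which `restore_fn` rebuilds at least `target_fraction` of the full set.
    For simplicity this checks prefixes only, not all subsets."""
    full = restore_fn(knowledge)
    for k in range(1, len(knowledge) + 1):
        seed = knowledge[:k]
        recovered = restore_fn(seed)
        if len(recovered & full) / len(full) >= target_fraction:
            return seed
    return knowledge

def restore(seed, universe=range(10)):
    """Toy stand-in for a model rederiving knowledge: each known fact
    implies its neighbors (n - 1 and n + 1), applied to a fixed point."""
    known = set(seed)
    changed = True
    while changed:
        changed = False
        for f in list(known):
            for g in (f - 1, f + 1):
                if g in universe and g not in known:
                    known.add(g)
                    changed = True
    return known

print(smallest_recovering_seed(list(range(10)), restore))  # → [0]
```

Under this toy rule a single fact regenerates everything, so the seed collapses to one element; a real model's "restore" step would of course be far more complex, and the article leaves it unspecified.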

How It Works

The metric measures several key system parameters:

  • Minimal information base — the size of the smallest data set needed to initialize the model
  • Recovery speed — how quickly the system restores its functions after a "restart"
  • Recovery completeness — what percentage of the initial capabilities and knowledge is restored
  • Expansion capacity — whether the system can develop and acquire new capabilities beyond its original parameters

In practice, this means that a model capable of restoring its abilities from a very small amount of information receives a higher score on the ASI criterion. This reflects its adaptability, informational compactness, and potential for operation under resource constraints. The higher the score, the more efficiently the system uses the information available to it.
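The article does not give an aggregation formula for these four parameters. One plausible sketch, with the class name, field names, and equal weights all assumptions of this example, is a weighted mean of the parameters after normalizing each to [0, 1]:

```python
from dataclasses import dataclass

@dataclass
class ASIMeasurement:
    """One evaluation run; all fields normalized to [0, 1]."""
    seed_compactness: float       # 1 - (seed size / size of full information base)
    recovery_speed: float         # inverse recovery time relative to a baseline
    recovery_completeness: float  # fraction of original capabilities restored
    expansion_capacity: float     # gain in new capabilities beyond the original

def asi_score(m: ASIMeasurement, weights=(0.25, 0.25, 0.25, 0.25)) -> float:
    """Weighted aggregate: higher means more is restored from less, faster."""
    parts = (m.seed_compactness, m.recovery_speed,
             m.recovery_completeness, m.expansion_capacity)
    return sum(w * p for w, p in zip(weights, parts))

# A model that rebuilds 90% of its skills from a tiny seed scores high:
score = asi_score(ASIMeasurement(0.95, 0.8, 0.9, 0.3))
```

The equal weighting is arbitrary; a real benchmark would need to justify both the normalization of each parameter and the trade-off between them.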

Where to Apply the Metric

The ASI criterion could be particularly useful when developing models for resource-constrained settings, such as mobile devices, edge computing, or environments with unstable access to central databases. The metric also helps assess how resilient a model is to information loss and knowledge decay, and how quickly it can adapt to a changing environment. For researchers, it offers a tool for understanding the internal structure and organization of models: models that score high on the ASI criterion may be more "interpretable" in terms of information compression and may encode knowledge more densely and efficiently.

What This Means

The ASI criterion is proposed as a step toward a more comprehensive and objective evaluation of AI systems. Traditional metrics (accuracy, recall, F1 score) tell us how well a model solves a specific task on a specific dataset. The ASI criterion adds a measure of a system's "internal efficiency" and its capacity for independent development. If the metric gains traction in the research community, it could shift the design and training of AI models toward greater compactness and adaptability.

Hamidun News
AI news without noise. Daily editorial selection from 400+ sources. A product by Zhemal Khamidun, Head of AI at Alpina Digital.