AGI – My Perspective

1. Introduction: The Fog Around the Big Words

There are words that stand like ancient statues in the room. “Life.” “Consciousness.” “Intelligence.” And more recently: “AGI” – Artificial General Intelligence. Their very sound evokes awe, sometimes even a slight chill. They seem larger than what we can grasp, and yet they are part of everyday life: everyone lives, everyone is aware, everyone recognizes intelligence in others, and today almost everyone has an opinion about AGI. That very closeness makes them so puzzling.

Not so long ago (let us say about a century and a half) the word “life” was considered a deep mystery. People spoke of a special force, a spark of life that permeated the bodies of the living. “Vital force” was an attempt at explanation, a kind of stopgap: where no understanding was available, a mysterious something was inserted. Today that sounds almost naïve. We know that “life” is not a spark, but the result of certain conditions: metabolism, reproduction, evolution, homeostasis, information processing. No single element carries the label “life” on its own. Yet in combination something arises that we call living.

What we see here is not only progress in biology but also a pattern. A term that was once enigmatic was gradually broken down into its components. Not by dissolving it, but by showing that it is not a thing, but a composition of processes. The word remained, the “vital spark” disappeared, and in its place came system descriptions.

The same may hold for other big words. For “consciousness”, which we often treat like an invisible substance. For “intelligence”, which we interpret sometimes as a gift, sometimes as a measurable score. For “emotion”, which we sometimes experience as a mystical current, although it is an expression of bodily states, valence, and memory. And perhaps also for “AGI” – a term that today functions like a modern mystery. Some fear it, some celebrate it, but few can define it clearly.

The crucial point is this: such words are condensations. They compress a multitude of processes into a single word. The word becomes a label, a tag – a marker under which we store an entire category of processes. It is a linguistic trick that gives us orientation. Yet it can also mislead us: we begin to mistake the label for a thing.

Our brain plays a central role in this. It is tuned to patterns. It perceives repetitions, similarities, structures. These patterns are bundled, condensed, and tagged internally. Language strengthens this mechanism: it turns the label into a symbol, often a noun. And nouns give the impression of being things. “Consciousness” – or “AGI” – as if they were standing somewhere in the room, tangible like a chair.

We therefore live in a world shaped by such linguistic monuments. Some of them carry us far, others block our way. Terms like “life” have now been demystified, drawn by science out of the fog. Others, such as “consciousness”, “intelligence”, and perhaps also “AGI”, still stand in the middle of it.

And perhaps this is exactly where a new approach begins: not by searching for the spark, but by understanding that what appears mysterious is already made of many layers. The fog does not clear through one grand definition, but through the patience of breaking concepts down into their building blocks.

2. Why We Can Name Emergent Phenomena

When we ask why we were able to form concepts such as “life”, “intelligence”, or “consciousness” at all, we encounter a simple but profound fact: our brain is a machine for compressing patterns.

We do not experience the world as an endless stream of isolated events. We do not perceive every leaf in the forest as a singular phenomenon. We search for what repeats, what resembles, what can be condensed. In this repetition the first step toward abstraction appears: “tree.”

This ability to recognize patterns is not a philosophical luxury, but a necessity for survival. A being that must calculate every encounter anew is too slow. It needs categories: edible or poisonous, friend or foe, near or far. In this binary condensation lies the principle of turning processes into things.

Yet abstraction does not end with hunting or protection. It continues in memory. Memories are not precise recordings, but clusters. We do not recall every detail, but rather what is shared, recurring, essential. Only then can we say: “this is like something I have seen before.” Here, in memory, the first label is attached.

Language amplifies this mechanism enormously. By pouring patterns into symbols, we create a social storehouse. We agree on words that hold not only in one mind, but between many. Language forces us to press complexity into a noun, and with each noun arises the impression of thingness. “Intelligence” then sounds like a substance, although in truth it is a label for many processes.

A fourth step is perhaps the most important: our capacity for meta-cognition. We can recognize not only patterns in the world, but also in our own act of recognizing. We can look at our thinking and say: “something is happening there, and I call it this.” In this way even the invisible becomes nameable: feelings, thoughts, consciousness itself. It is as if we placed the world over ourselves a second time and repeated the condensation there.

Finally, there is the social dimension. A concept is rarely a private creation. It is reinforced, altered, and stabilized in exchange with others. “Intelligence” is not only an internal label, but a cultural one. It is a product of negotiation, shaped through mutual understanding, and with each agreement the label becomes harder, more real.

Perhaps here lies the reason why we so often fall into the trap of substantivization:

  • Our brain detects patterns and similarities.
  • Our memory stabilizes them as clusters.
  • Our language turns them into symbols, most often nouns.
  • Our meta-cognition applies this condensation to ourselves.
  • And our culture cements the words into monuments.

This is how concepts arise that seem to be things, although in reality they are only signposts into a network of processes.

So when we speak today of “intelligence”, it helps to remember: we are not talking about a substance, but about an emergent pattern. A word we place over a multitude of processes in order to grasp them at all.

3. The Two Paths of AI

Anyone speaking about Artificial Intelligence today usually sees two directions. One is driven by language models such as GPT or Claude. The other comes from agent-based learning, often called Reinforcement Learning (RL). Both approaches have made impressive progress in recent years. Yet both remain incomplete. And it is precisely in this incompleteness that a key lies for thinking about a future architecture.

3.1 LLMs: Symbolic Compression through Statistics

Large language models are built on a simple principle. They learn from countless examples which continuation in a context is likely. From this simplicity arises an astonishing ability: patterns that lie in very different islands of knowledge can be connected. A model can weave together a thought from philosophy with a procedure from computer science and form an idea that had not been expressed before. This feels like intelligence because a genuine synthesis takes place.
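To make this principle concrete, here is a deliberately tiny sketch of my own (a toy bigram counter, nothing like a real transformer; the corpus is invented): it learns from examples which continuation of a context is likely, and nothing more.

```python
from collections import Counter, defaultdict

# Toy illustration of "which continuation in a context is likely":
# count which word follows which word in a tiny corpus.
corpus = "the cat sat on the mat the cat ran to the door".split()

followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def continue_from(word, steps=4):
    """Greedily extend with the statistically most likely follower."""
    out = [word]
    for _ in range(steps):
        options = followers.get(out[-1])
        if not options:
            break
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

print(continue_from("the"))  # -> "the cat sat on the"
```

Everything a real model adds on top of this (attention, deep networks, vast corpora) refines the same single move: estimating likely continuations.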

The strength of this path lies in its reach. Language is a global storehouse. Within it is condensed what generations have learned, debated, and ordered. Whoever can statistically master this storehouse can enter thought spaces in a short time that would take years for individuals. This is where the impression of “new laws of thought” often appears in conversations with such models.

The limits, however, are clear. This path feeds on secondary material. It does not draw on its own sensory experience, but on already compressed reality. It does not feel, it does not mark importance through valence, it has no homeostatic goals. Continuity arises from context windows, not from a lived memory. Stability under changes in the world is limited, because there is no ground on which deviations could be felt as real surprises. Language carries far, but it does not carry everything.

3.2 Reinforcement Learning: Experience as Teacher

The second current begins at the other end. Here we have agents. An agent perceives, acts, receives feedback, stores traces. From the coupling of perception, memory, and drive emerges adaptation. Learning here is not recall of symbols, but the transformation of behavior. A cat does not need a dictionary entry for “hunt.” It feels its way, fails, tries again, builds an inner dynamic where simulation and motor skills come together.
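As a minimal sketch of this loop (tabular Q-learning on an invented corridor world; all names and numbers here are illustrative, not a recipe):

```python
import random

# Illustrative tabular Q-learning: a tiny corridor of states 0..4 with
# a reward at the right end. The agent perceives its state, acts,
# receives feedback, and stores traces that transform future behavior.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]                     # step left, step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

for episode in range(200):
    s = 0
    while s != GOAL:
        # Explore occasionally, otherwise exploit stored experience.
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda x: q[(s, x)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == GOAL else 0.0
        # Feedback does not recall a symbol; it reshapes the stored trace.
        best_next = max(q[(s_next, b)] for b in ACTIONS)
        q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
        s = s_next

# After training, the greedy choice in every state left of the goal is +1.
print({s: max(ACTIONS, key=lambda x: q[(s, x)]) for s in range(GOAL)})
```

There is no dictionary entry for "go right" anywhere in the table; the behavior itself is what has been learned.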

The strength of this path lies in its robustness toward reality. It is grounded. Action brings causality into play. It can deal with disorder because disorder is the very source from which it learns. In narrower domains it shows great power, especially where perception, timing, and body coordination matter.

Yet here too the limits are evident. Experiential learning is slow. It devours data, even if it does not appear so in humans. What looks effortless in us rests on an enormous biological scaffold. In artificial systems this scaffold is absent. Rewards are hard to shape, memory is fragile, abstraction does not arise on its own. Where high-level symbolic ability is needed, this path struggles.

3.3 Simulation as Bridge

At this point things become interesting, because both directions touch, often without naming it, on the same concept: simulation.

Simulation means that a system internally plays out possible states before they actually occur.

  • A cat “tries” the jump toward the mouse inwardly before making it.
  • An RL agent evaluates which action in the environment might lead to which reward.
  • An LLM generates scenarios in the space of language – stories, explanations, hypotheses – without them happening in reality.

Simulation is therefore the key that connects both approaches. It is not limited to language and not limited to perception, but a general principle: rehearsal of action within the inner space of a system.

3.4 Two Half-Systems

From this perspective the current camps are not opponents, even if they sometimes like to portray themselves as such, which is in fact a very human trait. In reality they are preparatory schools. LLMs show how far symbolic condensation can reach when driven by vast statistics. Experiential learning shows how far adaptation can reach when connected to perception and drives. A true intelligence in the full sense needs both. Not as an add-on, but as an architecture in which both currents continuously correct and nourish each other.

4. Emergence Instead of Spark

If we look into history, we find the same idea returning again and again: that there must be some kind of spark, a hidden essence that makes life alive, intelligence intelligent, or consciousness conscious. A “breath of life,” a “soul,” an “essence.”

This explanation sounds powerful and satisfies the human need for something mystical, yet it has a problem: it explains nothing. It only shifts the puzzle to another level, where it remains just as mysterious.

Biology has taught us a different picture. Life is not a spark. It arises when certain conditions come together: energy flow, metabolism, reproduction, homeostasis, information processing. Each of these systems is interesting on its own, but not “alive.” Only in their interplay does something appear that we call life. “Life” is therefore not a thing, but a label for a bundle of processes.

Perhaps the same applies to the other great concepts: intelligence, consciousness, feeling – and also AGI. They look like substances, but they are probably emergent phenomena that only arise from the cooperation of subsystems.

4.1 Intelligence as an Emergent Bundle

When we speak of intelligence, we do not mean a substance but an interaction:

  • Statistics: pattern recognition, generalization, compression.
  • Simulation: internal rehearsal of action, running scenarios.

Only in combination does what we call intelligence arise: the ability to understand situations and anticipate actions.

4.2 Consciousness as an Emergent Bundle

Consciousness too is not “a thing.” It can be described as:

  • Simulation: an inner space in which states are represented.
  • Valence: the weight of these states, their pleasant or unpleasant tone, their importance or insignificance.

Consciousness is then not the spark, but the outcome when simulation and valence converge into a globally accessible state.

4.3 Self-Awareness as an Emergent Bundle

The “I” often feels like an even greater substance, as if somewhere in the head there were a little being collecting all experiences. But here too the picture becomes simpler when unraveled:

  • Simulation: inner scenarios.
  • Valence: meaning and weighting.
  • Self-index: a marker that assigns these states to a stable entity, the “self.”

Only when this marker is present can a system not only feel, but also say: “this is happening to me.”

4.4 Feeling as an Emergent Bundle

Feelings may be the most vivid example. Not a mysterious stream, but the interplay of:

  • valence signals (positive or negative),
  • bodily reactions,
  • attention (what comes into the foreground),
  • memory (what it is connected with).

We say “feeling,” but the word is only a canopy stretched over many subsystems resonating in a single moment.

The point is clear:

  • Where we once searched for a spark, we now see emergence.
  • Where we imagined a substance, we find a web of processes.
  • Concepts like intelligence or consciousness are labels – shortcuts that make the ungraspable manageable.

This does not diminish them. On the contrary: emergence is more powerful than the spark. It shows that there is no single secret key, but an architecture that brings forth more than the sum of its parts.

5. Simulation as a Key Principle

If we place the two great streams of AI side by side – LLMs as machines of symbolic compression and Reinforcement Learning as a school of experience – it may seem as if there is no bridge between them. But on closer inspection we find a principle that unites them: simulation.

5.1 What is Simulation?

In its simplest sense, simulation means:
A system plays out possible states internally before they occur in reality.

Three properties are tied to this idea:

  • Representation: there is an internal mapping of states or actions.
  • Dynamics: this mapping can be changed, extended, combined.
  • Decoupling: all of this happens without direct execution in the outside world.

Simulation is therefore an inner space where trial actions take place. It allows a system to estimate consequences without paying the cost of real mistakes.
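These three properties fit into a few lines of code (a deliberately naive sketch; the "physics" below is invented purely for illustration):

```python
# Sketch of simulation's three properties on an invented toy world:
# representation (an inner state), dynamics (a transition function),
# decoupling (rollouts touch only the copy, never the real world).

def dynamics(state, action):
    """Internal model of how the world responds to an action."""
    return state + action  # invented toy physics

def simulate(state, plan):
    """Play a plan out internally; the real state is never modified."""
    imagined = state                           # representation: an inner copy
    for action in plan:
        imagined = dynamics(imagined, action)  # dynamics: step the copy
    return imagined                            # decoupling: only the outcome returns

real_state = 0.0
outcome = simulate(real_state, [1.0, 1.0, -0.5])
print(outcome, real_state)  # 1.5 0.0 -- the world itself is untouched
```

Because only the copy is advanced, mistakes made inside simulate() cost nothing; that free failure is the whole point of the inner space.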

5.2 Simulation in Animals and Humans

In animals we see the perceptual form of simulation. A cat that pauses before a jump is “testing” the movement internally. Crows that use tools must have a sense of how a stick will work in a gap before they act.

In humans another dimension enters: symbolic simulation. Language allows us to run through scenarios not only in motor or imagistic form, but also as abstract stories. We can build hypotheses, compare alternatives, place possible worlds side by side. This is simulation in a highly compressed and culturally transmitted form.

5.3 Simulation in LLMs

A language model also simulates, though in a special way. An LLM plays out “possible continuations” and implicitly builds scenarios in the space of language. It tries countless “what if” continuations before one output becomes visible. This is not a grounded simulation with body and valence, yet it shows how powerful the symbolic variant alone can be.

5.4 Simulation in Reinforcement Learning

RL agents are by their nature simulators. Their entire learning process is based on trying actions in an environment and deriving consequences from them. Advanced methods like Monte Carlo Tree Search or model-based RL make this internal rehearsal explicit: the agent constructs a model world where it tests possible moves before deciding.
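A minimal sketch of that core follows (not MCTS itself, which adds tree statistics and selection rules, just the bare model-based rehearsal; the model and reward here are invented for illustration):

```python
import random

# Bare model-based rehearsal: before committing, test each candidate
# action in a model world and keep the one with the best imagined future.

def model(state, action):
    """The agent's internal model of the environment's transitions."""
    return state + action

def reward(state):
    """Invented objective: be as close to 10 as possible."""
    return -abs(10 - state)

def plan(state, actions, depth=3, rollouts=50):
    """Pick the action whose simulated futures look best on average."""
    def rollout(s):
        for _ in range(depth):
            s = model(s, random.choice(actions))
        return reward(s)
    return max(actions,
               key=lambda a: sum(rollout(model(state, a))
                                 for _ in range(rollouts)) / rollouts)

print(plan(state=0, actions=[-1, 0, +1, +2]))  # typically +2: move toward 10
```

The decision is made inside the model world first; the real environment only ever sees the winning move.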

5.5 Simulation as the Axis of Consciousness and Intelligence

Simulation is more than a technical tool. It could be the common denominator from which intelligence, consciousness, and self-awareness emerge:

  • Intelligence = Statistics (patterns) + Simulation (trial action).
  • Consciousness = Simulation + Valence (weight, meaning).
  • Self-awareness = Simulation + Valence + Self-index (a marker that assigns states to a persistent entity, the “self”).

From this perspective, simulation is not a side effect but the foundation on which higher cognitive functions are built.

And in this way a bridge is formed: from AI methods that seem to have little in common, to a principle that quietly operates within both.

6. A Model as an Architecture for AGI

Up to this point we have learned three things:

  • Big terms like “intelligence” or “consciousness” are not substances, but “substantivizations” (labels) for the interplay of many systems.
  • LLMs and Reinforcement Learning are two powerful but incomplete ways of technically reproducing such systems.
  • Simulation works as a connecting principle that turns pattern recognition and experience into inner trial actions.

If we combine these insights, the question arises almost naturally: what would an architecture look like that connects these elements?

6.1 A Personal Attempt

I have begun to sketch exactly this. Not as an official research project, but as a personal, interdisciplinary hobby in my free time. I call it AXIS. The name stands for the idea of an axis, a scaffold on which a synthetic mind could be stretched out.

AXIS is not a finished theory, but a blueprint. It tries to translate insights from neuroscience, psychology, biology, and AI into a model that makes the building blocks of a mind visible. The question that drives me is: If one wanted to build a (synthetic) mind, what would be the fundamental modules that have to come together?

6.2 The Layers of AXIS

From this question a proposal has emerged that can be described in layers:

  • Homeostatic processes
    The foundation is always the striving for stability. A synthetic system needs target values (energy, safety, balance) and mechanisms to correct deviations. Without this base there is no sense of urgency, no motivation.
  • Perception and memory
    Perception creates raw data. Pattern recognition stores it. Memory links experiences across time. This creates the first ground on which the system can recognize a world that is more than the moment.
  • Drives and emotion
    Not all impressions are equal. Valence, whether pleasant or unpleasant, determines what becomes important. Emotions and drives modulate attention, memory, and behavior. They are the inner compass.
  • Simulation
    Here the inner space emerges: trial actions, anticipations, possible worlds. Without simulation there would only be stimulus and reaction. With simulation a genuine internal perspective begins.
  • Language and symbols
    This is where the enormous capacity of LLMs comes into play. Language allows compression, abstraction, cultural transmission. It stabilizes thoughts and makes simulations shareable. But language builds on the lower layers. It is not the foundation, it is a later tool.
  • Self-model
    A system that has simulation, valence, and language can eventually model itself as an entity. It can not only experience states, but also grasp itself as the source of those states. This is where the “I” arises.

This is a strong simplification and summary of my much more detailed AXIS hypothesis.
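To hint at what that layering could look like when wired together, here is a purely illustrative skeleton of the lower layers (every class, signal, and number is a placeholder of mine, not a specification of AXIS; the language and self-model layers are omitted):

```python
# Illustrative skeleton of the layered idea (placeholder names, not a
# specification of AXIS): each layer consumes what the layers below
# provide, and no single layer is "the mind".

class Homeostasis:
    def __init__(self, targets):
        self.targets = targets
    def urgency(self, readings):
        # Deviation from target values is what creates motivation.
        return {k: abs(readings[k] - v) for k, v in self.targets.items()}

class PerceptionMemory:
    def __init__(self):
        self.traces = []
    def observe(self, raw):
        self.traces.append(raw)  # memory links moments across time
        return raw

class DrivesEmotion:
    def valence(self, urgency):
        # Not all impressions are equal: urgency weights what matters.
        return -sum(urgency.values())

class Simulation:
    def imagine(self, percept, candidates, model):
        # Inner trial action: evaluate options without executing them.
        return max(candidates, key=lambda a: model(percept, a))

class Agent:
    """Only the interplay of the layers produces behavior."""
    def __init__(self):
        self.body = Homeostasis({"energy": 1.0})
        self.memory = PerceptionMemory()
        self.mood = DrivesEmotion()
        self.inner = Simulation()
    def step(self, raw, readings, candidates, model):
        percept = self.memory.observe(raw)
        urgency = self.body.urgency(readings)
        feeling = self.mood.valence(urgency)
        action = self.inner.imagine(percept, candidates, model)
        return action, feeling

agent = Agent()
print(agent.step(raw="shelf", readings={"energy": 0.4},
                 candidates=["rest", "explore"],
                 model=lambda p, a: 1.0 if a == "explore" else 0.0))
```

The point of the sketch is the wiring, not the classes: remove any one layer and step() loses a signal that no other layer can supply.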

6.3 Why This Architecture is Emergent

The important point is: none of these modules alone is a “mind.” Homeostasis by itself is regulation. Perception by itself is pattern recognition. Language by itself is symbolic statistics. Only in combination do the properties emerge that we call consciousness, intelligence, or feeling.

AXIS is therefore not a technical construction plan, but an axis of conditions. It shows which systems must reinforce each other so that something new (something emergent) arises out of mechanics.

6.4 A Different Perspective on AGI

In the light of AXIS it becomes clear: the path to a synthetic mind runs neither through scaling language models alone, nor through experience-based agents alone. Both are subsystems. What is missing is an emergent architecture that brings them together. And that is exactly the core that AXIS as a blueprint is meant to make visible.

For me personally, both schools (LLMs and Reinforcement Learning) are not sufficient on their own to develop something like AGI or a synthetic mind. Many more disciplines must be combined: neuroscience, biology, psychology, AI.

AXIS is my attempt to bring order to questions that have accompanied me since childhood: the eternal mysteries that never seemed fully graspable, like consciousness, intelligence, and feeling. Even back then I sensed that behind these words there was more than some inexplicable “spark.” AXIS is an attempt to demystify them, to break them down into building blocks and mechanisms, without losing their depth.

I am still working on it, piece by piece, like on a blueprint for a synthetic mind. Perhaps one day I will publish this work. Not as a finished truth, but as an invitation to think further together.

7. Outlook: Why This Matters

When we speak of concepts like “intelligence,” “consciousness,” or “feeling,” they often sound like eternal mysteries. Perhaps they are. But they remain mysterious only as long as we treat them as substances. Once we begin to see them as emergent systems, the perspective changes.

For AI research this means: we should not get lost in one camp alone. The scaling of language models has shown how far symbolic compression can carry. Reinforcement learning has shown the strength of experience and action. Yet on their own both approaches are dead ends. Only when we combine them in an architecture such as AXIS can we move beyond mere scaling. The path forward is not just more data, but more structure.

For philosophy the value lies in clearing the fog around the great terms. “Consciousness” is not a spark, but the interplay of simulation and valence. “Intelligence” is not a gift, but the product of pattern recognition and internal trial actions. “Feeling” is not a mysterious force, but a composition of body, memory, and evaluation. This demystifies, but it does not diminish. On the contrary, it shows how something greater can emerge out of simple building blocks.

For us as humans it has a personal dimension. Because if we understand the mechanisms behind our great words, we understand ourselves more clearly. We see that much of what feels self-evident rests on a fragile composition: perception, memory, motivation, language. This knowledge can create humility, but also hope. Humility, because we see how little “magical” we really are. Hope, because we recognize that mind is not beyond explanation, but within reach of design.

And here the circle closes with the term AGI, where we began. Perhaps “Artificial General Intelligence” is not a single thing that can be switched on or off, not a spark that one day ignites in a machine. Perhaps AGI is a collective label, a tag, for the interplay of modules that we are only beginning to understand. AXIS is one attempt to bring order to these modules. Not the final order, not the only right one, but one that makes visible: AGI is not beyond imagination. It emerges when the conditions align, through emergence, not through magic.

The circle is closed, but not complete. Because emergence always means: it remains open what else can grow from it.

Bonus: Perceptive Intelligence and My Maine Coon Cat

My cat loves to rummage in my overcrowded shelves, sniffing and pawing around to dig things out.

He is a small, living example of the “speechless intelligence” we have been talking about here.

He carries drives that come from deep within his evolutionary equipment: curiosity, hunting instinct, playfulness. For him my shelves are not furniture full of books or objects with symbolic meaning, but a complex landscape full of potential stimuli. Between the objects there might be “prey items,” unfamiliar smells, small cavities, shadows. His brain is tuned to search for patterns in such structures, to detect deviations, and to discover possibilities for action.

So when he pulls things out with his paw, it is not “making a mess,” but an experiment: What might be inside? What moves if I touch it? Could something unexpected jump out? The rummaging is embodied simulation. He tries out actions and learns from feedback.

With language he might have learned the label “shelf = place of storage,” and the matter would be settled. Without language the shelf remains an open world, an arena for exploration. And that is the core: animals “sense” and “act” directly, without a symbolic system compressing the experience. My cat becomes a practical model for what I mean by perceptive intelligence in AXIS.

