In recent years, artificial intelligence (AI) has made remarkable strides. From self-driving cars to language models capable of writing essays, AI systems are increasingly integrated into our daily lives. Yet, despite these advancements, there remains a fundamental gap between how humans learn and how AI “learns.” In this blog post, we’ll explore why human learning is qualitatively different from current AI approaches, drawing on insights from the paper “How Organisms Come to Know the World: Fundamental Limits on Artificial General Intelligence” by Andrea Roli, Johannes Jaeger, and Stuart A. Kauffman.
The Illusion of Machine Learning
At first glance, modern AI systems seem to mimic human cognition. They can solve complex problems, generate creative outputs like paintings or music, and even engage in conversations that feel eerily lifelike. However, as the authors argue, these achievements mask a deeper truth: AI operates within strict algorithmic boundaries that fundamentally limit its ability to truly understand or innovate.
Human learning involves more than pattern recognition or optimization—it requires agency. Humans set their own goals, adapt to ambiguous situations, and create meaning through interactions with their environment. For example, when faced with an unfamiliar problem, humans intuitively jury-rig solutions based on context and creativity, such as using a screwdriver to pry open a paint can. This process relies on identifying new affordances—opportunities or impediments relevant to achieving a goal—that were not preprogrammed or explicitly taught.
In contrast, AI systems operate deductively, constrained by predefined ontologies. An AI trained to recognize objects cannot spontaneously identify novel uses for those objects unless specifically programmed to do so. As the authors explain, algorithms cannot transcend their initial design; they lack the capacity for genuine innovation or semiosis—the creation of meaning.
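To make the "predefined ontology" point concrete, here is a minimal sketch (my own illustration, not code from the paper): a toy classifier whose answers are drawn from a label set fixed at design time. A novel use like "paint-can opener" is not merely unlikely for such a system; it is not representable at all.

```python
# A toy "predefined ontology": every possible answer is fixed at design time.
# (Hypothetical example; the names and categories are mine, not the paper's.)

FIXED_ONTOLOGY = {
    "screwdriver": "drives screws",
    "hammer": "drives nails",
}

def classify(obj: str) -> str:
    # The output is always drawn from the designer's categories.
    # "Screwdriver as paint-can opener" cannot emerge: no such category exists.
    return FIXED_ONTOLOGY.get(obj, "unknown")

print(classify("screwdriver"))        # a predefined category
print(classify("paint-can opener"))   # outside the ontology: "unknown"
```

The point is not that the lookup is simple—a deep network with a fixed output vocabulary has the same character: richer mappings, but the same closed space of possible answers.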
Affordances: The Key Difference
One of the central concepts in the paper is affordances, introduced by psychologist James Gibson. Affordances represent what the environment offers to an agent, whether opportunities (e.g., climbing steps) or obstacles (e.g., locked doors). Humans excel at recognizing and exploiting affordances because we possess bio-agency—an intrinsic autonomy rooted in our biological organization.
Consider the example of jury-rigging: solving a problem by creatively combining available resources. Imagine you need to fix a leaky pipe but only have duct tape and a rag. You might wrap the rag around the leak and secure it with duct tape. This solution emerges from your ability to perceive the causal properties of objects in relation to your goal—a skill that goes beyond formal logic.
AI systems, however, struggle with such tasks. Because their behavior is fully determined by their inputs, training data, and the formal structure fixed at design time, they cannot shift perspectives or leverage ambiguity to discover novel solutions. Even advanced deep-learning models, which detect patterns in vast datasets, remain confined to correlations embedded in their training data. Without the ability to engage in semiosis, AI cannot generate new meanings or exploit unforeseen affordances.
The Frame Problem and Symbol Grounding
Two classic challenges in AI research further highlight the limitations of machine learning compared to human cognition:
- The Frame Problem: How does an AI determine what information is relevant to a given task? Humans effortlessly filter out irrelevant details, focusing instead on what matters most. AI systems, however, require explicit instructions about what to prioritize, making them ill-suited for dynamic, real-world scenarios.
- The Symbol Grounding Problem: How do symbols (like words or numbers) acquire meaning for an AI? Humans ground symbols in lived experience, connecting abstract representations to tangible realities. AI, lacking embodiment and agency, struggles to bridge this gap, often relying on external human input to assign significance.
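The frame problem can be illustrated with a deliberately simple sketch (mine, not the authors'): a planner that only "sees" the features its designer enumerated as relevant. A perfectly useful resource can sit in plain view in the world state and never enter the planner's reasoning.

```python
# Toy illustration of the frame problem: relevance is fixed at design time.
# (Feature names and the scenario are hypothetical, echoing the leaky-pipe
# example above.)

RELEVANT_FEATURES = {"pipe_leaking", "have_duct_tape"}

def plan(world: dict) -> str:
    # The planner filters the world through its hard-coded relevance set.
    observed = {k for k, v in world.items() if v and k in RELEVANT_FEATURES}
    if observed == {"pipe_leaking", "have_duct_tape"}:
        return "tape the pipe"
    return "no plan"

world = {"pipe_leaking": True, "have_duct_tape": True, "have_rag": True}
# The rag is present in the world state but was never listed as relevant,
# so it can play no part in any solution this planner produces.
print(plan(world))
```

A human in the same situation notices the rag precisely because relevance, for us, is not a fixed list but something constructed on the fly in light of the goal.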
These issues underscore a critical distinction: while humans actively construct knowledge through interaction and reflection, AI passively processes preexisting categories. As the authors note, “new meanings—along with their symbolic grounding in real objects—are outside of the predefined ontology of an agential system.”
Evolutionary Implications: Open-Endedness vs. Stagnation
Another fascinating implication of the paper concerns evolution. Human learning and innovation are part of a broader evolutionary process characterized by open-endedness—the continuous emergence of novel possibilities. Organisms evolve by identifying and exploiting new affordances, pushing the boundaries of what is possible.
AI simulations, on the other hand, tend to plateau. Despite efforts to create “digital organisms” capable of evolving autonomously, these systems inevitably stagnate due to their reliance on predefined variables and constraints. Without organismic agency, true open-ended evolution remains elusive. As the authors put it, “algorithmic evolutionary simulations will forever be constrained by their predefined formal ontologies.”
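The stagnation argument can be seen in miniature with a sketched hill-climbing "evolution" (my own toy model, not one from the paper): the genome length, the variation operator, and the fitness function are all predefined, so the process climbs to the ceiling of that fixed space and then has nowhere new to go.

```python
# A sketched evolutionary search inside a predefined formal ontology.
# Genomes are fixed-length bit strings; fitness and variation are predefined.
# (Hypothetical toy; parameters are mine for illustration.)

GENOME_LEN = 8

def fitness(genome):
    # Fitness is defined by the designer in advance: count the 1-bits.
    return sum(genome)

def neighbors(genome):
    # The only variation allowed: flip one bit of the fixed-length genome.
    for i in range(GENOME_LEN):
        g = list(genome)
        g[i] ^= 1
        yield g

genome = [0] * GENOME_LEN
history = [fitness(genome)]
while True:
    best = max(neighbors(genome), key=fitness)
    if fitness(best) <= fitness(genome):
        break  # optimum of the predefined space reached: stagnation
    genome = best
    history.append(fitness(genome))

print(history)  # -> [0, 1, 2, 3, 4, 5, 6, 7, 8], then nothing new ever appears
```

Open-ended evolution would require the space of genomes, variations, or goals itself to grow—exactly what a fixed formal ontology rules out.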
What Does This Mean for AI’s Future?
The paper’s arguments suggest that achieving Artificial General Intelligence (AGI)—a system with human-like cognitive abilities—is unlikely within the current algorithmic framework. While AI will continue to advance in specialized domains, replicating the full spectrum of human intelligence may require a paradigm shift. One possibility is integrating biological components into computational systems, creating hybrid machines that blend organic and digital elements.
However, such developments raise ethical questions. If an AGI could genuinely choose its own goals, would it align with human values? Could we ensure its benevolence? These uncertainties remind us that technological progress must be accompanied by careful consideration of its societal impact.
Conclusion: Celebrating Human Uniqueness
Rather than viewing AI’s limitations as failures, we should celebrate the unique qualities that make human learning so powerful. Our capacity for creativity, perspective-taking, and meaning-making reflects the richness of our embodied existence. While AI tools enhance productivity and expand our capabilities, they cannot replace the depth and nuance of human thought.
As we continue exploring the frontiers of AI, let us remember that intelligence is not merely about computation—it’s about connection. Whether solving everyday problems or contemplating life’s mysteries, humans bring something irreplaceable to the table: the ability to see the world anew, moment by moment, and shape it according to our dreams.
So the next time you marvel at an AI-generated masterpiece or chatbot conversation, take a moment to appreciate the profound difference between artificial and human learning. After all, it’s our very humanity that makes such reflections possible.