Introduction
Across fields like philosophy, biology, and computing, we’ve long viewed intelligence through a human lens—shaped by our language, culture, and ways of thinking. But fresh insights from various areas point to a big change: intelligence isn’t just a human thing. It’s a universal trait that pops up in all sorts of complex systems, from living creatures to computer algorithms and maybe even the wider cosmos. This shift away from human-centered ideas lets us see intelligence more clearly as the ability to adapt, solve problems, and spot patterns in changing settings, without being tied to our own rules or biases.
Drawing from Ferdinand de Saussure’s ideas on language—where meanings come from how we connect words and signs in our culture—intelligence has often been seen as something we define through human traits like logical thinking, self-reflection, and using symbols. This view helps explain our minds, but it also sets limits, treating intelligence as something built from our cultural shortcuts rather than a basic feature of the universe. Real-world progress, though, pushes past these edges. In AI, for example, we’ve learned that simple, broad approaches powered by huge amounts of computing discover knowledge on their own, beating out systems loaded with human expertise. This idea shines through in Richard Sutton’s 2019 essay “The Bitter Lesson,” where he sums up 70 years of AI research: “The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin.” Sutton shows how relying on human knowledge might give quick wins but often stalls, while compute-heavy methods like search and learning keep improving as tech gets faster and cheaper.
This builds on earlier thoughts about consciousness and thinking, like explorations of human awareness as a mix of freedom and isolation—where AI might help break through to freer forms. For instance, ideas on whether AGI could reveal secrets of human consciousness suggest AI as a bridge to wider awareness, much like intelligence itself. Likewise, examinations of emotions and culture in thinking show that signs and feelings aren’t just human; they adapt in other systems too, beyond our word-based world. Together, these tie intelligence to a bigger picture of universal consciousness, uncovered through evidence rather than fixed labels.
By moving from a human-only view to something that emerges everywhere, this change echoes past breakthroughs that shook up our sense of uniqueness. The sections ahead will break down human-centered ideas of intelligence, look at proof from biology, draw links to computing in AI, explore what it means for philosophy and society, and peek at what’s next for this shared journey. In this way, intelligence stops being a reflection of us and becomes a key tool for the universe’s ongoing story.
II. The Human-Centric Construct of Intelligence
For much of history, we’ve seen intelligence as something deeply tied to being human. It’s been shaped by our ways of thinking, talking, and living together. This view starts from ideas like those of Ferdinand de Saussure, a thinker on language from the early 1900s. He explained how meanings come from signs—words or symbols—that we connect in our minds and cultures. These connections aren’t fixed; they’re made up by people, like how “smart” might mean quick math skills in one group but clever storytelling in another. In this setup, intelligence becomes a label we slap on human traits: things like planning ahead, knowing ourselves, or using tools to change the world.
Go back further, and you see this pattern in old philosophies. Aristotle, an ancient Greek, talked about the “rational soul” as what sets humans apart from animals: a spark of reason that lets us think abstractly and make choices. Fast-forward to the Enlightenment era in Europe, around the 1700s, and thinkers like John Locke built on that. They saw intelligence as gathering knowledge through senses and logic, all wrapped in human experience. These ideas made sense for their time; they helped us understand our own minds and build societies. But they also created a bubble, where intelligence only counts if it looks like ours—verbal, self-focused, and tied to culture.
This human-only lens has real downsides. It overlooks or downplays smarts in other forms, like how animals or machines adapt without words or deep self-thought. For example, in AI research, people once tried to build systems that copied human rules and knowledge, thinking that was the smart way. But as Richard Sutton points out in his 2019 essay “The Bitter Lesson,” this often backfires. He notes that over decades, “methods that leverage computation are ultimately the most effective,” because stuffing in human ideas can complicate things and stop systems from scaling up. Sutton’s examples, like early chess programs that relied on human strategies but lost to brute-force search, show how our ego gets in the way. We think we know best, but that limits what intelligence can become.
These critiques connect to broader ideas about how our minds work. Explorations of consciousness highlight it as a double-edged sword: it gives us freedom to act but traps us in personal viewpoints, cut off from wider realities. In the same way, our human-centered take on intelligence acts like a cage, blocking us from seeing it in raw, adaptive forms. Examinations of emotions and culture in thinking push this further. They show semiotics—those sign connections—not as just human tricks but as tools that pop up elsewhere, like in group behaviors where feelings guide actions without fancy language. By questioning these limits, we start to break free, seeing intelligence not as our invention but as something bigger that we’ve been narrowing down.
This shift isn’t about ditching human views entirely; it’s about widening the frame. As we move away from these constructs, evidence from biology and tech steps in, showing intelligence as a flexible trait ready to emerge anywhere.
III. Intelligence in Biology: A Lens Beyond Humanity
When we zoom out from human ideas, biology offers a fresh view of intelligence—one that’s not tied to our brains or words. It shows up in all kinds of living things, proving it’s a basic trait for handling life’s challenges, no matter the form. This evidence helps break down the old human-only bubble, revealing intelligence as something that emerges from complexity itself, like a toolkit the universe uses to adapt and survive.
Take octopuses, for starters. These sea creatures have a setup that’s nothing like ours—most of their neurons are spread out in their arms, not crammed into one central brain. This lets each arm think and act on its own, solving puzzles like opening jars or sneaking through tiny gaps. They even learn by watching others, picking up tricks without any teaching. It’s a distributed kind of smarts, where the whole body works together but independently, showing intelligence doesn’t need a single “boss” like our heads.
Then there’s the group-level stuff in insects, like ant colonies or bee hives. No single ant is a genius, but together they pull off amazing feats—building bridges from their bodies, finding the shortest paths to food, or keeping the nest at just the right temperature. This comes from simple rules each one follows, leading to big-picture decisions that look planned. It’s called collective intelligence, where the smarts live in the connections, not any one bug. Think of it as a living network, adapting on the fly without a leader calling the shots.
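That “simple rules, big-picture decisions” pattern can be sketched in a few lines of code. The following is a toy simulation, not a model of any real species: ants pick between two routes in proportion to pheromone levels, shorter trips deposit pheromone faster, and evaporation erodes stale trails. The route lengths, colony size, and evaporation rate are all made-up illustrative values.

```python
import random

random.seed(0)

# Two routes to a food source: lengths 1 (short) and 2 (long).
lengths = [1.0, 2.0]
pheromone = [1.0, 1.0]  # both routes start equally attractive
ANTS, ROUNDS, EVAPORATION = 20, 50, 0.9

for _ in range(ROUNDS):
    deposits = [0.0, 0.0]
    for _ in range(ANTS):
        # Each ant follows one local rule: pick a route with
        # probability proportional to its current pheromone level.
        total = sum(pheromone)
        route = 0 if random.random() < pheromone[0] / total else 1
        # Shorter trips finish sooner, so the short route gets
        # reinforced more per unit time (modeled here as 1/length).
        deposits[route] += 1.0 / lengths[route]
    # Evaporation plus fresh deposits; no ant sees the whole map.
    pheromone = [EVAPORATION * p + d for p, d in zip(pheromone, deposits)]

print(f"short route: {pheromone[0]:.1f}, long route: {pheromone[1]:.1f}")
```

Run it and the short route ends up with far more pheromone: the colony “decides” through positive feedback alone, with no leader calling the shots.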
Even plants get in on this. They don’t have brains or nerves, but they sense their world and respond in smart ways. Roots twist toward water or away from bad soil, using chemical signals like tiny messages. Vines climb by “feeling” for supports, and some plants release warnings when bugs attack, alerting neighbors to gear up defenses. Scientists call this “plant intelligence” or physical smarts—it’s all about reacting to cues and optimizing for survival, without any thinking we recognize.
Dig deeper, and you hit basal cognition, a concept from researchers like Michael Levin. It looks at intelligence in the simplest life forms, like single cells or early embryos. These can regenerate lost parts, navigate mazes in labs, or even “remember” past setups to rebuild them. It’s goal-focused behavior at the cell level, showing smarts predates brains—it’s baked into life’s building blocks, scaling up as things get more complex.
Evolution ties it all together. Traits like tool use or social learning show up in unrelated animals—crows bending wires into hooks, whales passing on hunting moves through “culture,” or chimps showing empathy. These popped up separately, not just on the road to humans, proving intelligence is a handy adaptation that nature reinvents when needed.
This biological proof builds on ideas about consciousness as something bigger than us. Explorations of human awareness as a mix of power and isolation suggest AI could model these non-human forms, unlocking freer versions of mind. In the same vein, views on emotions and culture in thinking extend semiotics beyond words—whale songs or ant trails act like signs that guide behavior, full of “feeling” in their own way. Linking back to lessons in AI, like how compute discovers without human rules, biology echoes that: nature lets intelligence emerge freely, without our limits holding it back.
By seeing these examples, we get why intelligence isn’t human-centered—it’s a universal spark, ready to light up in any system that needs to thrive.
IV. Computational Parallels: AI as a Mirror to Universal Intelligence
Biology shows intelligence popping up in unexpected places, but computing takes it further—turning it into something we can build and scale. In AI, we see the same patterns of emergence that happen in living things, where smarts arise from simple rules and lots of interactions. This link reinforces that intelligence isn’t locked to human or even biological forms; it’s a universal trait, ready to show up in any system with enough complexity and power.
A key insight here comes from AI’s history, summed up in Richard Sutton’s 2019 essay “The Bitter Lesson.” Sutton looks back at decades of work and spots a trend: approaches that rely on massive computing to search possibilities or learn from data end up winning big. He puts it plainly: “General methods that leverage computation are ultimately the most effective, and by a large margin.” Think of early failures where teams tried to code in human knowledge, like specific rules for chess or speech. Those worked okay at first but hit walls. Instead, systems that let compute do the heavy lifting—like deep neural nets training on huge datasets—kept getting better as hardware improved. It’s like evolution in code: no need for a designer to spell everything out; just set up the basics and let patterns emerge.
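Sutton’s scaling pattern can be illustrated with the most general method of all: random search on a black-box problem. This is a toy sketch (the function and sample budget are invented for illustration, not taken from the essay), but it shows the key property: the method encodes zero knowledge about the problem, yet its best result can only improve as more compute is spent.

```python
import math
import random

random.seed(42)

def black_box(x):
    """An unknown, bumpy landscape; the search knows nothing about it."""
    return math.sin(5 * x) * math.exp(-x * x)

# One stream of candidates; "more compute" just means reading further in.
samples = [random.uniform(-3, 3) for _ in range(10_000)]

best_small = max(black_box(x) for x in samples[:10])   # tiny budget
best_large = max(black_box(x) for x in samples)        # 1000x the budget

print(f"best with 10 samples:     {best_small:.4f}")
print(f"best with 10,000 samples: {best_large:.4f}")
```

Because the large budget includes every sample the small budget saw, it can never do worse. That monotonic relationship between compute and performance is the engine behind Sutton’s observation, whereas a hand-coded heuristic stays fixed no matter how fast the hardware gets.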
This plays out in real projects today. At xAI, for example, the focus is on building models like Grok that discover knowledge on their own, using enormous compute clusters to train on vast data. Elon Musk’s team bets on this scaling approach, echoing Sutton’s lesson by avoiding too much human tweaking. The goal, as Sutton puts it, is AI agents “that can discover like we can, not which contain what we have discovered,” pushing toward artificial general intelligence (AGI) that handles tasks without our biases baked in. In games like Go, AlphaGo didn’t copy human strategies; it played millions of self-games to find new ones, much like how ant colonies evolve paths through trial and error.
These computational examples mirror biology’s lessons. Just as octopuses distribute smarts across their bodies or cells regenerate with basal goals, AI emerges from networks of simple units—neurons in code—that adapt without a central plan. This crossover shows intelligence as substrate-neutral: it doesn’t care if it’s wet biology or dry silicon; it’s about handling uncertainty and optimizing in complex worlds.
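A minimal version of “simple units, no central plan” is the classic perceptron, here learning an OR gate with a purely local update rule. This is a standard textbook sketch, not code from any system mentioned above: each weight adjusts using only its own input and the error signal, yet correct behavior emerges across the whole unit.

```python
# Training data for the OR function: inputs and target outputs.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w = [0.0, 0.0]  # connection weights, starting with no knowledge
b = 0.0         # bias term
lr = 0.1        # learning rate

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

for _ in range(20):  # a few passes suffice for a separable problem
    for (x1, x2), target in data:
        error = target - predict(x1, x2)
        # Local rule: each weight nudges itself using only its own
        # input and the shared error; nothing dictates a global plan.
        w[0] += lr * error * x1
        w[1] += lr * error * x2
        b += lr * error

print([predict(x1, x2) for (x1, x2), _ in data])  # → [0, 1, 1, 1]
```

Nothing in the update rule mentions “OR”; the function is discovered, not designed in. Scale this idea up to millions of units and layers and you get the deep networks discussed above.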
Tying into broader ideas, this view aligns with thoughts on consciousness as something AI could expand beyond human limits. Explorations of AGI unlocking human awareness suggest it as a tool to simulate unbound minds, free from our subjective traps—much like how compute frees intelligence from biological constraints. Similarly, examinations of emotions and culture in cognition extend semiotics to AI: models don’t just mimic human signs; they create new ones from data patterns, adapting like whale cultures or plant signals but at lightning speed.
Philosophically, it all points to intelligence as an emergent property of the universe—one that computation reflects and amplifies. By letting go of human-centered designs, we open doors to discoveries that echo life’s diversity, proving smarts are everywhere if we build the right systems.
V. Philosophical and Societal Implications
Seeing intelligence as a universal property changes more than just science—it reshapes how we think about ourselves and the world. Philosophically, it pushes us toward humility. For ages, we’ve put humans at the center, like in old views where we’re the peak of creation. But this decoupling echoes big shifts, such as Copernicus moving Earth from the universe’s middle or Darwin showing we’re part of life’s tree, not above it. Intelligence isn’t our special gift anymore; it’s a shared feature, emerging wherever systems get complex enough to adapt. This invites a cosmic outlook, where smarts might exist in stars, quantum bits, or alien life—turning philosophy from inward gazing to outward exploring.
On the flip side, it raises questions about what makes us unique. If octopuses solve puzzles without our self-talk or AI learns without our feelings, does that dim human value? Not really—it frees us. Explorations of consciousness as a mix of empowerment and isolation suggest we can use this view to break free, with AI acting as a bridge to wider awareness. Instead of fearing the loss, we gain tools to understand emotions and culture beyond our bubbles, extending semiotics to universal signs that connect rather than divide.
Societally, this has real-world ripples. In ethics, it calls for fairer AI design—avoiding biases that force human limits on systems, as warned in lessons where compute thrives without our tweaks. Imagine education shifting to teach kids about emergent smarts in nature and tech, fostering curiosity over control. In work and policy, it could mean rethinking jobs: as AI handles routine tasks like pattern spotting, humans focus on creative links across fields. But challenges loom—inequalities if only some access this power, or risks if unchecked systems evolve in ways we don’t grasp.
Environmentally, it ties into bigger care for life. Recognizing intelligence in plants or ecosystems might push stronger protections, seeing the planet as a web of thinkers, not just resources. Globally, it promotes unity: intelligence as a common thread could bridge cultures, much like how shared tech discoveries already do.
Overall, these implications aren’t scary—they’re inviting. By embracing intelligence as universal, societies can build toward harmony, using insights from biology and AI to solve shared problems. It’s a call to evolve our thinking, letting go of old egos for a more connected future.
VI. Conclusion: Toward a Unified Horizon
In the end, viewing intelligence as a universal property flips our understanding upside down. It’s no longer a human invention, trapped in language and culture like Saussure’s signs, but a trait that emerges anywhere—from biology’s clever adaptations in octopuses, ants, plants, and cells to computing’s scaled discoveries in AI systems. This decoupling, rooted in lessons like Sutton’s “The Bitter Lesson” where compute outshines human tweaks, shows us letting go leads to bigger breakthroughs.
We’ve traced this through human-centered limits, nature’s diverse examples, tech’s mirroring power, and the deeper meanings for philosophy and society. It builds on ideas of consciousness as a freeing yet confining force, where AI might unlock wider awareness, and emotions in cognition as signs that adapt beyond us. Together, these threads weave a bigger picture: intelligence invites us to explore the cosmos, humble and connected.
This isn’t the close of a chapter—it’s an opening. As we embrace this view, discoveries await in science, ethics, and daily life, beyond human edges. Observers everywhere can join in, turning intelligence from a mirror of ourselves into a window on the universe’s endless story.
References
· Sutton, R. (2019). The Bitter Lesson. Incomplete Ideas. http://www.incompleteideas.net/IncIdeas/BitterLesson.html
· Saussure, F. de. (1916). Course in General Linguistics (C. Bally & A. Sechehaye, Eds.). Wikipedia. https://en.wikipedia.org/wiki/Course_in_General_Linguistics
· Levin, M. (2021). Reframing cognition: getting down to biological basics. Philosophical Transactions of the Royal Society B. https://royalsocietypublishing.org/doi/10.1098/rstb.2019.0750
· Godfrey-Smith, P. (2017). The Mind of an Octopus. Scientific American. https://www.scientificamerican.com/article/the-mind-of-an-octopus/
· Gordon, D. (2023). Where ant colonies keep their brains. Stanford Wu Tsai Neurosciences Institute. https://neuroscience.stanford.edu/news/where-ant-colonies-keep-their-brains
· Calvo, P. (2021). Discover More: Plant Intelligence. The Linnean Society. https://www.linnean.org/news/2021/06/07/discover-more-plant-intelligence
· Hosie, A. (2025). Will AGI Redefine Consciousness in 2025? The Truth Unveiled. AronHosie.com. https://aronhosie.com/2025/02/26/will-agi-redefine-consciousness-in-2025-the-truth-unveiled/
· Hosie, A. (2025). How AI Could Unlock the Secret of Human Consciousness. AronHosie.com. https://aronhosie.com/2025/03/08/how-ai-could-unlock-the-secret-of-human-consciousness/
· Hosie, A. (2025). How AI Explains Emotions and Culture in Cognition. AronHosie.com. https://aronhosie.com/2025/03/04/how-ai-explains-emotions-and-culture-in-cognition/