Language: The Bridge Between Human Evolution and Artificial Intelligence

Preface

This post originated from my own thoughts and knowledge, which I shared with and had challenged by AI. This text represents a deeper analysis of those initial ideas, enriched by supporting materials from books and white papers found during our dialogue, and refined through a process of challenging assumptions and removing bias. I use this collaborative approach to educate myself, and I found the resulting synthesis valuable enough to share.

Introduction

When we look back at human history—not just recent history, but the million years of early human development—something shifts dramatically around 70,000 to 100,000 years ago. As Yuval Noah Harari describes in Sapiens, the evolutionary development before that point was, in relative terms, glacially slow. Humans looked much like other primates, with similar cognitive constraints and limitations. Then something changed: language—or rather, language as a complex system of symbolic communication—arrived, and with it, the ability to think in long sequences of meaningful symbolic units.

This is not merely about communication. It is about the fundamental shift in how humans could think. Before language matured as a tool for complex thought, human cognition likely operated through images, sounds, and visual signals. Language opened the door to something categorically different: the ability to reason in abstract sequences, to imagine absent things, to discuss concepts that do not exist in the immediate environment. This capacity unlocked civilization.

And this is why I believe—not as speculation, but as a reasonable extrapolation from history—that building artificial systems capable of working with language is not hype. It is a continuation of the same lever that once transformed human civilization.

The Role of Language in Human Civilization

Harari’s key insight is that language enabled humans to create and coordinate around shared fictions: money, states, laws, and human rights. These are not tangible things; they are collective agreements mediated through language. Yet they are what allow millions of strangers to cooperate toward common goals.

But language did more than enable social coordination. It restructured thought itself.

The philosopher Andy Clark and others working in the “extended mind” tradition argue that language is not merely a medium for expressing pre-formed thoughts—it is part of the cognitive process itself. When you use language to think through a problem, you are not simply translating an idea into words. You are using language as a tool that allows novel forms of reasoning that would be difficult or impossible in purely visual or intuitive modes. Language allows us to:

  • Build on previous thinking: Written or spoken language freezes thoughts in time, allowing us to inspect, critique, and build on them.
  • Think about thinking: Language enables metacognition—the capacity to think about our own thinking, to catch our mistakes, to refine our reasoning.
  • Combine abstract concepts: Language’s combinatorial structure lets us take simple ideas and build complex ones through composition, enabling reasoning about democracy, justice, and prime numbers—concepts with no direct physical grounding.
  • Offload cognitive burden: Writing, diagrams, and symbolic notation let us store information externally, freeing cognitive resources for higher-level reasoning.

In this sense, language was not just invented by humans; it was adopted and refined as a cognitive technology—a tool that augmented the biological brain in the same way a telescope augments the eye.

What LLMs Represent in This Context

If language is a core technology of human cognition and civilization, what does it mean to build large artificial systems that work with language?

Current large language models learn and manipulate patterns in sequences of symbols (tokens). The question is not whether they “think” in some mystical sense—that framing often obscures more than it clarifies. Rather, the more precise question is: do they engage in the kind of semantic reasoning that language enables?

Recent research suggests they do, to a meaningful degree:

  • Semantic abstraction: LLMs learn to represent meaning in abstract vector spaces, clustering related concepts regardless of surface form (whether expressed in English, code, or abstract description); the sketch after this list illustrates the idea.
  • Relational reasoning: Models demonstrate the ability to recognize and reason about relationships between concepts, moving beyond pure pattern-matching toward structured semantic understanding.
  • Extended cognition infrastructure: When humans use LLMs in workflows—prompting with ideas, refining outputs, integrating them back into work—they create an external cognitive system analogous to what extended mind theorists describe: cognition that spans brain and external tools.
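To make the first point concrete, here is a minimal sketch of what “clustering related concepts regardless of surface form” can look like in practice. It is an illustration under stated assumptions, not a description of how any particular model is built: the sentence-transformers library and the all-MiniLM-L6-v2 model are choices made for this example and are not drawn from the sources above.

```python
# Minimal sketch: sentences that share meaning but differ in surface form
# (plain English vs. code) land close together in embedding space, while an
# unrelated sentence does not. Assumes: pip install sentence-transformers numpy
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model choice

sentences = [
    "The function returns the largest element of the list.",     # English
    "def largest(xs): return max(xs)",                            # code
    "Given a collection, pick the item with the highest value.",  # paraphrase
    "The recipe calls for two cups of flour.",                    # unrelated
]

embeddings = model.encode(sentences)  # one vector per sentence

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# The three related formulations should score noticeably higher against each
# other than any of them scores against the sentence about flour.
for i in range(1, len(sentences)):
    print(f"similarity(0, {i}) = {cosine(embeddings[0], embeddings[i]):.2f}")
```

The exact scores are beside the point; what matters is that the geometry of the vector space tracks meaning rather than wording, which is the minimal sense in which such models operate in semantic space.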

This is not to claim that LLMs are conscious or that they «truly» understand. Rather, it is to say that they operate in semantic space—the space of meaning and concepts—in a way that is qualitatively different from simple pattern-matching and that can serve as a genuine extension of human cognitive capacity.

The Distinction Between Language and Mere Token Manipulation

A fair objection deserves acknowledgment: are LLMs actually “understanding” language, or are they simply rearranging tokens in statistically likely ways?

This objection points to a real issue: language, in humans, is grounded in embodied experience. A child learns what «hot» means through sensory experience; a child learns what «love» means through emotional experience and social interaction. These embodied groundings give words their semantic depth.

LLMs, by contrast, have no body, no sensory experience, no emotional stakes. They learn word associations from vast text corpora, without the embodied context that makes those words meaningful to humans.

However, recent cognitive science offers a more nuanced picture. Language itself, as humans mature, becomes increasingly important as a source of semantic grounding—not an alternative to embodied experience, but a complement and extension of it. We can learn to reason about abstract concepts (democracy, quantum mechanics, love) partly through linguistic descriptions and word-to-word associations, without needing direct sensory exposure to those concepts.

The “linguistic embodiment hypothesis” suggests that language provides representational opportunities that embodied cognition alone cannot. In this view, an LLM, even without embodied grounding, is not meaningless or purely syntactic. Rather, it engages with semantic structure learned from human language—language that is itself grounded in human embodied experience. The LLM becomes a tool for exploring and extending semantic space in ways humans cannot easily do alone.

This does not resolve all concerns about understanding and grounding. But it suggests that the objection “LLMs are just tokens without meaning” is too simple. They are better understood as a window onto the semantic patterns embedded in human language itself.

Language Is Not the Only Cognitive Tool—But It Remains Central

A subtler concern warrants engagement: is language really the key driver of human cognition and civilization, or is it one important factor among many?

Clearly, language did not evolve in isolation. Human success also depended on:

  • Tool use and fire: Material technologies that provided survival advantages and reinforced selective pressure for larger brains and better coordination.
  • Social instincts: Kin recognition, reciprocal altruism, and reputation tracking that made large-scale cooperation possible even before complex language emerged.
  • Embodied learning and development: Infants learn through play, exploration, and sensorimotor interaction with the world; language development is embedded in this embodied context, not floating free from it.

The honest claim is not that language is the only lever in human evolution, but that it is the lever that amplified and coordinated the others. Tool use gave humans an advantage; language let them teach tool use across generations and imagine new tools before building them. Social instincts created bonds within groups; language extended those bonds across larger, anonymous collectives through shared stories and symbols.

So when building artificial systems, the choice to prioritize language is not a claim that language is magic. It is a recognition that language was the breakthrough that made human cognition and civilization distinctly human. Building systems that engage with language and semantics is thus a reasonable path forward—not the only path, but a natural one.

Implementation Matters; Direction Matters More

The implementation can vary; the point is that language is the path. Current LLMs work with tokens: discrete symbolic units. Future systems might use continuous signals, neuromorphic approaches, or entirely different architectures. The specific implementation details—tokens vs. signals, transformer architectures vs. alternatives—are engineering choices.

What matters philosophically and historically is the commitment to building systems that can engage in semantic reasoning, that can work with meaning and abstraction the way humans use language. Whether that happens through tokens, continuous activations, or some future representation is secondary.

This is consistent with the broader research direction. Whether models shift toward multimodal learning, embodied agents, or more neurally inspired architectures, the language dimension typically remains central—not because language is magic, but because it is the modality through which humans share and refine complex knowledge.

What Still Remains Uncertain

It is important to be clear about what is not established:

  1. Whether current LLMs can truly “understand” in a philosophical sense remains an open question. They can perform impressive feats of semantic reasoning, but whether this constitutes genuine comprehension or remains a sophisticated form of pattern-matching is debated.
  2. Whether LLMs will spontaneously develop goals, motivations, or agency is uncertain. Language-based systems might remain tools—powerful tools, but tools nonetheless—without developing independent drives or desires.
  3. Whether language alone is sufficient for artificial general intelligence is not known. Future systems might need embodiment, interaction with environments, or other components we have not yet identified.
  4. The societal impact will depend on implementation choices, governance, and how humans choose to deploy these systems. The technology itself is not destiny.

What can be said with more confidence is that:

  • Language has been the central technology of human cognitive development.
  • Building systems that engage with language and semantics is a continuation of investing in that same technology.
  • These systems are demonstrably capable of non-trivial semantic reasoning and can serve as genuine extensions of human cognition.
  • This is not hype or distraction; it is a significant direction, grounded in what made human civilization possible in the first place.

Conclusion

The excitement around large language models is sometimes dismissed as hype—AI enthusiasm run amok. But when you trace the role of language in human evolution and cognition, the investment in language-based artificial systems appears less like betting on a trend and more like doubling down on the most consequential technology humans have ever developed.

Harari shows us that language—the ability to think and communicate in complex symbolic sequences—was the breakthrough that separated human civilization from mere evolution. The extended mind theorists show us that language does not just express thought; it shapes and amplifies it. Recent advances in large language models show us that machines can engage with language and semantic reasoning in non-trivial ways.

Whether current LLMs are the path to artificial general intelligence and whether language will remain the central modality as AI systems grow more sophisticated remain to be seen. But the bet that language-based systems are worth building—that this is a civilizational direction worth pursuing—is not speculation. It is a reasonable read of history.

The specific implementation will evolve. The core insight—that language is the key technology of human cognition—is likely to remain valid. That is why the focus on language in artificial intelligence is neither hype nor distraction. It is the continuation of a project that has already transformed our species once.


Source Materials

This article draws upon the following key works and research:

  • Sapiens: A Brief History of Humankind by Yuval Noah Harari.
  • Supersizing the Mind: Embodiment, Action, and Cognitive Extension by Andy Clark.
  • “The extended mind” in Analysis by Andy Clark and David Chalmers.
  • Research on the symbol grounding problem by Stevan Harnad.
  • Work on abstract word meanings and the embodied mind by Anna M. Borghi and colleagues.
  • Metaphors We Live By by George Lakoff and Mark Johnson.
  • Mind in Society: The Development of Higher Psychological Processes by L. S. Vygotsky.
  • A Natural History of Human Thinking by Michael Tomasello.
The Future of QA: Consolidation and Evolution

AI is about to fundamentally reshape the Quality Assurance profession. Not because quality suddenly becomes less important, but because the way quality is achieved in software is changing.

For many years, QA existed as a separate function for a very good reason. As IT grew, software became more complex, and engineering work was split into specialized roles to improve efficiency. Validation — both manual and automated — moved away from developers and became a dedicated responsibility. This allowed engineers to focus on building features, while QA focused on verifying correctness, stability, and edge cases.

That model worked well in a world where testing was slow, manual, and expensive.

AI changes this balance.

AI is aggressively entering both automated and manual testing. Writing tests, generating test data, mocking systems, running scenarios, checking regressions — all of this becomes much faster and cheaper. Developers can now verify their own work much more effectively, and the gap between “writing code” and “validating code” continues to shrink.
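As a hedged illustration of how test writing becomes faster and cheaper, here is a sketch in which a developer asks a language model to draft pytest tests for a small function and saves the draft for review. The openai Python package, the gpt-4o-mini model name, the slugify function, and the output file name are all assumptions made for this example, not a recommended toolchain.

```python
# Sketch of AI-assisted test generation: ask a model to draft pytest tests for
# a small function, then save them for the developer to review and run.
# Assumes: pip install openai, with OPENAI_API_KEY set in the environment.
from openai import OpenAI

SOURCE = '''
def slugify(title: str) -> str:
    """Lowercase a title and replace runs of non-alphanumerics with dashes."""
    import re
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
'''

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; any capable assistant would do
    messages=[
        {"role": "system",
         "content": "You write concise pytest modules. Return only Python code, no markdown."},
        {"role": "user",
         "content": "Write pytest tests for this function, including edge cases:\n" + SOURCE},
    ],
)

generated_tests = response.choices[0].message.content

# The developer still reads the draft before trusting it; the model only
# removes the typing, not the responsibility.
with open("test_slugify_generated.py", "w") as f:
    f.write(generated_tests)

print("Draft tests written; review and run with: pytest test_slugify_generated.py")
```

The specific API matters less than the workflow: the developer states intent, the model produces a draft, and the developer reviews and runs it, which keeps validation inside the engineering loop rather than in a separate phase.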

As a result, quality starts moving back into engineering.

Developers will increasingly be expected to think not only about making a feature work, but about its reliability, failure modes, regressions, and long-term behavior. With AI assistance, this becomes part of normal engineering work, not a separate phase handled by someone else.

At the same time, manual QA does not simply disappear — it changes direction.

A significant part of acceptance testing naturally moves closer to managers: mostly product managers, but also any manager responsible for outcomes. With AI support, these roles can validate user flows, behavior, and visual correctness far more independently, without relying on a separate execution-heavy QA function.

Many routine QA activities — running checklists, preparing releases, writing tickets, coordinating test passes — combine extremely well with AI and modern CI/CD pipelines. As legacy code is gradually refactored and systems move to more standardized architectures, releases increasingly become fully automated. This transition will take time, but the direction is clear.

So what happens to QA as a profession?

QA does not disappear, but it becomes much smaller and much more senior.

The execution-heavy parts of the role — manual testing and large amounts of repetitive automation — are the first to go. These tasks are the easiest to automate and the hardest to justify as a separate function.

What remains is a different role: fewer QA engineers, often one per team or even one shared across several teams, acting as quality controllers rather than test executors.

Their focus shifts to:

  • defining test and quality strategy,
  • identifying systemic and cross-team risks,
  • owning E2E and integration testing,
  • watching for degradation over time,
  • acting as an independent signal of system health.

In other words, QA moves closer to true Quality Assurance rather than testing.

For some QA professionals, this means moving deeper into engineering — toward broader development and system-focused roles. For others, it means moving closer to product and user experience. And for some, it may mean leaving IT altogether.

In recent years, QA had a very low barrier to entry and attracted many people into tech, often because of salaries and working conditions rather than a deep interest in engineering or product work. AI will likely trigger a correction. Some people will return to other professions — medicine, law, economics — but with new skills, new tools, and a very different perspective.

Quality does not disappear. The QA function changes. Ownership of quality spreads across engineers and managers. And a smaller number of strong QA engineers remains — not as executors, but as guardians of system quality. This is not a loss. It is a shift toward deeper responsibility and more mature software development.

T-shaped 2.0: how the depth of an engineer changes in the AI era

For many years, we were told: “Become a T-shaped specialist: broad knowledge and real depth in one area — frontend, backend, mobile, AI, and so on.”

This model is still alive, but its meaning is clearly changing under the pressure of AI and automation.

Before, the vertical line of the “T” usually meant: “I deeply know technology X — this framework, this stack, this language.” Today, this is becoming a fragile strategy. Tools based on generative AI can already handle a large part of routine technical work: writing standard CRUD code, building forms, preparing boilerplate code, and generating tests.

Video overview

If you’d rather check out the video or audio version of this material, take a look at the video on YouTube.

Where depth is moving

Depth is no longer about a specific programming language and framework. It is becoming about a specific class of problems and systems.

  • Not a “React developer”, but an engineer of complex interfaces: accessibility, performance, design systems, UX consistency.
  • Not just a “Spring backend developer”, but an engineer of high-load and reliable systems: transactions, consistency, queues, fault tolerance, and monitoring.
  • Not “a person who connects the GPT API”, but an AI product engineer: model choice, data quality, evaluation, security, AI-related UX.

This kind of depth is much harder to automate because it depends on system thinking, experience, and the ability to work with risk, not just on the ability to “write code correctly”.

The horizontal bar of the T becomes thicker

There is an interesting paradox: the more AI removes routine work, the higher the demand for breadth. Breadth is no longer just “I know a bit of frontend and a bit of backend.”

The modern horizontal bar includes:

  • Understanding the main layers of a system — from client to database and infrastructure.
  • Basic literacy in cloud platforms, CI/CD, and containers.
  • Product thinking: how a feature affects metrics and business, where real value is.
  • The ability to work with AI assistants as real tools, not as “magic autocomplete”.

The market increasingly looks for people who can speak the same language with several neighboring roles at once — product, design, analytics, DevOps, security.
Such people become the connective tissue of teams.

What is expected from us now

If you look at job descriptions and role discussions, a new set of expectations appears.

An engineer is expected to:

  • Understand the fundamentals broadly enough to navigate architecture, not only their own module.
  • Have 1–2 areas of real depth (types of problems, not a single tool).
  • Use AI effectively to speed up work, without losing control over solution quality.
  • Be able to relearn and redefine their “vertical” every few years, as technologies and domains change.

In essence, the T-shaped model evolves into “T-shaped plus”: a thick horizontal bar + one or two deep verticals in problem domains, not in specific frameworks.

What an engineer should prepare for

If we translate all this into a practical plan, it looks like this.

1. Thicken the horizontal bar:

  • Learn basic full-stack skills: at least be able to read and edit code in neighboring layers.
  • Understand the basics of cloud, CI/CD, monitoring, logging, and perimeter security.
  • Improve product thinking: how to measure value, read metrics, and talk to customers and product managers.

2. Choose 1–2 problem areas for depth:

  • Examples: “reliable distributed systems”, “complex interfaces and design systems”, “AI products”, “platform engineering and DevEx”, “security and compliance”, and so on.
  • Study not only tools, but principles: patterns, architectures, common failures, traps, and limits.

3. Learn to work with AI as a normal tool:

  • Learn to formulate tasks and verify results, not just “copy answers” (a sketch of such a verification step follows this plan).
  • Use AI for research, prototyping, refactoring, tests, and documentation.
  • Gradually build your own toolkit of assistants and automations around you.

4. Build a habit of changing your vertical:

  • Treat your current specialization not as a final identity, but as a temporary focus for the next 3–5 years.
  • Plan which neighboring domains you could move into with minimal friction.
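As one possible shape of the “verify results” habit from point 3, here is a sketch that applies an AI-proposed patch and keeps it only if the existing test suite still passes. The patch file name and the reliance on git and pytest are assumptions made for this example; the pattern, not the tooling, is the point.

```python
# Sketch of verifying an AI-suggested change instead of copying it blindly:
# apply the proposed patch, run the existing tests, and roll back on failure.
# Assumes a git repository with a pytest test suite.
import subprocess
import sys

def run(cmd: list[str]) -> int:
    """Run a command, streaming its output, and return the exit code."""
    print("$", " ".join(cmd))
    return subprocess.call(cmd)

def verify_ai_change(patch_file: str) -> bool:
    """Apply a proposed patch, run the tests, and roll back if they fail."""
    if run(["git", "apply", patch_file]) != 0:
        print("Patch does not apply cleanly; rejecting it.")
        return False
    if run(["pytest", "-q"]) != 0:
        print("Tests fail with the patch applied; rolling it back.")
        run(["git", "apply", "-R", patch_file])
        return False
    print("Tests pass; keeping the change for human review.")
    return True

if __name__ == "__main__":
    patch = sys.argv[1] if len(sys.argv) > 1 else "ai_change.patch"
    sys.exit(0 if verify_ai_change(patch) else 1)
```

A small harness like this turns “copy the answer” into “propose, check, and only then accept”, which is the difference between using AI as magic autocomplete and using it as a real tool.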

Conclusion

The T-shaped specialist is not disappearing, but the emphasis is changing.
Instead of narrow technical depth on top of a modest base, we now need a wide and strong foundation, on top of which one or two deep specializations by problem type are built.

AI does not cancel this model — it makes it more demanding.
The winner is not the one who “writes the best code in X”, but the one who understands systems better and knows how to learn.
