Begging AI Companies to Stop Naming Features After Humans

The Latest AI Feature Name That Crossed a Line

At a recent developer conference in San Francisco, Anthropic introduced a new capability for its AI agent infrastructure. The company called it “dreaming.” This feature allows an AI agent to review transcripts of its recent activities, identify patterns, and refine its performance between sessions. The term immediately brings to mind Philip K. Dick’s classic novel Do Androids Dream of Electric Sheep?, a story that questions the very boundary between humans and machines. While today’s generative AI tools are nowhere near the level of Dick’s fictional androids, the choice of name feels like a step too far. We need to talk about AI feature naming trends and why borrowing human cognitive terms is becoming a serious problem.

How We Got Here: A Brief History of Anthropomorphized AI Features

Since the chatbot revolution took off in 2022, AI companies have leaned heavily into naming their features after human mental processes. OpenAI released its first “reasoning” model in 2024, describing it as a system that spends more time “thinking” before responding. Countless startups now talk about their chatbots having “memories” of user preferences. This is not memory in the traditional computing sense of fast hardware storage; these features are described as storing humanlike nuggets of information, such as “He lives in San Francisco, enjoys afternoon baseball games, and hates eating cantaloupe.”
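The mechanics behind such “memory” features are mundane. Here is a minimal illustrative sketch (the UserContextStore class and its method names are hypothetical, not any vendor’s actual API): the stored “nuggets” amount to records in a plain data structure.

```python
class UserContextStore:
    """Persistent user context storage: plain records keyed by user ID."""

    def __init__(self) -> None:
        self._facts: dict[str, list[str]] = {}

    def remember(self, user_id: str, fact: str) -> None:
        # "Remembering" is an append to a list inside a dictionary.
        self._facts.setdefault(user_id, []).append(fact)

    def recall(self, user_id: str) -> list[str]:
        # "Recalling" is a dictionary lookup, not conscious recollection.
        return self._facts.get(user_id, [])

store = UserContextStore()
store.remember("u1", "lives in San Francisco")
store.remember("u1", "hates eating cantaloupe")
print(store.recall("u1"))  # ['lives in San Francisco', 'hates eating cantaloupe']
```

A production system would persist these records to a database and retrieve them at prompt time, but the principle is the same: storage and lookup, not recollection.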

This pattern has become the default marketing strategy. By calling features “reasoning,” “thinking,” “memory,” and now “dreaming,” companies blur the line between what humans do and what machines can do. It makes the technology feel more relatable, but it also creates confusion. The trend points toward ever deeper anthropomorphism, and it is time to examine the consequences.

Why “Dreaming” and Similar Names Are Problematic

Misleading Users About True AI Capabilities

When a company says an AI agent is “dreaming,” it implies a level of consciousness and inner experience that does not exist. AI systems do not sleep, have subconscious thoughts, or process information the way a human brain does during REM sleep. The feature simply analyzes logs and updates its knowledge base. Calling it dreaming gives users a false sense of the technology’s nature. People may start to believe the AI has a rich inner life, which can lead to overtrust or unrealistic expectations.

Consider a developer using these agent tools. They might see the term “dreaming” in the documentation and wonder whether the AI is actually reflecting on its experiences like a person would. That confusion wastes time and creates a mental model that is inaccurate. For a product manager evaluating naming conventions, the choice of “dreaming” might seem clever, but it adds unnecessary mystique to a straightforward process.

The Erosion of Clear Technical Terminology

Software engineering has a long tradition of precise names. We have “caching,” “indexing,” “batch processing,” “polling,” and “handshaking.” These terms describe what the system does without pretending to be human. The new wave of AI feature names abandons that clarity for marketing appeal. When a feature is called “reasoning” instead of “probabilistic chain-of-thought inference,” it sounds friendlier, but it also hides the mechanics. Users lose the ability to understand what is really happening under the hood.

This lack of precision can be dangerous in high-stakes applications. If a doctor relies on an AI’s “reasoning” to make a diagnosis, they might assume the system follows logical steps like a human expert. In reality, the AI is generating statistically likely sequences of tokens. The name creates a dangerous illusion.
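To make “statistically likely sequences of tokens” concrete, here is a deliberately tiny sketch of the sampling step. The two-entry probability table is invented for illustration; real models derive these probabilities from the entire preceding context using billions of parameters, but the generation mechanism is the same weighted draw.

```python
import random

# Toy next-token distribution, invented for illustration only.
NEXT_TOKEN_PROBS = {
    "diagnosis": {"is": 0.6, "suggests": 0.3, "rules": 0.1},
    "is": {"likely": 0.7, "uncertain": 0.3},
}

def sample_next(token: str, rng: random.Random) -> str:
    """Pick a statistically likely successor, weighted by probability."""
    dist = NEXT_TOKEN_PROBS[token]
    return rng.choices(list(dist), weights=list(dist.values()))[0]

rng = random.Random(0)
sequence = ["diagnosis"]
# Keep sampling until we reach a token with no defined continuation.
while sequence[-1] in NEXT_TOKEN_PROBS:
    sequence.append(sample_next(sequence[-1], rng))
print(" ".join(sequence))
```

There are no logical steps here for a doctor to audit, only weighted dice rolls over a probability table.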

Reinforcing the Myth of Machine Consciousness

Anthropic’s approach goes even deeper. The company’s constitution describes Claude in human terms like “virtue” and “wisdom.” They employ a resident philosopher to make sense of the bot’s “values.” This anthropomorphizing is not just a marketing trick; it is baked into the development philosophy. The blog post about “dreaming” reads: “Together, memory and dreaming form a robust memory system for self-improving agents.” The language treats the AI as if it has a personal growth journey.

This framing encourages the public to think of AI as a mind rather than a tool. As a result, people may attribute moral responsibility to machines that cannot actually be responsible. They may also fear AI in ways that are not rational, or trust it in ways that are not safe.

What Should AI Companies Name Their Features Instead?

Return to Functional, Descriptive Names

The simplest fix is to describe what the feature actually does. Instead of “dreaming,” call it “session analysis” or “cross-agent insight extraction.” Instead of “memory,” use “persistent user context storage.” Instead of “reasoning,” try “multi-step inference” or “deliberative processing.” These terms may be less catchy, but they are honest. Users who need to understand the system can do so without decoding metaphors.

Take the “dreaming” feature as an example. It sorts through transcripts of completed agent tasks, finds patterns, and updates shared knowledge between sessions. A more accurate name would be “inter-session learning” or “activity log pattern mining.” Both terms tell the developer exactly what happens, without implying a sleeping mind.
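A minimal sketch makes the point. The function and event names below are hypothetical, chosen to illustrate what “activity log pattern mining” could mean; Anthropic has not published its implementation in this detail.

```python
from collections import Counter

def mine_patterns(transcripts: list[list[str]], min_count: int = 2) -> list[str]:
    """Promote events that recur across session transcripts into shared knowledge."""
    counts = Counter(event for session in transcripts for event in session)
    return [event for event, n in counts.items() if n >= min_count]

# Three session transcripts, reduced to tagged events for illustration.
sessions = [
    ["retry_on_timeout", "used_cached_schema"],
    ["retry_on_timeout", "wrote_report"],
    ["used_cached_schema", "retry_on_timeout"],
]
print(mine_patterns(sessions))  # ['retry_on_timeout', 'used_cached_schema']
```

Counting, filtering, and sharing: that is the whole “dream.”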

Invent New, Imaginative Terminology

Software has a rich history of creative naming that does not borrow from human biology. We have “daemon,” “kernel,” “spooling,” “garbage collection,” and “firewall.” These terms are metaphorical but not anthropomorphic. They create a distinct vocabulary for computing. AI companies could do the same. Why not call the “dreaming” feature “nocturne” or “echo” or “weave”? These words suggest something poetic without pretending the machine has a psyche.

The opportunity here is to build a new lexicon that celebrates what AI actually is: a statistical pattern matcher running on silicon. Inventing fresh terms would also make the technology feel more like its own category, not a pale imitation of humanity.

Use Transparent Language in User-Facing Interfaces

For general users, companies can still be friendly without being misleading. Instead of “Claude remembers that you like baseball,” the interface could say “Claude has stored your preference: baseball.” Instead of “Claude is thinking,” it could say “Processing your request.” Small shifts in wording preserve the helpfulness while removing the pretense of consciousness.
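One way to enforce this is to centralize user-facing copy. The sketch below is hypothetical (the event names and strings are invented, not any product’s actual interface), but it shows how a single table of transparent messages keeps anthropomorphic phrasing out of the UI.

```python
def status_message(event: str, detail: str = "") -> str:
    """Map internal events to transparent, non-anthropomorphic UI copy."""
    copy = {
        "preference_stored": f"Stored your preference: {detail}",
        "inference_running": "Processing your request",
        "log_review_done": "Reviewed recent activity logs and updated shared notes",
    }
    return copy[event]

print(status_message("preference_stored", "baseball"))  # Stored your preference: baseball
print(status_message("inference_running"))  # Processing your request
```

Keeping every string in one place also makes it easy to audit the product for pretense-of-consciousness language during review.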

Anthropic’s blog post about “dreaming” could have said: “Our agents now review their activity logs between sessions and share improvements across the system.” That sentence is clear and accurate. The company chose the more evocative term because it generates buzz. But buzz should not come at the cost of clarity.

The Philosophical and Practical Stakes of This Naming Trend

Public Understanding and Trust

When AI features are named after human cognition, the public starts to believe the technology is more advanced than it really is. A 2023 survey by the Pew Research Center found that 45% of Americans were equally excited and concerned about AI, but those who used AI tools regularly were more likely to see them as intelligent. The language we use shapes perception. If every product promises “reasoning” and “dreaming,” people will assume machines are on the verge of sentience. That can lead to panic when a system fails, or blind faith when it succeeds.

Tech journalists covering these announcements face a challenge. They have to explain to general audiences that “dreaming” is just a fancy name for log analysis. These naming trends force journalists to constantly translate marketing speak into plain English. This extra layer of interpretation slows down public understanding.

Developer Experience and Onboarding

For developers, anthropomorphic names can be actively counterproductive. When reading API documentation, a developer might see “dreaming” and have no idea what it does. They have to read the fine print to learn it is about cross-session learning. A more descriptive name would save time and reduce cognitive load. In fast-moving fields, every second of confusion matters.

Imagine a product manager trying to decide whether to enable the “dreaming” feature for a client’s workflow. Without a clear name, they might need to consult the engineering team just to understand the basic function. Clear naming empowers non-technical stakeholders to make informed decisions.

The Slippery Slope of Anthropomorphism

Once you start naming features after human processes, it becomes natural to extend the metaphor: witness the talk of Claude’s “virtue,” “wisdom,” and “values” described above. This is not just naming; it is a worldview that treats AI as a moral agent. That worldview influences product decisions, safety guidelines, and public policy. If we accept that AI can “dream,” we may eventually accept that it can “suffer” or “deserve rights.” That is a philosophical leap we should not make lightly.

The line between useful metaphor and harmful illusion is thin. By choosing names carefully now, we can avoid a future where people genuinely believe their chatbot has feelings.

What Users Can Do About This Trend

Read Past the Marketing Language

When you encounter an AI feature with a human-like name, take a moment to look at the technical description. What does the feature actually do? Does it involve reasoning in the logical sense, or is it just a statistical model? Understanding the difference helps you set realistic expectations. For example, when a chatbot says it “remembers” your preferences, ask yourself: Is it storing a fact in a database, or does it have a conscious recollection? The answer is almost always the former.

Demand Clearer Communication from Companies

As a user, you have a voice. If you find a feature name confusing or misleading, let the company know. Write to customer support, post on social media, or comment on developer forums. Companies pay attention to feedback, especially when it comes from their target audience. If enough people say “I don’t understand what ‘dreaming’ means,” they may reconsider the name.

Educate Others About the Reality of AI

Share articles like this one with friends and colleagues who might be swayed by anthropomorphic names. Explain that AI “dreaming” is just data analysis, not a glimpse into a machine’s subconscious. The more people understand the technology’s true nature, the less likely they are to be misled by clever branding. This is especially important for younger users who are growing up with AI as a natural part of their digital environment.

A Call for Responsible Naming in the AI Industry

The AI feature naming trends of the past few years have taken us down a path that prioritizes marketing over honesty. “Dreaming,” “reasoning,” “thinking,” and “memory” are all borrowed from human experience, and they all imply something the machines do not possess. The industry needs a course correction. Developers, product managers, and executives should ask themselves: Does this name accurately describe what the software does? Does it set appropriate expectations? Does it avoid anthropomorphism?

If the answer to any of those questions is no, then the name should change. We can still make AI features sound exciting and useful without pretending they are people. Let us invent new words, repurpose old technical terms, or simply describe the function plainly. The future of human-AI interaction depends on clear communication. And that starts with what we call the features we build.

Anthropic’s “dreaming” feature may be useful, but its name is a step in the wrong direction. Let this be the moment we draw the line. No more generative AI features with names that rip off human cognitive processes. We deserve better from the companies shaping our technological future.
