Wikipedia Deep Dive

Software agent

Based on Wikipedia: Software agent

The Programs That Think for Themselves

Imagine a program that doesn't wait for you to click a button. It watches. It waits. When conditions are right, it springs into action—sending an email, buying a product, or alerting you to danger. No human intervention required.

This is a software agent, and whether you realize it or not, you've probably interacted with dozens of them today.

The word "agent" comes from the Latin agere, meaning "to do." An agent acts on your behalf, much like a real estate agent or a talent agent in the physical world. But here's what makes a software agent different from the programs you're used to: it decides for itself whether to act, and how. Your word processor waits for you to type. A software agent might be scanning your inbox right now, deciding which emails deserve your attention and which can wait.

What Makes an Agent Different from Ordinary Software

Not every program qualifies as an agent. Your calculator app isn't an agent—it sits dormant until you punch in numbers, then delivers results with no independent judgment. But the distinction goes deeper than just "runs automatically."

Researchers have identified four qualities that separate agents from ordinary programs:

  • Persistence — The code runs continuously, not just when invoked. It's always on, always watching, deciding for itself when to act.
  • Autonomy — Agents select their own tasks, prioritize goals, and make decisions without waiting for human approval.
  • Social ability — Agents can communicate and coordinate with other programs, systems, or even other agents.
  • Reactivity — Agents perceive their environment and respond appropriately to changes.

Think of the difference this way: a traditional program is like a vending machine. You push a button, you get a result. An agent is more like a guard dog. It patrols, observes, makes judgments, and only barks when something demands attention.
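
To make the contrast concrete, here is a minimal sketch in Python of that guard-dog pattern: a perceive-decide-act loop that runs continuously instead of waiting to be called. The read_sensor function, the threshold, and the polling interval are all hypothetical stand-ins for whatever environment a real agent would monitor.

    import random
    import time

    def read_sensor():
        # Hypothetical stand-in for perceiving the environment
        # (an inbox, a price feed, a network monitor, ...).
        return random.random()

    def run_agent(threshold=0.95, poll_seconds=1.0, max_cycles=20):
        """A minimal agent loop: persistent, reactive, autonomous."""
        for _ in range(max_cycles):      # persistence (bounded here for demo)
            reading = read_sensor()      # reactivity: perceive the environment
            if reading > threshold:      # autonomy: the agent decides to act
                print(f"ALERT: reading {reading:.2f} exceeds {threshold}")
            time.sleep(poll_seconds)

    run_agent(poll_seconds=0.1)

The point is the loop itself: nothing outside the agent triggers the alert; the agent's own cycle decides when something demands attention.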

The Family Tree of Software Agents

Not all agents are created equal. The field has spawned an entire taxonomy of specialized types, each optimized for different purposes.

Intelligent agents incorporate aspects of artificial intelligence—reasoning, learning, adapting to new situations. They don't just follow rules; they figure things out.

Autonomous agents can modify their own methods. If the path to their goal becomes blocked, they find another way. They're not stuck following a script.

Distributed agents spread themselves across multiple computers. Picture a hive mind, with different parts running on different machines but working toward a common purpose.

Multi-agent systems take this further—multiple agents collaborating to achieve objectives that no single agent could accomplish alone. Like a team of specialists, each contributes unique capabilities.

Mobile agents can actually relocate themselves. They might start running on your laptop, then migrate to a server in another country to get closer to the data they need. The same code, a different physical location.

The Bot Connection

You've probably heard the term "bot," short for robot. Many software agents are colloquially called bots—chatbots, shopping bots, game bots. The terminology overlaps because the concepts do too. But not all bots are sophisticated agents, and not all agents are called bots. A spam filter making intelligent decisions about your email is an agent. A simple script that tweets at fixed intervals is more bot than agent—it lacks the autonomy and reactivity that define true agency.

Some agents even have bodies. When software intelligence is paired with physical hardware—like Honda's humanoid robot ASIMO or Apple's Siri running on your phone—the agent becomes embodied. It can see through cameras, hear through microphones, and interact with the physical world through motors and speakers.

The Intellectual Origins

The concept traces back to 1977 and a computer scientist named Carl Hewitt. His "Actor Model" described self-contained, interactive objects that could execute concurrently while maintaining internal state and communication capability. If that sounds abstract, think of it as the philosophical blueprint for programs that could think and act independently.

Software agents evolved from a field called Distributed Artificial Intelligence, which itself branched from parallel computing and distributed problem solving. The DNA of agents contains genes from decades of research into how to make computers work together and think independently.

Then came 1987 and John Sculley's "Knowledge Navigator" video. Sculley, then CEO of Apple, presented a vision of the future where a professor interacts with a sophisticated digital assistant displayed on a tablet-like device. The assistant understands natural language, proactively gathers information, and manages schedules. It was science fiction at the time—but it painted a picture that researchers would spend decades trying to realize.

Early attempts at building this vision failed spectacularly. Engineers tried top-down approaches, attempting to build complete intelligent assistants from scratch. It didn't work. The field only gained traction in the 1990s when developers switched to bottom-up strategies—building simple, specialized agents first, then gradually increasing their sophistication. The rise of the World Wide Web provided perfect hunting grounds for these early agents: search engines, web crawlers, and shopping bots.

Agents in the Wild

Let's get concrete. What are software agents actually doing right now?

Shopping Bots and Buyer Agents

These agents traverse networks—primarily the internet—hunting for goods and services. They excel at commodity products: books, electronics, airline tickets. Anything where specifications are standard and price is the main differentiator. The agent can compare prices across dozens of vendors in seconds, something that would take a human hours.
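
A toy sketch of that core comparison step, with hard-coded quotes standing in for the live network queries a real buyer agent would make (the vendor names and prices are invented):

    # Toy buyer agent: compare quotes for a commodity product.
    # A real agent would fetch these prices over the network.
    quotes = {"vendor_a": 24.99, "vendor_b": 22.50, "vendor_c": 23.75}

    def best_offer(quotes):
        vendor = min(quotes, key=quotes.get)
        return vendor, quotes[vendor]

    vendor, price = best_offer(quotes)
    print(f"Best offer: {vendor} at ${price:.2f}")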

By 2025, these buyer agents have evolved into something more ambitious. Advanced AI agents now handle what's called "agentic commerce"—they don't just find products, they autonomously discover, compare, negotiate, and complete transactions. Your agent might buy your groceries, renew your subscriptions, and grab concert tickets the moment they go on sale, all while you sleep.

Personal Agents

These work in your corner. They check your email and sort it by importance. They fill out web forms automatically, remembering your address so you don't have to type it for the hundredth time. They scan news sources and assemble customized reports based on topics you care about. They patrol job boards and submit your resume to positions matching your criteria.

Some personal agents even engage in conversation, discussing topics ranging from sports to your deepest fears. Whether that's comforting or unsettling probably depends on your perspective.

Monitoring and Surveillance Agents

NASA's Jet Propulsion Laboratory runs agents that monitor inventory levels, plan equipment orders to minimize costs, and manage food storage facilities. These aren't glamorous applications, but they involve exactly the kind of repetitive, detail-oriented work that agents excel at.

Other monitoring agents watch for stock market manipulation, track competitor pricing, or keep tabs on complex computer networks—knowing the configuration of every connected machine and alerting humans when something changes unexpectedly.

Military applications take this further. Organizations of agents coordinate tactical decision-making, monitoring ammunition, weapons, and transport platforms. Higher-level agents set goals; lower-level agents pursue those goals while managing scarce resources. It's autonomous warfare coordination, for better or worse.

Data Mining Agents

These operate in data warehouses—massive repositories that aggregate information from many different sources. The agent's job is finding patterns humans would miss. Trends in customer behavior. Shifts in market conditions. Early warning signs of problems.

Classification is their bread and butter: scanning vast quantities of information and sorting it into meaningful categories. A data mining agent might detect a decline in construction industry activity before it becomes obvious, giving companies precious lead time to adjust hiring or equipment purchases.
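
As a sketch of that sorting idea, here is a tiny keyword-based classifier; real data mining agents use statistical models trained on far larger datasets, and the categories and keywords below are invented for illustration.

    # Toy classifier: sort free-text records into categories by keyword overlap.
    CATEGORIES = {
        "construction": {"permit", "housing", "contractor"},
        "retail": {"sales", "inventory", "storefront"},
    }

    def classify(record):
        words = set(record.lower().split())
        scores = {cat: len(words & kws) for cat, kws in CATEGORIES.items()}
        best = max(scores, key=scores.get)
        return best if scores[best] > 0 else "uncategorized"

    print(classify("Housing permit applications fell sharply this quarter"))
    # -> construction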

Security Agents

Some of the most consequential agents today work in cybersecurity. Data Loss Prevention agents watch what users do on computers and networks, comparing actions against policies and intervening when necessary—allowing, alerting, or blocking activity in real time.
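
That allow/alert/block choice can be sketched as a small ordered policy table; actual DLP products evaluate far richer context, and the rules below are invented.

    # Toy DLP decision: match a user action against policy rules, first hit wins.
    POLICY = [
        (lambda a: a["type"] == "upload" and a["contains_pii"], "block"),
        (lambda a: a["type"] == "upload", "alert"),
    ]

    def decide(action):
        for matches, verdict in POLICY:
            if matches(action):
                return verdict
        return "allow"

    print(decide({"type": "upload", "contains_pii": True}))   # block
    print(decide({"type": "upload", "contains_pii": False}))  # alert
    print(decide({"type": "read", "contains_pii": False}))    # allow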

Endpoint Detection and Response agents monitor every activity on a computer, hunting for signs of malicious behavior. Cloud Access Security Brokers examine traffic flowing to cloud applications, acting as gatekeepers between users and the services they access.

These agents must be fast, accurate, and comprehensive. A security agent that misses an attack is worse than useless—it provides false confidence.

The Human Cost of Delegation

Here's where things get complicated. Agents offer obvious benefits: they automate tedious tasks, work around the clock, and never get bored or distracted. People generally hate administrative work, and offloading it to agents increases job satisfaction while freeing humans for more meaningful tasks.

But there are darker implications.

Trust affliction. Some people can't bring themselves to fully delegate important tasks to software. They hover, double-check, second-guess. The anxiety of not being in control may outweigh any time savings.

Skills erosion. When you stop doing something, you forget how. People who rely entirely on GPS navigation lose their sense of direction. People who rely on agents to find and filter information may lose the ability to do it themselves. Information literacy—knowing how to find, evaluate, and use information—atrophies without practice.

Privacy attrition. For an agent to act on your behalf effectively, it needs to know you deeply. Your preferences, your habits, your relationships, your secrets. Every effective personal agent is also a detailed dossier about its owner, with all the privacy risks that implies.

Social detachment. When agents handle more of our communication—drafting emails, scheduling meetings, managing relationships—we risk losing genuine human contact. We start seeing the world through our agents' eyes, interacting with other people's agents rather than the people themselves.

These aren't reasons to reject agent technology. But they're reasons to think carefully about what we delegate and what we keep for ourselves.

How Agents Think

The internal architecture of an agent typically involves several interacting components.

The access methods let the agent perceive its environment. It might subscribe to news feeds, query databases, or send out spiders to crawl the web. The content retrieved is usually pre-filtered—you've already selected which newsfeeds to monitor or which databases to search.

The machinery processes this content. It might extract keywords, identify patterns, or parse natural language. This abstracted content becomes an "event" that the agent must decide how to handle.

The reasoning engine makes the decisions. It combines new events with existing rules and knowledge, looking for matches or triggers. If the agent finds something significant, it might perform a deeper search or take action.

The learning machinery allows the agent to improve over time. If you respond quickly to certain types of notifications, the agent increases its weighting for similar events. If you ignore alerts, it learns to deprioritize them. The agent adapts to your behavior.

All of this is governed by security functions that verify actions before executing them. An agent acting on your behalf needs your authority—and that authority must be controlled to prevent misuse.
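
Pulling those components together, here is one possible shape for the whole pipeline. Every name and rule below is a hypothetical stand-in; real agents differ widely in the details.

    # Sketch of the pipeline: perceive -> event -> reason -> act, plus learning
    # and a security gate. All names and rules here are invented.
    RULES = {"urgent": "notify_owner"}            # reasoning: event kind -> action
    weights = {"urgent": 1.0, "routine": 0.2}     # learning: per-kind weighting
    AUTHORIZED = {"notify_owner"}                 # security: permitted actions

    def perceive(raw_text):
        # Access methods + machinery: reduce raw content to an abstract event.
        kind = "urgent" if "outage" in raw_text else "routine"
        return {"kind": kind, "text": raw_text}

    def reason(event):
        # Reasoning engine: fire a rule only if the event's weight is high enough.
        if weights.get(event["kind"], 0) > 0.5:
            return RULES.get(event["kind"])
        return None

    def learn(event, owner_responded):
        # Learning machinery: reinforce event kinds the owner responds to.
        delta = 0.1 if owner_responded else -0.1
        weights[event["kind"]] = max(0.0, weights[event["kind"]] + delta)

    def act(action):
        # Security functions: verify authority before executing anything.
        print(f"executing: {action}" if action in AUTHORIZED
              else f"refused unauthorized action: {action}")

    event = perceive("Server outage reported in region 2")
    action = reason(event)
    if action:
        act(action)
    learn(event, owner_responded=True)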

The Challenge of Building Agent Systems

Creating effective agent systems raises thorny engineering questions.

How do you schedule tasks when multiple agents might want to act simultaneously? How do you synchronize them so they don't work at cross-purposes?

How should agents prioritize? When everything seems important, what comes first?

How can agents collaborate without stepping on each other's toes? How can they recruit resources when they need help?

How do you move an agent from one environment to another while preserving its internal state? If an agent migrates to a new server, does it remember what it learned on the old one?
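
One common answer is to serialize the agent's state before the move and restore it on arrival. A minimal sketch using Python's standard pickle module (the state fields are invented):

    import pickle

    # On the old host: freeze the agent's memory into bytes.
    state = {"learned_weights": {"urgent": 1.1}, "visited": ["host_a"]}
    snapshot = pickle.dumps(state)

    # ...transfer the snapshot bytes to the new machine...

    # On the new host: thaw the memory and resume where the agent left off.
    restored = pickle.loads(snapshot)
    restored["visited"].append("host_b")
    print(restored)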

How should agents communicate? They need shared semantics—agreed-upon meanings for the data they exchange. Without common understanding, agents talking to each other are just making noise.
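
In practice this usually means agreeing on a message schema before any agent speaks. Agent communication languages such as FIPA ACL formalize the idea with performatives like "inform" and "request"; the stripped-down sketch below borrows only that flavor, and the field names are invented.

    import json

    AGREED_PERFORMATIVES = {"inform", "request"}  # the shared vocabulary

    def make_message(performative, sender, receiver, content):
        # Both sides have agreed in advance what these fields mean.
        assert performative in AGREED_PERFORMATIVES
        return json.dumps({"performative": performative, "sender": sender,
                           "receiver": receiver, "content": content})

    msg = make_message("inform", "scheduler", "planner",
                       {"task": "reorder", "status": "done"})
    print(json.loads(msg)["performative"])  # inform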

What hierarchies work best? Perhaps task-execution agents should report to scheduling agents, which report to planning agents. Or perhaps flat organizations work better. The optimal structure likely depends on the problem.

These questions don't have universal answers. Each agent system must solve them in ways appropriate to its domain.

From Objects to Agents

There's a philosophical distinction worth noting. In traditional object-oriented programming, you define software entities by their methods and attributes—what they can do and what data they contain. An agent, by contrast, is defined by its behavior. Not what it can do, but what it does do. How it perceives, how it reasons, how it acts.

This shift in perspective matters. An object is a tool waiting to be used. An agent is a colleague doing its job. The relationship is fundamentally different.

Objects are passive; agents are active. Objects respond to requests; agents initiate action. Objects have no goals; agents pursue objectives. The philosophical distance between these concepts is short, but the practical implications are enormous.
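
The contrast shows up directly in code. In the hypothetical sketch below, the passive object exposes a capability and waits; the agent owns a goal and uses that capability on its own initiative.

    # A passive object: a tool that waits to be used.
    class Thermometer:
        def read(self):
            return 18.5  # hard-coded reading for the sketch

    # A (very simple) active agent: it has a goal and initiates action.
    class HeatingAgent:
        def __init__(self, sensor, target=20.0):
            self.sensor, self.target = sensor, target

        def step(self):
            reading = self.sensor.read()     # perceive
            if reading < self.target:        # compare against its own goal
                print("turning heating on")  # initiate action, unprompted
            else:
                print("goal satisfied, doing nothing")

    HeatingAgent(Thermometer()).step()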

Looking Forward

The trajectory is clear: agents are becoming more capable, more autonomous, and more deeply embedded in daily life. The shopping bots of the 1990s have evolved into commerce agents that handle entire purchasing workflows. The simple email filters of the 2000s have become sophisticated assistants that manage relationships and schedules.

As artificial intelligence advances, agents will make more complex decisions with greater confidence. They'll negotiate on our behalf, represent our interests, and act in situations where we lack the time or expertise to act ourselves.

But this raises a question that goes beyond engineering: how much agency are we willing to surrender to our agents? Every task we delegate is a decision we no longer make. Every choice we automate is a muscle we no longer exercise.

Software agents are, in the end, tools for extending human capability. Like all powerful tools, they can liberate us from drudgery or gradually diminish us, depending on how wisely we use them. The technology will continue advancing whether we think carefully about these questions or not.

Better, then, to think carefully.

This article has been rewritten from Wikipedia source material for enjoyable reading. Content may have been condensed, restructured, or simplified.