Government Performance and Results Act
Based on Wikipedia: Government Performance and Results Act
The Government's Report Card
Imagine running a business where you never tracked whether your products actually worked, where you spent billions of dollars annually with no clear way to measure success or failure. That was essentially how the United States federal government operated for most of its history. Agencies received money, did things, and hoped for the best. Whether those things accomplished anything meaningful? Often, nobody really knew.
The Government Performance and Results Act changed that.
Signed into law by President Bill Clinton on August 3, 1993, this legislation introduced something surprisingly radical to federal agencies: accountability through measurement. For the first time, government programs would be required to set specific goals, track their progress, and report honestly on whether they succeeded or failed.
Why This Mattered
The basic idea seems almost comically obvious. Of course you should know if your programs are working. Of course you should set goals before spending taxpayer money. But the federal government is a sprawling organism with roughly two million civilian employees, an annual budget exceeding four trillion dollars, and hundreds of agencies doing everything from managing national parks to regulating nuclear power plants. Getting this behemoth to track its own performance was, in practice, revolutionary.
Before the Government Performance and Results Act—often abbreviated as GPRA, pronounced "gip-ra" by Washington insiders—there had been several attempts to bring private-sector accountability to government operations. In the 1960s, the Pentagon introduced something called the Planning, Programming, and Budgeting System, which tried to link spending decisions to measurable outcomes. It spread briefly to civilian agencies before collapsing under its own complexity.
Then came Zero-Based Budgeting in the 1970s, championed by President Jimmy Carter. The idea was elegant: instead of assuming last year's budget was a starting point, agencies would justify every dollar from scratch each year. In theory, this would eliminate wasteful programs that had outlived their usefulness. In practice, it generated mountains of paperwork and changed very little.
Total Quality Management had its moment in the 1980s and early 1990s, borrowing techniques from Japanese manufacturing to improve government processes. Like its predecessors, it generated enthusiasm, consumed resources, and ultimately faded away.
The GPRA succeeded where these earlier efforts failed partly because it was simpler. Rather than reimagining the entire budget process, it focused on three basic requirements: tell us what you're trying to accomplish, tell us how you'll know if you succeeded, and tell us what actually happened.
The Mechanics of Measurement
Under the original 1993 law, federal agencies had to produce three types of documents.
First, strategic plans. Every agency must create a five-year strategic plan containing a mission statement and long-term goals. These aren't vague aspirations like "improve public health" but specific, results-oriented objectives that can actually be measured. The Environmental Protection Agency, for instance, can't simply promise to protect the environment—it must specify what outcomes it's pursuing and how it will track progress toward them.
Second, annual performance plans. Each fiscal year—which in the federal government runs from October 1 through September 30—agencies must establish concrete performance goals. These plans explain not just what the agency hopes to achieve but how it intends to get there and how anyone watching can verify whether it succeeded.
Third, annual performance reports. This is the accountability moment. Agencies must review whether they hit their targets. If they fell short, they must explain why and describe what they'll do differently. No hiding from failure. No pretending that falling short of goals was actually some form of success.
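The plan-then-report cycle can be sketched as a tiny data model: a goal with a measurable target, an end-of-year actual, and a report that flags every shortfall as requiring an explanation. This is an illustrative sketch only; the names and metrics are invented, not drawn from any real agency system.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PerformanceGoal:
    """One measurable objective from an annual performance plan (illustrative)."""
    objective: str
    target: float                   # the level the agency committed to
    actual: Optional[float] = None  # filled in at fiscal year's end

    def met(self) -> bool:
        return self.actual is not None and self.actual >= self.target

def performance_report(goals: list) -> list:
    """Summarize each goal the way an annual report must: hit or missed, no hiding."""
    lines = []
    for g in goals:
        if g.met():
            lines.append(f"MET: {g.objective} (target {g.target}, actual {g.actual})")
        else:
            lines.append(f"MISSED: {g.objective} (target {g.target}, actual {g.actual})"
                         " -- explanation and corrective plan required")
    return lines

goals = [
    PerformanceGoal("Process disability claims within 30 days (percent)", target=90, actual=93),
    PerformanceGoal("Inspect high-risk facilities (percent)", target=95, actual=88),
]
for line in performance_report(goals):
    print(line)
```

The point of the sketch is the asymmetry: a met goal is a one-line fact, while a missed goal carries an obligation to explain and adjust.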
The Office of Management and Budget, which sits within the Executive Office of the President and serves as the federal government's chief fiscal overseer, compiles all this information into an annual government-wide performance report. This document accompanies the President's budget request to Congress, theoretically connecting spending decisions to actual results.
The Gap Between Legislation and Implementation
Despite being signed in 1993, the GPRA didn't actually take effect until 1999. This six-year gap wasn't bureaucratic laziness—it was intentional. The law recognized that measuring government performance was genuinely difficult and that agencies would need time to develop the systems, collect baseline data, and train personnel to make meaningful measurement possible.
This patience proved wise. Measuring government effectiveness is fundamentally different from measuring business performance. A company can track profits, market share, and customer satisfaction. But what's the equivalent for the National Weather Service? For the Federal Bureau of Investigation? For the Centers for Disease Control and Prevention?
Consider the challenge facing the Department of Homeland Security. Should it measure the number of terrorist attacks prevented? But how do you count attacks that never happened partly because of your efforts and partly because no one was planning them anyway? Should you measure arrests? Convictions? Public confidence surveys? Each metric captures something real but potentially encourages the wrong behavior. An agency rewarded for arrests might make arrests that don't hold up in court. One rewarded for convictions might avoid difficult cases.
These measurement challenges don't disappear just because a law requires measurement. Agencies learned to game their metrics, choosing goals they could easily achieve rather than ones that mattered most. They discovered the observer effect familiar to physicists: the act of measuring something changes the thing being measured.
Modernization for a New Era
By 2010, the original GPRA had been operating for over a decade, and its limitations had become clear. President Barack Obama signed the GPRA Modernization Act on January 4, 2011, updating the framework for a more connected, data-driven era.
The modernization act introduced several significant changes. Agencies now had to identify "priority goals"—a smaller set of high-impact objectives receiving focused attention. This addressed a common criticism of the original law: that agencies produced exhaustive documentation covering everything, which meant nothing stood out as truly important.
The updated law also required agencies to publish their plans and reports in machine-readable formats. This might sound like a technical detail, but it mattered enormously. Previously, performance information was buried in PDF documents that humans could read but computers couldn't easily analyze. Machine-readable data meant that journalists, researchers, advocacy groups, and ordinary citizens could download government performance information and examine it themselves. Transparency became practical, not just theoretical.
One specific format encouraged by the modernization act is called Strategy Markup Language, or StratML. This is essentially a standardized way of describing organizational strategies so that computers can compare and analyze them across different agencies. It's the kind of infrastructure that makes government-wide performance analysis possible.
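As a sketch of what "machine-readable" buys you: the snippet below builds a minimal strategy document in a StratML-flavored XML shape, then reads the goals back out with a standard parser. The element names are simplified stand-ins for illustration, not the actual StratML schema.

```python
import xml.etree.ElementTree as ET

# Build a minimal strategy document. Element names are simplified
# illustrations of the idea, not the real StratML schema.
plan = ET.Element("StrategicPlan")
ET.SubElement(plan, "Name").text = "Example Agency Strategic Plan"
ET.SubElement(plan, "MissionStatement").text = "Deliver services to the public efficiently."
goal = ET.SubElement(plan, "Goal")
ET.SubElement(goal, "Description").text = "Cut average service wait times by 20 percent."

xml_text = ET.tostring(plan, encoding="unicode")

# Because the format is structured, any program -- not just a human
# reader paging through a PDF -- can pull out every goal and compare
# plans across agencies.
parsed = ET.fromstring(xml_text)
goals = [g.findtext("Description") for g in parsed.iter("Goal")]
print(goals)
```

A PDF locks the same information into page layout; a structured format like this is what lets journalists and researchers analyze performance plans in bulk.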
The modernization effort was led by Jeffrey Zients, then the Office of Management and Budget's Deputy Director for Management and the federal government's first Chief Performance Officer, and by Shelley Metzenbaum, OMB's Associate Director for Performance and Personnel Management, who had spent her career studying how to make government performance measurement actually useful rather than merely compliant.
External Factors and Honest Limitations
One requirement added by the modernization act deserves special attention: agencies must identify "key factors external to the agency and beyond its control that could significantly affect the achievement of the general goals and objectives."
This sounds bureaucratic, but it represents genuine wisdom about accountability. The Department of Agriculture can't control weather patterns that affect crop yields. The Department of Labor can't control whether a pandemic shuts down the economy. The State Department can't control what foreign governments decide to do.
Demanding that agencies identify these external factors upfront serves multiple purposes. It prevents agencies from inventing excuses after the fact—they must predict potential obstacles before outcomes are known. It helps Congress and the public understand the limits of what government action can accomplish. And it encourages honest conversations about which problems government can solve and which it can only influence.
The Ecosystem of Accountability
The GPRA didn't emerge in isolation. It's part of a broader ecosystem of laws designed to improve how the federal government manages itself.
The Federal Acquisition Streamlining Act of 1994 simplified government purchasing rules and linked contracting decisions more closely to performance outcomes. Previously, government contracts often rewarded the lowest bidder regardless of quality. The new framework allowed agencies to consider past performance when choosing contractors.
The Information Technology Management Reform Act of 1996, commonly called the Clinger-Cohen Act after its congressional sponsors, brought similar discipline to government technology investments. Federal IT projects had become notorious for running over budget, falling behind schedule, and failing to deliver promised capabilities. Clinger-Cohen required agencies to develop capital planning processes for technology investments and to appoint Chief Information Officers with genuine authority over technology decisions.
Together, these laws represented Congress's attempt to make government more businesslike—not in the sense of pursuing profit, but in the sense of tracking whether money spent actually produced results.
What Success Looks Like
Has the Government Performance and Results Act actually worked? The answer, appropriately enough, requires nuance.
By one measure, the law has been remarkably successful. Federal agencies now routinely set goals, track metrics, and report results. The infrastructure of accountability exists. Congressional committees have access to performance information when considering whether to fund, modify, or eliminate programs. This information didn't exist before 1993.
By a stricter measure, the law's impact has been modest. Studies consistently find that budget decisions in Congress remain driven primarily by politics, constituency interests, and ideology rather than performance data. An ineffective program with powerful defenders survives; an effective program without champions may not.
But perhaps the most honest assessment acknowledges something harder to measure: the GPRA changed the culture of federal management. Younger federal employees have never known a time when setting goals and measuring results wasn't expected. The vocabulary of performance management—outcomes versus outputs, baselines and targets, logic models and theories of change—has become standard. Even if performance data doesn't determine every budget decision, it shapes how agencies think about their work.
The Deeper Challenge
Behind the technical requirements of strategic plans and performance reports lies a more fundamental question: what is government for, and how would we know if it's succeeding?
Private companies have a clear bottom line. Their purpose is to generate returns for shareholders, and profit measures success. Government serves multiple, often conflicting purposes. It must balance efficiency against equity, speed against deliberation, action against restraint. A program serving a small number of people with severe needs might look wasteful compared to one serving many people with minor needs—but which is more valuable?
The GPRA doesn't answer these questions. It can't. They're political questions requiring democratic deliberation, not technical problems amenable to measurement.
What the law can do—what it does do—is make the tradeoffs visible. When an agency sets goals, it reveals its priorities. When it measures outcomes, it shows what it values. When it reports failures, it opens itself to questions. The infrastructure of accountability doesn't replace democratic judgment, but it gives democracy better information to work with.
Twenty-three years after full implementation, the Government Performance and Results Act remains the foundation of federal performance management. It hasn't made government perfectly efficient or eliminated waste or fraud or failure. No law could. But it created a framework where asking "is this working?" became not just possible but required. In the long history of efforts to make government accountable, that counts as genuine progress.
A Final Irony
There's something deliciously recursive about a law requiring government agencies to set goals and measure results. Because the law itself has goals—improving performance, increasing accountability, earning public trust. And measuring whether the law achieved those goals faces all the same challenges that agencies face measuring their own programs.
How do you establish a baseline for comparison? Government performance before 1993 wasn't systematically measured, so we can't precisely quantify improvement. How do you separate the law's effects from everything else that changed? The internet, globalization, successive administrations with different priorities—all influenced government performance simultaneously. How do you measure something as amorphous as public trust?
The Government Performance and Results Act, in other words, can't fully evaluate its own success using its own methodology. This isn't a flaw in the law. It's a reminder that measurement, however valuable, has limits. Some of the most important things government does—preserving liberty, promoting justice, providing for the common defense—resist quantification.
What the law provides is not certainty but discipline. The discipline to ask hard questions. The discipline to answer honestly. The discipline to improve, even when improvement is difficult to measure. In the messy business of governing a continental democracy of 330 million people, that discipline matters more than any metric could capture.