How Machine Agency Is Reshaping Our World
Among the many questions we grapple with in the wake of UnitedHealthcare CEO Brian Thompson’s tragic murder is the ethics of relying on machines to carry out macro-level decision-making.
A recently leaked manifesto allegedly written by Luigi Mangione, the man accused of the killing, suggests that the motive behind last week’s targeted attack stemmed from outrage at a belief that UnitedHealth egregiously prioritizes profits over patient care: "The US has the #1 most expensive healthcare system in the world, yet we rank roughly #42 in life expectancy... The reality is, these [indecipherable] have simply gotten too powerful, and they continue to abuse our country for immense profit…"
Thompson’s murder comes just one year after a lawsuit was filed against UnitedHealth, alleging that the company used faulty AI systems to deny medically necessary coverage to elderly patients.
Machine Agency & Autonomous Agents
Machine agency—a broad concept encompassing the ability of machines to perform tasks, shape outcomes, and increasingly make decisions traditionally reserved for humans—has become a defining feature of modern AI applications. Within this broader framework, autonomous agents represent a specific and advanced subset of machine agency. These AI-driven systems distinguish themselves by their ability to operate independently, adapt to dynamic environments, reason about complex scenarios, and execute consequential decisions without human intervention. AI systems like those UnitedHealth allegedly used to process health insurance claims frequently employ this kind of sophisticated agentic technology.
The evolution of machine agency promises transformative benefits for humanity and has already produced notable successes. One such application is the autonomous drone delivery platform Zipline, which delivers medical supplies, including blood products and vaccines, to remote and underserved areas. These drones operate with minimal human intervention, navigating complex terrain to ensure timely access to critical healthcare resources. However, the same autonomy that delivers such humanitarian benefits also poses risks when these agents operate with flawed programming, insufficient oversight, or excessive authority.
Though autonomous agents are a prime example, machine agency exists along a continuum. Broadly speaking, we can distinguish between two types of machine agents: those that act on our behalf and those that decide on our behalf.
Action-Based Agents
These agents focus solely on executing tasks as instructed, without making independent decisions. They excel at automating repetitive or labor-intensive processes efficiently and at scale. Examples include:
Industrial Robotics: Robots performing predefined tasks on assembly lines.
Web Crawling: Bots systematically browsing and indexing web pages to improve search results.
Workflow Automation: Enterprise systems automating tasks like data entry or inventory updates.
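To make the action-based category concrete, here is a minimal sketch of a crawler. It uses a toy in-memory "site" (a hypothetical dictionary standing in for real HTTP fetching) so the example is self-contained; the point is that the agent follows fixed, human-written rules and makes no decisions of its own.

```python
from collections import deque

# Toy in-memory "web": page name -> list of linked pages.
# A stand-in for real HTTP fetching, to keep the sketch self-contained.
SITE = {
    "home": ["about", "products"],
    "about": ["home"],
    "products": ["home", "pricing"],
    "pricing": [],
}

def crawl(start: str, site: dict) -> list:
    """Breadth-first crawl: visit every page reachable from `start`.

    The crawler acts exactly as instructed -- follow every link once --
    exercising no independent judgment.
    """
    visited = []
    seen = {start}
    queue = deque([start])
    while queue:
        page = queue.popleft()
        visited.append(page)          # "index" the page
        for link in site.get(page, []):
            if link not in seen:      # rule: never revisit a page
                seen.add(link)
                queue.append(link)
    return visited

print(crawl("home", SITE))  # each reachable page appears exactly once
```

Every behavior here—the traversal order, the no-revisit rule—was decided in advance by the human who wrote the script; the machine only executes.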
Decision-Making Agents
These agents go beyond task execution by combining actions with decision-making. Their influence can range from producing informed, personalized guidance to making outright decisions with little to no human input. There are two predominant types:
Softer Decision-Making: Recommender systems, such as those that suggest movies on Netflix or curate social media feeds, shape user experiences and preferences without eliminating human choice entirely. However, their influence over our decisions has grown dramatically over the past decade, driven by advances in technology, far broader access to data, and increasingly refined use of psychological and human-factors techniques to sustain digital engagement.
Outright Decision-Making: High-stakes scenarios like insurance claims processing, loan approvals, and autonomous vehicle navigation place critical decisions in the hands of machines, often reducing or removing direct human involvement in real-time.
The line between action-based and decision-making agents is blurry. In this context, actions carry less weight than decisions; however, all actions ultimately stem from decisions. Even in less consequential cases of action-based machine agency, such as web crawling, the process begins with both human decision and action—writing and deploying a script that tells a machine how to crawl the web. The machine then acts, crawling the web as instructed by the rules of the script. It’s debatable whether a machine is making decisions at all in these low-stakes scenarios.
However, this basic architecture parallels that of even the most advanced forms of machine agency, where autonomous decision-making is ultimately governed by rules established by human designers. The added risk with advanced, AI-powered agents lies in their unprecedented authority and the inherent unpredictability of their behavior, which, despite being framed by human-defined parameters, is not always fully explainable.
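This architecture—machine decisions bounded by human-defined parameters—can be illustrated with a deliberately toy triage rule. Everything here is hypothetical (the `Claim` fields, the threshold, the outcomes are invented for illustration and represent no insurer's actual logic); the point is that the agent's "autonomy" extends only as far as the rules its designers chose, including where it must hand off to a human.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    amount: float
    procedure_covered: bool

# Human-defined parameter framing the agent's authority:
# above this amount, the machine may not decide alone.
AUTO_APPROVE_LIMIT = 500.0

def triage(claim: Claim) -> str:
    """Toy decision agent. Its autonomy is entirely bounded by
    rules and thresholds chosen by human designers."""
    if not claim.procedure_covered:
        return "deny"
    if claim.amount <= AUTO_APPROVE_LIMIT:
        return "approve"
    return "escalate to human review"

print(triage(Claim(amount=120.0, procedure_covered=True)))  # approve
```

A rule-based sketch like this is fully explainable; the risk the text describes arises when the `triage` step is replaced by an opaque learned model whose behavior, though still framed by human-set parameters, cannot be fully accounted for.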
Current & Future Outlook for Machine Agency
In 2024, machine agency—spanning the spectrum from basic automation to fully autonomous systems—has reached unparalleled heights. Its influence extends across domains like enterprise solutions, healthcare, smart cities, and education. Multi-agent systems (MAS), where diverse, specialized agents collaborate to tackle complex challenges, have become increasingly prevalent, enabling innovations in areas such as swarm robotics and large-scale scientific research. Yet, as the tragedy surrounding UnitedHealthcare demonstrates, the rapid adoption of machine agency in decision-making roles at times outpaces society’s preparedness for the ethical, legal, and societal challenges that arise, leaving critical gaps in accountability and equitable outcomes.
While we may never face the dystopian machine takeovers seen in The Matrix and I, Robot, nor may we ever enjoy a utopian world of human-machine symbiosis (I couldn’t find an example of this in pop film; messages regarding technology are almost always cautionary), it’s clear that machine agency has already made a profound impact on humanity. If the allegations made in the UnitedHealth lawsuit are true, then this is a quintessential case of granting authority to machines to determine the health and survival of humans.
The concerns brought forth in the UnitedHealth lawsuit underscore the pressing need for accountability in how we design, deploy, and regulate systems of machine agency. While these systems hold immense potential to improve efficiency and even save lives, their misuse, abuse, or poor implementation can lead to devastating consequences. As machines take on increasingly consequential roles, the onus lies on humanity to ensure that these tools serve the greater good, neither exacerbating existing inequities nor creating new ones.
This raises vital questions about the future of machine agency. How can we maximize the benefits of these systems while minimizing the risks they pose? What safeguards should be in place to ensure their use aligns with ethical principles and human values? Existing frameworks like the Belmont Report, which outlines ethical principles for research involving humans, the General Data Protection Regulation (GDPR), which enforces data privacy and security standards, and the Children’s Online Privacy Protection Act (COPPA), which protects minors from exploitative data practices, offer valuable lessons. These efforts show that governance, accountability, and ethical oversight are achievable when society prioritizes them. Machines, after all, are tools—neither inherently good nor bad. It is up to us to wield them responsibly, embedding transparency, fairness, and accountability into their design. As we continue to innovate, we must remember that the true measure of progress lies not in the capabilities of our machines, but in how those capabilities are harnessed to uplift humanity as a whole.