
Who bears the financial responsibility for problematic AI outcomes?


AI Liability: Determining Financial Responsibility in Artificial Intelligence Incidents or Malfunctions


In a bid to manage the growing impact of Artificial Intelligence (AI) on society and the environment, a new framework is emerging that draws on existing levies and taxes in other sectors. Under this approach, known as the 'polluter pays' principle, developers and deployers of AI systems would be held financially responsible for any harm their systems cause.

The case of the Superfund in the United States highlights the importance of political will in establishing effective and durable levies. Created in 1980, the Superfund program was financed by taxes on the chemical and petroleum industries and used to clean up contaminated sites that threatened public health and the environment.

Similarly, a cross-border coalition of states co-led by Kenya, Barbados, and France has proposed a levy on cryptocurrency mining. The levy aims to address energy consumption and other environmental impacts, with each country taxing mining activity according to energy use or mining output.
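An energy-based levy of this kind reduces to simple arithmetic. The sketch below is purely illustrative: the rate, the function name, and the consumption figure are assumptions for demonstration, not parameters from any actual proposal.

```python
# Hypothetical sketch of an energy-based mining levy.
# The per-kWh rate and the example figures are illustrative only.

def energy_levy(energy_kwh: float, rate_per_kwh: float = 0.002) -> float:
    """Return the levy owed for a given amount of energy consumed."""
    return energy_kwh * rate_per_kwh

# A mining operation consuming 10 GWh (10,000,000 kWh) in a year:
print(energy_levy(10_000_000))  # 20000.0
```

A levy keyed to mining output rather than energy use would follow the same pattern, with a rate applied per unit of output instead of per kWh.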

The purposes of these levies vary. The UK's Digital Services Tax (DST), for instance, taxes large tech companies that operate in the UK but are based in another, usually low-tax, jurisdiction. The DST has generated notable public revenue and serves as a model for taxing global tech companies.
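The DST's basic mechanism, a flat rate on in-scope revenue above an allowance, can be sketched in a few lines. This is a simplified illustration loosely modeled on the UK DST's published structure (a 2% rate on in-scope UK revenues above a £25m allowance); it deliberately ignores the global-revenue threshold and other real-world rules, and the function name is invented for this example.

```python
# Simplified sketch of a threshold-based digital services tax,
# loosely modeled on the UK DST (2% on in-scope UK revenues
# above a £25m allowance). Real-world scoping rules are omitted.

def dst_due(uk_revenue_gbp: float, rate: float = 0.02,
            allowance_gbp: float = 25_000_000) -> float:
    """Return tax due on in-scope revenue above the allowance."""
    taxable = max(0.0, uk_revenue_gbp - allowance_gbp)
    return taxable * rate

print(dst_due(100_000_000))  # 1500000.0
```

The same allowance-plus-rate shape recurs in most revenue-based levy proposals; only the scoping rules differ.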

In the realm of AI, potential levies are designed to counter specific harms. Proposals focus on outputs and on redistributing corporate profits; for example, generative AI developers or users might pay levies to compensate artists whose work is substituted by AI outputs.

Potential types of AI levies include Content Harm Levies, which would penalize companies that deploy AI-generated misinformation, deepfakes, or manipulative recommender systems causing social harm. Another is the Labor Displacement Levy, also known as a "Robot Tax", which would charge firms that adopt AI tools to replace human labor.

Other proposed levies include Cybersecurity Risk Levies, Extended Producer Responsibility (EPR) Style Levies, and Data Use and Privacy Impact Levies. These levies serve both as disincentives for harmful AI practices and as funding mechanisms to support mitigation, regulation, or remediation efforts.

The 'polluter pays' approach emphasizes making AI creators or deployers financially responsible for harms, much like taxation on pollution or hazardous waste in environmental governance. This framework aligns with evolving legislative efforts and expert recommendations for AI governance and risk management.

The criteria for defining 'AI pollution' or externalities could include where it occurs, how it manifests, and whom it impacts. For example, regulation of AI-generated synthetic content that harms individuals or democratic processes could include uniform standards for accountability and penalties for violations.

In summary, the main potential AI levies inspired by the polluter pays principle are:

| Type of Harm | Potential Levy Type | Purpose |
|------------------------------|------------------------------------------|--------------------------------------------------|
| Misinformation and deepfakes | Content Harm Levies | Penalize harmful synthetic content generation |
| Labor displacement | Robot Taxes | Discourage replacement of human labor |
| Cybersecurity threats | Cyber Risk Levies | Fund cybersecurity defenses and resilience |
| Lifecycle AI responsibility | Extended Producer Responsibility Levies | Encourage safer AI development and deployment |

This framework, inspired by successful levies in sectors such as environmental policy and digital platform regulation, could provide a comprehensive approach to managing AI-related risks and countering potential harm. Clear legal frameworks that define who is liable, along with uniform standards, are crucial to enforcing these levies effectively.

In practice, Content Harm Levies could penalize companies that deploy AI-generated misinformation, deepfakes, or manipulative recommender systems that cause social harm, much as the Digital Services Tax targets large tech companies operating in one jurisdiction while based in another. Labor Displacement Levies, or "Robot Taxes", could discourage the replacement of human labor by charging firms that adopt AI tools, mirroring the 'polluter pays' principle of environmental policy under which developers and deployers are held financially responsible for the harm they cause.
