
Artificial Intelligence and the Law

Legal scholars on the potential for innovation and upheaval.

  • December 5, 2023
  • Tomas Weber
  • Illustrations by Joan Wong | Photography by Timothy Archibald
  • Fall 2023 – Issue 109
  • Cover Story


Earlier this year, in Belgium, a young father of two ended his life after a conversation with an AI-powered chatbot. He had, apparently, been talking to the large language model regularly and had become emotionally dependent on it. When the system encouraged him to commit suicide, he did. “Without these conversations with the chatbot,” his widow told a Brussels newspaper, “my husband would still be here.”

A devastating tragedy, but one that experts predict could become a lot more common.

As the use of generative AI expands, so does the capacity of large language models to cause serious harm. Mark Lemley (BA ’88), the William H. Neukom Professor of Law, worries about a future in which AI provides advice on committing acts of terrorism, recipes for poisons or explosives, or disinformation that can ruin reputations or incite violence.

The question is who, if anybody, will be held accountable for these harms?

“We don’t have case law yet,” Lemley says. “The company that runs the AI is not doing anything deliberate. They don’t necessarily know what the AI is going to say in response to any given prompt.” So, who’s liable? “The correct answer, right now, might be nobody. And that’s something we will probably want to change.”

Generative AI is developing at a stunning speed, creating new and thorny problems in well-established legal areas, disrupting long-standing regimes of civil liability—and outpacing the necessary frameworks, both legal and regulatory, that can ensure the risks are anticipated and accounted for.

To keep up with the flood of new large language models like ChatGPT, judges and lawmakers will need to grapple, for the first time, with a host of complex questions. For starters, how should the law govern harmful speech that is not created by human beings with rights under the First Amendment? How must criminal statutes and prosecutions change to address the role of bots in the commission of crimes? As growing numbers of people seek legal advice from chatbots, what does that mean for the regulation of legal services? With large language models capable of authoring novels and AI video generators churning out movies, how can existing copyright law be made current?

Hanging over this urgent list of questions is yet another: Are politicians, administrators, judges, and lawyers ready for the upheaval AI has triggered?

ARTIFICIAL AGENTS, CRIMINAL INTENT

Did ChatGPT defame Professor Lemley?

In 2023, when Lemley asked the chatbot GPT-4 to provide information about himself, it said he had been accused of a crime: namely, the misappropriation of trade secrets. Lemley, director of the Stanford Program in Law, Science and Technology, had done no such thing. His area of research, it seems, had caused the chatbot to hallucinate criminal offenses.

More recently, while researching a paper on AI and liability, Lemley and his team asked Google for information on how to prevent seizures. The search engine responded with a link titled “Had a seizure, now what?” and Lemley clicked. Among the answers: “put something in someone’s mouth” and “hold the person down.” Something was very wrong. Google’s algorithm, it turned out, had sourced content from a webpage explaining precisely what not to do. The error could have caused serious injury. (This advice is no longer included in search results.)

Lemley says it is not clear AI companies will be held liable for errors like these. The law, he says, needs to evolve to plug the gaps. But Lemley is also concerned about an even broader problem: how to deal with AI models that cause harm but that have impenetrable technical details locked inside a black box.

Take defamation. Establishing liability, Lemley explains, requires a plaintiff to prove mens rea: an intent to deceive. When the author of an allegedly defamatory statement is a chatbot, though, the question of intent becomes murky and will likely turn on the model’s technical details: how exactly it was trained and optimized.

To guard against possible exposure, Lemley fears, developers will make their models less transparent. Turning an AI into a black box, after all, makes it harder for plaintiffs to argue that it had the requisite “intent.” At the same time, it makes models more difficult to regulate.

How, then, should we change the law? What’s needed, Lemley says, is a legal framework that encourages companies to focus less on avoiding liability and more on creating systems that reflect our preferences. We’d like systems to be open and comprehensible, he says. We’d prefer AIs that do not lie and do not cause harm. But that doesn’t mean they should only say nice things about people simply to avoid liability. We expect them to be genuinely informative.

In light of these competing interests, judges and policymakers should take a fine-grained approach to AI cases, asking what, exactly, we should be seeking to incentivize. As a starting point, suggests Lemley, we should dump the mens rea requirement in AI defamation cases now that we’ve entered an era when dangerous content can so easily be generated by machines that lack intent.

Lemley’s point extends to AI speech that contributes to criminal conduct. Imagine, he says, a chatbot generating a list of instructions for becoming a hit man or making a deadly toxin. There is precedent for finding human beings liable for these things. But when it comes to AI, once again accountability is made difficult by the machine’s lack of intent.

“We want AI to avoid persuading people to hurt themselves, facilitating crimes, and telling falsehoods about people,” Lemley writes in “Where’s the Liability in Harmful AI Speech?” So instead of liability resting on intent, which AIs lack, Lemley suggests an AI company should be held liable for harms in cases where it was designed without taking standard actions to mitigate risk.

“It is deploying AI to help prosecutors make decisions that are not conditioned on race. Because that’s what the law requires.”

Julian Nyarko, associate professor of law, on the algorithm he developed

At the same time, Lemley worries that holding AI companies liable when ordinary humans wouldn’t be may inappropriately discourage development of the technology. He and his co-authors argue that we need a set of best practices for safe AI. Companies that follow the best practices would be immune from suit for harms that result from their technology, while companies that ignore best practices would be held responsible when their AIs are found to have contributed to a resulting harm.

HELPING TO CLOSE THE ACCESS TO JUSTICE GAP 

As AI threatens to disrupt criminal law, lawyers themselves are facing major disruptions. The technology has empowered individuals who cannot find or pay an attorney to turn to AI-powered legal help. In a civil justice system awash in unmet legal need, that could be a game changer.


“It’s hard to believe,” says David Freeman Engstrom, JD ’02, Stanford’s LSVF Professor in Law and co-director of the Deborah L. Rhode Center on the Legal Profession, “but the majority of civil cases in the American legal system—that’s millions of cases each year—are debt collections, evictions, or family law matters.” Most pit a represented institutional plaintiff (a bank, landlord, or government agency) against an unrepresented individual. AI-powered legal help could profoundly shift the legal services marketplace while opening courthouse doors wider for all.

“Up until now,” says Engstrom, “my view was that AI wasn’t powerful enough to move the dial on access to justice.” That view was front and center in a book Engstrom published earlier this year, Legal Tech and the Future of Civil Justice. Then ChatGPT roared onto the scene—a “lightning-bolt moment,” as he puts it. The technology has advanced so fast that Engstrom now sees rich potential for large language models to translate back and forth between plain language and legalese, parsing an individual’s description of a problem and responding with clear legal options and actions.

“We need to make more room for new tools to serve people who currently don’t have lawyers,” says Engstrom, whose Rhode Center has worked with multiple state supreme courts on how to responsibly relax their unauthorized practice of law and related rules. As part of that work, a groundbreaking Rhode Center study offered the first rigorous evidence on legal innovation in Utah and Arizona, the first two states to implement significant reforms.

But there are signs of trouble on the horizon. This summer, a New York judge sanctioned an attorney for filing a motion that cited phantom precedents. The lawyer, it turns out, relied on ChatGPT for legal research, never imagining the chatbot might hallucinate fake law.

How worried should we be about AI-powered legal tech leading lay people—or even attorneys—astray? Margaret Hagan, JD ’13, lecturer in law, is trying to walk a fine line between techno-optimism and pessimism.

“I can see the point of view of both camps,” says Hagan, who is also the executive director of the Legal Design Lab, which is researching how AI can increase access to justice, as well as designing and evaluating new tools. “The lab tries to steer between those two viewpoints and not be guided by either optimistic anecdotes or scary stories.”


To that end, Hagan is studying how individuals are using AI tools to solve legal problems. Beginning in June, she gave volunteers fictional legal scenarios, such as receiving an eviction notice, and watched as they consulted Google Bard. “People were asking, ‘Do I have any rights if my landlord sends me a notice?’ and ‘Can I really be evicted if I pay my rent on time?’” says Hagan.

Bard “provided them with very clear and seemingly authoritative information,” she says, including correct statutes and ordinances. It also offered up imaginary case law and phone numbers of nonexistent legal aid groups.

In her policy lab class, AI for Legal Help, which began last autumn, Hagan’s students are continuing that work by interviewing members of the public about how they might use AI to help them with legal problems. As a future lawyer, Jessica Shin, JD ’25, a participant in Hagan’s class, is concerned about vulnerable people placing too much faith in these tools.

“I’m worried that if a chatbot isn’t dotting the i’s and crossing the t’s, key things can and will be missed—like statute of limitations deadlines or other procedural steps that will make or break their cases,” she says.

“Government cannot govern AI, if government doesn’t understand AI.”

Daniel Ho, William Benjamin Scott and Luna M. Scott Professor of Law

Given all this promise and peril, courts need guidance, and SLS is providing it. Engstrom was just tapped by the American Law Institute to lead a multiyear project to advise courts on “high-volume” dockets, including debt, eviction, and family cases. Technology will be a pivotal part, as will examining how courts can leverage AI. Two years ago, Engstrom and Hagan teamed up with Mark Chandler, JD ’81, former Cisco chief legal officer now at the Rhode Center, to launch the Filing Fairness Project. They’ve partnered with courts in seven states, from Alaska to Texas, to make it easier for tech providers to serve litigants using AI-based tools. Their latest collaboration will work with the Los Angeles Superior Court, the nation’s largest, to design new digital pathways that better serve court users.

CAN MACHINES PROMOTE COMPLIANCE WITH THE LAW?

The hope that AI can be harnessed to help foster fairness and efficiency extends to the work of government too. Take criminal justice. It’s supposed to be blind, but the system all too often can be discriminatory—especially when it comes to race. When deciding whether to charge or dismiss a case, a prosecutor is prohibited by the Constitution from taking a suspect’s race into account. There is real concern, though, that these decisions might be shaped by racial bias—whether implicit or explicit.

Enter AI. Julian Nyarko, associate professor of law, has developed an algorithm to mask race-related information from felony reports. He then implemented the algorithm in a district attorney’s office, erasing racially identifying details before the reports reached the prosecutor’s desk. Nyarko believes his algorithm will help ensure lawful prosecutorial decisions.

“The work uses AI tools to increase compliance with the law,” he says. “It is deploying AI to help prosecutors make decisions that are not conditioned on race. Because that’s what the law requires.”
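Nyarko’s actual system is not detailed in the article. Purely as an illustration of the race-masking idea, the core step can be sketched as a redaction pass that strips explicit race terms and race-correlated proxies (such as neighborhood names) from a narrative before a prosecutor sees it. The term lists, place names, and function name below are hypothetical; a real implementation would rely on curated lexicons and trained models rather than toy word lists.

```python
import re

# Hypothetical term lists for illustration only. A production system would
# use curated lexicons and statistical models, not these toy examples.
EXPLICIT_RACE_TERMS = ["white", "black", "hispanic", "asian"]
PROXY_TERMS = ["Eastlake District", "Westside Park"]  # race-correlated place names (invented)

def mask_race_information(report: str) -> str:
    """Return a copy of a felony narrative with race-identifying details redacted."""
    masked = report
    for term in EXPLICIT_RACE_TERMS + PROXY_TERMS:
        # Word boundaries avoid clobbering substrings; IGNORECASE catches "Black"/"black".
        masked = re.sub(rf"\b{re.escape(term)}\b", "[REDACTED]", masked,
                        flags=re.IGNORECASE)
    return masked

narrative = "Officers stopped a Black male near Eastlake District."
print(mask_race_information(narrative))
# Officers stopped a [REDACTED] male near [REDACTED].
```

The design point is that the redaction happens upstream of the charging decision, so the prosecutor never sees the masked details at all, rather than being asked to disregard them.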

GOVERNING AI

While the legal profession evaluates how it might integrate this new technology, the government has been catching up on how to grapple with the AI revolution. According to Daniel Ho, the William Benjamin Scott and Luna M. Scott Professor of Law and a senior fellow at Stanford’s Institute for Human-Centered AI, one of the core challenges for the public sector is a dearth of expertise.

Very few specialists in AI choose to work in the public sector. According to a recent survey, less than 1 percent of recent AI PhD graduates took positions in government—compared with some 60 percent who chose industry jobs. A lack of the right people, and an ailing government digital infrastructure, means the public sector is missing the expertise to craft law and policy and effectively use these tools to improve governance. “Government cannot govern AI,” says Ho, “if government doesn’t understand AI.”


Ho, who also advises the White House as an appointed member of the National AI Advisory Committee (NAIAC), is concerned policymakers and administrators lack sufficient knowledge to separate speculative from concrete risks posed by the technology.

Evelyn Douek, a Stanford Law assistant professor, agrees. There is a lack of available information about how commonly used AI tools work—information the government could use to guide its regulatory approach, she says. The outcome? An epidemic of what Douek calls “magical thinking” on the part of the public sector about what is possible.

The information gap between the public and private sectors motivated a large research team from Stanford Law School’s Regulation, Evaluation, and Governance Lab (RegLab) to assess the feasibility of recent proposals for AI regulation. The team, which included Tino Cuéllar (MA ’96, PhD ’00), former SLS professor and president of the Carnegie Endowment for International Peace; Colleen Honigsberg, professor of law; and Ho, concluded that one important step is for the government to collect and investigate events in which AI systems seriously malfunction or cause harm, such as with bioweapons risk.

“If you look at other complex products, like cars and pharmaceuticals, the government has a database of information that details the factors that led to accidents and harms,” says Neel Guha, JD/PhD ’24 (BA ’18), a PhD student in computer science and co-author of a forthcoming paper that explores this topic. The NAIAC formally adopted this recommendation for such a reporting system in November.

“Our full understanding of how these systems are being used and where they might fail is still in flux,” says Guha. “An adverse-event-reporting system is a necessary prerequisite for more effective governance.”
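The paper does not specify what such a reporting system would record. As a sketch only, an adverse-event report for an AI system might capture fields analogous to those in vehicle or drug safety databases; every field name and value below is a hypothetical illustration, not a proposed standard.

```python
from dataclasses import dataclass, field, asdict
from datetime import date

@dataclass
class AIAdverseEvent:
    """Hypothetical minimal record for an AI adverse-event reporting system."""
    system_name: str                  # which deployed model or product
    event_date: date                  # when the harm occurred
    harm_category: str                # e.g. "defamation", "unsafe instructions"
    description: str                  # free-text account of what happened
    severity: int                     # 1 (minor) .. 5 (catastrophic)
    contributing_factors: list[str] = field(default_factory=list)

# Example record, loosely modeled on the sanctioned-attorney incident above.
report = AIAdverseEvent(
    system_name="example-chatbot",
    event_date=date(2023, 6, 1),
    harm_category="hallucinated legal citations",
    description="Model fabricated case law that was cited in a court filing.",
    severity=3,
    contributing_factors=["no retrieval grounding", "no citation verification"],
)
print(asdict(report)["harm_category"])
```

The analogy to car and drug databases suggests the value is in aggregation: structured records like this one could be pooled across companies so regulators can see recurring failure modes, not just isolated anecdotes.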

MODERNIZING GOVERNMENT

While the latest AI models demand new regulatory tools and frameworks, they also require that we rethink existing ones—a challenge when the various stakeholders often operate in separate silos.

“Policymakers might propose something that is technically impossible. Engineers might propose a technical solution that is flatly illegal,” Ho says. “What you need are people with an understanding of both dimensions.”

Last year, in an article, Ho, Christie Lawrence, JD ’24, and Isaac Cui, JD ’25, documented extensive challenges the federal government faced in implementing AI legal requirements. This led Ho to testify before the U.S. Senate on a range of reforms. And this work is driving change. The landmark White House executive order on AI adopted these recommendations, and the proposed AI Leadership to Enable Accountable Deployment (AI LEAD) Act would further codify them, including the creation of a chief AI officer, agency AI governance boards, and agency strategic planning. These requirements would help ensure the government is able to properly use and govern the technology.

“If generative AI technologies continue on their present trajectory, it seems likely that they will upend many of our assumptions about a copyright system.”

Paul Goldstein, Stella W. and Ira S. Lillick Professor of Law

Ho, as faculty director of RegLab, is also building bridges with local and federal agencies to develop high-impact demonstration projects of machine learning and data science in the public sector.

The RegLab is working with the Internal Revenue Service to modernize the tax-collection system with AI. It is collaborating with the Environmental Protection Agency to develop machine-learning technology to improve environmental compliance. And during the pandemic, it partnered with Santa Clara County to improve the public health department’s wide range of pandemic response programs.

“AI has real potential to transform parts of the public sector,” says Ho. “Our demonstration projects with government agencies help to envision an affirmative view of responsible technology to serve Americans.”

In a sign of an encouraging shift, Ho has observed an increasing number of computer scientists gravitating toward public policy, eager to participate in shaping laws and policy to respond to rapidly advancing AI, as well as law students with deep interests in technology. Alumni of the RegLab have been snapped up to serve in the IRS and the U.S. Digital Service, the technical arm of the executive branch. Ho himself serves as senior advisor on responsible AI to the U.S. Department of Labor. And the law school and the RegLab are front and center in training a new generation of lawyers and technologists to shape this future.

AI GOES TO HOLLYWOOD 

Swaths of books and movies have been made about humans threatened by artificial intelligence, but what happens when the technology becomes a menace to the entertainment industry itself? It’s still early days for generative AI-created novels, films, and other content, but it’s beginning to look like Hollywood has been cast in its own science fiction tale—and the law has a role to play.

“If generative AI technologies continue on their present trajectory,” says Paul Goldstein, the Stella W. and Ira S. Lillick Professor of Law, “it seems likely that they will upend many of our assumptions about a copyright system.”

There are two main assumptions behind intellectual property law that AI is on track to disrupt. From feature films and video games with multimillion-dollar budgets to a book whose author took five years to complete, the presumption has been that copyright law is necessary to incentivize costly investments. Now AI has upended that logic.

“When a video game that today requires a $100 million investment can be produced by generative AI at a cost that is one or two orders of magnitude lower,” says Goldstein, “the argument for copyright as an incentive to investment will weaken significantly across popular culture.”

The second assumption, resting on the consumer side of the equation, is no more stable. Copyright, a system designed in part to protect the creators of original works, has also long been justified as maximizing consumer choice. However, in an era of AI-powered recommendation engines, individual choice becomes less and less important, and the argument will only weaken as streaming services “get a lot better at figuring out what suits your tastes and making decisions for you,” says Goldstein.

If these bedrock assumptions behind copyright are both going to be rendered “increasingly irrelevant” by AI, what then is the necessary response? Goldstein says we need to find legal frameworks that will better safeguard human authors.

“I believe that authorship and autonomy are independent values that deserve to be protected,” he says. Goldstein foresees a framework in which AI-produced works are clearly labeled as such to guarantee consumers have accurate information.

The labeling approach may have the advantage of simplicity, but on its own it is not enough. At a moment of unprecedented disruption, Goldstein argues, lawmakers should be looking for additional ways to support human creators who will find themselves competing with AIs that can generate works faster and for a fraction of the cost. The solution, he suggests, might involve looking to practices in countries that have traditionally given greater thought to supporting artists, such as those in Europe.

“There will always be an appetite for authenticity, a taste for the real thing,” Goldstein says. “How else do you explain why someone will pay $2,000 to watch Taylor Swift from a distant balcony, when they could stream the same songs in their living room for pennies?” In the case of intellectual property law, catching up with the technology may mean heeding our human impulse—and taking the necessary steps to facilitate the deeply rooted urge to make and share authentic works of art.

McGill Law Journal

e-Legislation: Law-Making in the Digital Age

Table of Contents

David Howes*

This article takes a communications approach to law. The author argues that the formulation, dissemination, and reception of legislation, as well as its doctrinal notions, are shaped by the prevailing mode of communication. Three such modes are distinguished: oral, print (or typographic), and digital (or electronic). The doctrine of legal positivism is shown to derive from a text-based communications order. The legislative ideals associated with this doctrine, such as generality, promulgation, clarity and absence of contradiction, and top-down authority, all reflect the imprimatur of the printed text. In pre- and post-typographic (i.e. oral and digital) communications orders, the predominant legislative values are flexibility, participation and accessibility, contextuality, and multicentric authority. These tenets are summed up by the notion of legal interactivism. The author shows this notion to be motivated by the ubiquity, multisensoriality (or organicity), and instantaneous-interactive quality of communication in both the oral and digital modes. It is for this reason, the author argues, that the best way to envision the future of legislation is by recurring to the model of law in pre-modern oral societies. Two such models are presented, the corporeal model of the Inca Empire and the gastronomic (law as feast) model of the Witsuwit’en, and their implications for conceptualizing law-making in the digital age are discussed.


* Of the Department of Sociology and Anthropology, Concordia University. I wish to thank the organizers of the Roundtable on Legislation for the invitation to participate, and my fellow participants for the inspiration I received from our discussions.

McGill Law Journal 2001 / Revue de droit de McGill 2001
To be cited as: (2001) 47 McGill L.J. 39 / Mode de référence : (2001) 47 R.D. McGill 39


Introduction

I. Charting Cyberspace

II. Legislation in a Digital Age

III. The Cyber-Village

IV. Governing the Electronic Tribe or Feasting on the Law


This article explores the iconic implications of the materiality of legislation, or law’s “embodiment” as digital versus printed text in the network era. With Desmond Manderson, I am interested in how one can “illuminate both the meaning and force of law” by being “sensitive to the form and imagery of legal texts.” Framing the issue of law’s expression in this way puts the medium through which legal norms are communicated before the articulation of the norms themselves in what can prove to be a highly instructive manner. As regards electronic communication, for example, digital texts may be seen to evoke a different understanding of authorship and authority from printed texts. Digital texts have the potential to be interactive, whereas there is no back and forth between sender and receiver with printed texts. This makes the former appear more collaborative than “authoritative” (in the conventional unidirectional sense that a printed text displays). How much does our common-sense notion of the (top-down) authority of legislative acts depend on their form as printed texts? Forget the doctrine of legal positivism. How will our understandings of the force of law have to change to accommodate the authorial and other implications of the digitization of legislation?

Most of the books and articles regarding law and cyberspace are concerned with how existing legal rules may be adapted to suit the particular features of the Internet. The assumption throughout this literature is that standard forms of legislation will continue to hold in the “real world”. Those who take this assumption for granted seriously overlook the influence that Internet use will likely have on ways of thinking about government and the law even outside cyberspace. Indeed, I want to argue that the implicit normative structure of Internet communication has already had a profound impact on the form in which legislative activity is conceptualized and received by those whose behaviour it is intended to govern. Moreover, I consider that the very distinction between cyberspace and “real” space will become less apparent and important as digital forms of expression come to pervade our lives and consciousness, and the whole world becomes a cyber-village.

It has been suggested that the network era, with its dynamic and instantaneous forms of communication, actually represents a return to the tribal era, for network society displays many of the characteristics of preliterate oral societies. Following up on this perceived resemblance, I want to examine how examples of law-making drawn from pre-modern societies may provide models for legislative activity in the cyber-village of postmodernity. As cyberspace becomes more interactive, more sensuous, and more ubiquitous through new developments in network technology, the way in which legislation is conceptualized and experienced may become less and less textual (i.e. informed by the icon of the statute book) and more like a song, a dance, or even a feast, all traditional forms of legal expression in oral societies.

1 D. Manderson, Songs without Music: Aesthetic Dimensions of Law and Justice (Berkeley: University of California Press, 2000) at ix. This essay may also be read as a companion piece to Nicholas Kasirer’s paper on the successive material embodiments of Quebec’s civil code entitled “If the Mona Lisa Is in the Louvre, Where Is the Civil Code of Lower Canada?” (Paper presented at Law Commission of Canada, First Roundtable on Legislation, McGill University, Montreal, 28 January 2000) [unpublished, archived at McGill Law Journal].

2 See e.g. M. Racicot et al., The Cyberspace Is Not a “No Law Land”: A Study of the Issues of Liability for Content Circulating on the Internet (Ottawa: Industry Canada, 1997).

The argument of this paper can be summed up as follows: both the construction and dissemination of legislation tend to be inflected by the implicit normative structure of the prevailing mode of communication (oral, print, or digital). The paper begins by charting the distinctive features and dominant trends in the development of cyberspace, then traces the implications of these features and trends for the future shape of legislation, and concludes by finding confirmation for this analysis through an exploration of law-making in oral societies.

Cyberspace has been called a world of electrons in contrast to the physical world of atoms. In the world of atoms things exist as objects in three-dimensional space; in the world of electrons things exist as patterns of energy. This distinction is indicative of the unique nature of cyberspace. It is only metaphorically that it can be described in spatial terms at all.

Consider the example of a community of Internet users, or “virtual community”, consisting of, say, all of the participants in the same Usenet discussion group or chat room. In the physical world communities have customarily consisted of people living in close proximity to each other. A virtual community, however, may consist of people

‘ See D. de Kerckhove, The Skin of Culture: Investigating the New Electronic Reality, ed. C. Dewd-

ney (Toronto: Somerville House, 1995) especially at 99-112. See further the sources cited infra note 4. 4 On the impact of the printing press, see E.L. Eisenstein, The Printing Press as an Agent of Change: Communications and Cultural Transformations in Early-Modem Europe (Cambridge: Cambridge University Press, 1979), and of print media generally, see B. Anderson, Imagined Com- munities: Reflections on the Origin and Spread of Nationalism, rev. ed. (London: Verso, 1991). On the impact of electronic communication, see J. Baudrillard, Les strat6gies fatales (Paris: Bernard Grasset, 1983); de Kerckhove, supra note 3. The first to map this terrain in a comprehensive (if mo- saical) way was, of course, Marshall McLuhan in The Gutenberg Galaxy: The Making of Typographic Man (Toronto: University of Toronto Press, 1962). It is his lead that I follow in this paper when I treat the modal medium of communication in a given culture as inflecting all other aspects of the culture with its biases. Or as McLuhan himself put it in Understanding Media: The Extensions of Man (New York: McGraw-Hill, 1964) at 8: “For the ‘message’ of any medium or technology is the change of scale or pace or pattern that it introduces into human affairs.”

living in many different locations who never have any physical contact with each other. Even though the members of such a community are widely dispersed geographically, they can nevertheless enjoy instantaneous communication due to what has been called "the collapsed space-time of the Web".5

Just as computer-generated cyberspace has created a parallel universe of virtual locations, so has it created a parallel realm of virtual selves, that is, a space in which "You are not your body", in Douglas Coupland's phrase.6 Internet users cannot enter cyberspace with their physical bodies, but they can transmit body images, and the image that a user presents in cyberspace need in no way correspond to his or her actual physical body. A male user, for example, may present himself as female to his Internet companions; a child user may present herself or himself as an adult. Other aspects of personal identity, such as character, disability, or ethnicity, may similarly be altered in Net communications. Users usually have little possibility of verifying the actual identity of the persons with whom they communicate on the Internet, or even where they reside, since country codes (such as ".ca" for Canada or ".uk" for the United Kingdom) do not reveal a discrete physical location within the country concerned. Cyberspace is therefore a world of virtual selves with no fixed addresses.

Many proponents of Net society have taken this characteristic of cyberspace to be one of its most liberating features, arguing that the virtual identities of cyberspace allow people to escape the limits imposed by the particular physical and cultural conditions of their embodied realities and to present themselves as whomever and whatever they wish. In cyberspace everyone participates as an equal, while at the same time an infinity of experiments with self-fashioning is possible. It has also been claimed that the Internet helps users to overcome the isolation of contemporary life, where many people do not interact with their neighbours in their own geographical communities. With the Internet, so the argument goes, it has become astoundingly easy to find a community of like-minded individuals no matter what one's personal interests may be.7

The peculiar characteristics of life on the Internet are likely to become increasingly normative as computer use becomes more integrated with everyday life, or part

5 B. Vacker, "Global Village or World Bazaar?" in A.B. Albarran & D.H. Goff, eds., Understanding the Web: Social, Political and Economic Dimensions of the Internet (Ames, Iowa: Iowa State University Press, 2000) 211 at 236. See further D. Harvey, The Condition of Postmodernity: An Enquiry into the Origins of Cultural Change (Cambridge, Mass.: Blackwell, 1990).

6 D. Coupland, Life After God (New York: Pocket Books, 1994) dustjacket.

7 See S. Rafaeli, M. McLaughlin & F. Sudweeks, "Introduction" in F. Sudweeks, M. McLaughlin & S. Rafaeli, eds., Network and Netplay: Virtual Groups on the Internet (Menlo Park, Cal. & Cambridge, Mass.: AAAI Press & MIT Press, 1998) xv; S. Turkle, Life on the Screen: Identity in the Age of the Internet (New York: Simon & Schuster, 1995); M. Willson, "Community in the Abstract: A Political and Ethical Dilemma?" in D. Holmes, ed., Virtual Politics: Identity and Community in Cyberspace (London: Sage, 1997) 145.

of a "seamless web". For instance, the last few decades witnessed the transition from mainframe to personal computers, and from computing as the preserve of technical experts to its figuring centrally in many people's everyday work and leisure activities. Continuing this trend, one of the major developments of the next decades will be the triumph of "ubiquitous computing". This term is used to mean that computing will take place not only within the personal computer as we know it today but in many objects of everyday life. A major Canadian communications company, for example, has already developed an interactive telephone device that comes with a small display screen and permits the residents of a model "wired" community to access a sort of "electronic mall" where they "can pay bills, do their banking, view advertisements, compare prices, order prescriptions, make purchases, and even read news headlines without ever leaving the house or turning on their personal computer."8

The next wave of home appliances to acquire computer functions will include objects that one would never expect to serve as instruments of informational and commercial exchange, such as the recently unveiled microwave oven that can support e-mail and electronic banking.9 In this way the whole home is being transformed into a computing device, thereby completing the revolution that began with bringing the personal computer into the home. It is projected, for example, that thin holographic monitors placed on the wall or in windows will shortly allow the inhabitants of a dwelling to enter cyberspace from many different locations in the house, simply by means of a voice command or even a glance, as registered and interpreted by a sensing device.10 Ubiquitous computing, then, seeks to make every physical surface into a potential electronic interface or Internet access node.

Alongside the drive for ubiquity, the history of computing has witnessed a drive for ever more intensive and engaging forms of interactivity. One of the earliest (and still among the most popular) embodiments of the interactive dimension of electronic communication is the network of online news and discussion groups known as Usenet. Here users are able to post, read, and respond to messages pertaining to a specific topic area, and a record (or "thread") of all past discussions on the topic is maintained that can be consulted by new or ongoing participants.

The Multi-User Domain ("MUD") and the MUD, Object-Oriented ("MOO") represent another early example of the new computer-mediated sociality. These consist of large-scale, collaboratively constructed, online environments, where

participants enter textual descriptions of imaginary places that others can visit, and of … characters that populate those places, awaiting scripted interaction

8 D. Barney, Prometheus Wired: The Hope for Democracy in the Age of Network Technology (Vancouver: UBC Press, 2000) at 169-70.

9 W.W. Gibbs, "As We May Live" Scientific American 283:5 (November 2000) 36.

10 Ibid.

D. HOWES – E-LEGISLATION: LAW-MAKING IN THE DIGITAL AGE

with future visitors. The underlying software ties all the descriptions and scripts together to create a single, continually evolving environment and provides an opportunity for [the user] to meet and interact with other participants within that environment.11

MUDs and MOOs constitute virtual environments that are evidently quite literary, or text-based. Like a novel,

they textually construct complex places where the lives of many characters simultaneously unfold and interact, but they are collaboratively authored rather than the work of one person, and they are indefinitely in progress and constantly being extended, not closed and complete like a novel. Instead of turning pages, [the user] explores them by typing commands or pointing-and-clicking to move around and evoke responses.12

The heavy dependence on text and typed commands of such early virtual environments has been augmented or supplanted by graphic interfaces, by sound and synchronization, and most recently by 3-D shared-space technology, as the field of interactive digital entertainment has attracted increasing capital investment and development.13 The many diverse projects on which engineers and programmers in this field are now working include developing "intelligent" virtual beings and creating interactive cyber-movies in which viewers can participate as actors and direct the plot.14

The dimension of interactivity, so crucial to the Internet experience, will be further enhanced by the integration of new sensory domains into cyberspace. Through a development known as convergence, digitization is facilitating the transformation of previously distinct media, such as music, movies, and video games, into a single medium that delivers high-resolution audio and video content that is also interactive.15 This transformation, which began in the mid-1990s, has enabled Internet users to construct and access virtual environments that are vastly more engaging than written texts because they encompass sound and graphics as well as moving-image applications.

Nor will the merging of media that is unfolding stop at audio and video content, for "digitization establishes the means for translating and reintegrating [all] the senses."16 In other words, while cyberspace may be a multimedia environment today, it promises to become a multi-sensory surround tomorrow. Technology has already been developed that will allow the sensations of smell and touch to be transmitted electronically. An odour synthesizer, for example, has recently been put on the market that can be attached to computers to transmit odours. The synthesizer consists of a small black box with tiny vials of scent inside. When a message is received the machine blends a selection of basic essences and then blows the required scent out through an air vent. Such olfactory signals could accompany movies, advertisements, and electronic books, or could be sent by e-mail.17

11 W.J. Mitchell, "Replacing Place" in P. Lunenfeld, ed., The Digital Dialectic: New Essays on New Media (Cambridge, Mass.: MIT Press, 1999) 112 at 114.

12 Ibid.

13 Ibid. at 115-27.

14 G. Davenport, "Your Own Virtual Storyworld" Scientific American 283:5 (November 2000) 79.

15 P. Forman & R.W. Saint John, "Creating Convergence" Scientific American 283:5 (November 2000).

16 C. Vasseleu, "Virtual Bodies/Virtual Worlds" in Holmes, supra note 7, 46 at 50.

A number of haptic devices being developed or currently in use make it possible to transform electronic messages into tactile sensations. A typical haptic device is a computer-controlled glove that, when worn, gives users the sensation of holding and feeling computer-generated objects. Researchers look ahead to the creation of a "haptic suit" that would enable users to feel computer-generated sensations all over their bodies.18 Communication on the Internet will hence no longer be limited to disembodied, linear typed messages and responses but will consist of dynamic, multisensory interactions between "re-embodied" virtual beings.

All of these characteristics of cyber-life in the present and future depict a world that is universally accessible, immensely engaging, endlessly transformable, unfailingly responsive, and, while removed from most physical realities, completely connected within itself.

II. Legislation in a Digital Age

When we consider issues of legislation, the question that arises from these computing trends is not only how it is possible to make laws for cyberspace, but also how the digitization of the word and the omnipresence of digital media will transform the very notion and forms of law-making.

Significantly, in addition to the multiplication of physical devices and surfaces that can serve as Internet access points, there has been an extraordinary proliferation in the range and nature of sites Internet users can visit. Not only commercial institutions, but governmental and non-governmental organizations as well as countless individuals have created online identities in the form of Web pages that disseminate information and/or offer access to services. New norms of accessibility have emerged in the process and appear to be reshaping not only what it means to be a consumer (as in

17 C. Platt, "You've Got Smell!" Wired 7:11 (November 1999) 257, online: Wired (date accessed: 21 August 2001).

18 D. Pescovitz, "Getting Real in Cyberspace" Scientific American Presents 10:3 (Fall 1999) 48 at 51. The development of such a suit gives new meaning to McLuhan's aphorism "the medium is the massage"; M. McLuhan & Q. Fiore, co-ordinated by J. Agel, The Medium Is the Massage (New York: Random House, 1967).

the “electronic mall” or e-commerce phenomenon), but also what it means to be a citizen in liberal democratic society.

For example, there is a growing demand for governments to ensure universal Internet access for their citizens, on the assumption that meaningful participation in public life is dependent on access to the informational resources of the Internet, and that enabling such access would of itself suffice to overcome the inequities in the distribution of information and income that currently stand in the way of full civic participation.19

One image of how the new norms of accessibility supported by network technology are fueling new forms of civic participation is that of the homeless man at a computer terminal in a public library writing an e-mail to his local member of Parliament. Another image is the model of "keypad democracy" that Lawrence Grossman champions. According to Grossman, the obstacles of scale that have tended to thwart strong democratic participation in the past are being overcome by recent developments in network technology: "Using a combination telephone-video screen computer, citizens will be capable of participating in audio- and videophone calls, teleconferences, tele-debates, tele-discussions, tele-forums, and electronic town meetings."20 Time and distance will thus cease to figure as factors limiting political participation.

Of course, time and distance are not the only factors obstructing participation. Some legal theorists blame the interference of the public/private dichotomy, which is so fundamental to the whole architecture of liberal democratic society. They hold that there is a deep problem with the way liberalism defines the public sphere in a manner that excludes any particular "private" conceptions of the good, and hence cultural difference. This exclusion is consistent with liberalism's abstract definition of the self as a rights-bearing entity, rather than a member of a particular community. It is regressive, however, insofar as it results in an impoverished public discourse that can never give good reasons for why legislation should apply to cultural minorities in the same way as it applies to the majority, when the minorities themselves can never accede to the legislation because of the deep value differences that set them apart from the mainstream. This crisis of legitimacy can only be resolved by redrawing the public/private distinction so as to include aspects of the private in the public realm, thereby letting difference out rather than keeping it contained. This strategy, it is said, can only enhance citizen participation in the deliberative process, though it may also result in legislation that is flexible instead of universal, because of the need to resort to

19 See generally Barney, supra note 8.

20 L.K. Grossman, The Electronic Republic: Reshaping Democracy in the Information Age (New York: Viking, 1995) at 148.

compromise and accommodation in order to arrive at a norm that everyone concerned can accede to practically and rationally.21

The theory of deliberative democracy, with its definition of legislation as all-inclusive conversation, can be seen as motivated by the chief technological imperative of the network society, which is: "always connect". The critique of the conventional public/private distinction in this theory can also be read as technologically inspired, in that the Internet has effectively undermined the demarcation of public from private insofar as users can connect from anywhere and electronic information flows have no respect for borders. Thus, while the theory of deliberative democracy has many precedents in European philosophy, it is the manner in which it maps onto the material infrastructure of Internet communication that accounts for its increasing salience today.

With the success of user-friendly software and Web sites, convenience has become another of the defining characteristics, and thus one of the norms, of Internet communication. People are drawn to the Internet not only because it is interactive or informative or engaging, but because it is easy. Pointing and clicking is much simpler and faster than going to the library and looking something up in a book. Writing an e-mail is much simpler than writing a letter. The very formality of a letter appears archaic within the informal, fast-paced give-and-take of cyberspace. Users familiar with informal, user-friendly cyber-formats may therefore come to reject the rigid, arcane format of conventional legal texts as inaccessible and irrelevant.

Furthermore, as non-linear, non-textual models for the organization of information become popular through Internet use, existing forms of inscribing and communicating legislation may come to seem as unwieldy and outdated as Moses' stone tablets. A case in point would be the fragmentary state of public access to primary legal materials in electronic form in Canada. This fragmentation is caused by the uneasy co-existence of print-based and digitized models of law. For example, Theresa Scassa clearly adopts a digitized conception of legislation when she argues that the federal and provincial governments should collaborate to make authoritative, up-to-date versions of statutes and regulations available online in a unified (or at least harmonized) searchable database which the public could access for free.22 What in fact exists, however, is an uneven patchwork of sites. Moreover, the sites that do exist are mostly

21 This account of the political theory of deliberative democracy is based on my reading of L.B. Tremblay, "La justification de la législation comme jugement pratique" (2001) 47 McGill L.J. 59 and D. Kropp, "Legislating away Democracy: The Loss of Legitimacy and a Call for Renewal" (Paper presented at Law Commission of Canada, First Roundtable on Legislation, McGill University, Montreal, 28 January 2000) [unpublished]. See further the discussion of a "civil society model" in G. Segell, "A People's Electronic Democracy and an Establishment System of Government: The United Kingdom" in B. Ebo, ed., Cyberimperialism? Global Relations in the New Electronic Frontier (Westport, Conn.: Praeger, 2001) 111 at 112-13.

searchable only by the title of the statute (i.e., alphabetically) and in all cases contain disclaimers directing users to rely upon "official" print versions. Not only is this hybrid (semi-digitized) "system" unwieldy, it is unworkable.

The idea that print versions are authoritative and their digital counterparts are not is one of the fictions that governments will have to abandon if they are to face up to the implications of digitization for the dissemination of legislation. The fluidity of digital text plays havoc with the standard notion of the letter of the law as stable or fixed. Digital text

can always be reconfigured, reformatted, rewritten. Digital text hence is infinitely adaptable to different needs and uses, and since it consists of codes that other codes can search, rearrange, and otherwise manipulate, digital text is always open, unbordered, unfinished, and unfinishable, capable of infinite extension.23

Once a text has been digitized it can be metamorphosed endlessly. This is the difference between the law in books and the law in electrons. In the digital era, author and reader, legislator and legislatee, are equal participants in the text or statute's construction, since the reader (with the text on his or her computer screen) is able to "add to a text or subtract from it, rearrange it, revise it, suffuse it with commentary," introduce graphics, or transform it into music if he or she wants.24 Plainly, the digitization of legislation spells the demise of the doctrine of legal positivism at the same time as it exposes how dependent that doctrine was for its force on a print-based communications order (and in particular on the idea of the top-down authority of the printed text). In view of the shared authority of the digital text, what is needed now is a doctrine of legal interactivism.

In order to conceptualize the new forms that legislation may take in a digital age, it may be necessary to stop thinking of statutes as bounded texts and to start thinking of them as "delivery systems" or exercises in "interactive fictionalized modeling".25 On this model, a statute would be composed of a series of alternative scenarios that

22 T. Scassa, "The Best Things in Law Are Free? Towards Quality Free Public Access to Primary Legal Materials in Canada" (2000) 23 Dal. L.J. 301. See also D. Alikat, "Cyberspace of the People, by the People, for the People: Predominant Use of the Web in the Public Sector" in Albarran & Goff, supra note 5, 23.

23 G.P. Landow, "Hypertext as Collage-Writing" in Lunenfeld, supra note 11, 150 at 166.

24 R.A. Lanham, The Electronic Word: Democracy, Technology, and the Arts (Chicago: University of Chicago Press, 1993) at 6. Lanham is here describing how digital textbooks function, but his description is equally applicable to legislation in view of the pedagogical function of legislation brought out, for example, by Nicholas Kasirer in "Honour Bound" (2001) 47 McGill L.J. 237.

25 Lanham, ibid. at 6, 126-29.

user-citizens could choose between and enact for themselves on a completely individualized basis.26 This is law as acting out rather than as enactment.

The concern that such interactive forms of legislation might introduce too much indeterminacy into the law might well keep official legislative acts confined to the relatively stable form of printed texts for a long time to come. However, printed texts themselves only have the authority a society chooses to ascribe to them. In a culture that has already progressed so far down the path of digitization, this may be rather little. In such a culture the focus will likely be on dynamic, collaborative conflict resolution rather than on text-bound legislative enactments which, in a world of instant information and continuous change, would come to seem outdated as soon as they are published. Ethan Katsh writes that in a digital world "the focus on the past will be less emphasized. Process and dispute solving and reestablishing relationships may, for example, prove to be valued much more than determining what was intended at the time some contract was formed",27 or at the time some legislation was enacted.

III. The Cyber-Village

In some ways the world of cyberspace appears to be, and is, removed from any previously known form of social interaction. Yet in many ways it reproduces key traits of oral, preliterate societies.28 One of the primary characteristics of oral societies is that communication between members is always direct and immediate due to dependence on speech. The Internet (like the telephone before it, but to a far greater extent) enables people who are geographically distant to engage in a similar kind of immediate communication. Communication in oral societies is also highly interactive, being grounded in dialogue and ritual, which contrasts with print cultures where written messages are unidirectional. As Constance Classen notes: "One cannot engage a book

26 This suggestion invites comparison with Rod Macdonald's discussion of legislation that takes the form of "examination hypotheticals" in "The Fridge-Door Statute" (2001) 47 McGill L.J. 11 at 30-31. It may also be compared with the "sense and respond" business model, a token example of which is the Levi Strauss clothing company's "Personal Pair" program, which enables customers to design and manufacture their own customized jeans using multimedia technology. See S.P. Bradley & R.L. Nolan, "Capturing Value in the Network Era" in S.P. Bradley & R.L. Nolan, eds., Sense and Respond: Capturing Value in the Network Era (Boston: Harvard Business School Press, 1998) 3 at 22. See further the discussion of "court kiosks" and other access mechanisms in R. Susskind, The Future of Law: Facing the Challenges of Information Technology (Oxford: Clarendon Press, 1996) at 212-15.

27 M.E. Katsh, Law in a Digital World (New York: Oxford University Press, 1995) at 123.

28 For a general review of the literature on oral societies (or "performance cultures"), see B.J. Hibbitts, "'Coming to Our Senses': Communication and Legal Expression in Performance Cultures" (1992) 41 Emory L.J. 873.

in dialogue. A book never changes its mind, it always affirms what it affirms whether one agrees with it or refutes it."29

The textual basis of knowledge in literate Western society hence is radically different from that of oral societies, where the absence of written documents allows for a more fluid and interactive mode of transmitting information. As Classen documents, one of the most striking aspects of the cultural encounter between Europe and the Americas in the sixteenth and seventeenth centuries was the clash between the European textual understanding of knowledge and authority and the Amerindian oral understanding of the same. From the latter's perspective, the European reliance on books appeared rigid, autocratic, and life-denying. The indigenous cosmos was conceptualized as dynamic and personal, ordered and animated by a continuous flow of oral interchange. The European cosmos, by contrast, appeared to be silent, still, and impersonal, ordered by a realm of written documents.

The advent of electronic communications has ushered in a new age of orality, for while electronic messages at present still primarily take written form, the interactive, dialogical character of Internet communication mimics the qualities of oral communication. Media theoretician Walter J. Ong has proposed the term "secondary orality" to describe the kinds of social conjunctures created by network technology.30 Ong's mentor, Marshall McLuhan, evoked this same "re-tribalization" of society by means of the famous phrase "global village".31 In the twenty-first century the global village has become the cyber-village.

Internet culture is thus in many ways an oral culture with a number of the distinct traits characterizing oral cultures: it is synthetic, personal, dynamic, reciprocal. The cultural clash of the future over modes of communication and the social models with which they are associated, therefore, is likely to take the form of a war between adherents to the old print-based models of social and legal order and participants in the new electronic model of social interaction and organization.

It might be argued that the social models of traditional oral societies could only work on a small, "tribal" scale and thus can have little relevance to the large-scale societies of the "cyber-village". Yet not all oral societies were small-scale. The Inca Empire in South America, for example, consisted of some ten million people who were organized and governed without the aid of writing. In such cases each small community is integrated into the larger society through an extensive and dynamic network of oral communications. In the example of the Inca Empire, the empire (and also the cosmos) was conceptualized as a living body that required the participation and co-operation of all members in order to survive.32

29 C. Classen, "Literacy as Anti-Culture: The Andean Experience of the Written Word" in C. Classen, Worlds of Sense: Exploring the Senses in History and across Cultures (London: Routledge, 1993) 106 at 110.

30 "Secondary orality" is "secondary" because instead of being untouched by writing, the new orality is "based permanently on the use of writing and print, which are essential for the manufacture and operation of [electronic communications] equipment and for its use as well." See W.J. Ong, Orality and Literacy: The Technologizing of the Word (London: Methuen, 1982) at 136.

31 M. McLuhan & Q. Fiore, co-ordinated by J. Agel, War and Peace in the Global Village: An Inventory of Some of the Current Spastic Situations That Could Be Eliminated by More Feedforward (New York: McGraw-Hill, 1968).

Organic models, such as that of the body, may ironically also work well to organize and animate the ostensibly inorganic realm of cyberspace. Conceptualizing the Internet as a vast body or nervous system, and computer terminals as its organs, is, in fact, quite widespread in contemporary culture, as evidenced by the discourse about computer viruses. In oral societies individuals depend on the social network for their survival. In a networked society, users depend on their connection to the Net to pursue their cyber-lives. You cannot disconnect your computer and strike out on your own in cyberspace.

Employing corporeal models to order a system has the advantage of relating what might otherwise seem to be a purely abstract creation of bureaucracy or technology to the more personal and appealing notion of a living organism with natural structures and functions: an organism in which each individual plays a vital role and serves as a model for the whole. Among the Incas, for instance, employing body models meant that each person could relate to the structures and functions of society and the cosmos from the basis of his or her own personal corporeal experience.

Current developments in interactive computer technology point to possibilities for developing a range of organically based models for ordering and interacting in cyberspace. One example is a program, currently in prototype, called Happenstance. This is described as an "ecological interface [that] translates common computer activities, such as conducting Internet searches, into movement through the landscape."33 Happenstance uses the image of a garden as a model for accessing and conveying information:

32 C. Classen, Inca Cosmology and the Human Body (Salt Lake City: University of Utah Press, 1993). The case of the Inca, a "traditional" society whose complexity rivalled that of most coeval European states, underlines the difficulty of classifying societies according to an evolutionary typology based solely on the presence of writing. The term "pre-literate" or "oral society" as used in this essay should not be understood to suggest a linear scheme of development, for it is not the case that contemporary oral societies like the Witsuwit'en or historical oral societies like the Inca can be assimilated to anterior stages in the development of Western civilization. Rather, they should be viewed as alternative regimes for the management of information and society, each with its own historical trajectory.

33 Davenport, supra note 14 at 81.


If you decide, for instance, that you're hungry for Chinese food, you could type a query that gets attached to an icon of a tree seed. You could then plant the seed in the cybergarden of Happenstance to begin a search for nearby restaurants. Today's Internet browsers would list the query results as hyperlinked blocks of text, but inside Happenstance the results appear as leaves sprouting on a tree.34

One obvious difference between the tribal village and the cyber-village is that the members of oral societies lead an ostensibly more embodied existence, being in constant bodily engagement with their environment and each other. Cyberspace, in comparison, is notoriously disembodied. As noted above, however, cyberspace is rapidly becoming "re-embodied" as a wide range of sensory phenomena, from touches to smells, is adapted for electronic transmission. If cyberspace is a world of secondary orality, it is also becoming a world of "secondary embodiment".

It will be appreciated how the sensory development of cyberspace will have the effect of restoring the corporeal dimension of the communication process (a dimension that writing and print have tended to exclude or suppress), and of evoking passions that were previously suppressed behind a facade of disembodied objectivity. The most dramatic transformation, however, has to do with the new potential for multimedia, non-verbal, non-linear communication.35 In a cyber-world a text on a computer screen may suddenly burst into song, change colour, transform into a 3-D sculptural image, or start to dance, just as messages in oral societies may take many different sensory forms. Can the black letter of the law remain untouched by these transformations?

Although the resemblance between primary and secondary oral cultures is far from total, it is still strong enough to indicate that, when considering the future of legislation in a digital age, it may be more fruitful to look at law-making in pre-modern oral societies than to dwell on its manifestations in the text-bound culture of modernity.

34 Ibid.

35 See Lanham, supra note 24 at 11. The digitization of the word has freed it from the reification to which it was subjected under the regime of print, with the result that

[t]he historical evolution of two-dimensional, static letterforms arranged and fixed in a horizontal string is shifting course. Type is no longer restricted to the characteristics found in the medium of print such as typeface, point size, weight ... Letterforms with behavioral, anthropomorphic and otherwise kinetic characteristics; text that liquifies and flows; three-dimensional structures held together by lines, planes and volumes of text, through which a reader may travel: these are only a few examples of the impact digital technology is having on the once simple, humble letterform.

J. Bellantoni & M. Woolman, Type in Motion: Innovations in Digital Graphics (New York: Rizzoli, 1999) at 9.

In an oral society the law is personal: it is always conveyed by one person to another, and hence never has the depersonalized objective character of a written text. In oral societies law is also customarily shared. While the elders and leaders may have a greater store of legal experience, all members of the community will be familiar with the rules and regulations of their society. Law is "studied" not by reading books or attending university courses, but by a process of oral (and mimetic) instruction that forms an intimate part of daily life and ritual observance. As H.P. Glenn writes, "ideally the important information is learned by all, with the help of many, and all become able to assist in the ongoing process."

Oral societies do not have the means to preserve vast quantities of legal or other information. What knowledge is to be retained by future generations must be relatively simple and memorable. Another key trait of oral law is that it is always current, for its only expression exists in the present. While oral traditions may certainly appear inflexible at times, the absence of written documents of past rule-making increases the potential for adapting customs to respond to contemporary needs. Oral laws are refashioned and presented anew every time they are stated or employed.

Both to ensure their transmission and to make them vital to daily experience, laws are communicated through many different means in oral societies. Thus in oral societies laws are not exclusively oral. In his contribution to this issue, Rod Macdonald suggests that in our own society the fridge door, with its plethora of diverse symbols and messages, might serve as a model for law-making. In oral societies the messages on the fridge door (or whatever form the storehouse takes) provide not only a model for the formulation of legal codes, they are themselves an expression of legal codes, together with the food inside the fridge (storehouse). Laws may be painted in designs that cover house fronts, distilled into perfumes, or cooked into a meal. A flower or an animal, the course of a river, or the patterns of the stars may serve as crucial symbols for the social codes that regulate communal behaviour. Laws may be enacted through songs and dances or through ritual battles. Among the Desana Indians of the Colombian rain forest, for example, the shaman states that his role is to help people observe the laws through all of their senses: "to make one see, and act accordingly", "to make one hear, and act accordingly", "to make one smell, and act accordingly" and so on. When the law is a dance or a ritual meal it becomes something one can touch and taste and incorporate into one's own body as well as see and hear. By dancing out or feasting on the law one both learns it and performs it, in conjunction with other members of one's community. This point may be illustrated by considering the example of how law is acted out among the Witsuwit'en, a First Nations people of the interior of British Columbia. Among the Witsuwit'en, title to land and authority over it are held by particular named, hereditary chiefs on behalf of all the members of a house (or lineage). There is an intrinsic connection between the name of a chief, the songs (or oral histories) and crests associated with that name, and specific territories.

In the event of a succession, in the case of a boundary dispute, or to resolve any other issues, a house will hold a feast. At the feast, in order to validate his title to name and territory alike, the chief will either recite the history of his name and house, or act out his crest. (For example, a chief with "wolf" as his crest would enter the feast hall wearing a wolf mask or a blanket with a wolf design.) Next, the chief will chant the names of all the landmarks demarcating the traditional territory of his house, verbally walking the assembled company around the periphery of his house's territory. This link with the land is the main basis of his authority. Guests, consisting mainly of the chiefs of other houses, pay close attention to all the territorial and status claims made in the recitations or songs, and challenge any claims that they think do not ring true. In this way, the oral history of each house and its title to specific territories is "authenticated" (by being subject to contradiction, as appropriate) each time it is performed.

The vetting of competing histories and the floating of proposals to resolve disputed issues continues until a consensus is reached. The consensus is sealed by the chief's distribution of furs and meat secured on the house's territory to the assembled guests. By receiving these gifts, the guests acknowledge the chief's jurisdiction and accede to his history. They are agreeing literally to eat and wear his words. Finally, the whole gathering is sprinkled with eagle down, symbolizing closure and peace.

It is instructive to consider how these characteristics of law-making in oral societies compare to the eight principles "of legal excellence toward which a system of rules may strive" put forward by Lon Fuller. Fuller's eight principles, briefly stated, are: generality, promulgation, non-retroactivity, clarity, absence of contradiction, feasibility, constancy over time, and congruence. While these principles are not necessarily opposed to the character of law in oral societies, they are not entirely applicable.

36 H.P. Glenn, Legal Traditions of the World: Sustainable Diversity in Law (Oxford: Oxford University Press, 2000) at 59.

37 See Macdonald, supra note 26 at 29-36.

38 Discussed in C. Classen, "Worlds of Sense" in Classen, supra note 29, 121 at 133.

39 See Hibbitts, supra note 28. See further M.F. Guédon, "Dene Ways and the Ethnographer's Culture" in D.E. Young & J.-G. Goulet, eds., Being Changed: The Anthropology of Extraordinary Experience (Peterborough, Ont.: Broadview Press, 1994) 39; J. Ryan, Doing Things the Right Way: Dene Traditional Justice in Lac La Martre, N.W.T. (Calgary: University of Calgary Press & Arctic Institute of North America, 1995).

40 A. Mills, Eagle Down Is Our Law: Witsuwit'en Law, Feasts, and Land Claims (Vancouver: UBC Press, 1994) especially at 43-55.

41 L.L. Fuller, The Morality of Law, rev. ed. (New Haven: Yale University Press, 1969) at 41, 46-91.

Fuller's model presumes a top-down, text-based model of law. It is based on a supposed alienation of law-subject from law-giver that is not unlike the separation of reader from writer in literate societies. The rule requiring the promulgation of laws, for example, is largely meaningless in a society such as the Witsuwit'en where the whole community participates in law-making events. Similarly, the principles promoting the clarity and congruence of laws lose importance when laws are not arcane textual creations to be interpreted and applied by legal specialists but expressions of daily life.

The principles concerning non-contradiction and constancy over time seem likewise to reside in a textual understanding of law. They point to a vision of law as ideally unchanging and therefore fundamentally different in nature from society itself, which is full of contradictions and inconstancies. In oral societies laws do not exist separately from the people who give voice to them or act them out. Variant understandings and presentations of laws need not seem contradictory or inconstant when there is no written text against which to compare them. Similarly, laws are unlikely to be retroactive where there is no reified, text-based understanding of the past. Consistency is important to oral societies, but not consistency within the law itself. What matters is that laws be consistent with general social norms and with the particular situation to which they are being applied.

Fuller's principles may be considered idealized expressions of a classic, textual (print-based) model of legislation. The shift in emphasis and interpretation that occurs when these principles are examined in the context of oral traditions of law-making suggests some of the ways in which conventional Western notions of legislation may change as we enter an age of electronic orality. The ideal of generality may be replaced by one of contextuality, promulgation may take second place to participation, and striving for non-contradiction and constancy over time may be less important than making room for alternative norms and innovative solutions to social problems. Significantly, these new versions of Fuller's principles derived from the basics of oral law are not dissimilar to those formulated by Macdonald using the postmodern model of the montage of signifiers on the fridge door. This reinforces the notion that the social and legal life of postmodernity may resemble that of pre-modernity as much as it does that of modernity.

There are many crucial ways in which the cyber-village differs from the tribal village and presents its own unique social and legal concerns. The cyber-village is, of course, not really a village, just as the global village is not really a village. It can be likened rather to a network of villages with certain common interests and characteristics. This network cuts across national boundaries, as the cyberspace occupied by Internet communities need not correspond to physical space. Here again, however, it seems likely that the global character of the cyber-village would encourage the development of national and international legal systems with the flexibility to deal with cross-border conflicts. The authority of such legal systems may depend less on the threat of physical enforcement than on their ability to engage and persuade, to seduce through the senses, and to make sense within the new social order of the cyber-village.

Unlike oral societies, electronic societies have the means to store vast quantities of detailed information. However, the continual input of new information on the Net will make much of what is stored seem irrelevant and archaic. When the number of publications appears infinite and when texts can be electronically transmuted by readers, the traditional notion of the authority of the printed text will lose much of its influence. There will be an expectation in the postmodern cyber-village that legal knowledge will be accessible, and that it will be both communal and personal, or interactive. As in oral societies, the emphasis will be on conflict resolution that adapts standard laws to existing circumstances and norms.

As information is increasingly presented in non-linear, multisensory forms in cyberspace, such as employing a model of a tree or garden, there will also be a drive to make legal codes appear more dynamic and organic in nature. The rivers and flowers that may serve as natural embodiments of social codes in oral societies may have pseudo-organic counterparts in the virtual reality of cyberspace. Such radically new (from the perspective of late-modern print culture) ways of conceptualizing and presenting the law may well exist only on an unofficial, popular level. Yet, as noted above, if cyber-models and traits become sufficiently popular and influential they might de facto come to supersede more conventional forms of legislation. Indeed, the legislative assemblies of tomorrow may themselves well consist of more sophisticated versions of cyberspaces like Diamond Park, an "extensive, elaborately detailed, fully three-dimensional, mile-square virtual place" which users navigate by means of a stationary bicycle wired to a computer, and where they meet and converse with other (similarly ensconced) users who appear as three-dimensional animated avatars. Or they might resemble a virtual version of the Witsuwit'en eagle down ceremony described above, making use of symbols and songs, and culminating in a digital feast.

42 Though it is not within the scope of this paper to examine the questions of sovereignty and jurisdiction that arise when actions no longer occur within specific geographic locations, a number of authors have considered these complex issues. For a review, see E. Longworth, "The Possibilities of a Legal Framework for Cyberspace-Including a New Zealand Perspective" in T. Fuentes-Camacho, ed., The International Dimensions of Cyberspace Law (Aldershot, U.K.: Ashgate, 2000) 9.

43 Mitchell, supra note 11 at 121-23, quotation at 121.


Artificial intelligence as law

Presidential address to the seventeenth international conference on artificial intelligence and law

  • Review Article
  • Open access
  • Published: 14 May 2020
  • Volume 28, pages 181–206 (2020)


  • Bart Verheij


Information technology is so ubiquitous and AI's progress so inspiring that also legal professionals experience its benefits and have high expectations. At the same time, the powers of AI have been rising so strongly that it is no longer obvious that AI applications (whether in the law or elsewhere) help promote a good society; in fact they are sometimes harmful. Hence many argue that safeguards are needed for AI to be trustworthy, social, responsible, humane, ethical. In short: AI should be good for us. But how to establish proper safeguards for AI? One strong answer readily available is: consider the problems and solutions studied in AI & Law. AI & Law has worked on the design of social, explainable, responsible AI aligned with human values for decades already; AI & Law addresses the hardest problems across the breadth of AI (in reasoning, knowledge, learning and language); and AI & Law inspires new solutions (argumentation, schemes and norms, rules and cases, interpretation). It is argued that the study of AI as Law supports the development of an AI that is good for us, making AI & Law more relevant than ever.


1 Introduction

It is my pleasure to speak to you today Footnote 1 on Artificial Intelligence and Law, a topic that I have loved for so long—and I guess many of you have too—and that today is at the center of attention.

It is not a new thing that technological innovation in the law has attracted a lot of attention. For instance, think of an innovation brought to us by the French 18th century freemason Joseph-Ignace Guillotin: the guillotine. Many people gathered at the Nieuwmarkt, Amsterdam, when it was first used in the Netherlands in 1812 (Fig.  1 , left). The guillotine was thought of as a humane technology, since the machine guaranteed an instant and painless death.

figure 1

Technological innovation in the law in the past (left) and in the future? (right). Left: Guillotine at the Nieuwmarkt in Amsterdam, 1812 (Rijksmuseum RP-P-OB-87.033, anonymous). Right: TV series Futurama, judge 723 ( futurama.fandom.com/wiki/Judge_723 )

And then a contemporary technological innovation that attracts a lot of attention: the self-driving car that can follow basic traffic rules by itself, and so in that sense is an example of normware, an artificial system with embedded norms. In a recent news article, Footnote 2 the story is reported that a drunk driver in Meppel in my province Drenthe in the Netherlands was driving his self-driving car. Well, he was riding his car: the police discovered that he was tailing a truck while sleeping behind the wheel, his car in autopilot mode. His driver's licence was withdrawn.

And indeed technological innovation in AI is spectacular, think only of the automatically translated headline ‘Drunken Meppeler sleeps on the highway’, perhaps not perfect, but enough for understanding what is meant. Innovation in AI is going so fast that many people have become very enthusiastic about what is possible. For instance, a recent news item reports that Estonia is planning to use AI for automatic decision making in the law. Footnote 3 It brings back the old fears for robot judges (Fig.  1 , right).

Contrast here how legal data enters the legal system in France, where it has recently become unlawful to use data to evaluate or predict the behavior of individual judges:

LOI n° 2019-222 du 23 mars 2019 de programmation 2018–2022 et de réforme pour la justice (1), Article 33: Les données d'identité des magistrats et des membres du greffe ne peuvent faire l'objet d'une réutilisation ayant pour objet ou pour effet d'évaluer, d'analyser, de comparer ou de prédire leurs pratiques professionnelles réelles ou supposées. [The identity data of magistrates and members of the registry cannot be reused with the purpose or effect of evaluating, analyzing, comparing or predicting their actual or alleged professional practices.]

The fears are real, as the fake news and privacy disasters that are happening show. Even the big tech companies are considering significant changes, such as a data diet. Footnote 4 But no one knows whether that is because of a concern for people's privacy or out of fear of more regulation hurting their market dominance. Anyway, in China privacy is thought of very differently. Figure  2 shows a car that has been automatically identified and automatically judged to be breaching traffic law—see the red box around it. And indeed, with both a car and pedestrians on the zebra crossing, something is going wrong. Just this weekend the newspaper reported on how the Chinese public thinks of their social scoring system. Footnote 5 It seems that the Chinese emphasise the advantages of the scoring system, as a tool against crime and misbehavior.

figure 2

A car breaching traffic law, automatically identified

Against this background of the benefits and risks of contemporary AI, the AI community in the Netherlands has presented a manifesto Footnote 6 emphasising what is needed: an AI that is aligned with human values and society. In Fig.  3 , key fields of research in AI are listed in rows, and in columns three key challenges are shown: first, AI should be social, and should allow for sensible interaction with humans; second, AI should be explainable, such that black box algorithms trained on data are made transparent by providing justifying explanations; and, third, AI should be responsible, in particular AI should be guided by the rules, norms, laws of society.

figure 3

Artificial Intelligence Grid: foundational areas and multidisciplinary challenges (source: Dutch AI Manifesto, Footnote 6)

Also elsewhere there is more and more awareness of the need for a good, humane AI. For instance, the CLAIRE Confederation of Laboratories for AI Research in Europe Footnote 7 uses the slogan

Excellence across all of AI. For all of Europe. With a Human-Centered Focus.

In other words, this emerging network advertises a strong European AI with social, explainable, responsible AI at its core.

And now a key point for today: AI & Law has been doing this all along. At least since the start of its primary institutions—the biennial conference ICAIL (started in 1987 by IAAIL), Footnote 8 the annual conference JURIX (started in 1988) Footnote 9 and the journal Artificial Intelligence & Law (started in 1992)—we have been working on good AI. In other words, AI & Law has worked on the design of socially aware, explainable, responsible AI for decades already. One can say that what is needed in AI today is to do AI as we do law.

2 Legal technology today

But before explaining how that could go let us look a bit at the current state of legal technology, for things are very different when compared to the start of the field of AI & Law.

For one thing, all branches of government now use legal technology to make information accessible for the public and to provide services as directly and easily as possible. For instance, a Dutch government website Footnote 10 provides access to laws, regulations and treaties valid in the Netherlands. The Dutch public prosecution provides an online knowledge-based system that gives access to fines and punishments for all kinds of offenses. Footnote 11 There you can, for instance, find out what happens when the police catch you with an amount of marihuana between 5 and 30 grams. In the Netherlands, you have to pay 75 euros, and there is a note: the drugs will also be taken away from you. Indeed, in the Netherlands all branches of government have an online presence, as there is a website that gives access to information about the Dutch judicial system, including access to many decisions. Footnote 12
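At its core, such a knowledge-based fines system is a rule lookup: match the offense and its circumstances against a table of rules and return the prescribed sanction. A minimal sketch in Python, using only the marihuana figure mentioned above; the rule table and function are illustrative, not the actual system's interface:

```python
# Minimal sketch of a knowledge-based fines lookup (hypothetical encoding;
# only the marihuana figure from the text is real).

# Each rule maps an offense and a quantity range to a fine and a note.
FINE_RULES = [
    {"offense": "possession of marihuana",
     "min_grams": 5, "max_grams": 30,
     "fine_eur": 75,
     "note": "the drugs will also be taken away"},
]

def look_up_fine(offense, grams):
    """Return (fine, note) for the first matching rule, or None."""
    for rule in FINE_RULES:
        if (rule["offense"] == offense
                and rule["min_grams"] <= grams <= rule["max_grams"]):
            return rule["fine_eur"], rule["note"]
    return None

print(look_up_fine("possession of marihuana", 10))
# (75, 'the drugs will also be taken away')
```

The point of the sketch is how little intelligence is needed once the legal expertise has been codified: the hard work lies in maintaining the rule table, not in applying it.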

An especially good example of successful legal technology is provided by the government's income tax services. Footnote 13 In the Netherlands, filling out your annual tax form has become very simple. The software is good, it is easy to use, and best of all: in these days of big interconnected data, much of what you need to fill in is already filled in for you. Your salary, bank accounts, savings, mortgage interest paid, the value of your house: it is all already there when you log in. In certain cases the tool even leaves room for some mild tax evasion—or tax optimisation if you like—since by playing with some settings a married couple can make sure that one partner has to pay just below the minimal amount that will in fact be collected, which can save about 40 euros.

One might think that such legal tech systems are now normal, but that is far from true. Many countries struggle with developing proper legal tech at the government level. One issue is that the design of complex systems is notoriously hard, and this is already true without very advanced AI.

Also the Netherlands has had its striking failures. A scary example is the Dutch project to streamline the IT support of population registers. One would say a doable project: just databases with names, birth dates, marriages, addresses and the like. The project was a complete failure. Footnote 14 After burning 90 million euros, the responsible minister—by the way, earlier in his career a well-recognized scientist—had to pull the plug. Today all local governments are still using their own systems.

Still, legal tech is booming, and it focuses on many different styles of work. The classification used by the tech index maintained by the CodeX center for legal informatics at Stanford University distinguishes nine categories (Marketplace, Document Automation, Practice Management, Legal Research, Legal Education, Online Dispute Resolution, E-Discovery, Analytics and Compliance). Footnote 15 It currently lists more than 1000 legal tech companies.

And on the internet I found a promising graph about how the market for legal technology will develop. It is already worth a couple of hundred million dollars, but in a few years' time that will have risen to 1.2 billion dollars—according to that particular prediction. I leave it to you to assess what such a prediction really means, but we can be curious and hopeful while following how the market will actually develop.

So legal tech clearly exists; in fact it is widespread. But is it AI, in the sense of AI as discussed at academic conferences? Mostly not: most of what is successful in legal tech is not really AI. But there are examples.

I don't know about you, but I consider the tax system just discussed to be a proper AI system. It has expert knowledge of tax law and it applies that legal expertise to your specific situation. True, this is largely good old-fashioned AI, already scientifically understood in the 1970s, but through its access to relevant databases of the interconnected-big-data kind it certainly has a modern twist. One could even say that the system is grounded in real-world data, and is hence an example of situated AI, in the way that term was used in the 1990s (and perhaps before). But it is clearly not an adaptive machine learning system, as is today expected of AI.

3 AI & law is hard

The reason why much of the successful legal tech is not really AI is simple. AI & Law is hard, very hard. In part this explains why many of us are here in this room. We are brave, we like the hard problems. In AI & Law they cannot be evaded.

figure 4

Nederland ontwapent (The Netherlands disarm). Source: Nationaal Archief, 2.24.01.03, 918-0574 (Joost Evers, Anefo)

Let us look at an example of real law. We go back to the year when I was born, when pacifism was still a relevant political attitude. In that year the Dutch Supreme Court decided that the inscription 'The Netherlands disarm', mounted on a tower (Fig.  4 ), was not an offense. Footnote 16 The court admitted that the sign could indeed be considered a violation of Article 1 of the landscape management regulation of the province of North Holland, but decided that that regulation lacked binding power owing to a conflict with the freedom of speech, as codified in Article 7 of the Dutch constitution.

An example of a hard case. This outcome and its reasoning could not really be predicted, which is one reason why the case is still taught in law schools.

The example can be used to illustrate some of the tough hurdles for the development of AI & Law as they have been recognized from the start; here a list used by Rissland ( 1988 ) when reviewing Anne Gardner’s pioneering book ‘An AI approach to legal reasoning’ (Gardner 1987 ), a revision of her 1984 Stanford dissertation. Footnote 17 I am happy that both are present in this room today.

figure 5

The subsumption model

Legal reasoning is rule-guided, rather than rule-governed. In the example, indeed, both the provincial regulation and the constitution were only guiding, not governing. Their conflict had to be resolved. A wise judge was needed.

Legal terms are open textured. In the example it is quite a stretch to interpret a sign on a tower as an instance of speech in the sense of freedom of speech, but that is what the court did here. It is the old puzzle of legally qualifying the facts, not at all an easy business, not even for humans. With my background in mathematics, I found legal qualification to be a surprisingly and unpleasantly underspecified problem when I took law school exams during my first years as assistant professor in legal informatics in Maastricht, back in the 1990s. Today, computers would still have a very hard time handling open texture.

Legal questions can have more than one answer, but a reasonable and timely answer must be given. I have not checked how quickly the Supreme Court made its decision, probably not very quickly, but the case was settled. The conflict was resolved. A solution that had not yet existed had been created, constructed. The decision changed a small part of the world.

The answers to legal questions can change over time. In the example I am not sure about today's law in this respect; my guess is that freedom of speech is still interpreted as broadly as it was here, and I would not be surprised if it is now interpreted even more broadly. But society has definitely changed since the late 1960s, and I would be surprised to see such a sign in the public environment today.

One way of looking at the hurdles is by saying that the subsumption model is false. According to the subsumption model of law there is a set of laws, thought of as rules, and there are some facts; you arrive at the legal answers, the legal consequences, by applying the rules to the facts (Fig.  5 ). The case facts are subsumed under the rules, providing the legal solution to the case. The model is often associated with Montesquieu's phrase of the judge as a 'bouche de la loi', the mouth of the law, according to which a judge is just the one who makes the law speak.

All hurdles just mentioned show that this perspective cannot be true. Rules are only guiding, terms are open-textured, there can be more answers, and things can change.
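The subsumption model itself is easy to mechanize, which is exactly what makes its failure instructive. A minimal forward-chaining sketch in Python, where the two rules are hypothetical paraphrases of the provisions in the disarm case discussed above:

```python
# Naive subsumption: apply every rule whose conditions hold in the facts.
# Rule encodings are illustrative paraphrases, not the actual legal texts.

RULES = [
    # Art. 1 of the provincial landscape regulation (paraphrased):
    # signs mounted on structures in the protected landscape are an offense.
    ({"sign_mounted_on_structure", "in_protected_landscape"}, "offense"),
    # Art. 7 of the Dutch constitution (paraphrased):
    # expressions of opinion are protected speech, hence no offense.
    ({"sign_mounted_on_structure", "expresses_opinion"}, "no_offense"),
]

def subsume(facts):
    """Return every conclusion whose conditions are subsumed by the facts."""
    return [conclusion for conditions, conclusion in RULES
            if conditions <= facts]  # set inclusion: all conditions hold

facts = {"sign_mounted_on_structure", "in_protected_landscape",
         "expresses_opinion"}
print(subsume(facts))
# ['offense', 'no_offense']
```

Both rules fire on the same facts, so naive subsumption yields two contradictory conclusions. Choosing between them, here by letting the constitution prevail over the provincial regulation, is precisely the step the subsumption model leaves out.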

figure 6

The theory construction model (Verheij 2003a , 2005 )

Hence an alternative perspective on what happens when a case is decided. Legal decision making is a process of constructing and testing a theory, a series of hypotheses that are gradually developed and tested in a critical discussion (Fig.  6 ). The figure suggests an initial version of the facts, an initial version of the relevant rules, and an initial version of the legal conclusions. Gradually the initial hypothesis is adapted. Think of what happens in court proceedings, and in what in the Netherlands is called the 'raadkamer', the internal discussion among judges, where after a careful, constructive, critical discussion—if the judges get the time for that, of course—finally a tried and tested perspective on the case is arrived at, showing the final legal conclusions subsuming the final facts under the final rules. This is the picture I used in the 2003 AI & Law special issue of the AI journal, edited by Edwina Rissland, Kevin Ashley, and Ronald Loui, two of them here in this room. A later version with Floris Bex emphasises that the perspective on the evidence and how it supports the facts is also gradually constructed (Bex and Verheij 2012 ). In our field, the idea of theory construction in the law has for instance been emphasised by McCarty ( 1997 ), Hafner and Berman ( 2002 ), Gordon ( 1995 ), Bench-Capon and Sartor ( 2003 ) and Hage et al. ( 1993 ).

4 AI as law

Today’s claim is that good AI requires a different way of doing AI, a way that we in the field of AI & Law have been doing all along, namely doing AI in a way that meets the requirements of the law, in fact in a way that models how things are done in the law. Let us discuss this perspective a bit further.

There can be many metaphors for what AI is and how it should be done, as follows.

AI as mathematics, where the focus is on formal systems;

AI as technology, where the focus is on the art of system design;

AI as psychology, where the focus is on intelligent minds;

AI as sociology, where the focus is on societies of agents.

And then AI as law, to which we return in a minute (Table  1 ).

In AI as mathematics, one can think of the logical and probabilistic foundations of AI, of core importance since the start and still now. It is said that the name-giver of the field of AI—John McCarthy—thought of the foundations of AI as an instance of logic, and logic alone. In contrast, today some consider AI to be a kind of statistics 2.0 or 3.0.

In AI as technology, one can think of meticulously crafted rule-based expert systems or of machine learning algorithms evaluated on large, carefully labeled data sets. Here AI applications and AI research meet most directly.

In AI as psychology, one can think of the modeling of human brains as in cognitive modeling, or of the smart human-like algorithms that are sometimes referred to as cognitive computing.

In AI as sociology, one can think of multi-agent systems simulating a society and of autonomous robots that fly in flocks.

Perhaps you have recognized the list of metaphors as the ones used by Toulmin ( 1958 ) when he discussed what he thought of as a crisis in the formal analysis of human reasoning. He argued that the classical formal logic then fashionable was largely irrelevant to how reasoning actually works, and he arrived at a perspective of logic as law. Footnote 18 What he meant was that counterarguments must be considered, that the rules warranting argumentative steps are material—and not only formal—, that these rules are backed by factual circumstances, that conclusions are often qualified, uncertain, presumptive, and that reasoning and argument are to be thought of as the outcome of debates among individuals and in groups (see also Hitchcock and Verheij 2006; Verheij 2009). All of these ideas emphasised by Toulmin have now been studied extensively, with the field of AI & Law having played a significant role in the developments. Footnote 19

The metaphors can also be applied to the law, exposing some key ideas familiar in law.

If we think of law as mathematics, the focus is on the formality of procedural rule following and of stare decisis where things are well-defined and there is little room for freedom.

In law as technology, one can think of the art of doing law in a jurisdiction with either a focus on rules, as in civil law systems, or with a focus on cases, as in common law systems.

In law as psychology, one can think of the judicial reasoning by an individual judge, and of the judicial discretion that is to some extent allowed, even wanted.

In law as sociology, the role of critical discussion springs to mind, as does that of regulating a society so as to provide order and prevent chaos.

And finally there is the somewhat pleonastic metaphor of law as law, now meaning law in contrast with the other metaphors. I think of two specific and essential ideas in the law: that government is to be bound by the rule of law, and that the goal of law is to arrive at justice, thereby supporting a good society and a good life for its citizens.

Note how this discussion shows the typically legal, hybrid balancing of different sides: rules and cases, regulations and decisions, rationality and interpretation, individual and society, boundedness and justice. And, as we know, this balancing takes place best in a constructive critical discussion.

Which brings us to the bottom line of the list of AI metaphors (Table 1).

5. AI as law, where the focus is on hybrid critical discussion.

In AI as law, AI systems are to be thought of as hybrid critical discussion systems, where different hypothetical perspectives are constructed and evaluated until a good answer is found.

Figure 7. Bridging the gap between knowledge and data systems in AI (Verheij 2018)

In this connection, I recently explained what I think is needed in AI (Fig. 7): the much needed step towards hybrid systems that connect knowledge representation and reasoning techniques with the powers of machine learning. In that diagram I used the term argumentation systems. But since argumentation has a very specific sound in this community, and perhaps to some feels like a too specific, too limiting perspective, today I speak of AI as Law in the sense of the development of hybrid critical discussion systems.

5 Topics in AI

Let me continue with a discussion of core topics in AI with the AI as Law perspective in mind. My focus is on reasoning, knowledge, learning and language.

5.1 Reasoning

First, reasoning. Here I indeed think of argumentation, where arguments and counterarguments meet (van Eemeren et al. 2014; Atkinson et al. 2017; Baroni et al. 2018). This is connected to the idea of defeasibility: an argument becomes defeated when attacked by a stronger counterargument. Argumentation has been used to address the deep and old puzzles of inconsistency, incomplete information and uncertainty.

Here is an example argument about the Dutch bike owner Mary whose bike is stolen (Fig.  8 ). The bike is bought by John, hence both have a claim to ownership—Mary as the original owner, John as the buyer. But in this case the conflict can be resolved as John bought the bike for the low price of 20 euros, indicating that he was not a bona fide buyer. At such a price, he could have known that the bike was stolen, hence he has no claim to ownership as the buyer, and Mary is the owner.

Figure 8. Argumentation

It is one achievement of the field of AI & Law that the logic of argumentation is by now well understood—so well that it can be implemented in argument diagramming software, for instance the ArguMed software that I implemented long ago during my postdoc period in the Maastricht law school (Verheij 2003a, 2005). Footnote 20 It implements argumentation semantics of the stable kind in the sense of Dung’s abstract argumentation, proposed some 25 years ago (Dung 1995)—a turning point and a cornerstone in today’s understanding of argumentation, with many successes. Abstract argumentation also brought new puzzles, such as the lack of standardization, which led to all kinds of detailed comparative formal studies, and, more fundamentally, the puzzle of multiple formal semantics. The stable, preferred, grounded and complete semantics were the four proposed by Dung (1995), quickly extended to six when the labeling-based stage and semi-stable semantics were proposed (Verheij 1996). But that was only the start, because the field of computational argumentation was then still only emerging.
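As an illustrative aside (a generic sketch of Dung's grounded semantics, not the ArguMed or DefLog machinery itself), the grounded extension can be computed in a few lines by iterating the characteristic function; the argument names below are hypothetical stand-ins for the stolen-bike example of Fig. 8.

```python
def grounded_extension(args, attacks):
    """Grounded extension of an abstract argumentation framework
    (Dung 1995): iterate the characteristic function from the empty
    set, each round keeping every argument all of whose attackers
    are themselves attacked by the current set."""
    extension = set()
    while True:
        defended = {
            a for a in args
            if all(any((d, b) in attacks for d in extension)
                   for b in args if (b, a) in attacks)
        }
        if defended == extension:
            return extension
        extension = defended

# Stolen-bike example: Mary's ownership claim (mary) is attacked by
# John's buyer claim (john), which is attacked in turn by the
# not-bona-fide argument (price: 20 euros is suspiciously low).
args = {"mary", "john", "price"}
attacks = {("john", "mary"), ("price", "john")}
print(sorted(grounded_extension(args, attacks)))  # ['mary', 'price']
```

The attack on John's claim reinstates Mary's, exactly the reinstatement pattern discussed above.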

For me, it was obvious that a different approach was needed when I discovered that, after combining attack and support, 11 different semantics were formally possible (Verheij 2003b), almost all of them hardly relevant in practice. No lawyer has to think about whether the applicable argumentation semantics is the semi-stable or the stage semantics.

One puzzle in the field is the following, included here after a discussion on the plane from Amsterdam to Montreal with Trevor Bench-Capon and Henry Prakken. A key idea underlying the original abstract argumentation paper is that derivation-like arguments can be abstracted from, allowing one to focus only on attack. I know that for many this idea has helped them in their work and understanding of argumentation. For me, from rather early on, it was more a distraction than an advantage, as it introduced a separate, seemingly spurious layer. As my PhD supervisor Jaap Hage put it: ‘those cloudy formal structures of yours’—and Jaap referred to abstract graphs in the sense of Dung—have no grounding in how lawyers think. There is no separate category of supporting arguments to be abstracted from before considering attack; instead, in the law there are only reasons for and against conclusions that must be balanced. Those were the days when Jaap Hage was working on Reason-Based Logic (1997) and I was helping him (Verheij et al. 1998). In a sense, the ArguMed software based on the DefLog formalism was my answer to removing that redundant intermediate layer (still present in its precursor, the Argue! system), while sticking to the important mathematical analysis of reinstatement uncovered by Dung (see Verheij 2003a, 2005). For background on the puzzle of combining support and attack, see van Eemeren et al. (2014, Sect. 11.5.5).

But, as I said, from around the turn of the millennium I thought a new mathematical foundation was called for, and it took me years to arrive at something that really increased my understanding of argumentation: the case model formalism (Verheij 2017a, b). But that is not for now.

5.2 Knowledge

The second topic of AI to be discussed is knowledge, so prominent in AI and in law. I then think of material, semi-formal argumentation schemes such as the witness testimony scheme, or the scheme for practical reasoning, as for instance collected in the nice volume by Walton et al. ( 2008 ).

I also think of norms, in our community often studied with a Hohfeldian or deontic logic perspective on rights and obligations as a background. Footnote 21 And then there are the ontologies that can capture large amounts of knowledge in a systematic way. Footnote 22

One lesson that I have taken home from working in the domain of law—and again, do not forget that I started in the field of mathematics, where things are thought of as neat and clean—is that in the world of law things are always more complex than you think. One could say that it is the business of law to find exactly the right level of complexity, and that is often just a bit more complex than one’s initial idea. And if things are not complex yet, they can become so tomorrow. Remember the dynamics of theory construction that we saw earlier (Fig. 6).

Figure 9. Types of juristic facts (left); tree of individuals (right) (Hage and Verheij 1999)

Figure 9 (left) shows how in the law different categories of juristic facts are distinguished. Juristic facts are the kinds of facts that are legally relevant, that have legal consequences. They come in two kinds: acts with legal consequences, and bare juristic facts, the latter being intentionless events, such as being born, that still have legal consequences. Acts with legal consequences are divided into, on the one hand, juristic acts aimed at a legal consequence (such as contracting) and, on the other, factual acts, where although there is no legal intention, there are still legal consequences. Here the primary example is that of unlawful acts, as discussed in tort law. I am still happy that I learnt this categorization of juristic facts in the Maastricht law school, as it has relevantly expanded my understanding of how things work in the world—and of how things should be done in AI. Definitely not purely logically or purely statistically; definitely with much attention for the specifics of a situation.

Figure 10. Signing a sales contract (Hage and Verheij 1999)

Figure 9 (right) shows another categorization, prepared with Jaap Hage, of the core categories of things, or ‘individuals’, that should be distinguished when analyzing the law: states of affairs, events, rules and other individuals, with the subcategories of event occurrences, rule validities and other states of affairs. And although such a categorization has a hint of the baroqueness of Jorge Luis Borges’ animal taxonomy (which included the animals that belong to the emperor, mermaids and innumerable animals), the abstract core ontology helped us to analyze the relations between events, rules and states of affairs that play a role when signing a contract (Fig. 10). It is indeed a complex picture at first sight. For now it suffices that in the top row there is the physical act of signing—say, the pen going over the paper—and this physical act counts as engaging in a contractual bond (second row), which implies the undertaking of an obligation (third row), which in turn leads to a duty to perform an action (bottom row). Not a simple picture, but, as said, in the law things are often more complex than expected, and typically for good, pragmatic reasons.
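The chain from physical act to duty can be sketched as simple forward chaining over counts-as style rules. The rule and state names below are hypothetical labels for the steps in Fig. 10, not the formalism of Hage and Verheij (1999).

```python
# Hypothetical labels for the chain in Fig. 10: signing counts as
# engaging in a contractual bond, which implies undertaking an
# obligation, which leads to a duty to perform.
RULES = [
    ("signing", "contractual_bond"),
    ("contractual_bond", "obligation"),
    ("obligation", "duty_to_perform"),
]

def legal_consequences(fact, rules):
    """Forward-chain: collect every state of affairs derivable from
    the initial fact via the given rules."""
    derived = {fact}
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in rules:
            if antecedent in derived and consequent not in derived:
                derived.add(consequent)
                changed = True
    return derived

print(sorted(legal_consequences("signing", RULES)))
# ['contractual_bond', 'duty_to_perform', 'obligation', 'signing']
```

The sketch captures only the chaining, not the interplay of events, rule validities and states of affairs that the full picture distinguishes.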

The core puzzle, for our field and for AI generally, that I would like to mention is that of commonsense knowledge. This remains an essential puzzle, also in these days of big data and of cognitive computing. Machines simply do not have commonsense knowledge that is anywhere near good enough. A knowledgeable report in the Communications of the ACM explains that progress has been slow (Davis and Marcus 2015). It dates from 2015, but please do not believe it when it is suggested that things are very different today. The commonsense knowledge problem remains a relevant and important research challenge, and I hope to see more of the big knowledge needed for serious AI & Law in the future. Only brave people have a chance of making real progress here—like the people in this room.

One example of what I think is an as yet underestimated cornerstone of commonsense knowledge is the role of globally coherent knowledge structures—such as the scenarios and cases we encounter in the law. Our current program chair Floris Bex took relevant steps to investigate scenario schemes and how they are hierarchically related, in the context of murder stories and crime investigation (Bex 2011 ). Footnote 23 Our field would benefit from more work like this, that goes back to the frames and scripts studied by people such as Roger Schank and Marvin Minsky.

My current favorite kind of knowledge representation uses the case models mentioned before. It has for instance been used to represent how an appellate court gradually constructs its hypotheses about a murder case on the basis of the evidence, gradually testing and selecting which scenario of what has happened to believe or not (Verheij 2019), and also to represent the temporal development of the relevance of past decisions in terms of the values they promote and demote (Verheij 2016).

5.3 Learning

Then we come to the topic of learning. It is the domain of the statistical analysis showing that certain judges are more prone to supporting democrat positions than others—which, as we saw, is no longer allowed in France. It is the domain of open data, which allows public access to legal sources and in which our community has been very active (Biagioli et al. 2005; Francesconi and Passerini 2007; Francesconi et al. 2010a, b; Sartor et al. 2011; Athan et al. 2013). And it is the realm of neural networks, in the early days called perceptrons, now referred to as deep learning.

The core theme to be discussed here is the issue of how learning and the justification of outcomes go together, using a contemporary term: how to arrive at an explainable AI, an explainable machine learning. We have heard it discussed at all career levels, by young PhD students and by a Turing award winner.

The issue can be illustrated by a mock prediction machine for Dutch criminal courts. Imagine a button that, once you push it, always gives the outcome that the suspect is guilty as charged. And thinking of the need to evaluate systems (Conrad and Zeleznikow 2015), this system has indeed been validated by the Dutch Central Bureau of Statistics, whose data show that this prediction machine is correct in 91 out of 100 cases (Fig. 11). The validating data show that the imaginary prediction machine has become a bit less accurate in recent years, presumably through changes in society, perhaps in part caused by the attention in the Netherlands for so-called dubious cases, or miscarriages of justice, which may have made judges a little more reluctant to decide for guilt. But still: 91% for this very simple machine is quite good. And, as you know, all this says very little about how to decide for guilt or not.

Figure 11. Convictions in criminal cases in the Netherlands; source: Central Bureau of Statistics (www.cbs.nl), data collection of September 11, 2017
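The mock predictor amounts to a constant classifier whose accuracy is simply the conviction base rate. A minimal sketch, with illustrative counts rather than the actual CBS data:

```python
def constant_guilty_accuracy(n_guilty, n_not_guilty):
    """Accuracy of the button that always predicts 'guilty': just
    the fraction of convictions among decided cases."""
    return n_guilty / (n_guilty + n_not_guilty)

# Illustrative counts only: 91 convictions in 100 decided cases.
print(constant_guilty_accuracy(91, 9))  # 0.91
```

High accuracy, yet the classifier contains no legal reasoning whatsoever, which is exactly the point.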

How hard judicial prediction really is, also when using serious machine learning techniques, is shown by some recent examples. Katz et al. (2017) report that their US Supreme Court prediction machine achieved 70% accuracy. That is a mild improvement over the baseline of the historical majority outcome (always affirm the previous decision), which is 60%, and an even milder improvement over the 10-year majority outcome, which is 67%. The system based its predictions on features such as judge identity, month, court of origin and issue, so modest results are not surprising.

In another study, Aletras and colleagues (2016) studied European Court of Human Rights cases. They used n-grams and topics as the starting point of their training, and prepared the dataset so as to obtain a cleaner baseline of 50% accuracy by random guessing. They reached 79% accuracy using the whole text, and noted that using only the part where the factual circumstances are described already yields an accuracy of 73%.

Naively taking the ratios of 70 over 60 and of 79 over 50, one sees that improvement factors of 1.2 and 1.6 are relevant research outcomes, but practically modest. More importantly, these systems focus only on the outcome, without saying anything about how to arrive at an outcome, or about the reasons for which an outcome is warranted or not.
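The naive ratios can be made explicit, with the accuracies as reported in the two studies:

```python
def improvement_factor(model_accuracy, baseline_accuracy):
    """Naive ratio of model accuracy over its baseline."""
    return model_accuracy / baseline_accuracy

print(round(improvement_factor(0.70, 0.60), 1))  # 1.2 (Katz et al. 2017)
print(round(improvement_factor(0.79, 0.50), 1))  # 1.6 (Aletras et al. 2016)
```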

And indeed, as said before, learning is hard, especially in the domain of law. Footnote 24 I am still a fan of an old paper by Trevor Bench-Capon on neural networks and open texture (Bench-Capon 1993). In an artificially constructed example about welfare benefits, he included different kinds of constraints: boolean, categorical, numeric. For instance, women were allowed the benefit from age 60, men from 65. Trevor found that, after training, the neural network could achieve a high overall performance, but with somewhat surprising underlying rationales. In Fig. 12, on the left, one can see that the age condition starts to be relevant long before the ages of 60 and 65, and that the difference between the genders is something like 15 years instead of 5. On the right, with a more focused training set using cases with only single failing conditions, the relevance starts a bit later, but still too early, while the gender difference now indeed is 5 years.

Figure 12. Neural networks and open texture (Bench-Capon 1993)
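A small sketch in the spirit of that experiment (synthetic data and a plain perceptron, not Bench-Capon's original network or dataset): because the eligibility rule is linearly separable, the learner can reproduce every training label perfectly, yet the threshold it learns need not coincide with the legal boundary of 60/65 at all.

```python
def train_perceptron(data, lr=0.1, max_epochs=20000):
    """Plain perceptron on features [age/100, is_female, bias]; the
    data is linearly separable, so training stops at zero errors."""
    w = [0.0, 0.0, 0.0]
    for _ in range(max_epochs):
        errors = 0
        for age, female, label in data:
            x = [age / 100.0, float(female), 1.0]
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else 0
            if pred != label:
                errors += 1
                for i in range(3):
                    w[i] += lr * (label - pred) * x[i]
        if errors == 0:  # converged on the training set
            break
    return w

def predict(w, age, female):
    x = [age / 100.0, float(female), 1.0]
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else 0

# Synthetic welfare-benefit data: women eligible from 60, men from 65.
data = [(age, f, int(age >= (60 if f else 65)))
        for age in range(20, 91, 5) for f in (0, 1)]
w = train_perceptron(data)
print(all(predict(w, a, f) == y for a, f, y in data))  # True
```

Perfect training accuracy, but the learned weights define some separating line in the gap between the observed ages, not the statutory thresholds themselves: high performance with a surprising underlying rationale, just as in the paper.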

What I have placed my bets on is the kind of hybrid rules-and-cases systems that are normal for us in AI & Law. Footnote 25 I now represent Dutch tort law in terms of case models validating rule-based arguments (Verheij 2017b) (cf. Fig. 13 below).

5.4 Language

Then language, the fourth and final topic of AI that I would like to discuss with you. Today the topic of language is closely connected to machine learning. I think of the labeling of natural language data to allow for training; I think of prediction such as by a search engine or chat application on a smartphone, and I think of argument mining, a relevant topic with strong roots in the field of AI & Law.

The study of natural language in AI, and in fact AI itself, got a significant boost from IBM’s Watson system, which won the Jeopardy! quiz show. For instance, Watson correctly recognized the description of ‘a 2-word phrase [that] means the power to take private property for public use’. That description refers to the typically legal concept of eminent domain, the situation in which a government expropriates property for public purposes, such as the construction of a highway or a windmill park. Watson’s output showed that the legal concept scored 98%, but ‘electric company’ and ‘capitalist economy’ were also considered, with scores of 9% and 5%, respectively. Apparently Watson sees some kind of overlap between the legal concept of eminent domain, electric companies and capitalist economy, since 98 + 9 + 5 is more than 100 percent.

And IBM continued, as Watson became the basis for its debating technologies. In a 2014 demonstration, Footnote 26 the system considers the sale of violent video games to minors. The video shows that the system finds reasons for and against banning the sale of such games to minors—for instance, that most children who play violent games do not have problems, but that violent video games can increase children’s aggression. The video remains impressive, and for the field of computational argumentation, of which I am a member, it was somewhat discomforting that the researchers behind this system were then outsiders to the field.

The success of these natural language systems leads one to think about why they can do what they do. Do they really understand a complex sentence describing the legal concept of eminent domain? Can they really digest newspaper articles and other online resources on violent video games?

These questions are especially relevant since in our field of AI & Law we have had the opportunity to follow research on argument mining from the start. Early and relevant research is by Raquel Mochales Palau and Sien Moens, who studied argument mining in a paper at the 2009 ICAIL conference (2009, 2011). As already shown in that paper, argument mining should not be considered an easy task. The field has been making relevant and interesting progress, as also shown in research presented at this conference, but no one would claim the kind of natural language understanding needed for interpreting legal concepts or online debates. Footnote 27

So what, then, is the basis of the apparent success? Is it simply that a big tech company can make a research investment that in academia one can only dream of? Certainly that is part of what has been going on. But there is more to it, as can be appreciated from a small experiment I did—this time an actually implemented online system. It is what I ironically called Poor Man’s Watson, Footnote 28 programmed without much deep natural language technology: just some simple regular expression scripts using online access to the Google search engine and Wikipedia. And indeed it turns out that this simple script can also recognize the concept of eminent domain: when one types ‘the power to take private property for public use’, the answer is ‘eminent domain’. The explanation for this remarkable result is that for some descriptions the correct Wikipedia page ends up high in the list of pages returned by Google, and that happens because we—the people—have been typing good descriptions of those concepts into Wikipedia, and Google can find those pages. Sometimes the results are spectacular, but they are also brittle, since seemingly small, irrelevant changes can quickly break this simple system.
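To make the point concrete, here is an offline caricature of that idea (my own illustrative stand-in, not the actual Poor Man's Watson script, which queried Google and Wikipedia): crude word overlap against human-written summaries already picks out the right concept.

```python
import re

# A tiny local stand-in for Wikipedia: human-written summaries.
PAGES = {
    "Eminent domain": "the power of a government to take private "
                      "property for public use",
    "Electric company": "a company that generates or distributes "
                        "electric power",
    "Capitalist economy": "an economy based on private ownership of "
                          "the means of production",
}

def best_page(query, pages):
    """Return the page whose summary shares the most words with the
    query: no parsing, no semantics, just word overlap."""
    query_words = set(re.findall(r"\w+", query.lower()))
    def overlap(item):
        _, text = item
        return len(query_words & set(re.findall(r"\w+", text.lower())))
    return max(pages.items(), key=overlap)[0]

print(best_page("the power to take private property for public use", PAGES))
# Eminent domain
```

The match succeeds only because a human wrote a good summary; rephrase the query or the summary and the overlap counts shift, which is exactly the brittleness noted above.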

And for the debating technology something similar holds since there are web sites collecting pros and cons of societal debates. For instance, the web site procon.org has a page on the pros and cons of violent video games. Footnote 29 Arguments it has collected include ‘Pro 1: Playing violent video games causes more aggression, bullying, and fighting’ and ‘Con 1: Sales of violent video games have significantly increased while violent juvenile crime rates have significantly decreased’. The web site Kialo has similar collaboratively created lists. Footnote 30 Concerning the issue ‘Violent video games should be banned to curb school shootings’, it lists for instance the pro ‘Video games normalize violence, especially in the eyes of kids, and affect how they see and interact with the world’ and the con ‘School shootings are, primarily, the result of other factors that should be dealt with instead’.

Surely the existence of such lists, typed in, in a structured way, by humans, is a central basis for what debating technology can and cannot do. It is not a coincidence that—listening carefully to the reports—the examples used in marketing concern curated lists of topics. At the same time, this does not take away from the bravery of IBM and how strongly it has stimulated the field of AI with its successful demos. That things are sometimes hard even for IBM is shown by the report from February 2019, when IBM’s technology entered into a debate with a human debater, and this time lost. Footnote 31 But who knows what the future brings.

What I believe is needed is the development of an ever closer connection between complex knowledge representations and natural language explanations, as for instance in work by Charlotte Vlek on explaining Bayesian Networks (Vlek et al. 2016 ), which had nice connections to the work discussed by Jeroen Keppens yesterday ( 2019 ).

6 Conclusion

As I said, I think the way to go for the field is to develop an AI that is much like the law: an AI in which systems are hybrid critical discussion systems.

For after the phases of AI as mathematics, as technology, as psychology and as sociology—all still important and relevant—an AI as Law perspective provides fresh ideas for designing an AI that is good (Table 1). And in order to build the hybrid critical discussion systems that I think are needed, lots of work is waiting in reasoning, knowledge, learning and language, as follows.

For reasoning (Sect. 5.1), the study of formal and computational argumentation remains relevant and promising, while work is needed to arrive at a formal semantics that is accessible beyond a small group of experts.

For knowledge (Sect.  5.2 ), we need to continue working on knowledge bases large and small, and on systems with embedded norms. But I hope that some of us are also brave enough to be looking for new ways to arrive at good commonsense knowledge for machines. In the law we cannot do without wise commonsense.

For learning (Sect. 5.3), the integration of knowledge and data can be addressed by looking at how in the law rules and cases are connected and influence one another. Only then can the requirements of explainability and responsibility be properly addressed.

For language (Sect. 5.4), work is needed on the interpretation of what is said in a text. This requires an understanding in terms of complex, detailed models of a situation, as in any court of law, where every word can make a relevant difference.

Lots of work to do. Lots of high mountains to conquer.

The perspective of AI as Law discussed here today can be regarded as an attempt to broaden what I said in the lecture ‘Arguments for good AI’, where the focus is mostly on computational argumentation (Verheij 2018). There I explain that we need a good AI that can give good answers to our questions, give good reasons for them, and make good choices. I projected that by 2025 we will have arrived at a new kind of AI system bridging knowledge and data, namely argumentation systems (Fig. 7). Clearly, and as I tried to explain today, there is still plenty of work to be done. I expect that a key role will be played by work in our field on the connections between rules, cases and arguments, as in the set of cases formalizing tort law (Fig. 13, left) that formally validates the legally relevant rule-based arguments (Fig. 13, right).

Figure 13. Arguments, rules and cases for Dutch tort law (Verheij 2017b)

By following the path of developing AI as Law, we can guard against technology that is bad for us, and build technology that—unlike the guillotine I started with—is really humane and directly benefits society and its citizens.

In conclusion, in these days of dreams and fears of AI and algorithms, our beloved field of AI & Law is more relevant than ever. We can be proud that AI & Law has worked on the design of socially aware, explainable, responsible AI for decades already.

And since we in AI & Law are used to addressing the hardest problems across the breadth of AI (reasoning, knowledge, learning, language)—since in fact we cannot avoid them—our field can inspire new solutions. In particular, I discussed computational argumentation, schemes for arguments and scenarios, encoded norms, hybrid rule-case systems and computational interpretation.

We only need to look at what happens in the law. In the law, we see an artificial system that adds much value to our life. Let us take inspiration from the law, and let us work on building Artificial Intelligence that is not scary, but that genuinely contributes to a good quality of life in a just society. I am happy and proud to be a member of this brave and smart community and I thank you for your attention.

This text is an adapted version of the IAAIL presidential address delivered at the 17th International Conference on Artificial Intelligence and Law (ICAIL 2019) in Montreal, Canada (Cyberjustice Lab, University of Montreal, June 19, 2019).

‘Beschonken Meppeler rijdt slapend over de snelweg’ (automatic translation: ‘Drunken Meppeler sleeps on the highway’), RTV Drenthe, May 17, 2019.

‘Can AI be a fair judge in court? Estonia thinks so’, Wired, March 25, 2019 (Eric Miller).

‘Het nieuwe datadieet van Google en Facebook’ (automatic translation: ‘The new data diet from Google and Facebook’), nrc.nl , May 11, 2019.

‘Zo stuurt en controleert China zijn burgers’ (automatic translation: ‘This is how China directs and controls its citizens’), nrc.nl , June 14, 2019.

bnvki.org/wp-content/uploads/2018/09/Dutch-AI-Manifesto.pdf .

claire-ai.org .

iaail.org .

wetten.overheid.nl .

www.om.nl/onderwerpen/boetebase .

uitspraken.rechtspraak.nl .

www.belastingdienst.nl .

‘ICT-project basisregistratie totaal mislukt’ (automatic translation: ‘IT project basic registration totally failed’), nrc.nl , July 17, 2017.

techindex.law.stanford.edu .

Supreme Court The Netherlands, January 24, 1967: Nederland ontwapent (The Netherlands disarm).

For more on the complexity of AI & Law, see for instance (Rissland 1983 ; Sergot et al. 1986 ; Bench-Capon et al. 1987 , 2012 ; Rissland and Ashley 1987 ; Oskamp et al. 1989 ; Ashley 1990 , 2017 ; van den Herik 1991 ; Berman and Hafner 1995 ; Loui and Norman 1995 ; Bench-Capon and Sartor 2003 ; Sartor 2005 ; Zurek and Araszkiewicz 2013 ; Lauritsen 2015 ).

Toulmin ( 1958 ) speaks of logic as mathematics, as technology, as psychology, as sociology and as law (jurisprudence).

See for instance the research by Prakken ( 1997 ), Sartor ( 2005 ), Gordon ( 1995 ), Bench-Capon ( 2003 ) and Atkinson and Bench-Capon ( 2006 ). Argumentation research in AI & Law is connected to the wider study of formal and computational argumentation, see for instance (Simari and Loui 1992 ; Pollock 1995 ; Vreeswijk 1997 ; Chesñevar et al. 2000 ). See also the handbooks (Baroni et al. 2018 ; van Eemeren et al. 2014 ).

For some other examples, see (Gordon et al. 2007 ; Loui et al. 1997 ; Kirschner et al. 2003 ; Reed and Rowe 2004 ; Scheuer et al. 2010 ; Lodder and Zeleznikow 2005 ).

See for instance (Sartor 2005 ; Gabbay et al. 2013 ; Governatori and Rotolo 2010 ).

See for instance (McCarty 1989 ; Valente 1995 ; van Kralingen 1995 ; Visser 1995 ; Visser and Bench-Capon 1998 ; Hage and Verheij 1999 ; Boer et al. 2002 , 2003 ; Breuker et al. 2004 ; Hoekstra et al. 2007 ; Wyner 2008 ; Casanovas et al. 2016 ).

For more work on evidence in AI & Law, see for instance (Keppens and Schafer 2006 ; Bex et al. 2010 ; Keppens 2012 ; Fenton et al. 2013 ; Vlek et al. 2014 ; Di Bello and Verheij 2018 ).

See also recently (Medvedeva et al. 2019 ).

See for instance work by (Branting 1991 ; Skalak and Rissland 1992 ; Branting 1993 ; Prakken and Sartor 1996 , 1998 ; Stranieri et al. 1999 ; Roth 2003 ; Brüninghaus and Ashley 2003 ; Atkinson and Bench-Capon 2006 ; Čyras et al. 2016 ).

Milken Institute Global Conference 2014, session ‘Why Tomorrow Won’t Look Like Today: Things that Will Blow Your Mind’, youtu.be/6fJOtAzICzw?t=2725 .

See for instance (Schweighofer et al. 2001 ; Wyner et al. 2009 , 2010 ; Grabmair and Ashley 2011 ; Ashley and Walker 2013 ; Grabmair et al. 2015 ; Tran et al. 2020 ).

Poor Man’s Watson, www.ai.rug.nl/~verheij/pmw .

videogames.procon.org .

kialo.com .

‘IBM’s AI loses debate to a human, but it’s got worlds to conquer’, cnet.com , February 11, 2019.

Aletras N, Tsarapatsanis D, Preoţiuc-Pietro D, Lampos V (2016) Predicting judicial decisions of the European Court of Human Rights: a natural language processing perspective. PeerJ Comput Sci 2:1–19. https://doi.org/10.7717/peerj-cs.93


Ashley KD (1990) Modeling legal arguments: reasoning with cases and hypotheticals. The MIT Press, Cambridge

Ashley KD (2017) Artificial intelligence and legal analytics: new tools for law practice in the digital age. Cambridge University Press, Cambridge

Ashley KD, Walker VR (2013) Toward constructing evidence-based legal arguments using legal decision documents and machine learning. In: Proceedings of the fourteenth international conference on artificial intelligence and law, pp 176–180. ACM, New York (New York)

Athan T, Boley H, Governatori G, Palmirani M, Paschke A, Wyner A (2013) OASIS LegalRuleML. In: Proceedings of the 14th international conference on artificial intelligence and law (ICAIL 2013), pp 3–12. ACM Press, New York (New York)

Atkinson K, Bench-Capon TJM (2006) Legal case-based reasoning as practical reasoning. Artif Intell Law 13:93–131

Atkinson K, Baroni P, Giacomin M, Hunter A, Prakken H, Reed C, Simari G, Thimm M, Villata S (2017) Toward artificial argumentation. AI Mag 38(3):25–36

Baroni P, Gabbay D, Giacomin M, van der Torre L (eds) (2018) Handbook of formal argumentation. College Publications, London

Bench-Capon TJM (1993) Neural networks and open texture. In: Proceedings of the fourth international conference on artificial intelligence and law, pp 292–297. ACM Press, New York (New York)

Bench-Capon TJM (2003) Persuasion in practical argument using value-based argumentation frameworks. J Logic Comput 13(3):429–448

Bench-Capon TJM, Sartor G (2003) A model of legal reasoning with cases incorporating theories and values. Artif Intell 150(1):97–143

Bench-Capon TJM, Robinson GO, Routen TW, Sergot MJ (1987) Logic programming for large scale applications in law: a formalisation of supplementary benefit legislation. In: Proceedings of the 1st international conference on artificial intelligence and law (ICAIL 1987), pp 190–198. ACM, New York (New York)

Bench-Capon T, Araszkiewicz M, Ashley KD, Atkinson K, Bex FJ, Borges F, Bourcier D, Bourgine D, Conrad JG, Francesconi E, Gordon TF, Governatori G, Leidner JL, Lewis DD, Loui RP, McCarty LT, Prakken H, Schilder F, Schweighofer E, Thompson P, Tyrrell A, Verheij B, Walton DN, Wyner AZ (2012) A history of AI and Law in 50 papers: 25 years of the international conference on AI and law. Artif Intell Law 20(3):215–319

Berman DH, Hafner CL (1995) Understanding precedents in a temporal context of evolving legal doctrine. In: Proceedings of the fifth international conference on artificial intelligence and law, pp 42–51. ACM Press, New York (New York)

Bex FJ (2011) Arguments, stories and criminal evidence: a formal hybrid theory. Springer, Berlin

Bex FJ, Verheij B (2012) Solving a murder case by asking critical questions: an approach to fact-finding in terms of argumentation and story schemes. Argumentation 26:325–353

Bex FJ, van Koppen PJ, Prakken H, Verheij B (2010) A hybrid formal theory of arguments, stories and criminal evidence. Artif Intell Law 18:1–30

Biagioli C, Francesconi E, Passerini A, Montemagni S, Soria C (2005) Automatic semantics extraction in law documents. In: Proceedings of the 10th international conference on artificial intelligence and law (ICAIL 2005), pp 133–140. ACM Press, New York (New York)

Boer A, Hoekstra R, Winkels R (2002) METAlex: legislation in XML. In: Bench-Capon TJM, Daskalopulu A, Winkels R (eds) Legal knowledge and information systems. JURIX 2002: the fifteenth annual conference. IOS Press, Amsterdam, pp 1–10

Boer A, van Engers T, Winkels R (2003) Using ontologies for comparing and harmonizing legislation. In: Proceedings of the 9th international conference on artificial intelligence and law, pp 60–69. ACM, New York (New York)

Branting LK (1991) Building explanations from rules and structured cases. Int J Man Mach Stud 34(6):797–837

Branting LK (1993) A computational model of ratio decidendi. Artif Intell Law 2(1):1–31

Breuker J, Valente A, Winkels R (2004) Legal ontologies in knowledge engineering and information management. Artif Intell Law 12(4):241–277

Brüninghaus S, Ashley KD (2003) Predicting outcomes of case based legal arguments. In: Proceedings of the 9th international conference on artificial intelligence and law (ICAIL 2003), pp 233–242. ACM, New York (New York)

Casanovas P, Palmirani M, Peroni S, van Engers T, Vitali F (2016) Semantic web for the legal domain: the next step. Semant Web 7(3):213–227

Chesñevar CI, Maguitman AG, Loui RP (2000) Logical models of argument. ACM Comput Surv 32(4):337–383

Conrad JG, Zeleznikow J (2015) The role of evaluation in ai and law: an examination of its different forms in the ai and law journal. In: Proceedings of the 15th international conference on artificial intelligence and law (ICAIL 2015), pp 181–186. ACM, New York (New York)

Čyras K, Satoh K, Toni F (2016) Abstract argumentation for case-based reasoning. In: Proceedings of the fifteenth international conference on principles of knowledge representation and reasoning (KR 2016), pp 549–552. AAAI Press

Davis E, Marcus G (2015) Commonsense reasoning and commonsense knowledge in artificial intelligence. Commun ACM 58(9):92–103

Di Bello M, Verheij B (2018) Evidential reasoning. In: Bongiovanni G, Postema G, Rotolo A, Sartor G, Valentini C, Walton DN (eds) Handbook of legal reasoning and argumentation. Springer, Dordrecht, pp 447–493

Dung PM (1995) On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games. Artif Intell 77:321–357

Fenton NE, Neil MD, Lagnado DA (2013) A general structure for legal arguments about evidence using Bayesian Networks. Cognit Sci 37:61–102

Francesconi E, Passerini A (2007) Automatic classification of provisions in legislative texts. Artif Intell Law 15(1):1–17

Francesconi E, Montemagni S, Peters W, Tiscornia D (2010a) Integrating a bottom–up and top–down methodology for building semantic resources for the multilingual legal domain. In: Semantic processing of legal texts, pp 95–121. Springer, Berlin

Francesconi E, Montemagni S, Peters W, Tiscornia D (2010b) Semantic processing of legal texts: where the language of law meets the law of language. Springer, Berlin

Gabbay D, Horty J, Parent X, Van der Meyden R, van der Torre L (2013) Handbook of deontic logic and normative systems. College Publication, London

Gardner A (1987) An artificial intelligence approach to legal reasoning. The MIT Press, Cambridge

Gordon TF (1995) The pleadings game: an artificial intelligence model of procedural justice. Kluwer, Dordrecht

Gordon TF, Prakken H, Walton DN (2007) The Carneades model of argument and burden of proof. Artif Intell 171(10–15):875–896

Governatori G, Rotolo A (2010) Changing legal systems: legal abrogations and annulments in defeasible logic. Logic J IGPL 18(1):157–194

Grabmair M, Ashley KD (2011) Facilitating case comparison using value judgments and intermediate legal concepts. In: Proceedings of the 13th international conference on Artificial intelligence and law, pp 161–170. ACM, New York (New York)

Grabmair M, Ashley KD, Chen R, Sureshkumar P, Wang C, Nyberg E, Walker VR (2015) Introducing LUIMA: an experiment in legal conceptual retrieval of vaccine injury decisions using a UIMA type system and tools. In: Proceedings of the 15th international conference on artificial intelligence and law, pp 69–78. ACM, New York (New York)

Hafner CL, Berman DH (2002) The role of context in case-based legal reasoning: teleological, temporal, and procedural. Artif Intell Law 10(1–3):19–64

Hage JC (1997) Reasoning with rules. An essay on legal reasoning and its underlying logic. Kluwer Academic Publishers, Dordrecht

Hage JC, Verheij B (1999) The law as a dynamic interconnected system of states of affairs: a legal top ontology. Int J Hum Comput Stud 51(6):1043–1077

Hage JC, Leenes R, Lodder AR (1993) Hard cases: a procedural approach. Artif Intell Law 2(2):113–167

Hitchcock DL, Verheij B (eds) (2006) Arguing on the Toulmin model. New essays in argument analysis and evaluation (Argumentation Library, volume 10). Springer, Dordrecht

Hoekstra R, Breuker J, Di Bello M, Boer A (2007) The LKIF core ontology of basic legal concepts. In: Casanovas P, Biasiotti MA, Francesconi E, Sagri MT (eds) Proceedings of LOAIT 2007. Second workshop on legal ontologies and artificial intelligence techniques, pp 43–63. CEUR-WS

Katz DM, Bommarito II MJ, Blackman J (2017) A general approach for predicting the behavior of the Supreme Court of the United States. PLoS ONE 12(4):1–18. https://doi.org/10.1371/journal.pone.0174698

Keppens J (2012) Argument diagram extraction from evidential Bayesian networks. Artif Intell Law 20:109–143

Keppens J (2019) Explainable Bayesian network query results via natural language generation systems. In: Proceedings of the 17th international conference on artificial intelligence and law (ICAIL 2019), pp 42–51. ACM, New York (New York)

Keppens J, Schafer B (2006) Knowledge based crime scenario modelling. Expert Syst Appl 30(2):203–222

Kirschner PA, Shum SJB, Carr CS (2003) Visualizing argumentation: software tools for collaborative and educational sense-making. Springer, Berlin

Lauritsen M (2015) On balance. Artif Intell Law 23(1):23–42

Lodder AR, Zelznikow J (2005) Developing an online dispute resolution environment: dialogue tools and negotiation support systems in a three-step model. Harvard Negot Law Rev 10:287–337

Loui RP, Norman J (1995) Rationales and argument moves. Artif Intell Law 3:159–189

Loui RP, Norman J, Altepeter J, Pinkard D, Craven D, Linsday J, Foltz M (1997) Progress on room 5: a testbed for public interactive semi-formal legal argumentation. In: Proceedings of the 6th international conference on artificial intelligence and law, pp 207–214. ACM Press

McCarty LT (1989) A language for legal discourse. I. Basic features. In: Proceedings of the 2nd international conference on artificial intelligence and law (ICAIL 1989), pp 180–189. ACM, New York (New York)

McCarty LT (1997) Some arguments about legal arguments. In: Proceedings of the 6th international conference on artificial intelligence and law (ICAIL 1997), pp 215–224. ACM Press, New York (New York)

Medvedeva M, Vols M, Wieling M (2019) Using machine learning to predict decisions of the European court of human rights. Artif Intell Law. https://doi.org/10.1007/s10506-019-09255-y

Mochales Palau R, Moens MF (2009) Argumentation mining: the detection, classification and structure of arguments in text. In: Proceedings of the 12th international conference on artificial intelligence and law (ICAIL 2009), pp 98–107. ACM Press, New York (New York)

Mochales Palau R, Moens MF (2011) Argumentation mining. Artif Intell Law 19(1):1–22

Oskamp A, Walker RF, Schrickx JA, van den Berg PH (1989) PROLEXS divide and rule: a legal application. In: Proceedings of the second international conference on artificial intelligence and law, pp 54–62. ACM, New York (New York)

Pollock JL (1995) Cognitive carpentry: a blueprint for how to build a person. The MIT Press, Cambridge

Prakken H (1997) Logical tools for modelling legal argument. A study of defeasible reasoning in law. Kluwer Academic Publishers, Dordrecht

Prakken H, Sartor G (1996) A dialectical model of assessing conflicting arguments in legal reasoning. Artif Intell Law 4:331–368

Prakken H, Sartor G (1998) Modelling reasoning with precedents in a formal dialogue game. Artif Intell Law 6:231–287

Reed C, Rowe G (2004) Araucaria: software for argument analysis, diagramming and representation. Int J AI Tools 14(3–4):961–980

Rissland EL (1983) Examples in legal reasoning: Legal hypotheticals. In: Proceedings of the 8th international joint conference on artificial intelligence (IJCAI 1983), pp 90–93

Rissland EL (1988) Book review. An artificial intelligence approach to legal reasoning. Harvard J Law Technol 1(Spring):223–231

Rissland EL, Ashley KD (1987) A case-based system for trade secrets law. In: Proceedings of the first international conference on artificial intelligence and law, pp 60–66. ACM Press, New York (New York)

Roth B (2003) Case-based reasoning in the law. A formal theory of reasoning by case comparison. Dissertation Universiteit Maastricht, Maastricht

Sartor G (2005) Legal reasoning: a cognitive approach to the law. Vol 5 of Treatise on legal philosophy and general jurisprudence. Springer, Berlin

Sartor G, Palmirani M, Francesconi E, Biasiotti MA (2011) Legislative XML for the semantic web: principles, models, standards for document management. Springer, Berlin

Scheuer O, Loll F, Pinkwart N, McLaren BM (2010) Computer-supported argumentation: a review of the state of the art. Int J Comput Support Collab Learn 5(1):43–102

Schweighofer E, Rauber A, Dittenbach M (2001) Automatic text representation, classification and labeling in European law. In: Proceedings of the 8th international conference on artificial intelligence and law, pp 78–87. ACM, New York (New York)

Sergot MJ, Sadri F, Kowalski RA, Kriwaczek F, Hammond P, Cory HT (1986) The British Nationality Act as a logic program. Commun ACM 29(5):370–386

Simari GR, Loui RP (1992) A mathematical treatment of defeasible reasoning and its applications. Artif Intell 53:125–157

Skalak DB, Rissland EL (1992) Arguments and cases: an inevitable intertwining. Artif Intell Law 1(1):3–44

Stranieri A, Zeleznikow J, Gawler M, Lewis B (1999) A hybrid rule-neural approach for the automation of legal reasoning in the discretionary domain of family law in Australia. Artif Intell Law 7(2–3):153–183

Toulmin SE (1958) The uses of argument. Cambridge University Press, Cambridge

Tran V, Le Nguyen M, Tojo S, Satoh K (2020) Encoded summarization: summarizing documents into continuous vector space for legal case retrieval. Artif Intell Law. https://doi.org/10.1007/s10506-020-09262-4

Valente A (1995) Legal knowledge engineering. A modelling approach. IOS Press, Amsterdam

van den Herik HJ (1991) Kunnen computers rechtspreken?. Gouda Quint, Arnhem

van Eemeren FH, Garssen B, Krabbe ECW, Snoeck Henkemans AF, Verheij B, Wagemans JHM (2014) Handbook of argumentation theory. Springer, Berlin

van Kralingen RW (1995) Frame-based conceptual models of statute law. Kluwer Law International, The Hague

Verheij B (1996) Two approaches to dialectical argumentation: admissible sets and argumentation stages. In: Meyer JJ, van der Gaag LC (eds) Proceedings of NAIC’96. Universiteit Utrecht, Utrecht, pp 357–368

Verheij B (2003a) Artificial argument assistants for defeasible argumentation. Artif Intell 150(1–2):291–324

Verheij B (2003b) DefLog: on the logical interpretation of prima facie justified assumptions. J Logic Comput 13(3):319–346

Verheij B (2005) Virtual arguments. On the design of argument assistants for lawyers and other arguers. T.M.C. Asser Press, The Hague

Verheij B (2009) The Toulmin argument model in artificial intelligence. Or: how semi-formal, defeasible argumentation schemes creep into logic. In: Rahwan I, Simari GR (eds) Argumentation in artificial intelligence. Springer, Berlin, pp 219–238

Verheij B (2016) Formalizing value-guided argumentation for ethical systems design. Artif Intell Law 24(4):387–407

Verheij B (2017a) Proof with and without probabilities. Correct evidential reasoning with presumptive arguments, coherent hypotheses and degrees of uncertainty. Artif Intell Law 25(1):127–154

Verheij B (2017b) Formalizing arguments, rules and cases. In: Proceedings of the 16th international conference on artificial intelligence and law (ICAIL 2017), pp 199–208. ACM Press, New York (New York)

Verheij B (2018) Arguments for good artificial intelligence. University of Groningen, Groningen. http://www.ai.rug.nl/~verheij/oratie/

Verheij B (2019) Analyzing the Simonshaven case with and without probabilities. Top Cognit Sci. https://doi.org/10.1111/tops.12436

Verheij B, Hage JC, van den Herik HJ (1998) An integrated view on rules and principles. Artif Intell Law 6(1):3–26

Visser PRS (1995) Knowledge specification for multiple legal tasks; a case study of the interaction problem in the legal domain. Kluwer Law International, The Hague

Visser PRS, Bench-Capon TJM (1998) A comparison of four ontologies for the design of legal knowledge systems. Artif Intell Law 6(1):27–57

Vlek CS, Prakken H, Renooij S, Verheij B (2014) Building Bayesian Networks for legal evidence with narratives: a case study evaluation. Artif Intell Law 22(4):375–421

Vlek CS, Prakken H, Renooij S, Verheij B (2016) A method for explaining Bayesian Networks for legal evidence with scenarios. Artif Intell Law 24(3):285–324

Vreeswijk GAW (1997) Abstract argumentation systems. Artif Intell 90:225–279

Walton DN, Reed C, Macagno F (2008) Argumentation schemes. Cambridge University Press, Cambridge

Wyner A (2008) An ontology in OWL for legal case-based reasoning. Artif Intell Law 16(4):361

Wyner A, Angelov K, Barzdins G, Damljanovic D, Davis B, Fuchs N, Hoefler S, Jones K, Kaljurand K, Kuhn T et al (2009) On controlled natural languages: properties and prospects. In: International workshop on controlled natural language, pp 281–289. Berlin

Wyner A, Mochales-Palau R, Moens MF, Milward D (2010) Approaches to text mining arguments from legal cases. In: Semantic processing of legal texts, pp 60–79. Springer, Berlin

Zurek T, Araszkiewicz M (2013) Modeling teleological interpretation. In: Proceedings of the fourteenth international conference on artificial intelligence and law, pp 160–168. ACM, New York (New York)

Author information

Authors and affiliations

Department of Artificial Intelligence, Bernoulli Institute of Mathematics, Computer Science and Artificial Intelligence, University of Groningen, Groningen, The Netherlands

Bart Verheij

Corresponding author

Correspondence to Bart Verheij .

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Verheij, B. Artificial intelligence as law. Artif Intell Law 28, 181–206 (2020). https://doi.org/10.1007/s10506-020-09266-0

Published: 14 May 2020

Issue Date: June 2020

DOI: https://doi.org/10.1007/s10506-020-09266-0

  • Artificial intelligence
  • Knowledge representation and reasoning
  • Machine learning
  • Natural language processing

Created with King’s College London

Smart contracts, data and big tech: How digitalisation is transforming the law

By Maya Ffrench-Adam on May 27 2021 11:03am

Dr Mateja Durovic, reader of contract and commercial law at King’s College London, explains how digitalisation is uprooting existing legal frameworks

Contract law, tort law, financial law and even jurisprudence, to name a few, are all being shaped and re-shaped by digitalisation.

We are living in what has been dubbed the ‘fourth industrial revolution’: technology is developing at an unprecedented rate, spearheading change across our markets, society and economy and, in turn, our legal frameworks. An obvious case is e-commerce, which grew exponentially from the early 1990s and forced common law and contract law into the digital domain. With blockchain technology now able to self-execute a contractual promise, smart contracts are causing the same disruption today.

Mateja Durovic, reader of contract and commercial law at King’s College London, explains: “smart contracts are an excellent example of whether a hundred-year-old common law system provides an adequate framework in this context”. Getting this framework right is in our interests too. England has one of the most commercially friendly systems of contract law worldwide, and London needs to retain its position as a commercial hub post-Brexit.

When it comes to the status of smart contracts, Durovic sees two clear issues: “first we must look at improving enforceability, then we need to focus on making smart contracts future-proof and attractive for investment”, he tells me. And while smart contracts have been given the green light in the commercial setting, the question of consumer smart contracts is “potentially much more problematic”, something we’ll have to watch unfold in the courts.
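The ‘self-executing’ character that makes smart contracts awkward for a hundred-year-old framework can be illustrated with a toy sketch. This is plain Python, not real on-chain code; the class, parties and amounts are invented for illustration. The point is that once the coded condition is met, performance happens automatically, leaving no gap between breach and remedy for a court to fill.

```python
# Illustrative sketch only: a toy escrow showing how a smart contract
# replaces enforcement with execution. On a real blockchain this logic
# would be deployed as immutable on-chain code (e.g. Solidity).

class EscrowContract:
    def __init__(self, buyer, seller, price):
        self.buyer = buyer
        self.seller = seller
        self.price = price
        self.deposited = 0
        self.delivered = False
        self.settled = False

    def deposit(self, amount):
        # The buyer locks funds into the contract up front.
        self.deposited += amount

    def confirm_delivery(self):
        # An agreed event (in practice, an oracle) marks the condition as met.
        self.delivered = True
        return self._try_settle()

    def _try_settle(self):
        # No court or counterparty goodwill needed: once the coded
        # condition holds, payment is released automatically.
        if self.delivered and self.deposited >= self.price and not self.settled:
            self.settled = True
            return f"released {self.price} to {self.seller}"
        return None

contract = EscrowContract("Alice", "Bob", 100)
contract.deposit(100)
contract.confirm_delivery()
print(contract.settled)  # True: the promise enforced itself
```

This is precisely where the enforceability and consumer-protection questions arise: the contract performs itself whether or not a court would have enforced the underlying promise.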

The past five years have also seen a sea change in data privacy law. Approaching its third year in operation this month, the Data Protection Act 2018 (transposing the EU General Data Protection Regulation (GDPR)) has made a significant dent in the business of commercialising data, although its future in a post-Brexit Britain that is ‘open for business’ remains to be determined. The expansion of digital markets has also uprooted competition law and policy. Traditional theories of harm and notions of ‘abuse’ have been thrown up in the air as platform giants such as Amazon control both the demand and the supply side of the market.

King’s College London

One of the institutions leading the research around digitalisation and the law is King’s College London’s Centre for Technology, Ethics, Law and Society (TELOS). As one of the oldest centres of its kind in Europe, TELOS aims to observe and search for answers to the legal questions emerging in digital law. Durovic, who is the deputy director of the centre, describes TELOS as “an umbrella organisation for colleagues across King’s to share research and have a singular framework for researching digital law across different fields.”

Off the back of TELOS’s wider research, King’s academics have joined forces to launch a new two-week digital law course, running online as part of The Dickson Poon School of Law’s new Executive and Professional Education programmes.

Led by experts across contract & commercial, international finance, media and information, intellectual property (IP), and competition law, it is one of the first programmes of its kind. The course is designed for legal professionals grappling with the legal questions of digitalisation — including legal practitioners, in-house lawyers and policy makers tasked with adapting existing frameworks to the digital context. It also offers a head start for students and academics, covering the digital law issues they will face going into the profession.

Commenting on the new offering, Durovic says:

“We are lucky here at King’s because we have one of the biggest research clusters in the area of digital law. We have experts on fields such as IP law, consumer law, contract law, financial law and have divided this course up to reflect such. As a participant you’ll therefore gain a broad overview of each of the different areas of law that are being affected, alongside how the law is responding.”

“The rule of law is at the heart of digital law,” he continues. As society becomes more digitalised, the law needs to support the application of these new technologies, whilst also ensuring fundamental values are being protected. A balancing exercise must take place — and the question of ‘how’ is still in its infancy.


Digital Law Journal


The purpose of the Digital Law Journal is to provide a theoretical understanding of the issues that arise in Law and Economics in the digital environment, as well as to create a platform for identifying the most suitable forms of their legal regulation.

This aim is especially vital for the legal community given the development of the digital economy. An extensive body of digital economy regulation has developed worldwide, providing rich material for comparative research on this issue.

Theoretically, "Digital Law" builds on "Internet Law", a field formed in the English-language scientific literature that a number of researchers regard as a separate branch of law.

The journal establishes the following objectives:

  • Publication of research in the field of digital law and the digital economy, in order to intensify international scientific interaction and cooperation among experts.
  • Meeting the information needs of professional specialists, government officials, representatives of public associations, and other citizens and organizations, concerning the scientific and legal assessment of modern approaches to regulating the digital economy.
  • Dissemination of the achievements of current legal and economic science, and the improvement of professional relationships and scientific cooperation between researchers and research groups worldwide.

The journal publishes articles on the following developments and challenges facing the legal regulation of the digital economy:

  • Legal provision of information security, and the formation of a unified digital environment of trust (identification of subjects in the digital space, legally significant information exchange, etc.).
  • Regulatory support for electronic civil turnover; comprehensive legal research of data in the context of digital technology development, including personal data, public data, and "Big Data".
  • Legal support for data collection, storage, and processing.
  • Regulatory support for the introduction and use of innovative technologies in the financial market (cryptocurrencies, blockchain, etc.).
  • Regulatory incentives for the improvement of the digital economy; legal regulation of contractual relations arising in connection with the development of digital technologies; network contracts (smart contracts); legal regulation of E-Commerce.
  • The formation of legal conditions in the field of legal proceedings and notaries according to the development of the digital economy.
  • Legal provision of digital interaction between the private sector and the state; a definition of the "digital objects" of taxation and legal regime development for the taxation of business activities in the field of digital technologies; a digital budget; a comprehensive study of the legal conditions for using the results of intellectual activity in the digital economy; and digital economy and antitrust regulation.
  • Legal regulation of the digital economy in the context of integration processes.
  • Comprehensive research of legal and ethical aspects related to the development and application of artificial intelligence and robotics systems.
  • Changing approaches to training and retraining of legal personnel in the context of digital technology development; new requirements for the skills of lawyers.

The journal has been included in the index of the Higher Attestation Commission (VAK) of the Ministry of Education and Science of the Russian Federation. Its subject matter corresponds to the specialty groups "Legal Sciences" and "Economic Sciences".

The journal publishes articles in Russian and English.

The journal is published quarterly, with four issues per year.

Current issue

Articles

As a result of technological progress, established traditional legal concepts require constant refinement to permit their optimal regulation. For example, virtual (digital) “things” — or tokens (NFTs) — are the subject of disputes over whether it is preferable to rely on a traditional legal institution (e.g., property and intellectual property) or to create a completely new regime “from scratch”. Using historical and comparative legal methods based on doctrinal sources, the present work explores the concepts of thing and property in the common law of nation states. The closest functional analogues in civil law systems, the res (“thing”) and the right in rem, are compared. Common law rights in rem are shown to have emerged in the Middle Ages in the form of the feudal system of different statuses with respect to land (estates). Later, under the influence of Wesley Hohfeld’s research on legal opposites and correlatives, this system was substantially modernized through the deconstruction of property into a “bundle of rights”. An analysis of a published translation of Joshua Fairfield’s article convincingly demonstrates that cryptocurrency, just as any token, is indistinguishable in its principal aspects from a “thing” in the civil-law sense. A similar conclusion is reached in the context of Russian law: the main criteria of “thingness” — materiality and the possibility of being the object of exclusive possession — are equally fulfilled by tokens, land plots or chairs in one’s apartment. Accordingly, intuitive notions of things as products having a real nature are obviously outdated and should be replaced with a jurisprudential understanding of the “thing” as a result of social interaction, rather than as something having a certain nature in and of itself. The important functions of materiality consist in a reduction of information costs for participants in legal relations, due to the natural formation of intuitive expectations, as well as prejudices about the scope and characteristics of these rights.

Property law in the twentieth century moved from the law of things to the law of rights in things. This was a process of fragmentation: under Hohfeldian property, we conceive of property as a bundle of sticks, and those sticks can be moved to different holders; the right to possess can be separated from the record ownership right, for example. The downside of Hohfeld’s model is that physical objects — things — become informationally complicated. Thing-ness constrains the extravagances of Hohfeldian property: although we can split off the right to possess from the rights to exclude, use, destroy, copy, manage, repair, and so on, there is a gravitational pull to tie these sticks back into a useful bundle centered on the asset, the thing. Correspondingly, there has been an “informational turn” in property law, looking at the ways in which property law limits property forms to reduce search costs, and identifying and celebrating the informational characteristics of thing-ness. The question of thing-ness came to a head in the context of digital and smart assets with the emergence of non-fungible tokens. NFTs were attempts to generate and sell “things”: a conceptually coherent something that can contain a loose bundle of rights. The project was an attempt to re-create thing-ness through an amalgam of cryptography, game theory, and intellectual property. This essay discusses thing-ness in the context of digital assets, how simulated thing-ness differs from physical thing-ness, and the problems that arise from attempts to reify digital assets.
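The Hohfeldian “bundle of sticks” described in the abstract above behaves very much like a data structure, and a minimal sketch can make the fragmentation point concrete. The stick names follow the abstract’s list, while the asset and holders are invented for illustration:

```python
# Illustrative sketch of Hohfeldian "bundle of sticks" property:
# ownership is decomposed into separate incidents (sticks), each of
# which may be held by a different person.

STICKS = ["possess", "exclude", "use", "destroy", "copy", "manage", "repair"]

class Asset:
    def __init__(self, name, owner):
        self.name = name
        # Initially the bundle is unified: one holder for every stick.
        self.bundle = {stick: owner for stick in STICKS}

    def transfer(self, stick, new_holder):
        # Fragmentation: a single stick moves to a different holder.
        self.bundle[stick] = new_holder

    def holders(self):
        # The informational cost of fragmentation: to know who may do
        # what, one must inspect the whole bundle, not just "the owner".
        return set(self.bundle.values())

house = Asset("Blackacre", owner="Alice")
house.transfer("possess", "Bob")      # a lease: possession separated
house.transfer("manage", "TrustCo")   # management separated from title
print(sorted(house.holders()))        # ['Alice', 'Bob', 'TrustCo']
```

The informational cost the essay identifies shows up directly: once sticks have been transferred, “who owns Blackacre?” has no single answer, and anyone dealing with the asset must inspect the entire bundle.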

Until recently, intellectual creativity was considered an exclusively human phenomenon, and intellectual property legislation was built on the basis of motivating and enhancing human inventiveness. This self-evident assumption has been challenged by the development of artificial intelligence technologies in recent decades. In this article, the author analyzes some aspects of the development of intellectual property law, including the possibility of recognizing an artificial intelligence as a creator of the results of intellectual activity. The author examines the legal status of artificial intelligence under Armenian and foreign intellectual property legislation, and analyzes existing approaches to the legal regime and ownership of intellectual property in objects created with the help of artificial intelligence. The paper aims to determine the proper right holder of content generated by artificial intelligence and to formulate some policy prospects for the regulation of artificial intelligence. The methodological basis of the research includes general scientific and special legal methods. The author places particular emphasis on dogmatic (doctrinal) research methods, which made it possible to analyze existing approaches to the protection of intellectual property rights. The research is also based on the comparative legal method and the analytical legal method of commenting on the current law of Armenia and foreign countries. The results of the study allow the author to substantiate the claim that the actual right holder of content produced by a neural network is the programmer of the underlying algorithmic system. The author concludes that the construction of a solid legislative system should take into account the specifics of the areas in which artificial intelligence is applied, ensuring a balance between the interests of individuals, society, and the state in the development of this innovative technology.

Today, the necessary grounds for considering the prospect of deep integration of metaverse technology into the life of society already exist. Modern scientific studies indicate that many legal institutions will be transformed along with the development of metaverses. Hence, there is a need to study theoretical and practical issues regarding the convergence of law and metaverses. The author attempts to generalize some problems pertaining to the legal regulation of public relations in metaverse conditions and offers scientifically grounded options for their possible solution. The dominant method used in this study is legal modelling, which makes it possible to form a general concept of the future synergy of law and metaverses. The author also employed other scientific research methods, including legal prediction, the comparative-legal method, and the formal-legal method. The study made it possible to draw the following conclusions: (1) Today, developing uniform international regulation pertaining to metaverses remains unlikely; countries need to develop their own metaverses, which simplifies the development of corresponding legislation. (2) Creating metaverses in Russia will ensure the country’s international leadership in the digital economy. A regulatory sandbox mechanism can be used to shape legislation on metaverses. (3) Based on the specifics of the Russian legal system, the author has identified certain areas where legislation can be transformed to apply to metaverses. The results of the study will contribute to the development of Russian legal thought on metaverses.

COMMENT 

The rapid progress of technology and increasing globalization have led to various outcomes in the financial market. This commentary delves into an effort to incorporate digital financial instruments into Russia’s legal framework, with particular focus on the digital ruble. The potential introduction of this digital asset aims to address contemporary economic challenges by restructuring the Russian financial system. The commentary focuses on the legal features and business implications associated with the introduction of the digital ruble, placing it within the broader context of digital assets and their potential impact on the Russian economy. In their methodology, the authors rely on a formal legal, technologically efficient, and systemic approach. The commentary outlines the constraints on possible transformations dictated by the regulatory framework accompanying the introduction of the digital ruble and influenced by the economic nature of digital currency. The article concludes that, while digital financial assets could become a vital part of economic transactions, their introduction should be approached with great caution and under the vigilant oversight of the Bank of Russia. The anticipated integration of the digital ruble is expected to affect the market positively, but a balanced approach that considers the legal, economic, and technological risks for businesses is crucial. Through an examination of the foundations of emerging legal regulation of digital financial assets and their economic characteristics, the authors have devised a ‘digitalization matrix’: a comprehensive management model for integrating digital technologies into both the commercial and public sectors. The model proposes taking a matrix-based approach to managing digitalization processes, while underscoring the significance of pursuing a thorough and scientifically grounded strategy for implementing digital currencies.


The Digital Services Act and the EU as the Global Regulator of the Internet

I am grateful to Martha Larson (Professor in Artificial Intelligence, Machine Learning, Language & Communication at Radboud University in the Netherlands) for the reflections and bibliographical suggestions related to the state of the art in artificial intelligence and machine learning in detecting misinformation. I was able to elaborate the legal commentary on the state of the art following fascinating conversations with Martha. I am also grateful to Eric Heinze, Jörn Reinhardt, Kristian Skagen Ekeli, Jan-Willem van Prooijen, and the participants of the symposium organized by the Chicago Journal of International Law on free speech for interesting discussions. Special thanks to Tori Keller, Christian Pierre-Canel, and Mike Antosiewicz, editors with the Chicago Journal of International Law, for excellent editing suggestions.


This Essay discusses the Digital Services Act (DSA), the new regulation enacted by the EU to combat hate speech and misinformation online, focusing on the major challenges its application will entail. However sophisticated the DSA might be, major technological challenges to detecting hate speech and misinformation online necessitate further research in implementing the DSA. This Essay also discusses potential conflicts with U.S. law that may arise in the application of the DSA. The gap in platform regulation in the U.S. has meant that the platforms adapt to the most stringent standards of regulation existing elsewhere. In 2016, the EU agreed with Facebook, Microsoft, Twitter, and YouTube on a code of conduct countering hate speech online. As part of this code, the platforms agreed to rules or Community Guidelines and to practice content moderation in conformity with them. The DSA builds on the content moderation system by enhancing the internal complaint-handling systems the platforms maintain. In the meantime, some states in the U.S., namely Texas and Florida, enacted legislation prohibiting the platforms from engaging in viewpoint discrimination. Two federal courts of appeals that have examined the constitutionality of these statutes under the First Amendment are split in their rulings. This Essay discusses the implications for the platforms’ content moderation practices depending on which ruling is ultimately upheld.

I. Introduction

Extreme speech has become a major source of mass unrest throughout the world. Social media platforms magnify the conflicts that lie latent within many societies, which are often further fueled by powerful political actors. Similarly, widespread misinformation during the COVID-19 pandemic and the perceptions of these platforms’ inadequate responses led the European Union (EU) to pass the 2022 Digital Services Act (DSA) to combat misinformation and extremist speech. 1 The EU also strengthened its Code of Practice on Disinformation. 2 Although these are important developments toward regulating hate speech online, the legislation will be difficult to implement. There are major technological challenges in monitoring online hate speech that necessitate further research. Furthermore, depending on legal developments in the United States (U.S.), the EU’s new legal regime might lead to a conflict with U.S. law, which will complicate platforms’ content moderation processes.

The DSA responds to concerns expressed about the shortcomings of the system of content moderation currently applied by major social media platforms. Although it offers a sophisticated regulatory model to combat hate speech and misinformation, further research is required in several areas related to detecting such content. The state of the relevant detection technologies raises several concerns, which relate to the difficulties in the current artificial intelligence (AI) models that have been developed to detect hate speech and misinformation. Research is also needed to determine the impact of exposure to hate speech online.

The U.S. offers extended protection for freedom of speech. In many European states, however, it is legitimate for the government to limit abuse of the same freedom to protect citizens from harm caused by hate speech. It is also legitimate to limit fake news. In the U.S., the sparse regulation of speech at the federal level has left a gap to be filled by states and civil society actors. Florida and Texas enacted legislation to limit online platforms’ discretion to refuse to host others’ speech. 3 More frequently, contractual terms limit speech rights in several private institutions in the U.S. The major U.S.-based social media companies (Facebook and Twitter) have created deontology committees to limit hate speech in the U.S. under pressure from the EU. Questions emerged recently among academics and political actors in the EU on whether these platforms are limiting too much speech as private actors. The concern emerged that the platforms may be limiting even more speech than what is acceptable in Europe, where limits to hate speech by the government are acceptable. 4

Courts have the last word in Europe about whether social media users’ freedoms will be adequately protected. Citizens can bring claims before courts alleging violations of their constitutional rights by the platforms. The doctrine of horizontal effect of constitutional rights, dominant in European states, enables them to do so. According to this doctrine, the Constitution applies not only to the vertical relationship between the state and its citizens, but also to the horizontal relationship between private parties within society. 5 The constitutionally protected right to freedom of expression justifies government intervention to ensure its protection against civil society actors too. In several EU member states, the DSA will supersede existing national legislation regulating hate speech and fake news online. France has enacted such legislation, the constitutionality of which was examined by the Constitutional Council. 6 Germany has also enacted legislation generating significant case law in this area. 7 The DSA will trump even U.S. free speech law insofar as the major companies are transnational and must therefore follow European rules as well as American law. However, depending on future court decisions, a conflict may emerge between U.S. law and the DSA. Should this conflict emerge, content moderation may become challenging for the platforms, as they will need to maintain different moderation standards in the U.S. and in the EU.

Social media companies are required to modify their operational practices to abide by the EU’s Code of Conduct Countering Illegal Hate Speech Online . 8 Specifically, platforms are required to offer enhanced internal complaint-handling mechanisms. They must also meet several procedural requirements in investigating complaints. They must issue prior warnings before removing users.

The DSA applies to providers of intermediary services irrespective of their place of establishment or residence “in so far as they provide services in the Union, as evidenced by a substantial connection to the Union.” 9 Social media companies modify their behavior to meet the most stringent legal regimes in order to be able to offer their services everywhere. So, by engaging in regional regulation of online speech, the EU is becoming a global regulator of the internet.

Part II of this Essay discusses the role platforms play in defining the public sphere today and the implications of that role for government regulation. Part III presents how the DSA complements existing codes of practice in countering illegal hate speech. Part IV investigates the challenges that regulating online extreme speech and misinformation pose for governments and platforms. These challenges relate to the state of the relevant detection technologies. Part V focuses on transnational enforcement of the Act and discusses possible areas of conflict with U.S. law. Further research is needed to develop guidelines for determining what counts as hateful, violent, dangerous, offensive, or defamatory expression, insofar as these forms of expression are subject to DSA regulation.

II. The Contemporary Public Sphere and Its Problems

Today, online platforms largely define the public sphere and the opportunities for citizens both to express themselves and access the views of others. Traditionally, governments were considered the source of danger for expressive freedoms, but today, the practices of privately held, multinational corporations also pose a great threat. A transatlantic comparison illustrates how governments respond to this new challenge. In Europe, the doctrine of horizontal effect of human rights authorizes the state to intervene and regulate platforms. 10 That doctrine also authorizes the state to enforce constitutionally protected (or other higher-order) rights against private parties as well. 11

By contrast, in the U.S., the state action doctrine means that the protection of constitutional rights applies only against government actors. 12 The U.S. Constitution does not provide protection against private actors. On the basis of this doctrine, citizens are not able to enforce through the courts the protection of their constitutional rights against social media platforms. And social media platforms’ own right to freedom of speech covers how they allow users to express themselves. In the absence of government regulation in this area, social media platforms have created modes of self-regulation to prevent the spread of hate speech, among other harms. They have appointed bodies of content moderators with language and regional expertise all around the world. 13 In addition, Facebook created a private body, the Facebook Oversight Board, with authority to review content that is taken down and content that is kept up. 14

The emergence of social media and the new challenges inherent in online communication have led many scholars to advocate for restrictions on extreme speech, even within legal systems where such limits may conflict with national constitutional obligations. Whereas in the past many scholars in the U.S. defended free speech as a value against government intervention, a more recent discourse argues that the government should limit hate speech. Several scholars in recent years have argued that new dangers emerging in online communication and social networks necessitate government intervention to limit speech and to limit how online platforms operate. For Tim Wu, the strong protections of free speech adopted by the U.S. Supreme Court in the 20th century have become obsolete. 15 Brian Leiter has emphasized that the internet, by altering the social epistemology of societies, necessitates a reconceptualization of doctrines articulated in reference to the media societies used in the past. 16 On issues related to knowledge, any society relies upon some epistemic authorities. 17 These authorities are sustained on the basis of “second-order norms” about whom to believe. 18 The internet has contributed to an epistemic crisis that has undermined existing epistemic authorities. 19 The negative unintended consequences of this phenomenon become particularly obvious in times of crisis, like the COVID-19 pandemic. 20 In response to such consequences, President Biden set up a task force to investigate problems arising from online harassment. 21 Similar concerns inspired the regulatory regime established by the DSA.

III. A Closer Look at the DSA

The DSA was motivated by the need to set a standard of transparency and accountability on how major platforms moderate content and use algorithms. 22 It requires them to develop appropriate risk management tools. As explained in the memorandum, the Act aims to mitigate risks of erroneous or unjustified blocking of speech, address the chilling effects on speech that the current moderation practices may have, enhance users’ access to information, and reinforce users’ redress possibilities. 23 It recognizes that some groups or persons may be vulnerable or disadvantaged in their use of online services because of their gender, race or ethnic origin, religion or belief, disability, age, or sexual orientation. 24 It also recognizes that these users may be disproportionately affected by restrictions and removal measures due to unconscious or conscious biases potentially embedded in the notification systems used by individuals and replicated in automated content moderation tools used by platforms. 25 The Act creates mandatory safeguards for the removal of users’ information, which include the provision of explanatory information to the user, complaint mechanisms, and external out-of-court dispute resolution mechanisms. 26

The Act foresees a sophisticated mechanism for content moderation of online platforms. 27 It builds upon a previous regime that had already been elaborated by the EU in 2016, a code of conduct on countering illegal hate speech online. 28 The Code defines illegal hate speech as “all conduct publicly inciting to violence or hatred directed against a group of persons or a member of such a group defined by reference to race, colour, religion, descent or national or ethnic origin.” 29 The Code was agreed to by Facebook, Microsoft, Twitter, and YouTube. It led these major platforms to modify their operations and create mechanisms of content moderation within each state where they operate. The platforms assumed the obligation to have in place rules or community guidelines and to create clear and effective processes to review notifications regarding illegal hate speech. 30 Under the Code, the platforms were obliged to review the majority of valid notifications for removal of illegal hate speech in less than twenty-four hours and remove or disable access to such content. 31 The platforms also moderated content pursuant to the Code outside of the EU. The legal gap in regulation in the U.S. meant that the platforms adapted their practices worldwide to adhere to the most stringent legal regime. This is an example of the “Brussels effect,” and signals that the EU has become a global regulator of the internet. 32

Each platform’s efforts to abide by the Code are monitored annually in collaboration with a network of organizations located in several states where the platforms offer their services. 33 Using a commonly agreed-upon methodology, these organizations test how platforms are implementing the commitments in the Code. 34 During the latest assessment, covering the period since October 2021, a total of 3,634 notifications alleging instances of hate speech were submitted to the platforms. 35

The DSA enhances this system. It creates a protective regime for users of online media and attempts to strike a balance between protecting free speech and limiting “illegal” hate speech. To some extent, it enhances free speech by enacting a process for imposing limitations on speech as well as by creating timelines for the duration of those limitations. It enhances access to social media platforms by regulating the circumstances when these platforms may exclude users. These measures aim to respond to concerns that platforms so far have intervened based on the behavior of accounts or groups and the actors or associations behind them. 36 Evelyn Douek, in her comprehensive study on the subject, noted that in the past behavioral content moderation was opaque because giving notice and reasons to users was seen as undermining the effectiveness of rules rather than promoting compliance. 37 Under the DSA, platforms must have already issued a warning to users who frequently post illegal content before those users may be suspended. 38 Moreover, such suspensions may only last for a reasonable period of time. 39

The DSA also creates an obligation to implement “notice and action mechanisms” to alert platforms to the presence of content that the notifier considers to be illegal. 40 This mechanism must make it possible to identify the specific items of information thought to be illegal. 41

The Act further enhances current internal complaint-handling systems that some platforms maintain. 42 In addition, it foresees the possibility for out-of-court dispute settlements. 43 It provides for the creation of new national and European bodies that will oversee its application. These are composed of independent administrative authorities, including Digital Services Coordinators, which will be created within each EU member state, and a European Board for Digital Services, which will be an independent advisory group for the national bodies. 44 The DSA enhances transparency for the process by creating reporting obligations for the platforms. 45

The Act creates additional obligations for very large online platforms to manage systemic risks, 46 which seems to respond to the warnings of academics about the need to address this issue. 47 The preamble to the DSA emphasizes that platforms’ systemic risks may have a disproportionately negative impact in the EU when the number of users of a platform reaches a significant share of the Union population, 48 specifically where the number of users exceeds a threshold of 45 million, which is equivalent to 10% of the Union population. 49 Platforms of such scale should, under the Act, bear the highest standards of due diligence. 50 Platforms are also obliged to implement reasonable, proportionate, and effective mitigation measures tailored to the specific systemic risks they identify. 51 These measures may include adapting content moderation or recommender systems, decision-making processes, the features or functioning of their services, or their terms and conditions. 52 One of the concerns that motivated this need for a systemic response is the concern that groups of accounts frequently violate platform rules. 53

The Act authorizes member states to impose penalties for infringements by providers of intermediary services under their jurisdiction. 54 These penalties should be effective and proportionate, and they should be serious enough to dissuade violations. 55 However, the maximum amount of fines that member states may impose for a failure to comply with the DSA is 6% of the provider’s annual worldwide turnover. 56
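The 6% ceiling on fines translates into a simple ceiling computation. A minimal sketch, with an invented turnover figure for illustration (the function name and the example number are my own, not from the Act):

```python
def max_dsa_fine(annual_worldwide_turnover_eur):
    # The DSA caps member-state fines at 6% of the provider's
    # annual worldwide turnover.
    return 0.06 * annual_worldwide_turnover_eur

# Hypothetical provider with EUR 100 billion annual worldwide turnover:
# the maximum fine any member state could impose is EUR 6 billion.
print(max_dsa_fine(100_000_000_000))
```

The cap is computed on worldwide turnover, not EU turnover, which is what gives the penalty regime its extraterritorial bite.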

IV. Challenges to Be Addressed

Online communication raises several challenges in the area of hate speech and misinformation. These challenges threaten the very democratic character of online communication itself. Compelling as it is to regulate hate speech and misinformation, the state of the relevant detection technologies raises several concerns. The imperfections of these technologies may lead to limiting more speech than is necessary. This makes it imperative to explore alternative ways for limiting the spread of hate speech online. Further research is required in all these areas in relation to the implementation of the Act.

As mentioned earlier, the Act creates obligations for very large online platforms to manage systemic risks. One of those risk mitigation measures is the Code of Practice on Disinformation, which was strengthened by the EU in 2022. 57 The Code was elaborated by the EU in response to the fact that platforms rely on third-party fact-checkers’ judgments to guide content moderation. 58 The Code provides that the platforms commit to develop and apply tools or features to inform users, through measures such as labels and notices, that independent fact-checking has taken place. 59 The platforms are obliged to report on the independent fact-checkers they have used.

The Code of Practice on Disinformation also foresees that the signatories will provide details about the basic criteria they use to review information sources and disclose relevant safeguards put in place to ensure that their services are apolitical, unbiased, and independent. 60 The Code requires platforms to: inform users whose content or accounts have been subject to enforcement actions taken on the basis of violation of policies relevant to this section; provide them with the possibility to appeal the enforcement action at issue; handle complaints in a timely, diligent, transparent, and objective manner; and to reverse the action without delay when the complaint is deemed to be unfounded. 61 It provides that the platforms integrate, showcase, and consistently use fact-checkers’ work in their services, processes, and content across member states. 62 Platforms commit to creating a repository of fact-checking content that will be governed by the representatives of fact-checkers. 63 The platforms commit to operate on the basis of strict ethical and transparency rules, which must comply with the requirements of instruments such as the International Fact-Checking Network Code of Principles or the proposed Code of Professional Integrity for Independent European fact-checking organizations. 64

In the area of misinformation, the imperfections of the relevant technology may lead to limiting much more speech than is necessary. Distinguishing truth from falsity is especially challenging. Flagging and filtering content may imply serious disempowerment for speakers and users of online information. 65 In the area of “false information,” the state-of-the-art technology lies in automated detection systems. These are computer models that can recognize, filter, and flag certain content that contains false information. 66 These models use datasets, which may include news articles that have been labeled as “false” or “true.” Once models are built and their performance is evaluated, they can be implemented for real-world use, where the process for labeling information becomes less clear. 67 One type of model uses natural language processing, which can lead to problematic situations to the extent it picks up cultural biases about gender, race, ethnicity, and religion. 68 This means that it is important to ensure that datasets train models to do what they are intended to do, and to avoid the accidental propagation of undesirable patterns in the data. Some scientists argue that linguistic data will always include preexisting biases. 69 These gender-based and culture-based biases arise from word embeddings, which provide a sort of dictionary for computer programs. Words are associated with other words and with semantic meanings, and when these are embedded in arithmetic models, those models can capture a variety of word relationships that reflect sexist and other pernicious attitudes. 70 Computer programming has evolved toward debiasing algorithms, but this debiasing does not work perfectly: in some attempts to debias the original embeddings, 6% of the new embeddings were still judged to reproduce stereotypes. 71 Such difficulties complicate the task of regulating misinformation.
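The word-embedding problem described above can be made concrete with a toy computation. The vectors below are invented three-dimensional stand-ins for real embeddings (actual models use hundreds of dimensions trained on large corpora), but they show the mechanism: cosine similarity over word vectors can encode occupational gender stereotypes present in the training data.

```python
import math

def cosine(u, v):
    # Cosine similarity: how close two word vectors point.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy "embeddings", invented for illustration only.
emb = {
    "doctor": [0.9, 0.2, 0.1],
    "nurse":  [0.8, 0.3, 0.6],
    "man":    [0.9, 0.1, 0.0],
    "woman":  [0.7, 0.2, 0.7],
}

# A biased embedding space places "doctor" nearer to "man" and
# "nurse" nearer to "woman", so downstream models inherit the stereotype.
print(cosine(emb["doctor"], emb["man"]), cosine(emb["doctor"], emb["woman"]))
print(cosine(emb["nurse"], emb["woman"]), cosine(emb["nurse"], emb["man"]))
```

Debiasing methods try to remove the gender component from such vectors, but, as the text notes, a residual fraction of stereotyped associations typically survives.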

All these models are created with some underlying assumptions that inform the data collection and labeling. 72 Fact-checking is done by journalists and researchers collecting data from sources that are deemed reliable or unreliable. 73 To evaluate the accuracy of information, the models use mainstream online newspapers and data labeled by journalists. There are several practices for labeling. It is mostly done by experts or researchers making use of journalist-managed sources. 74 The difficulties that arise in this process relate to the fact that there are many cases in between truth and falsity. The example of satire is particularly interesting: some models consider it “false information” while others do not. 75 The methods of data labeling are not always clear. 76 The entire process depends on the subjectivity of the evaluator. 77 Truth is often not so clearly distinguished from untruth. Although journalists are trained in fact-checking, their judgment is also subjective. Researchers by themselves do not seem able to draw the line between false information and the in-between cases of misinformation. 78 Given the state of the art in detecting misinformation, strengthening the Code of Practice on Disinformation is a very important step in the right direction. The journalists entrusted with the mission of informing these models must continue to receive rigorous training in professional ethics so that the resulting models are as reliable as possible.
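The labeling subjectivity described above is commonly quantified with inter-annotator agreement statistics. A minimal sketch of Cohen's kappa, which corrects raw agreement for chance; the two fact-checkers and their labels are invented for illustration, with the disagreements standing in for borderline cases such as satire:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    # Chance-corrected agreement between two annotators.
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    ca, cb = Counter(labels_a), Counter(labels_b)
    # Agreement expected if both labeled at random with their own rates.
    expected = sum((ca[k] / n) * (cb[k] / n) for k in set(ca) | set(cb))
    return (observed - expected) / (1 - expected)

# Two hypothetical fact-checkers labeling the same ten items.
a = ["false", "true", "false", "true", "true", "false", "true", "false", "true", "true"]
b = ["false", "true", "true",  "true", "true", "false", "true", "false", "false", "true"]
print(round(cohens_kappa(a, b), 3))
```

Raw agreement here is 80%, yet kappa is only about 0.58, i.e. "moderate" agreement: a concrete illustration of why labels produced by individual evaluators cannot simply be treated as ground truth.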

Researchers suggest that the solution to the problem of developing a misinformation detection model should focus on where the model will be implemented, in order to reduce the risk of false positives. 79 They also suggest involving users in determining what should be considered false information, as well as exploring models that adapt to “the changing nature of truth.” 80 The DSA attempts to include users in some respects by giving them the possibility to “flag” speech they consider hateful or misinformation. 81 Any solution in the area of misinformation should involve raising awareness, as some have found this serves the role of “immuniz[ing]” users. 82

Similar difficulties emerge in algorithms’ efforts to identify hate speech. Content moderation as practiced by the major platforms themselves is highly problematic. Scholars are alert to the dangers of imposing excessive limits on freedom of expression. 83 False positives are very frequent, especially when algorithms are entrusted with the mission of imposing limits upon speech, as has been the case throughout the pandemic. 84 This is because moderation technology is not completely accurate, and it is not certain whether hate speech detection algorithms are capable of detecting all nuances of speech. 85 European Commission reports note that the average removal rate of suspicious communication is 63.6%. 86 Any user may report a case of hateful content, and a large number of communications are removed following notifications submitted to major platforms by the “trusted flaggers,” which are organizations all over Europe that already participate in online monitoring exercises. 87 There is a growing difference in treatment between general users’ notifications and those sent by trusted flaggers. 88 In several instances, major social media platforms have disagreed with the notifying organizations. 89 Under the DSA, the national Digital Services Coordinators will play an important role in evaluating the platforms’ decisions in enforcing national standards of hate speech. This means that local community standards will carry great weight.
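The prevalence of false positives noted above follows partly from base rates: when genuinely hateful posts are a small fraction of all content, even a fairly accurate detector removes many legitimate posts. A minimal sketch with hypothetical numbers (none of the figures below come from the Commission reports cited in the text):

```python
def rates(tp, fp, fn, tn):
    # Derive the quantities moderators care about from a confusion matrix.
    precision = tp / (tp + fp)  # share of removals that were justified
    recall = tp / (tp + fn)     # share of hateful posts actually caught
    fpr = fp / (fp + tn)        # share of legitimate posts wrongly removed
    return precision, recall, fpr

# Hypothetical feed: 1,000 posts, only 20 of them actually hateful (2%).
tp, fn = 18, 2        # detector catches 90% of the hateful posts
fp = 49               # but wrongly flags 5% of the 980 legitimate posts
tn = 980 - fp

precision, recall, fpr = rates(tp, fp, fn, tn)
# With these numbers the detector removes 49 legitimate posts against
# only 18 hateful ones: most removals are false positives.
print(round(precision, 2), round(recall, 2), round(fpr, 2))
```

This base-rate effect is one reason scholars worry that algorithmic moderation, however well tuned, tends toward over-removal of lawful speech.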

Technology on the receiving end has also improved significantly. Those receiving information online can now affect the content that reaches them; they can edit it so that it is not perceived as hateful or insulting. For instance, it is possible to create a smart filter that reformulates hate speech or replaces it with something that approximates its semantic value. 90 Any user can apply this technology during online searches. Social media platforms are not yet using it, but it is worth exploring whether platforms should prefer this technique to limiting speech, because paraphrasing technology allows for solutions that are not black and white: users can alter what reaches them while the speaker’s expression is not limited entirely. It is extremely important to explore the philosophical and epistemological status of this practice.
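As a minimal illustration of the kind of receiving-end smart filter described above, the sketch below rewrites hostile wording with a hand-made substitution list. The word list and replacements are invented for illustration; the systems surveyed in note 90 use learned paraphrase models rather than fixed dictionaries.

```python
# Minimal sketch of a client-side filter that reformulates hostile wording
# before it reaches the reader while approximating its semantic value.
# The substitution table is purely illustrative.

import re

SOFTENING = {
    "idiotic": "misguided",
    "disgusting": "objectionable",
}

_PATTERN = re.compile("|".join(map(re.escape, SOFTENING)), re.IGNORECASE)

def soften(text: str) -> str:
    """Replace listed hostile terms with milder near-synonyms."""
    return _PATTERN.sub(lambda m: SOFTENING[m.group(0).lower()], text)

print(soften("That idiotic, disgusting take."))
# The reader sees the reformulated sentence; the speaker's original post
# remains unchanged on the platform.
```

Note the property the article highlights: the transformation happens on the recipient's side, so the speaker's expression is never removed, only re-presented, which is precisely what raises the autonomy questions discussed below.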

The use of paraphrasing technology in the area of hate speech has significant advantages; it may relieve platforms of the need to make moderation decisions altogether. If deployed on online platforms, however, it raises several concerns. The primary problem is that speakers are not made aware of the changes to their utterances, which raises significant questions about protecting the speaker’s autonomy and self-definition. Paraphrasing may also get out of hand and completely distort the author’s text, so the author must be protected against further uses of the paraphrased text.

Under international human rights law, speech may be regulated only if the principle of proportionality is respected. The European Convention on Human Rights, for instance, sets out the factors that the European Court of Human Rights uses in its proportionality analysis of government limitations on speech. 91 The question here is whether, if harm to others is to be averted, the speech limitation is an appropriate and proportionate response. Several interests and core values are protected by freedom of expression. 92 The classical defenses of free speech emphasize the importance of this liberty for individuals and societies alike. 93 Free speech has been characterized as “the most human right,” 94 and democratic interests are also served by its protection. 95 All these interests and values are seriously compromised when a person’s speech is altered.

We need to think further about whether it is permissible for platforms to use these filters. Limiting speech through moderation and informing users accordingly (as the DSA foresees) may be preferable to paraphrasing a user’s speech. Implementing paraphrasing mechanisms may seriously infringe on a user’s speech without the user knowing about it or being able to defend against the practice. Further research is required on the ethical issues raised by the prospect of using paraphrasing technology in the area of online hate speech.

Further research in social psychology is also required to evaluate the effects of exposure to hate speech online and to track what extremists do once blocked from online social media platforms. NGOs that favor eliminating limits on speech argue that blocking extremists from platforms drives them toward further radicalization on less moderated platforms. 96 Early research in communication and media studies indicates that bans on right-wing extremists imposed by mainstream social media platforms (Facebook and Twitter) very likely led them to “migrate” to platforms such as Telegram, which offer enhanced privacy and anonymity along with opportunities to gain publicity, coordinate, and mobilize. 97 The reduced transparency of these alternative platforms may shrink extremists’ audiences yet increase the radicalization of the audience members who remain. 98 The same research suggests that outright bans might not be the best way to reduce extremists’ influence; gradual bans administered to a few actors might be preferable, 99 since they cause serious coordination problems for such users.

V. Possible Areas of Conflict with U.S. Law

The DSA will likely create a clash in free expression standards between the U.S. and the EU. Some U.S. states, such as Texas and Florida, have already enacted legislation prohibiting platforms from engaging in viewpoint discrimination; indeed, both states legislated in response to the moderation practices the platforms adopted to conform with the EU Code of Conduct. Two federal courts of appeals have examined the constitutionality of this legislation under the First Amendment and have split in their rulings.

Florida’s SB 7072 100 prevents social media platforms from unfairly censoring, shadow banning, deplatforming, or applying post-prioritization algorithms to Florida candidates, users, or residents. The Eleventh Circuit found that this law violates the First Amendment rights of social media platforms. 101 For the court, the social media platforms express themselves through their content-moderation decisions. 102 The platforms are “curating” speech, and this activity is analogous to the editorial judgments of the press, which the Supreme Court has held are protected under the First Amendment. 103

Texas’ HB 20 prohibits large social media platforms from censoring speech based on speaker viewpoint. 104 The legislation provides that a social media platform may not censor a user, a user’s expression, or a user’s ability to receive the expression of another person based on the viewpoint of the user or another person, the viewpoint represented in the user’s expression or another person’s expression, or a user’s geographic location. The Fifth Circuit held the legislation constitutional under the First Amendment. 105 The court found that the law regulates the platforms’ conduct, not their speech. 106 It applied the common carrier doctrine, which allows states to impose non-discrimination obligations on communication and transportation providers that serve the public. 107 The court disagreed with the Eleventh Circuit’s ruling that platform editorial discretion is protected under the First Amendment. 108 For the Fifth Circuit, the platforms use algorithms to screen out certain content, which does not involve any editorial control. Furthermore, the platforms disclaim any reputational or legal responsibility for the content they host. 109 They merely engage in viewpoint-based censorship with respect to expression they already have disseminated. 110 The court cited in this respect 47 U.S.C. § 230, which provides that the platforms “shall [not] be treated as the publisher or speaker” of content developed by other users. 111

The Fifth Circuit’s ruling, if upheld, will likely clash with the DSA on hate speech. Under HB 20, limiting hate speech is impermissible viewpoint discrimination. While the Fifth Circuit also noted that HB 20 “expressly permits the Platforms to censor any unlawful expression and certain speech that ‘incites criminal activity or consists of specific threats,’” 112 it may still conflict with the DSA on the definition of incitement. The DSA refers to the Code of Conduct, which platforms have agreed to abide by, to define hate speech. The Code defines it, in part, as “public incitement to violence or hatred.” 113 This standard is much broader than the Brandenburg criteria in the U.S., under which only speech that encourages imminent lawless action that is very likely to occur may be outlawed. 114 The European standard is motivated by the need to preserve social peace: incitement is limited regardless of whether it encourages imminent lawless action. Thus, if the common carrier doctrine is upheld, social media companies will have to maintain different moderation standards in the U.S. than in the rest of the world, a task which can be challenging. The common carrier doctrine enables access to social media platforms and enhances speech rights. 115 In this respect, it mirrors the spirit of the DSA. The application of the doctrine, though, is likely to have negative unintended consequences for the platforms’ ability to moderate harmful speech.

VI. Conclusion

Increased online speech has raised concerns about hate speech and disinformation. The EU is tackling the problem through the DSA’s enhanced regulation of online speech, opting for a sophisticated system of government oversight. That system strengthens the complaint mechanisms available to citizens who feel excluded from social media and assigns an important role to Independent Administrative Authorities and the Digital Services Coordinators in supervising the content moderation the major platforms engage in. This supervision will contribute to defining the standards of “illegal hate speech” that the platforms will need to limit, and local community standards will carry great weight in shaping those criteria. Although the DSA offers a sophisticated system for regulating online social media platforms, further research is needed on its implementation. Important technological challenges confront both companies implementing the DSA and companies self-regulating content. Research is needed on improving the accuracy of the algorithms that limit hate speech, in social psychology on the impact that online incitement to hatred and violence may have, and on whether alternative technologies, such as paraphrasing technology, are appropriate tools for limiting hate speech online. The enhanced procedural requirements the DSA imposes are balanced by the opportunities for redress it institutes for users whose rights have been violated.

In any attempt to limit misinformation, we must be conscious of the limits of the current state of technology and of the challenges that technological developments raise for democracy. In the area of misinformation, journalists need appropriate training as they help develop the models that evaluate the truth or falsity of information available online. Given the technology’s shortcomings, strengthening the Code of Practice on Disinformation is a very important step, and further research on improving the state of the art in misinformation detection is also necessary. The global impact the Act is likely to have makes the need for further research on several aspects of its implementation all the more compelling.

Another challenge in applying the DSA relates to potential conflicts with U.S. law. In the U.S., content moderation has been practiced by the platforms themselves, owing to the absence of regulation in this area; platforms have generalized the moderation standards they developed to satisfy EU requirements. Whether the platforms will be able to continue moderating content in the U.S. will depend on future court rulings on the constitutionality of legislation against viewpoint discrimination. If the common carrier doctrine is upheld in the U.S., the platforms will need to maintain different standards of moderation in the U.S. and in Europe. Applying the doctrine will have the positive intended consequence of protecting users against exclusion from platforms and the negative unintended consequence of limiting the platforms’ ability to engage in content moderation. In the area of incitement to hatred and violence, platforms will not be able to apply the same moderation standards in the U.S. that they apply in Europe.

  • 1 Digital Services Act, 2022 O.J. (L 277) 1 [hereinafter DSA].
  • 2 2022 Strengthened Code of Practice on Disinformation , Eur. Comm’n (June 16, 2022), https://perma.cc/5SMQ-ZGYM.
  • 4 See Ioanna Tourkochoriti, Should Hate Speech Be Protected? Group Defamation, Party Bans, Holocaust Denial and the Divide Between Europe and the United States , 45 Colum. Hum. Rts. L. Rev. 552 (2014).
  • 5 Stephen Gardbaum, The “Horizontal Effect” of Constitutional Rights , 102 Mich. L. Rev. 387, 388 (2003).
  • 6 For hate speech in France, see Loi 2020-766 du 24 juin 2020 visant à lutter contre les contenus haineux sur internet [Law 2020-766 of June 24, 2020 on Combatting Hate Speech Online], Journal Officiel de la République Française [J.O.] [Official Gazette of France], June 24, 2020, p. 1, https://perma.cc/GMR9-DKDS. See also Décision 2020-801 DC du 18 juin 2020 [Decision 2020-801 of June 18, 2020], Journal Officiel de la République Française[J.O.] [Official Gazette of France], June 24, 2020, p. 5. For misinformation during electoral political campaigns in France, see Loi 2018-1202 du 22 Décembre 2018 relative à la lutte contre la manipulation de l’information [Law 2018-1202 of December 22, 2018 on Combatting the Manipulation of Information], Journal Officiel de la République Française [J.O.] [Official Gazette of France] Dec. 23, 2018, p. 3, art. 1 (modifying art. L. 163-2.-I. of the Electoral Code), https://perma.cc/849X-ZQ59; Décision 2018-773 DC du 20 décembre 2018 [Decision 2020-773 of December 20, 2018], Journal Officiel de la République Française [J.O.] [Official Gazette of France], Dec. 23, 2018, p. 79.
  • 7 For hate speech in Germany, see Gesetz zur Verbesserung der Rechtsdurchsetzung in den sozialen Netzwerken [Act to Improve Enforcement of Law in the Social Networks], BGBl. I, S. 3352 of Sept. 1, 2017, (Netzwerkdurchsetzungsgesetz, “NetzDG”), https://perma.cc/KRM9-THKD. See also Jörn Reinhardt: “„Fake News”, „Infox”, Trollfabriken. Über den Umgang mit Desinformationen in den sozialen Medien”, 225/226 Vorgänge 97–108 (2019); Claudia Haupt, Regulating Speech Online: Free Speech Values in Constitutional Frames , 99 Wash. U. L. Rev., 751–86 (2021).
  • 8 Code of Conduct on Countering Illegal Hate Speech Online , Eur. Comm’n (June 30, 2016) [hereinafter EU Code of Conduct ], https://perma.cc/72CG-NDMQ.
  • 9 DSA pmbl. ¶ 7.
  • 10 See Gardbaum, supra note 5.
  • 12 See generally Mark Tushnet, The Issue of State Action/Horizontal Effect in Comparative Constitutional Law , 1 Int’l J. Const. L. 79 (2003); Charles L. Black, Jr., Foreword: State Action, Equal Protection, and California’s Proposition , 81 Harv. L. Rev. 69 (1967); Louis Michael Seidman & Marc V. Tushnet, The State Action Paradox , in Remnants of Belief, Contemporary Constitutional Issues 49–71 (1996); Robert Glennon & John E. Novak, A Functional Analysis of the Fourteenth Amendment ‘State Action’ Requirement , 1976 Sup. Ct. Rev. 221 (1976).
  • 13 See Spandana Singh, Everything in Moderation Case Study: Facebook , New America (July 22, 2019), https://perma.cc/Y3JK-MTSU; Alexis C. Madrigal, Inside Facebook’s Fast-Growing Content-Moderation Effort , Atlantic (Feb. 7, 2018), https://perma.cc/5WLX-MRTY.
  • 14 See Katie Klonick, The Facebook Oversight Board: Creating an Independent Institution to Adjudicate Online Free Expression , 129 Yale L.J. 2418 (2020).
  • 15 Tim Wu, Is the First Amendment Obsolete? , Knight First Amend. Inst. (Sept. 1, 2017), https://perma.cc/Z4R6-TEE4 (noting that the First Amendment was elaborated in an information-free world and focused exclusively on protecting speakers from government). Wu argues that the First Amendment must be adapted to promote healthy speech environments by addressing a number of speech control techniques that have arisen due to communications technologies. First Amendment doctrine presupposed that information is scarce, that few people would be willing to invest in speaking publicly, and that listeners have abundant time to evaluate the information available to them. All these assumptions, together with the idea that the government is the main threat to the “marketplace of ideas,” are now obsolete. In our information-rich world, listeners are overwhelmed with information and attentional scarcity is an important issue. Furthermore, the government is no longer the only threat to free speech. Abusive online mobs, reverse censorship through counter programming, and the use of propaganda bots are also important threats.
  • 16 See Brian Leiter, The Epistemology of the Internet and the Regulation of Speech in America , 20 Geo. J.L. & Pub. Pol’y 903 (2022).
  • 17 Id. at 3.
  • 18 Id. at 4.
  • 19 Id. at 13.
  • 20 Id. at 14.
  • 21 See Readout of the White House Task Force to Address Online Harassment and Abuse Launch , The White House (June 17, 2022), https://perma.cc/FU46-5M7P.
  • 22 DSA pmbl. ¶ 45.
  • 24 Id. at 17, 26, 29.
  • 26 See id. at 7, 11–12, 15–16, 82.
  • 27 See id. at 12–13.
  • 28 EU Code of Conduct , supra note 8.
  • 29 Id. at 1. For this definition, the code refers to the Council Framework Decision 2008/913/JHA of 28 Nov. 2008 on Combating Certain Forms and Expressions of Racism and Xenophobia by Means of Criminal Law, 2008 O.J. (L 328) 55.
  • 30 EU Code of Conduct , supra note 8, at 2.
  • 32 See generally Anu Bradford, The Brussels Effect: How The European Union Rules The World (2020) (discussing the “Brussels effect”).
  • 33 Didier Reynders, 7th Evaluation of the Code of Conduct 5 (Nov. 2022), https://perma.cc/5TFS-XDCD.
  • 35 Id. at 1.
  • 36 See Evelyn Douek, Content Moderation as Systems Thinking , 136 Harv. L. Rev. 526, 540 (2022).
  • 38 DSA art. 23.
  • 39 Id. art. 23.
  • 40 Id. art. 16.
  • 41 Id. art. 16(1).
  • 42 Id. art. 17.
  • 43 Id. art. 18.
  • 44 DSA arts. 38–49.
  • 45 Id. art. 24.
  • 46 Id. art. 33.
  • 47 See Douek, supra note 36, at 598.
  • 48 DSA pmbl. ¶ 54.
  • 51 Id. art. 35(1).
  • 52 Id. pmbl. ¶ 58.
  • 53 See Douek, supra note 36, at 540.
  • 54 DSA art. 52.
  • 55 Id. art. 52(2).
  • 56 Id. art. 52(3).
  • 57 2022 Strengthened Code of Practice on Disinformation , supra note 2.
  • 58 Douek, supra note 36, at 544.
  • 59 2022 Strengthened Code of Practice on Disinformation , supra note 2, measure 21.1.
  • 60 Id. measure 22.4, QRE 22.4.1.
  • 61 Id. commitment 24.
  • 62 Id. commitment 31.
  • 63 Id. commitment 32, measure 32.2.
  • 64 Id. commitment 33, measure 33.1.
  • 65 See Sille Obelitz Søe, Algorithmic Detection of Misinformation and Disinformation: Gricean Perspectives , 74 J. Documentation 309, 309–31 (2018).
  • 66 Lynn E.M. de Rijk, Who Gets to Decide What Is True? The Free Speech Problem and the Importance of Datasets to False Information Detection Models 1 (2022) (unpublished research thesis) (on file with author). I am grateful to Lynn for the references to literature in Linguistic and Communication Sciences in this paper.
  • 67 Id. at 4.
  • 68 Tolga Bolukbasi et al., Man is to Computer Programmer as Woman Is to Homemaker? Debiasing Word Embeddings , 29 Advances in Neural Info. Processing Sys. 1, 1–8 (2016); Robyn Speer, ConceptNet Numberbatch 17.04: Better, Less-Stereotyped Word Vectors , ConceptNet (Apr. 24, 2017), https://perma.cc/FQ9V-QF33.
  • 69 Emily M. Bender & Batya Friedman, Data Statements for Natural Language Processing: Toward Mitigating System Bias and Enabling Better Science , 6 Transactions Ass’n for Computational Linguistics 587, 587–604 (2018).
  • 70 Bolukbasi et al., supra note 68, at 8.
  • 72 De Rijk, supra note 66, at 11.
  • 73 Id. at 13.
  • 74 Id. at 21.
  • 75 Id. at 17.
  • 76 Id. at 21.
  • 77 Id. at 23.
  • 78 De Rijk, supra note 66, at 23.
  • 79 Id. at 24.
  • 80 Id. at 25.
  • 81 DSA art. 16.
  • 82 See Nico Grant & Tiffany Hsu, Google Finds ‘Inoculating’ People Against Misinformation Helps Blunt Its Power , N.Y. Times (Aug. 24, 2022), https://perma.cc/UQ7D-MWSS.
  • 83 Douek, supra note 36, at 548.
  • 84 Id. at 549–50.
  • 85 Id. at 569.
  • 86 Reynders, supra note 33, at 1.
  • 87 Id. at 2 (in 2022, IT companies surveyed removed 63.6% of content flagged, most of which is reported by “trusted flaggers”); DSA pmbl. ¶ 61.
  • 88 Reynders, supra note 33, at 2.
  • 90 See Jianing Zhou & Suma Bhat, Paraphrase Generation: A Survey of the State of the Art , in Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing 5075–86 (Ass’n of Computational Linguistics, 2021).

  • 91 European Convention on Human Rights art. 10(1) (“Freedom of expression. 1. Everyone has the right to freedom of expression. This right shall include freedom to hold opinions and to receive and impart information and ideas without interference by public authority and regardless of frontiers. This article shall not prevent States from requiring the licensing of broadcasting, television or cinema enterprises.”).
  • 92 See generally Joshua Cohen, Freedom of Expression , 22 Phil. & Pub. Affs. 207 (1993) (discussing the following interests: an expressive interest, an interest in obtaining information in view of finding out which is the right way to act, and an interest in obtaining information from a secure source on the conditions that are necessary for the pursuit of our goals and aspirations).
  • 93 John Stuart Mill, On Liberty 21–61 (John Gray ed., 1998) (offering a classical consequentialist defense for free speech).
  • 94 See Eric Heinze, The Most Human Right: Why Free Speech Is Everything (2022).
  • 95 Kent Greenawalt, Free Speech Justifications, 89 Colum. L. Rev. 119, 125–29 (1989); see also Eric Heinze, Free Speech and Democratic Citizenship (2016).
  • 96 Jacob Mchangama et al., Thoughts for the DSA: Challenges, Ideas and the Way Forward Through International Human Rights Law 5 (2022).
  • 97 Aleksandra Urman & Stefan Katz, What They Do in the Shadows: Examining the Far-Right Networks on Telegram , 25 Info. Commc’n & Soc’y 904, 904–23 (2022).
  • 98 Id. at 918–19.
  • 99 Id. at 919.
  • 100 Fla. SB 7072.
  • 101 NetChoice, LLC v. Attorney General, Florida, 34 F.4th 1196 (11th Cir. 2022).
  • 102 Id. at 1212.
  • 103 See, e.g., Miami Herald Pub. Co. v. Tornillo, 418 U.S. 241 (1974).
  • 104 H.B. 20.
  • 105 NetChoice, LLC v. Paxton, 49 F.4th 439 (5th Cir. 2022).
  • 106 Id. at 448.
  • 108 Id. at 488.
  • 109 Id. at 464.
  • 111 Protection for Private Blocking and Screening of Offensive Material, 47 U.S.C. § 230(c)(1) (2018).
  • 112 NetChoice , 49 F.4th at 452.
  • 113 EU Code of Conduct , supra note 8, at 1; see also Council Framework Decision 2008/913/JHA of 28 Nov. 2008 on Combating Certain Forms and Expressions of Racism and Xenophobia by Means of Criminal Law, 2008 O.J. (L 328) 55.
  • 114 Brandenburg v. Ohio, 395 U.S. 444 (1969).
  • 115 See Eugene Volokh, Treating Social Media Platforms Like Common Carriers? , 1 J. Free Speech L. 377 (2021).

Aaron Hall Attorney

Legal Approaches to Competition Law Compliance in Digital Markets

Effective competition law compliance in digital markets requires a nuanced understanding of market dynamics characterized by rapid innovation, network effects, and low barriers to entry. A robust compliance framework is crucial to identify, assess, and prioritize risks, taking into account the evolving digital landscape, emerging technologies, and changing regulatory requirements. Companies must develop a deep understanding of digital market dynamics, antitrust regulations, and merger control in order to navigate these complexities, mitigate risks, and sustain growth and profitability in the digital economy. The sections below examine each of these elements in turn.

Understanding Digital Market Dynamics

Digital markets frequently exhibit dynamics that set them apart from traditional brick-and-mortar economies. The digital landscape is characterized by rapid innovation, network effects, and low barriers to entry, leading to increased competition and market volatility. In this environment, market segmentation plays a critical role in understanding consumer behavior and preferences. Digital markets often divide into distinct segments, such as geographic, demographic, or behavioral segments, which require tailored marketing strategies and competitive approaches.

To effectively navigate these complexities, businesses must develop a deep understanding of their digital market dynamics. This involves analyzing market structures, identifying key players, and evaluating the impact of digital platforms on competition. Additionally, companies must continually monitor and adapt to changes in the digital landscape, such as shifts in consumer behavior or the emergence of new technologies. By doing so, businesses can develop effective competition strategies that account for the unique characteristics of digital markets and ultimately drive sustainable growth and profitability.

Risk Assessment and Compliance Strategies

In the fast-paced digital economy, companies frequently encounter a multitude of risks that can threaten their competitive advantage and even their very existence. Effective risk assessment and compliance strategies are vital to mitigate these risks and guarantee sustainable business operations. A robust compliance framework is pivotal to identify, assess, and prioritize risks. This framework should be tailored to the company's specific digital market dynamics and business model.

Risk matrices can be a valuable tool in this process, enabling companies to visualize and quantify risks. By plotting the likelihood and impact of each risk, companies can focus on the most critical areas and allocate resources accordingly. A thorough risk assessment should also consider the evolving digital landscape, including emerging technologies and changing regulatory requirements. By integrating risk assessment into their compliance strategies, companies can proactively address potential risks and maintain a competitive edge in the digital market. Ultimately, a well-designed compliance framework and risk assessment strategy can help companies navigate the complex digital landscape and guarantee long-term success.
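The risk-matrix idea described above can be sketched in a few lines of code. The risks and their likelihood/impact scores below are hypothetical, chosen only to show how plotting likelihood against impact yields a priority ranking.

```python
# Illustrative sketch of a compliance risk matrix: score each risk by
# likelihood and impact on a 1-5 scale, then rank by their product so
# resources go to the most critical areas. All entries are hypothetical.

risks = {
    "self-preferencing in search results": (4, 5),  # (likelihood, impact)
    "exclusive dealing with suppliers": (2, 4),
    "data-sharing without user consent": (3, 3),
}

# Sort by likelihood x impact, highest priority first.
ranked = sorted(risks.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True)

for name, (likelihood, impact) in ranked:
    print(f"score {likelihood * impact:>2}: {name}")
```

A real compliance program would of course ground these scores in evidence and revisit them as the regulatory landscape changes; the sketch only shows the prioritization mechanics.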

Antitrust Regulations in Digital Space

In the digital space, the exercise of market power takes on new forms, posing unique challenges for antitrust regulators. The accumulation of vast amounts of data, network effects, and the ability to influence user behavior can confer significant market power on digital companies. As a result, regulators must develop novel approaches to address these emerging issues and guarantee effective enforcement of antitrust regulations in the digital economy.

Digital Market Power

Several sectors have witnessed a significant shift towards digitization, and the digital market has emerged as a vital arena where companies can wield substantial market power. In this context, understanding digital market power is pivotal for effective competition law compliance. Market boundaries and power metrics are vital components in evaluating digital market power.

Market boundaries           | Power metrics                   | Digital market characteristics
Relevant product market     | Revenue market share            | Network effects, scalability
Relevant geographic market  | Profit margins                  | Data-driven decision making
Multi-sided platforms       | User engagement metrics         | Dynamic pricing, personalization
Digital ecosystems          | Barriers to entry and expansion | Rapid innovation cycles

In digital markets, traditional power metrics such as revenue market share may not be sufficient to capture market power. Instead, metrics such as user engagement, profit margins, and barriers to entry and expansion may be more relevant. Additionally, digital market characteristics like network effects, scalability, and data-driven decision making can contribute to market power. By considering these factors, companies can better evaluate their digital market power and guarantee compliance with competition law regulations.

Enforcement Challenges

How can antitrust regulators effectively enforce competition law in the rapidly evolving digital landscape, where traditional enforcement tools often prove inadequate? The answer lies in adapting to the unique challenges posed by digital markets. One significant hurdle is data privacy, as regulators must navigate complex data protection laws to access evidence of anti-competitive conduct. Additionally, the borderless nature of digital markets creates jurisdictional hurdles, as regulators must coordinate with counterparts across multiple jurisdictions to tackle global antitrust issues.

To overcome these challenges, regulators must develop innovative enforcement strategies. This may involve leveraging advanced data analytics to identify patterns of anti-competitive behavior, as well as collaborating with international partners to share intelligence and best practices. Also, regulators must stay abreast of emerging trends and technologies, such as artificial intelligence and blockchain, to anticipate potential competition law issues. By adopting a proactive and adaptive approach, antitrust regulators can stay ahead of the curve and guarantee effective enforcement of competition law in digital markets.

Merger Control in Digital Markets

In the sphere of digital markets, merger control plays a critical role in ensuring that transactions do not harm competition. The digital deal screening process is a vital component of this regime, as it enables authorities to identify potential competition concerns at an early stage. The merger clearance process, which involves a thorough assessment of the transaction's impact on the market, is a key safeguard against anti-competitive outcomes.

Digital Deal Screening

Digital markets' increasingly complex and dynamic nature has led to a growing need for effective merger control mechanisms, as even seemingly innocuous deals can have far-reaching consequences for competition and innovation. In this framework, digital deal screening is a critical component of merger control in digital markets. It involves a thorough deal review to identify potential competition concerns that may arise from a proposed transaction. This digital scrutiny is vital to prevent anti-competitive outcomes, such as the creation of monopolies or the foreclosure of rival firms.

Effective digital deal screening requires a deep understanding of the digital market's specific characteristics, including network effects, data-driven business models, and rapidly evolving technologies. Regulators must be equipped with the necessary tools and expertise to conduct a rigorous deal review, evaluating the potential impact of the transaction on competition and innovation. This includes analyzing the parties' market position, the level of concentration in the market, and the potential for vertical or horizontal effects. By conducting a thorough digital deal screening, regulators can identify and address potential competition concerns, ultimately promoting a competitive and innovative digital market.
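One conventional way to quantify the "level of concentration in the market" mentioned above is the Herfindahl-Hirschman Index (HHI), the sum of squared market shares. The text does not name the HHI; it is used here only as a familiar example of a concentration measure, with hypothetical shares.

```python
# Illustrative sketch: the Herfindahl-Hirschman Index (HHI), a standard
# measure of the market concentration a merger review examines.
# Shares are hypothetical percentages summing to 100.

def hhi(shares):
    """Sum of squared market shares in percent; 10,000 indicates monopoly."""
    return sum(s ** 2 for s in shares)

pre_merger = [40, 30, 20, 10]   # four firms
post_merger = [40, 30, 30]      # the two smallest firms combine

print(hhi(pre_merger), hhi(post_merger))  # 3000 3400
```

The jump from 3,000 to 3,400 illustrates why even a merger of smaller rivals can attract scrutiny: concentration rises by the product term the combination creates, not merely by the merged firms' prior shares.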

Merger Clearance Process

The merger clearance process is a crucial step in guaranteeing that transactions in digital markets do not harm competition. In digital markets, mergers and acquisitions can have significant implications for competition, and it is crucial to evaluate the potential impact of such transactions on the competitive landscape. The merger clearance process involves a thorough review of the proposed transaction to determine whether it is likely to substantially lessen competition or create a monopoly.

The process begins with the notification of the proposed transaction to the relevant competition authority, which then assesses the deal against specific deal thresholds. These thresholds vary by jurisdiction but typically relate to the size of the transaction and the parties involved. If the transaction meets the thresholds, a filing fee is payable, and the authority conducts a detailed review of the transaction. The authority may request additional information from the parties, and in some cases, may even impose conditions on the transaction to mitigate any potential competitive harm. Throughout the process, parties must cooperate fully with the authority and provide all required information to facilitate a smooth and timely review.

Abuse of Dominance Prohibitions

Market leaders wield significant influence over their respective industries, and with this power comes the risk of abusing their dominant position. This is particularly concerning in digital markets, where network effects and economies of scale can rapidly amplify market leverage. To mitigate this risk, competition authorities employ dominance metrics to assess the scope of a firm's market power. These metrics include market share, revenue, and profitability, as well as qualitative factors such as barriers to entry, customer switching costs, and the presence of countervailing power.
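One widely used quantitative concentration metric is the Herfindahl-Hirschman Index (HHI), the sum of the squared market shares of all firms in the market. The sketch below computes it for a hypothetical market; the share figures are invented, and authorities weigh such metrics alongside the qualitative factors noted above.

```python
def herfindahl_hirschman_index(market_shares_pct):
    """HHI: sum of squared market shares expressed in percent.
    Ranges from near 0 (atomistic market) to 10,000 (pure monopoly);
    values above roughly 2,500 are conventionally treated as highly
    concentrated under common agency benchmarks."""
    return sum(share ** 2 for share in market_shares_pct)

# Hypothetical market: four firms with 40/30/20/10 percent shares
hhi = herfindahl_hirschman_index([40, 30, 20, 10])
print(hhi)  # 3000: highly concentrated under the usual benchmarks
```

A strength of the HHI over a simple largest-firm share is that it reflects the whole distribution: a market split 40/30/20/10 scores lower than one split 70/10/10/10, even though both have a clear leader.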

Vertical Agreements and Restraints

Vertically integrated firms often establish contractual agreements with their upstream suppliers or downstream distributors, which can have a profound impact on the competitive landscape. These vertical agreements can take various forms, including exclusive dealing arrangements that restrict a supplier's ability to sell to other buyers or a distributor's ability to purchase from other suppliers.

Such agreements can have anti-competitive effects, such as foreclosing competitors' access to vital inputs or limiting their ability to enter new markets. To mitigate these risks, competition authorities scrutinize vertical agreements for potential restraints on competition.

  • Territorial limits, which restrict a distributor's ability to sell products outside a designated territory, can be particularly problematic.
  • Exclusive dealing arrangements can likewise foreclose competitors from vital inputs or distribution channels.
  • Agreements that impose minimum purchase requirements or minimum resale prices can also have anti-competitive effects.
  • Competition authorities may require firms to modify or terminate such agreements to restore competition in the market.

Digital Market-Specific Compliance Challenges

In the digital marketplace, unique compliance challenges arise from the distinct characteristics of digital goods and services. These challenges are exacerbated by the rapid pace of innovation and the ever-evolving digital landscape. The intangible nature of digital products and services, coupled with the ease of scalability and adaptability, creates complexities in identifying and addressing competition law concerns.

Moreover, online ecosystems, characterized by network effects and feedback loops, can give rise to unique competition law issues. The interconnectedness of digital platforms, apps, and services can lead to complex relationships between competitors, suppliers, and customers, making it difficult to determine the boundaries of relevant markets and assess competitive effects. In addition, the use of algorithms and artificial intelligence in digital markets raises concerns about transparency, bias, and potential anti-competitive conduct. To navigate these challenges, companies operating in digital markets must develop a deep understanding of the digital landscape and its nuances, as well as the implications of competition law on their business strategies and operations.

Effective Compliance Program Development

Most companies operating in digital markets recognize the importance of establishing an effective compliance program to mitigate the risks of competition law non-compliance. Such a program is a vital component of a company's overall risk management strategy, enabling it to identify, assess, and mitigate competition law risks.

A well-designed compliance program should be based on a robust compliance framework that outlines the company's compliance policies, procedures, and standards. This framework should be supported by a strong compliance culture that encourages employees to prioritize compliance and report any potential compliance issues.

Key elements of an effective compliance program include:

  • A clear and concise compliance policy that sets out the company's commitment to compliance with competition laws
  • Regular training and awareness programs to educate employees on competition law risks and compliance obligations
  • A designated compliance officer or team responsible for overseeing and implementing the compliance program
  • A system for reporting and investigating potential compliance issues, including a whistleblower hotline and incident response plan

Auditing and Monitoring Compliance Efforts

Compliance programs are only as effective as their implementation and enforcement. To ensure a program's success, auditing and monitoring are vital: regularly reviewing and evaluating the program's implementation, identifying areas for improvement, and taking corrective action.

Typical monitoring activities and the audit trails that evidence them include:

  • Track employee training participation, evidenced by records of training completion certificates
  • Monitor whistleblowing reports, evidenced by documented investigation and resolution processes
  • Analyze compliance policy updates, evidenced by version control and change logs
  • Review third-party vendor contracts, evidenced by audit reports on vendor compliance
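As a rough sketch of how such an audit trail might be recorded programmatically, the snippet below pairs each monitoring activity with its supporting evidence and a timestamp. The activity names mirror the examples above; the data model itself is a hypothetical illustration, not a prescribed format.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AuditEntry:
    """One monitored compliance activity and the evidence backing it."""
    activity: str
    evidence: str
    recorded_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

trail: list[AuditEntry] = []
trail.append(AuditEntry("Track employee training participation",
                        "Training completion certificates on file"))
trail.append(AuditEntry("Monitor whistleblowing reports",
                        "Investigation and resolution documented"))

# Print the trail in a simple date | activity | evidence layout
for entry in trail:
    print(f"{entry.recorded_at:%Y-%m-%d} | {entry.activity} | {entry.evidence}")
```

Keeping timestamps alongside the evidence is what turns a checklist into an audit trail: it lets the organization demonstrate not just that a control exists, but when it was last exercised.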

Auditing and monitoring provide valuable insights into the program's effectiveness and surface potential risks. By tracking compliance metrics and maintaining audit trails, organizations can demonstrate their commitment to compliance and reduce the risk of non-compliance. Regular auditing and monitoring also enable organizations to refine their compliance programs, addressing emerging risks and ensuring continued compliance with competition laws in digital markets.

Frequently Asked Questions

How Do I Ensure Compliance With Rapidly Evolving Digital Market Regulations?

To ensure compliance with rapidly evolving digital market regulations, stay abreast of regulatory updates and develop compliance roadmaps that incorporate proactive risk assessments, thorough gap analyses, and iterative adaptation to emerging standards and guidelines.

Can Digital Market-Specific Compliance Challenges Be Addressed With Traditional Methods?

Traditional compliance methods may struggle to address digital market-specific challenges, as digital complexity and regulatory gaps create unique obstacles; novel approaches are required to navigate the intersection of technology and competition law effectively.

What Role Do Data Protection Laws Play in Competition Law Compliance?

Data protection laws substantially influence competition law compliance by mandating data anonymity and stringent privacy safeguards, ensuring that companies prioritize consumer data security and fair market practices, thereby fostering a trustworthy digital ecosystem.

Are There Industry-Specific Guidelines for Digital Market Compliance Programs?

Industry-specific guidelines for digital market compliance programs are established through a combination of industry standards and regulatory frameworks, providing tailored guidance for companies to navigate complex digital market regulations and ensure effective compliance.

Can Artificial Intelligence Be Used to Detect Antitrust Violations?

Yes, artificial intelligence can be leveraged to detect antitrust violations through AI monitoring and algorithmic auditing, enabling proactive identification of potential infringements and facilitating more efficient compliance programs.
