The future of mediation could look dramatically different, as AI-based mediators are poised to automate aspects of dispute resolution. While human mediators won’t disappear any time soon, explains professional mediator Audrey Berland in her insightful dive into the future of her industry, AI-based mediators could collaborate with humans to resolve disputes, potentially with more speed and impartiality. While this possible development boasts many positive attributes, it is essential to confront the legal and ethical concerns surrounding this emerging technology, including its risks of bias.
- AI-based mediators could automate elements of dispute resolution in the future.
- Virtual mediators exhibit several strengths, but also several limitations.
- AI mediation raises considerable ethical and legal concerns.
AI-based mediators could automate elements of dispute resolution in the future.
AI is poised to revolutionize dispute resolution processes, due to the emergence of AI-based mediators, or “virtual mediators,” which use machine learning techniques and advanced algorithms to help resolve conflict. AI mediators can either apply predefined laws or regulations to a case to guide the decision-making process or comb through an enormous data set of past mediation cases to inform their suggestions.
Yet AI-based mediators shouldn’t replace human mediators entirely. Rather, human mediators should use AI mediation platforms as tools to offer a more objective perspective to those in conflict.
“AI-based mediators can process large amounts of data quickly and efficiently, which can result in a faster and more efficient mediation process.”
The processes that humans and machines use to resolve conflict are strikingly similar: While a human relies on knowledge, skills and experience to offer dispute-resolution recommendations, an AI relies on programming. AI-based mediators contribute to dispute resolution by receiving information – arguments, evidence, laws, and so on – from both parties in various formats, including text, audio and video. Next, they use natural language processing (NLP) to identify key weaknesses and strengths in both parties’ arguments and to compare the conflict to past cases to identify laws and precedents. Finally, they generate resolution suggestions or a list of potential options. Additionally, AI could act as a communication facilitator between the parties, encouraging productive dialogue through generated responses.
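The pipeline described above can be illustrated with a conceptual sketch. This is not a real mediation platform's code: every name here (`Submission`, `analyze_arguments`, `find_precedents`, `suggest_resolutions`) is a hypothetical stand-in, and toy heuristics take the place of the NLP and case-matching steps the article mentions.

```python
# Conceptual sketch of the AI-mediation pipeline described above.
# All names are hypothetical; toy heuristics stand in for real NLP.
from dataclasses import dataclass


@dataclass
class Submission:
    """One party's input: free-text arguments, evidence, cited laws."""
    party: str
    arguments: list


def analyze_arguments(submission):
    """Stand-in for the NLP step that flags strengths and weaknesses.
    Toy heuristic: arguments citing evidence count as strengths,
    bare assertions as weaknesses."""
    strengths = [a for a in submission.arguments if "evidence:" in a]
    weaknesses = [a for a in submission.arguments if "evidence:" not in a]
    return strengths, weaknesses


def find_precedents(case_summary, past_cases):
    """Stand-in for comparing the conflict to past cases:
    naive keyword overlap instead of real similarity search."""
    words = set(case_summary.lower().split())
    return [c for c in past_cases if words & set(c.lower().split())]


def suggest_resolutions(analyses, precedents):
    """Combine analyses and precedents into a list of options
    for a human mediator to review -- the AI assists, not decides."""
    options = [f"Discuss precedent: {p}" for p in precedents]
    for party, (strengths, weaknesses) in analyses.items():
        if weaknesses:
            options.append(f"Ask {party} to substantiate: {weaknesses[0]}")
    return options


# Example run with invented inputs:
a = Submission("Party A", ["evidence: signed contract",
                           "the delay was unreasonable"])
analyses = {a.party: analyze_arguments(a)}
precedents = find_precedents("contract delay dispute",
                             ["2019 contract delay case",
                              "unrelated zoning matter"])
options = suggest_resolutions(analyses, precedents)
```

The key design point mirrors the article's argument: the final step yields a list of options rather than a binding decision, keeping the human mediator in control.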
Virtual mediators exhibit several strengths, but also several limitations.
AI-based mediators are limited. For one, if the AI receives biased data, it won’t be able to perform its job fairly or accurately, and establishing whether the programmer has fed the AI skewed data is nigh on impossible. Moreover, parties in a dispute resolution often withhold information to try to earn themselves the best possible outcome, skewing the results. AI mediators also lack true emotional intelligence, and participants might not feel truly “heard,” as the AI mediator can only mimic empathy. While human mediators can offer creative solutions to problems, AI’s suggestions are rigid and lack nuance.
“The accuracy and ‘fairness’ of the AI platform is only as good as the data it is fed.”
However, AI-based mediators also boast several advantages: They can quickly analyze massive data sets, from which they can draw solutions. Virtual mediators can work faster and more cost-effectively than human mediators. They could also, theoretically, respond more impartially than human mediators, given that they don’t have emotions. Algorithms adept at pattern recognition could also detect small changes in language to gauge human emotion, picking up on “syntax signaling hostility” and de-escalating conflict scenarios when people display anger.
AI mediation raises considerable ethical and legal concerns.
AI mediation triggers several complex legal considerations. In particular, people tend to worry about how these platforms will use their personal data. The developers of AI-based mediators must be transparent about how they’ll use and protect users’ personal data, while complying with data protection laws and relevant regulations. Other implications have not yet been ironed out. For example, if an AI-based mediator produces a document or report, is it admissible in court? Or if an AI-based mediator makes a mistake, who is responsible? One possible solution would require developers to cover liabilities with insurance and provide indemnification.
“AI-based mediators can be more efficient and cost-effective, but at the end of the day, there is no substitute for human mediators who have the ability to adapt their approach to the specific needs of each case and bring emotional intelligence to the process.”
As AI mediation technology emerges, a slew of ethical considerations arise. For example, will the AI-based mediator be able to ensure people are truly making empowered choices when it comes to mediation? Will it make impartial judgments if the data it runs on contains biases? And can it be transparent, when complex algorithms that few people understand are responsible for churning out decisions? Ultimately, AI-based mediators won’t fully replace human mediators, as mediation requires emotional intelligence, adaptability, flexibility and empathy.
About the Author
Audrey Berland works as a mediator for Miles Mediation & Arbitration.