As part of a headline project announced this summer, Stats Perform and AI integrity partner Signify pledged to enhance Threat Matrix, a sophisticated monitoring tool that helps tackle the social media abuse witnessed across all sports markets.
Project leads Jake Marsh, Global Head of Integrity at Stats Perform, and Jonathan Hirshler, CEO of Signify, outline the multi-layered challenges of tackling online abuse by expanding the remit of sports integrity to monitor nuanced engagements across digital platforms – a conundrum facing all businesses in safeguarding their digital environments.
SBC: Hi Jake and Jonathan… thanks for this interview. Stats Perform has announced a partnership with ethical data science company Signify and its headline Threat Matrix service, which tackles online abuse.
Why have Stats Perform chosen to tackle the prominent matter of online abuse in the interest of all sports stakeholders?
Jake Marsh (Global Head of Integrity at Stats Perform): Social media abuse is recognised as one of the most serious problems facing sport, with calls from fans, players, teams, leagues, and even governments to take action. At Stats Perform we recognise that the sports integrity field needs to expand to include an increased focus on this issue, especially given its impact on sport and the welfare of its participants.
As part of our approach to being a responsible sports stakeholder, we are offering a service that assists federations, rights-holders and teams in identifying the nature of the problem they are facing and empowering them to take action against online abusers. We had been speaking with Signify since the start of 2021, and it became clear we shared a common philosophy in taking a proactive approach to tackling sports integrity issues, and online abuse in particular.
SBC: Can the complexities of identifying online abuse (hate) be simply resolved by an algorithmic solution? How have Stats Perform and Signify approached the technical boundaries and realities of identifying actual threats?
Jonathan Hirshler (CEO – Signify): This was almost the exact question we asked ourselves in developing Threat Matrix in the first instance. Can we use a mixture of AI, machine learning and human intelligence (we call it augmented intelligence) to identify targeted online abuse, with all the contextual and nuanced terms used across different sports and industries?
We’ve been working on training, refining and developing our solution for over two years. As a result, we are now able to pick up far more comprehensive results, across a broader range of issues that athletes have been asking for help with, than we’ve seen from anyone else, including the platforms. We have a specific focus on highlighting the real issues and identifying abusive account owners – we are driven by getting to the source of the problem and offering a real deterrent.
Our secret weapon in this endeavour is an academic Communications Threat Assessment Protocol called CTAP-25. Developed by security experts Theseus, CTAP-25 identifies a number of signifiers that help determine where a message or account could become a clear and present danger to an athlete. Our methodology is grounded in this kind of thinking, and it has allowed us to develop a super-smart, always-learning approach to identifying targeted online threats.
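By way of illustration only (the actual CTAP-25 signifiers and thresholds are not detailed in this interview), a checklist-style assessment of the kind Jonathan describes might be sketched as follows – every indicator name and cut-off below is hypothetical:

```python
# Hypothetical sketch of a CTAP-25-style checklist assessment. The real
# protocol's signifiers and thresholds are not public here; placeholders only.
from dataclasses import dataclass, field

@dataclass
class MessageAssessment:
    text: str
    indicators: dict = field(default_factory=dict)  # signifier name -> present?

    def score(self) -> int:
        """Count how many risk signifiers the message triggers."""
        return sum(1 for present in self.indicators.values() if present)

    def triage(self) -> str:
        """Map the signifier count to an escalation tier (thresholds illustrative)."""
        s = self.score()
        if s >= 3:
            return "escalate-to-analyst"  # potential clear and present danger
        if s >= 1:
            return "monitor"              # keep the account under observation
        return "no-action"

msg = MessageAssessment(
    text="example post",
    indicators={
        "direct_threat_language": True,
        "fixation_on_target": True,
        "reference_to_location": False,
    },
)
print(msg.triage())  # two signifiers present -> "monitor"
```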
SBC: How does the monitoring of abuse differ from Stats Perform’s wider sportstech projects in terms of scope, size and technical remit?
JM: Proactive monitoring of social media is a huge task to undertake manually, which is why Signify’s Threat Matrix service leverages AI-driven natural language understanding and machine learning to identify targeted abuse at scale. This is then combined with expert analysis delivering a meanings-based assessment of the data, blending the speed and scale of AI with the nuance of human interpretation to produce evidence-based reports and actionable recommendations.
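As a rough illustration of that blend – not a description of Threat Matrix’s internals, which are proprietary – a triage pipeline might pair a trained classifier with a human review queue, as in this hypothetical Python sketch:

```python
# Minimal sketch of the "augmented intelligence" pattern: a classifier flags
# likely abuse at scale, while borderline scores are routed to a human analyst
# rather than auto-actioned. Model, corpus and thresholds are toy placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labelled examples (1 = abusive, 0 = benign); a real system would train on
# a large, expert-labelled corpus covering sport-specific slang and context.
posts = ["great goal today", "you are a disgrace, quit",
         "loved the match", "get out of our club or else"]
labels = [0, 1, 0, 1]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

def triage(post: str, auto_flag: float = 0.9, review: float = 0.5) -> str:
    """Route a post: auto-flag clear abuse, queue borderline cases for a human."""
    p_abusive = model.predict_proba([post])[0][1]
    if p_abusive >= auto_flag:
        return "flag-for-evidence-pack"  # high confidence: build an evidence pack
    if p_abusive >= review:
        return "human-review"            # nuance needed: expert analyst decides
    return "no-action"

print(triage("you are a disgrace, quit"))  # likely "human-review" on this toy corpus
```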
In this respect, there are similarities to our other sportstech integrity offerings such as Betting Markets Monitoring and our unique Performance Integrity Analysis (PIA) service. PIA includes video analysis along with the use of advanced models and metrics powered by our Opta database, combining quantitative data with qualitative analysis undertaken by our in-house performance integrity experts. Whilst the offerings may be different, the fundamental core of using AI and tech solutions aligned with human analysis is key to these services.
SBC: Following high profile events this summer, should online abuse of athletes be included within the UK government’s proposed Online Harms Bill?
JH: The Online Safety Bill (formerly the Online Harms Bill) has the potential to be world-leading and really set a new bar in terms of how social media companies need to take more responsibility in protecting their own users. If the lawmakers get this right, specifying athletes or any other type of individual within the Bill shouldn’t matter – we need to ensure that all users across these platforms are protected from the vile abuse that’s been on the rise in recent years.
The evidence we developed for the Professional Footballers’ Association was submitted to the Committee scrutinising the Bill and cited by ex-players like Rio Ferdinand, who really hit home what it feels like to be on the receiving end of online abuse. This evidence was used to underpin our client’s dialogue with social media platforms, leading to tangible change in how the issues are being addressed. If the Bill is successful, it will give regulatory bodies like OFCOM real teeth to hold those who fail to act accountable. This could also provide a useful stimulus for other countries and lawmakers to take similar action.
SBC: For sports does online abuse have a ‘duty-of-care conundrum’? Which stakeholder (club, governing body, media, broadcaster) carries the duty of monitoring hateful content?
JM: We would agree that this has been an ongoing challenge when dealing with online abuse in sport – both on a macro level, in terms of which organisations should have primacy on the issue, and on a more micro level, where we have seen confusion over whose remit it should fall under within those organisations. For example, should responsibility sit with a Head of Communications, with Player Welfare, or under a Diversity, Equality and Inclusion role? Many sports bodies are still working this out.
Ultimately, all sports industry stakeholders have a degree of duty of care in this area. Whilst in some sports we may see a federation or governing body take the lead, others may instead have a club/team-focussed approach that looks at the problem more from an employer’s duty of care perspective. Clearly, the social media platforms themselves also have a responsibility to try and counter this abuse. Either way, it’s vital that all bodies work together collaboratively, sharing information, resources and best practices to ensure players and officials are protected from targeted online threats and discriminatory abuse.
SBC: Evaluating real-life examples and engagements, can open content/opinion-led platforms ever be considered ‘safe environments’ for all audiences?
JH: Will we ever get to a position where there is no abuse and social platforms are 100% safe? Probably not. But if we can help clients to really hold some of these bad actors to account by taking tangible action, we could see a sea change across sport and social media.
It’s also worth pointing out that whilst our initial focus has been on public social media, we’re also seeing problems reported on private social channels. To avoid a situation where abusive account owners simply switch from public messages to direct messages, we’ve developed innovative techniques that identify threatening or abusive messages sent via DMs. We’ve done this sensitively, developing a smart process that protects the privacy of the victim.
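Signify has not published how its DM process works, but one privacy-preserving pattern consistent with that description would be to screen the thread on the victim’s side and export only the sender’s flagged messages, never the victim’s own replies. A purely hypothetical sketch:

```python
# Purely hypothetical: one way a DM screen could protect the victim's privacy.
# Only flagged inbound messages leave the device; the victim's replies never do.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class DirectMessage:
    sender: str
    text: str

def build_report(thread: List[DirectMessage], victim: str,
                 is_abusive: Callable[[str], bool]) -> List[DirectMessage]:
    """Keep only inbound messages the classifier flags; drop the victim's side."""
    return [m for m in thread if m.sender != victim and is_abusive(m.text)]

thread = [
    DirectMessage("victim_account", "please stop messaging me"),
    DirectMessage("abuser_account", "you'll regret playing on Saturday"),
    DirectMessage("abuser_account", "good luck this weekend"),
]
# Toy keyword check standing in for a real abuse classifier.
report = build_report(thread, victim="victim_account",
                      is_abusive=lambda t: "regret" in t)
print(report)  # only the threatening inbound message is included
```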
SBC: Finally, how will Stats Perform and Signify gauge actual progress on minimising online abuse, an issue that continues to challenge the biggest technology firms (Facebook, Twitter, etc.)?
JH: With an AI-based service like Threat Matrix, it is possible to monitor how well the social media platforms are dealing with this issue. Our recent work with the Professional Footballers’ Association is a great example. We analysed more than six million tweets targeting professional footballers across the UK during the 2020-21 season, identifying an uptick in targeted racist and discriminatory abuse, alongside fewer discriminatory messages being moderated in the second half of the season than in the first.
We’ve proven how it’s possible to create a real benchmark, illuminating the size and scale of the problem across any given sport. Once you have this, it’s entirely possible to monitor increases/decreases in recorded targeted abusive messages – alongside the tactics that need to be countered.
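To make the arithmetic behind such a benchmark concrete, here is a hedged sketch – all counts below are invented for illustration and are not the PFA study’s figures:

```python
# Illustrative only: how a season-on-season moderation benchmark could be
# computed. All counts here are invented; the PFA study's raw figures are
# not reproduced in this interview.
def moderation_rate(abusive_detected: int, moderated: int) -> float:
    """Share of detected abusive messages that the platform actually moderated."""
    return moderated / abusive_detected

# Hypothetical half-season counts.
h1 = moderation_rate(abusive_detected=1_000, moderated=400)
h2 = moderation_rate(abusive_detected=1_400, moderated=350)

print(f"H1 moderation rate: {h1:.0%}")  # 40%
print(f"H2 moderation rate: {h2:.0%}")  # 25% -> abuse up, moderation down
```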
JM/JH: The partnership approach Stats Perform and Signify have developed is based on working with interested stakeholders across the sporting world to help them understand the nature of the problem (how bad is it for your sport, really?) and then providing the tools to do something about it.