
Research:ReferenceRisk



[[Research:ReferenceRisk|ReferenceRisk]] · [[Research:ReferenceRisk/Measurement plan|Measurement plan]] · [[Research:ReferenceRisk/Testing|Testing]]


This research documentation page is currently under construction.

Created: 15:22, 22 February 2024 (UTC)
Duration: February 2024 – August 2024
Topics: References, Knowledge Integrity, Disinformation

This page documents a research project in progress.
Information may be incomplete and change as the project progresses.
Please contact the project lead before formally citing or reusing results from this page.


In a nutshell

This page will hold all updates and information related to the ML score developed by WMF Research, tentatively named ReferenceRisk. The score seeks to make it easier to understand the quality of references on Wikipedia.

What is this project?

A typical Wikipedia article has three atomic units that combine to craft the claims we read: 1) the editor who creates the edit, 2) the edit itself, and 3) the reference that informs the edit. This project focuses on the last of the three.

Wikipedia's verifiability principle expects all editors to be responsible for the content they add, ruling that the "burden to demonstrate verifiability lies with the editor who adds or restores material". Were this edict followed to the letter, every claim across Wikipedia would be dutifully cited inline. Of course, life falls short of perfection, and it is exactly the inherently imperfect participation of the human editor that leads to change, debate, and flux, creating "quality" claims and articles, by any standard, over the long term.[citation needed]

Then there is the additional task of understanding the reference itself. What is in the reference? Where does it come from? Who made it? Wikipedia communities have made various efforts to lessen that task, notably the reliable sources list.
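To make that idea concrete, here is a minimal sketch of what consulting such a list programmatically might look like. The domains, labels, and function below are purely illustrative assumptions, not actual community judgements or project code.

```python
from urllib.parse import urlparse

# Illustrative stand-in for a community-maintained reliability list, such as
# the English Wikipedia perennial sources list. All domains and labels here
# are invented examples, not real community assessments.
SOURCE_LABELS = {
    "example-journal.org": "generally reliable",
    "example-tabloid.com": "deprecated",
}

def lookup_source(citation_url: str) -> str:
    """Return the community label recorded for a cited domain, if any."""
    domain = urlparse(citation_url).netloc.lower().removeprefix("www.")
    return SOURCE_LABELS.get(domain, "no consensus recorded")

print(lookup_source("https://www.example-tabloid.com/story/123"))  # -> deprecated
```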

Yet there is no silver-bullet solution for understanding how our communities, across languages and projects, manage citation quality.

A basic visualization of this ML model

This project is a collaboration between Wikimedia Enterprise and Research with the set goal of refining and productionizing the Research team's citation quality ML model from the paper "Longitudinal Assessment of Reference Quality on Wikipedia". We seek to lessen the burden of understanding the quality of a single reference; the result will cater to everyone from individual volunteer editors to high-volume third-party reusers.
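As one way to picture what such a score might aggregate, the sketch below computes the share of an article's citations that point at low-reliability domains. This is only a simplified proxy inspired by the reference-risk idea in the paper; the published formulation and the production model may differ substantially, and the domain list is invented.

```python
from urllib.parse import urlparse

# Invented low-reliability domains; a real implementation would draw on
# community lists of deprecated or blacklisted sources.
LOW_RELIABILITY_DOMAINS = {"example-tabloid.com", "example-contentfarm.net"}

def naive_reference_risk(citation_urls: list[str]) -> float:
    """Fraction of citations whose domain is on the low-reliability list.

    A simplified illustration only; not the paper's exact metric.
    """
    if not citation_urls:
        return 0.0

    def domain(url: str) -> str:
        return urlparse(url).netloc.lower().removeprefix("www.")

    flagged = sum(domain(u) in LOW_RELIABILITY_DOMAINS for u in citation_urls)
    return flagged / len(citation_urls)

print(naive_reference_risk([
    "https://example-journal.org/article/1",
    "https://www.example-tabloid.com/story/2",
]))  # -> 0.5
```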

Both Research and Enterprise understand that a broad range of actors in the online knowledge environment stand to benefit from the ability to evaluate citations at scale and in near real time.

Because manually inspecting sources or developing external algorithmic methods is costly and time-consuming, we would like to host a scoring model that can be leveraged by customers and the community to automatically identify low- and high-quality citation data.
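For a sense of how a consumer might eventually use such a hosted model, here is a hypothetical sketch. The endpoint URL, request payload, response fields, and threshold are all placeholders; no API has been published for this project.

```python
import requests

# Hypothetical endpoint: no scoring API has been published for this project,
# so the URL, request schema, and response fields below are placeholders.
ENDPOINT = "https://api.example.wikimedia.org/reference-risk/v1/score"

def score_revision(wiki: str, rev_id: int) -> dict:
    """Ask the (hypothetical) hosted model to score one revision's references."""
    resp = requests.post(ENDPOINT, json={"wiki": wiki, "rev_id": rev_id}, timeout=30)
    resp.raise_for_status()
    return resp.json()

# A high-volume reuser might flag revisions whose score crosses a threshold.
result = score_revision("enwiki", 123456789)
if result.get("reference_risk", 0.0) > 0.5:  # threshold chosen arbitrarily
    print("revision queued for human review")
```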

What’s next?

  • Post quarterly updates
  • Build community-centered performance testing strategy
