Wikipedia Advising Report

I recommend that members of the Wikipedia community and the Wikimedia Foundation (WMF) make use of generative AI. This report outlines the key challenges and offers recommendations for how the WMF can address AI's role within Wikipedia. The advice is grounded in the Wikimedia Foundation's mission: to empower and engage people around the world to develop educational content under a free license and to share it globally.

However Wikipedia decides to incorporate AI, there should be clear standards and protocols for AI use that align with Wikipedia's existing policies. That means a policy outlining how, when, and where AI-generated content is acceptable; a disclosure label signaling that content is AI-generated; and quality benchmarks that AI-assisted drafts must meet before publication. This recommendation draws on course discussions of norms, rules, and rule violations. Disclosure keeps the community and readers informed, which aligns with Wikipedia's commitment to transparency, and quality standards ensure that AI content does not compromise Wikipedia's reputation for accuracy and reliability.
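To make this recommendation concrete, the short sketch below shows one way a publication gate could combine a disclosure label with quality benchmarks. The {{AI-assisted}} template name, the benchmark fields, and the one-citation threshold are all hypothetical illustrations, not existing Wikipedia templates or policy.

```python
# Minimal sketch of a publication gate for AI-assisted drafts.
# The disclosure template and the quality thresholds below are
# hypothetical assumptions, not existing Wikipedia policy.

AI_DISCLOSURE_TEMPLATE = "{{AI-assisted}}"  # hypothetical disclosure label

def ready_to_publish(draft_text: str, citation_count: int,
                     fact_checked: bool) -> bool:
    """Check an AI-assisted draft against assumed quality benchmarks."""
    has_disclosure = AI_DISCLOSURE_TEMPLATE in draft_text
    meets_sourcing = citation_count >= 1  # assumed minimum sourcing bar
    return has_disclosure and meets_sourcing and fact_checked

draft = "{{AI-assisted}}\nExample article text with a citation.[1]"
print(ready_to_publish(draft, citation_count=1, fact_checked=True))  # True
```

A real policy would define richer benchmarks, but the point is that both the disclosure and the quality checks are enforced before publication, not after.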

Before incorporating AI into the Wikipedia community, the WMF should create educational resources that explain how AI works and how it should be used on Wikipedia, including best practices for fact-checking, planning an article, and editing. As discussed in class sessions on norms and rules, transparent education helps foster a culture of collaboration, and having proper training and workshops in place will enable editors to feel confident using AI in the community.

A review system that checks AI-generated content would also be helpful. Given the risks of misinformation, bias, and factual inaccuracy, a human editor should review any AI-generated content before it is made public, and editors should be able to flag content that needs further human verification. For example, if content in a Wikipedia article is added or edited by AI, the page's history should carry some sort of alert, and a review process should run before the change appears on the article itself. This is important in order to uphold Wikipedia's quality standards and mitigate AI's potential to introduce errors. A sketch of one possible design follows below.
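The sketch below illustrates one possible shape for that review queue: AI-flagged revisions are held as pending until a human reviewer approves them, while ordinary edits publish immediately. The field names and the workflow are illustrative assumptions; nothing here corresponds to an existing MediaWiki feature.

```python
# Hypothetical human-in-the-loop review queue for AI-flagged edits.
# The Revision fields and the hold/approve workflow are assumptions
# made for illustration, not an existing MediaWiki feature.

from dataclasses import dataclass, field

@dataclass
class Revision:
    rev_id: int
    text: str
    ai_generated: bool
    approved: bool = False

@dataclass
class ReviewQueue:
    pending: list = field(default_factory=list)

    def submit(self, rev: Revision) -> str:
        # AI-flagged revisions wait for a human; others publish at once.
        if rev.ai_generated and not rev.approved:
            self.pending.append(rev)
            return "held for human review"
        return "published"

    def approve(self, rev_id: int) -> str:
        # A human reviewer verifies and releases a held revision.
        for rev in self.pending:
            if rev.rev_id == rev_id:
                rev.approved = True
                self.pending.remove(rev)
                return "published after review"
        return "revision not found"

queue = ReviewQueue()
print(queue.submit(Revision(1, "AI-drafted paragraph", ai_generated=True)))
print(queue.approve(1))  # a human signs off, and the edit goes live
```

The key design choice is that the alert and the hold happen automatically at submission time, so no AI-generated change can reach readers without a human sign-off.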

Lastly, AI should be used to foster inclusivity and diversity. This can become a problem if models are trained on datasets that lack diversity or carry pre-existing biases, so Wikipedia should collaborate with AI providers that train their models on diverse datasets. Additionally, Wikipedia should prioritize amplifying content from underrepresented perspectives. Drawing on our course conversations about getting marginalized and underrepresented groups more engaged in online communities, I think AI can help Wikipedia deliver neutral and globally representative coverage.

Overall, the integration of generative AI into Wikipedia presents both opportunities and challenges. Through clear standards, community education, robust content review, and active monitoring, the WMF can harness AI as a productive tool while safeguarding Wikipedia's integrity and mission. By addressing these issues thoughtfully, the WMF can ensure that AI enhances, rather than diminishes, the collaborative, human-driven nature of Wikipedia.