User:Jenynfur/Report
== Introduction ==

Jenny Yang's subpage for the Wikipedia advising report (Task #7-B)

The goal of the Wikimedia Foundation (WMF) is to build the Wikipedia community into a place where people can collaborate to create educational content and promote community engagement. The Wikipedia community has already identified several potential uses for AI tools, such as assisting with content planning, suggesting article structures, and automating edit monitoring. However, introducing AI tools, particularly bots, presents unique challenges, especially concerning editorial quality, content integrity, and the strength of community-driven processes. Given these conditions, I suggest implementing clear policies and accountability measures for AI-based bots to ensure responsible use, promote harmlessness and resource efficiency, and protect against unintended consequences.
== Generative AI bots ==

'''Responsibility for Unsourced or Harmful Content'''

One of the key considerations when creating content on Wikipedia is managing harmful content and unverified information. In class discussions, many students suggested developing automated filtering bots or enhancing user reporting functions. These suggestions reflect concern that harmful content can cause indirect harm and erode a positive, healthy community atmosphere. AI bots can aid in generating quality content, but they also pose risks. In particular, if an advanced AI generates harmful or unverified content, the questions of who is responsible and how to respond become critical.

To mitigate these risks, Wikipedia should consider preventive measures such as monitoring AI bot activity, securing the authority to take disciplinary action when necessary, improving the user reporting system, strengthening content verification processes, and enhancing the harmful-content filtering system. Although Wikipedia bots already filter harmful content, there have been cases where full blocking was not achieved.
Example: ''Hinduphobia on Wikipedia: How One of the World’s Most Trusted Resources Is Struggling with Hate'' (Ballard, 2024)

This article raises concerns about anti-Hindu bias on Wikipedia. Researchers found that Wikipedia’s open-editing model may allow anti-Hindu rhetoric to permeate entries on Hinduism, Indian history, and prominent Hindu figures, potentially mirroring similar biases seen on social media. Wikipedia’s reliance on volunteer editors and questionable sources could allow misinformation to persist, creating an echo chamber of bias against Hindu culture. Unfortunately, there are currently no sufficient mechanisms in place for bots to identify or filter such bias and misinformation. Strengthening bot support to help uphold Wikipedia’s neutrality goals seems essential.<ref>{{Cite news |last=Ballard |first=Sam |date=November 6, 2024 |title=Hinduphobia on Wikipedia: How One of the World’s Most Trusted Resources Is Struggling with Hate |url=https://www.worldreligionnews.com/wikipedia/hinduphobia-on-wikipedia-how-one-of-the-worlds-most-trusted-resources-is-struggling-with-hate/ |access-date=November 8, 2024 |work=World Religion News}}</ref>
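To make the flag-for-review idea above more concrete, the sketch below shows one possible shape such a filtering bot could take. It is only an illustration: the recent-changes query uses the public MediaWiki API, but the <code>score_edit</code> function, the <code>FLAG_THRESHOLD</code> value, and the choice to score edit summaries rather than full revision text are hypothetical placeholders standing in for a real harmful-content classifier. Crucially, this sketch only flags edits for human reviewers; it does not block or revert anything on its own.

<syntaxhighlight lang="python">
"""Illustrative sketch only: a reviewer-in-the-loop filter that flags recent
edits for human attention instead of blocking them outright. The scoring
function and threshold are hypothetical placeholders, not an existing
Wikipedia service."""
import requests

API_URL = "https://en.wikipedia.org/w/api.php"
FLAG_THRESHOLD = 0.8  # hypothetical cutoff for "needs human review"


def fetch_recent_changes(limit=20):
    """Fetch a batch of recent edits via the public MediaWiki API."""
    params = {
        "action": "query",
        "list": "recentchanges",
        "rcprop": "title|ids|comment",
        "rclimit": limit,
        "format": "json",
    }
    response = requests.get(API_URL, params=params, timeout=10)
    response.raise_for_status()
    return response.json()["query"]["recentchanges"]


def score_edit(comment):
    """Placeholder for a harmful-content classifier.

    A real bot might call a toxicity or vandalism model here; this stub
    only flags a few obviously hostile words so the example stays runnable.
    It also scores only the edit summary, whereas a real filter would
    examine the revision text itself.
    """
    hostile_words = {"hate", "kill", "stupid"}
    return 1.0 if set(comment.lower().split()) & hostile_words else 0.0


def flag_for_review(changes):
    """Return edits whose score exceeds the threshold, for human reviewers."""
    flagged = []
    for change in changes:
        score = score_edit(change.get("comment", ""))
        if score >= FLAG_THRESHOLD:
            flagged.append({"title": change["title"], "rcid": change["rcid"], "score": score})
    return flagged


if __name__ == "__main__":
    for item in flag_for_review(fetch_recent_changes()):
        print(f"Needs review: {item['title']} (rc {item['rcid']}, score {item['score']:.2f})")
</syntaxhighlight>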
'''So, who should be responsible for harmful content?'''

In these cases, the responsibility would primarily fall on the bot creators. Wikipedia already holds creators accountable for their bots' actions, and this accountability is likely to extend to AI-powered bots. While specific responsibilities are not clearly defined, penalties such as activity suspensions can apparently be imposed.

The Bot Approvals Group (BAG) should implement stricter review processes for advanced AI bots. Although AI self-awareness remains speculative, a clear and structured accountability framework is necessary for all bots that may publish unverified or erroneous content. Penalties that other platforms generally impose on bot creators include account restrictions, bot deletions, rigorous reapproval processes, and public warnings or criticism within the community; these measures should be reinforced. In cases where AI bots cause significant harm, immediate intervention is warranted, and a tiered warning approach may be insufficient given the high potential for harm from AI-generated content. Wikipedia should establish a policy allowing indefinite suspension of bots proven to cause harm, with community members able to vote on reinstatement only once the issues have been demonstrably resolved.
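The suspension-and-reinstatement policy proposed here can be summarized as a simple lifecycle, sketched below. This models the proposal rather than any existing Wikipedia procedure, and the simple-majority rule in <code>reinstate</code> is an invented placeholder for whatever consensus mechanism the community would actually adopt.

<syntaxhighlight lang="python">
"""Illustrative sketch of the proposed escalation policy: bots proven to
cause harm move to indefinite suspension and are reinstated only after the
issues are resolved and a community vote passes. States, transitions, and
the vote rule are this report's proposal, not existing Wikipedia process."""
from enum import Enum, auto


class BotStatus(Enum):
    ACTIVE = auto()
    SUSPENDED = auto()   # indefinite suspension after proven harm
    REINSTATED = auto()  # restored only by community vote


def suspend(status, harm_proven):
    """Move a bot to indefinite suspension once harm is demonstrated."""
    if status is BotStatus.ACTIVE and harm_proven:
        return BotStatus.SUSPENDED
    return status


def reinstate(status, votes_for, votes_against, issues_resolved):
    """Reinstate only if the issues are resolved and the vote passes.

    The simple-majority check is a placeholder assumption; the real
    threshold would be set by community consensus.
    """
    if status is BotStatus.SUSPENDED and issues_resolved and votes_for > votes_against:
        return BotStatus.REINSTATED
    return status


if __name__ == "__main__":
    status = suspend(BotStatus.ACTIVE, harm_proven=True)
    status = reinstate(status, votes_for=12, votes_against=3, issues_resolved=True)
    print(status)  # BotStatus.REINSTATED
</syntaxhighlight>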
== References ==

<references />