User:Xypheria/Report

Wikipedia Advising Report (Task #7-B)

Generative AI is an incredibly hot topic in today's society, as many are still trying to determine the ethical ways in which we can use this new technology. Generative AI has many applications and is applicable to many fields, such as health care, software development, financial services, media, advertising, and marketing. But despite all of the possible ways in which it may help, it still requires significant oversight from actual people to ensure that the work it produces is trustworthy and sustainable.

With some aspects of generative AI being cost effective, does that mean the same holds for Wikipedia? I don't believe so, because what you would be paying with is your time: the time spent continuously going over the material these tools produce, especially on large-ticket items. So one thing we need to come to terms with is that the use of generative AI will always require oversight, and that this oversight might deter new users from interacting with the Wiki community for assistance, let alone using those tools, which may be too complex for new members to handle. Things like creating templates or modules, or summarizing articles or even reliable sources, should be kept to human editors. Not only are these the areas where current versions of generative AI are most likely to produce subpar work, but they are also the largest opportunities for Wiki editors of all stages to learn. I therefore believe the use of generative AI should be relegated to other items that may require more specialized work.

One such item could be verifying the credibility of certain sources, ensuring that those sources were not themselves generated by AI, which seems to be a well-known issue with current generative AI. Of course, there should always be a "trust but verify" mentality behind this usage, as there should be behind all uses of generative AI. Using AI in this way would provide an adequate means of determining the cost-effectiveness of these programs, since it could reduce the amount of time spent verifying sources. It would also provide easy opportunities to teach new and existing users how to use generative AI and how to spot some of the mistakes it generates, as well as the simpler mistakes made by editors who ignore the rule of "trust but verify", and to weigh the smaller problems those may create against the potentially larger problems that come with using generative AI for larger edits.

Using generative AI for larger items leaves more room for taking in incorrect information, which can diminish the credibility of Wikipedia and deter others from participating in the community, or from engaging with each other, since they would no longer need to rely on others to advise them on what to do or avoid. Keeping the use of generative AI to specific actions, like the one mentioned above, allows Wikipedia to continue cultivating the need for participation by its editors rather than by people who are simply able to use generative AI. Through editor-to-editor interaction, the community can continue to build and maintain its integrity, because of the amount of effort that goes into the relationships between new and existing Wiki editors.

While one cost of not allowing routine use of generative AI could be the deterrence of new or existing users who are hoping to work with it, I believe the quality of work we currently have is very much worth that risk, and that the use of AI should be kept to a minimum.