
User:Relaxbear4649/sandbox: Difference between revisions

From Wikipedia, the free encyclopedia
m Fixed minor errors, still need to find sources to balance out the constructive uses section.
m Added more citations.
=== Fake news ===
{{Main|Fake news}}
With a lack of regulations for deepfakes, several concerns arise. Deepfake videos that could cause harm include depictions of political officials displaying inappropriate behavior, police officers shooting unarmed black men, and soldiers murdering innocent civilians, even though none of these events ever occurred in real life.<ref>{{Cite journal|last=Chesney|first=Robert|date=2019|title=Deepfakes and the New Disinformation War: The Coming Age of Post-Truth Geopolitics|url=https://pustaqa.com/deepfakes-and-new-disinformation-war-coming-age-of-post-truth-geopolitics/|journal=Foreign Affairs|volume=98(1)|pages=147–55|via=}}</ref> With such hyper-realistic videos circulating on the Internet, it becomes very easy for the public to be misinformed and to act on false information, feeding a vicious cycle of unnecessary harm. Additionally, given the recent rise of fake news, deepfakes and fake news could be combined, making it even harder to distinguish what is real from what is fake. Visual information is very convincing to the human eye, so the combination of deepfakes and fake news can have a detrimental effect on society.<ref name=":4" /> Strict regulation of news content by social media companies and other platforms has been proposed.<ref>{{Cite journal|last=Hall|first=Kathleen|date=2018|title=Deepfake Videos: When Seeing Isn't Believing|url=https://scholarship.law.edu/cgi/viewcontent.cgi?article=1060&context=jlt|journal=Catholic University Journal of Law and Technology|volume=27(1)|pages=51–75|via=}}</ref>


=== Personal use ===
=== Personal use ===
One way to avoid becoming a victim of any of the technologies mentioned above is to develop artificial intelligence that works against these algorithms. Several companies have already developed artificial intelligence that can detect manipulated images by examining the patterns in each pixel.<ref>{{Cite journal|last=Bass|first=Harvey|date=1998|title=A Paradigm for the Authentication of Photographic Evidence in the Digital Age|url=https://heinonline.org/HOL/Page?handle=hein.journals/tjeflr20&id=309&collection=journals&index=|journal=Thomas Jefferson Law Review|volume=20(2)|pages=303–322|via=}}</ref> Applying similar logic, they are trying to create software that takes each frame of a given video and analyzes it pixel by pixel in order to recover the pattern of the original video and determine whether it has been manipulated.<ref>{{Cite journal|last=Wen|first=Jie|date=2012|title=A Malicious Behavior Analysis Based Cyber-I Birth|url=https://search.proquest.com/docview/1490889853?accountid=14496|journal=Journal of Intelligent Manufacturing|volume=25(1)|pages=147–55|via=}}</ref>
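The pixel-pattern idea described above can be sketched as a toy detector: compute the high-frequency noise residual of a frame, then flag small blocks whose residual statistics differ sharply from the rest of the frame. This is a minimal illustration of the concept, not any company's actual software; the function names and thresholds are invented for the example.

```python
import numpy as np

def noise_residual(frame: np.ndarray) -> np.ndarray:
    """High-frequency residual: the frame minus a 3x3 box blur.
    Camera sensors leave a characteristic noise pattern, and a
    spliced-in region tends to carry different residual statistics."""
    h, w = frame.shape
    padded = np.pad(frame.astype(float), 1, mode="edge")
    blurred = sum(padded[i:i + h, j:j + w]
                  for i in range(3) for j in range(3)) / 9.0
    return frame.astype(float) - blurred

def flag_inconsistent_blocks(frame: np.ndarray, block: int = 8,
                             z_thresh: float = 3.0):
    """Return (row, col) indices of blocks whose residual variance is
    an outlier relative to the rest of the frame."""
    res = noise_residual(frame)
    rows, cols = frame.shape[0] // block, frame.shape[1] // block
    var = np.array([[res[r*block:(r+1)*block, c*block:(c+1)*block].var()
                     for c in range(cols)] for r in range(rows)])
    mu, sigma = var.mean(), var.std() + 1e-9
    return [(r, c) for r in range(rows) for c in range(cols)
            if abs(var[r, c] - mu) / sigma > z_thresh]
```

A synthetic "pasted" patch with no sensor noise stands out because its residual variance falls far below that of the untouched blocks.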


In addition to developing new technology that can detect video manipulation, many researchers stress the importance of [[Privately held company|private corporations]] creating stricter guidelines to protect individual privacy.<ref name=":6" /> With the development of artificial intelligence, it is necessary to ask how it impacts society as it begins to appear in virtually every sphere, including [[medicine]], [[education]], [[politics]], and the [[economy]]. Because artificial intelligence will appear in ever more aspects of society, it is important to have laws that protect [[Human rights|human rights]] as the technology spreads. As the private sector gains more digital power over the public, strict [[Regulation|regulations]] and laws are needed to prevent private corporations from using personal data maliciously. The history of data breaches and violations of [[Privacy policy|privacy policies]] should also serve as a warning of how personal information can be accessed and used without a person's consent.<ref name=":7">{{Cite journal|last=Nemitz|first=Paul Friedrich|date=2018|title=Constitutional Democracy and Technology in the Age of Artificial Intelligence|url=https://royalsocietypublishing.org/doi/pdf/10.1098/rsta.2018.0089|journal=SSRN Electronic Journal|volume=59(9)|pages=1–14|via=}}</ref>


=== Digital literacy ===
=== Digital literacy ===
Another way to prevent harm from these technologies is to educate people on the pros and cons of digital cloning, empowering each individual to make rational decisions based on their own circumstances.<ref>{{Cite journal|last=Maras|first=Marie-Helen|date=2018|title=Determining Authenticity of Video Evidence in the Age of Artificial Intelligence and in the Wake of Deepfake Videos|url=https://doi.org/10.1177/1365712718807226|journal=The International Journal of Evidence & Proof|volume=23(1)|pages=1–8|via=}}</ref> It is also important to teach people how to protect the information they put on the Internet. By increasing the [[Digital literacy|digital literacy]] of the public, people have a greater chance of spotting manipulated videos, as they become more skeptical of the information they find online.<ref name=":6">{{Cite journal|last=Brayne|first=Sarah|date=2018|title=Visual Data and the Law|url=https://www.cambridge.org/core/journals/law-and-social-inquiry/article/visual-data-and-the-law/8B12259713E882DF54116E30105AB245|journal=Law & Social Inquiry|volume=43(4)|pages=1149–1163|via=}}</ref>





Revision as of 23:52, 23 April 2019

Week 10 Peer Review by Starshine44

I thoroughly enjoyed reading your article, I feel like I learned a lot. It is a very interesting topic! You do a good job at explaining your topic as you introduce your article.

In the sentence:

This essentially guarantees digital immortality, allowing loved ones to interact with those who passed away.

I would suggest maybe changing the word "guarantees" to "creates".

There are a few instances where there is inconsistency in the spacing between the end of a sentence and the citation. This seems to happen randomly on my article sometimes when I save, so it's worth doing a skim for it every now and then.

Not having any background in this subject, I was looking for an even more simplified version of some of the subjects, such as deepfakes, in the intro. But, after I read the deepfakes section, I felt that it was adequately described.

This sentence is a little confusing:

With a lack of regulations for deepfakes include videos of...

Under "concerns", I would suggest changing the heading from "personal use" to maybe something more like "impersonation". "Personal use" makes me think of how someone would use this technology for their own uses or benefit.

I think the paragraph at the end, where you ask if it should be moved from the technology section, is in a good place. I wouldn't move it.

I really like the flow of your article: how you introduce it all, go into specifics, and wrap it up with solutions and ways a person can protect themselves. It ends on a positive note, and that is nice as a reader, because after reading all of this scary information, I feel like I have some power if I were to be in a situation involving this technology. It is very informative and has an encyclopedic tone that brings this complex technology to a reasonable reading level. Great job!

Week 9 Peer Review

Alissa,

Great work on your article this week!

I thought your article was very thorough and really explored the topic of digital cloning well. I thought that everything in the article was relevant to the topic, that the article takes a very balanced perspective––often displaying the pros and cons of digital cloning technology, that the citations work, that your sources support your claims in your article, and that each fact appears to be supported by an appropriate and reliable reference. In addition, your article had an encyclopedic tone. Here are a couple of recommendations that I think you could make:

  • In your first sentence you use “that” three times. Instead, you could say: Digital cloning is an emerging technology that involves deep-learning algorithms which allows one to manipulate currently existing audio, photos, and videos that are hyper-realistic.[1]
  • At the end of the article, you ask if the paragraph about private corporations and the broader impact of digital clones on society should be under the “technology” subheading of the larger section “preventative measures.” I think that this paragraph should be under this section because it elaborates on and provides context for the implications of this technology in profoundly affecting society

Week 08 - Peer review by QuixoticWindmills

Good job so far! I really like your sections; you have some really interesting work on the positive side of digital cloning. You may want to fix Digital Immorality to Digital Immortality though, that confused me for a second. I'm also confused by the hyphenation of "movie-industry," but it's not something I know a lot about so it may be correct. If it's possible, I'd like more of a description of the technology behind digital cloning; the section seems to cover more of what the technology is used for rather than how the technology itself works. If it's too complicated to get into in a Wikipedia article I completely understand, but an overview would be nice (I believe deepfakes use some form of machine learning?). The Concerns section has a lot of good points, but you could consider renaming "Ethical Problems" to something more specific; that's a minor nitpick. I think your language in the article is neutral and descriptive, but be careful with references to "you" and "our" - from my understanding, Wikipedia should be written from an impersonal point of view, but maybe you should check with the leadership team about that.

Week 08 - Peer review by Tm670

I found this article to be very intriguing and comprehensive. Though, I feel as though your mention of specific companies in the lead section makes it appear as though you are “calling them out.” While I know this is not your intention, if you were able to have a more general gesturing towards the companies, it would have a better encyclopedic tone to begin the article.  I feel as though a section of the “history” that follows could be a part of the lead section, unless there is a more specific case that you can reference. In another section, you explicitly state that some of its uses are “constructive” — that distinction might risk falling into blaming users or having assumptions about what people think about the technology. For example, could Deep Fakes also be used for good? These are large questions, but I think modifying the header can change the tone, too. I love your ending with “preventative measures” because it serves as the “public interest” or “public concern” section that most of us seem to also include. Great work! I look forward to future drafts.

Week 6: Peer Review by Travelqueen27

Love the amount of detail for your first draft! I am already so intrigued about digital cloning and look forward to what else you plan to contribute to the article. I did notice that in some sections you didn't cite at all, for example in the sections "Voice cloning" and "Personal use." Especially for the first sentence where you define voice cloning, I would cite where that information came from. Grammar and sentence structure look great. In regards to your question at the bottom of the outline on whether that paragraph should stand as its own section, I would say yes. However, I would not use the word "our," which comes off as including you and your opinions. I would also either reframe some of the statements or add additional citations, because the paragraph comes off to the reader as slightly biased toward more privacy laws. It is important to note that all sides must be considered, because maybe not everyone agrees with additional privacy laws. Overall, great work with the amount of information presented and presenting it in a way that is easy to understand.

Peer review from Nsjlcuwdbcc:

It’s impressive to see how organized your draft is! Personally, after reading it I really feel like this is an awesome topic, and I learned a lot from the different sections! One of the most helpful things when I read this is the examples you used in almost all the sections. These cases make things clearer for me to understand and actually make the article more interesting, especially the example about chatting with their clone! I was just a little confused when reading the subsections in constructive use. I'm not sure if educational use benefits from digital immortality, and maybe there is some overlap between these two. Also, I noticed you added a lot of hyperlinks; that's nice! I think there is also a Wikipedia page about fake news, so maybe you can link to that. I also remember there's a template for writing a summary if there's already a Wikipedia page about your subsection. I don't know if the information covered in that Wikipedia page is in the same direction, but it might help! I also see you discussed concerns under multiple subsections, and that makes your article more trustworthy! I am thinking that you could also discuss the positive aspects a little more explicitly, since the concern parts are more recognizable. This would be a great draft to keep working on!!

Article Evaluation

Information Privacy

This article introduced the idea of information privacy and gave a brief overview of how information privacy is seen in various fields. Some parts of the article seem a bit outdated. For example, there is a section dedicated to "Cable television", which is used less frequently in recent years. To update the page, I would suggest adding a section on smartphones or other smart devices. The section on "Safe Harbor program and passenger name record issues" seemed a bit irrelevant to the page because, although it touches on the regulation of privacy in various countries, the connection between the section and the topic of information privacy is a bit unclear. Overall, the article is pretty neutral because it gives a clear overview of each section, and no one section has more information than the others.

After going through some of the sources, I found most of them to be academic and scholarly articles. However, a few links brought me to either websites dedicated to editorials or to blogs. These sources may not be as reliable as the scholarly journals because they are not backed up by other sources and are much more biased. Looking at the Talk Page, the article is part of three WikiProjects: WikiProject Computing, WikiProject Internet, and WikiProject Mass Surveillance. The article is rated as C-class, which means that it contains some information along with some sources, but much more editing is needed for it to be considered complete. Most of the discussion is about how some citations need to be updated, as external links are no longer working or lead to websites in different languages.

Digital Privacy

This article introduces the different aspects of digital privacy and has a clear organization. However, I think some parts of the article lack clarification and definitions of terms. The article only gives a very generic definition and doesn't go too in-depth. Most of the information seemed relevant to the topic, but the article lacks citations to back up the information given. Another part that could be improved is the section "Privacy and Information Breaches", which introduces the idea of information breaches. In this section, the article gives a hypothetical situation where a hacker retrieves individual information by targeting a certain platform, such as social media. However, to make this article more reliable, I think it would be beneficial to replace the hypothetical situation with an incident that has occurred in real life. The article overall maintains a neutral tone, allowing the reader to gain brief knowledge of the topic of digital privacy.

After going through some of the sources, most of them are scholarly articles, but some could be improved. There were a couple of TED Talks cited, which may not be as reliable as other sources because they are heavily influenced by the speaker's personal experiences. Looking at the Talk Page of this article, it is part of WikiProject Internet and is considered to be in the Stub category. This means that the information provided is very basic and may not be useful for readers.

Citation Practice on Digital Footprint

Added citation to the statement: " Internet footprints are also used by law enforcement agencies, to provide information that would be unavailable otherwise due to a lack of probable cause."


Plans for new article

I will be gathering information on digital clones in order to write a new page on Digital cloning, which includes voice cloning, deepfakes, and other artificial intelligence that allows one to create clones of objects and humans. Some important aspects include the ethical concerns that come with the rise of such technology, as well as legal concerns.

  • Already gathered 20 scholarly sources so continue to gather new sources
  • wrote 5 annotations so continue writing annotations



Digital cloning

Digital cloning is an emerging technology that uses deep-learning algorithms to manipulate existing audio, photos, and videos into hyper-realistic fakes.[1] One impact of such technology is that hyper-realistic videos and photos make it difficult for the human eye to distinguish what is real from what is fake.[2] Furthermore, as various companies make these technologies available to the public, they bring various benefits as well as potential legal and ethical concerns.

Digital cloning first became popular in the entertainment industry. The idea of digital clones originated with movie companies creating virtual versions of actors who had passed away. When an actor dies during a movie production, a digital clone of the actor can be synthesized from past footage, photos, and voice recordings to mimic the real person so that production can continue.[3]

With the development of artificial intelligence in recent years, one can now create deepfakes: videos manipulated to the point where the person depicted appears to say or do things he or she never consented to.[4] In April 2018, BuzzFeed released a deepfake video featuring Jordan Peele, manipulated to depict former President Barack Obama making statements he had never made in public, in order to warn the public about the potential dangers of deepfakes.[5]

In addition to deepfakes, companies such as Intellitar now allow one to easily create a digital clone of oneself by feeding the system a series of images and voice recordings. This essentially creates digital immortality, allowing loved ones to interact with those who have passed away.[6] Digital cloning not only allows people to digitally memorialize their loved ones; it can also be used to create avatars of historical figures for use in educational settings.

With the development of these technologies, numerous concerns arise, including identity theft, data breaches, and other ethical issues. One of the problems with digital cloning is that there is little to no legislation to protect potential victims.[7]

Technology

Intelligent Avatar Platforms (IAP)

An Intelligent Avatar Platform (IAP) is an online platform, supported by artificial intelligence, that allows a person to create a clone of themselves.[6] The individual trains the clone to act and speak like them by feeding the algorithm numerous voice recordings and videos of themselves.[8] Essentially, the platform becomes a place where one lives eternally, as the clone is able to interact with other avatars on the same platform. IAPs are becoming a means of attaining digital immortality, along with maintaining a family tree and legacy for following generations to see.[6]

Some examples of IAPs include Intellitar and Eterni.me. Although most of these companies are still in their early stages of development, they are all trying to achieve the same goal: allowing the user to create an exact duplicate of themselves and store every memory in their mind in cyberspace.[6] Some offer a free version, which only lets the user choose an avatar from a given set of images and audio. With the premium setting, however, these companies ask the user to upload photos, videos, and audio recordings of themselves to form a realistic version of themselves.[9] Additionally, to ensure that the clone is as close to the original person as possible, companies encourage users to interact with their own clone by chatting with it and answering questions for it. This allows the algorithm to learn the cognition of the original person and apply it to the clone.
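The training loop described above, in which the clone learns from the owner's own answers, can be caricatured with a toy retrieval bot. Real IAPs use deep learning rather than string matching; the class below, with its invented names, only illustrates the memorize-and-recall idea.

```python
from difflib import SequenceMatcher

class AvatarClone:
    """Toy stand-in for an IAP-style clone: it memorizes the owner's
    answers and replies to a new question with the answer to the most
    similar remembered question."""

    def __init__(self):
        self.memory = []  # list of (question, answer) pairs

    def train(self, question: str, answer: str) -> None:
        """The owner 'teaches' the clone by answering a question."""
        self.memory.append((question, answer))

    def reply(self, question: str) -> str:
        """Answer with the stored reply whose question best matches."""
        if not self.memory:
            return "I don't know yet."
        best_q, best_a = max(
            self.memory,
            key=lambda qa: SequenceMatcher(
                None, qa[0].lower(), question.lower()).ratio())
        return best_a
```

The more question-and-answer pairs the owner supplies, the more situations the clone can respond to, which mirrors the point that these algorithms improve with more data.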

Potential concerns with IAPs include data breaches and the lack of consent from the deceased. An IAP must have strong safeguards against data breaches and hacking in order to protect the personal information of the dead, which can include voice recordings, photos, and messages.[8] In addition to the risk of personal privacy being compromised, there is also the risk of violating the privacy of the deceased. Although a person can consent to the creation of a digital clone of themselves before their physical death, they cannot consent to the actions the digital clone may take.

Deepfakes

As described earlier, a deepfake is a form of video manipulation in which one can change the people present in a video by feeding the algorithm various images of the specific person to be inserted. Furthermore, one can also change the voice and words of the person in the video simply by submitting a series of voice recordings of the new person lasting about one to two minutes. In 2018, an app called FakeApp was released, allowing the public easy access to this technology; it was also used to create the BuzzFeed video of former President Barack Obama.[5] With deepfakes, industries can cut the cost of hiring actors or models for films and advertisements, efficiently producing video at low cost simply by collecting a series of photos and audio recordings with the consent of the individual.[10]
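The frame-by-frame substitution at the core of this process can be sketched as follows. In a real deepfake pipeline the replacement face comes from a neural network trained on photos of the target person; here it is just a pre-made array that gets alpha-blended into every frame, so this sketch shows only the compositing step, with invented function names.

```python
import numpy as np

def paste_face(frame: np.ndarray, face: np.ndarray,
               top: int, left: int, alpha: float = 0.8) -> np.ndarray:
    """Alpha-blend a replacement face patch into one frame.
    `face` stands in for the output of a trained decoder network."""
    out = frame.astype(float).copy()
    h, w = face.shape[:2]
    region = out[top:top + h, left:left + w]
    out[top:top + h, left:left + w] = (1 - alpha) * region + alpha * face
    return out

def swap_video(frames, face, top, left):
    """Apply the same substitution to every frame of a clip."""
    return [paste_face(f, face, top, left) for f in frames]
```

Running the same substitution over every frame is what makes the manipulation consistent across the whole video rather than a single image.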

A potential concern with deepfakes is that access is given to virtually anyone who downloads one of the apps offering the service. With anyone able to access the tool, some may use it maliciously to create revenge porn or manipulative videos of public officials making statements they would never make in real life. This not only invades the privacy of the individual in the video but also raises various ethical concerns.[11]

Voice cloning

Voice cloning uses a deep-learning algorithm that takes in voice recordings of an individual and synthesizes a voice very similar to the original. As with deepfakes, numerous apps, such as LyreBird, iSpeech, and CereVoice Me, give the public access to this technology. The algorithm needs only a few minutes of audio recordings to produce a similar voice, and it can then read any text aloud in that voice. Although this technology is still in the developmental stage, it is advancing rapidly as large technology corporations, such as Google and Amazon, invest heavily in its development.[12]
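What it means for a synthesized voice to be "very similar" to the original can be illustrated with a crude spectral fingerprint: pool the magnitude spectrum of a recording into a few frequency bands and compare the band profiles by cosine similarity. Real voice-cloning systems learn speaker embeddings with deep networks; the functions and threshold below are invented for illustration only.

```python
import numpy as np

def voice_fingerprint(samples: np.ndarray, n_bands: int = 8) -> np.ndarray:
    """Crude stand-in for a speaker embedding: the magnitude spectrum
    pooled into a few frequency bands and normalized to unit length."""
    spectrum = np.abs(np.fft.rfft(samples))
    feats = np.array([band.mean()
                      for band in np.array_split(spectrum, n_bands)])
    return feats / (np.linalg.norm(feats) + 1e-9)

def same_speaker(a: np.ndarray, b: np.ndarray,
                 threshold: float = 0.95) -> bool:
    """Compare two recordings by cosine similarity of fingerprints."""
    return float(voice_fingerprint(a) @ voice_fingerprint(b)) >= threshold
```

Two recordings dominated by the same frequency region score near 1.0, while recordings in different regions score near 0, which is the intuition behind judging how close a clone is to the original voice.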

Positive uses of voice cloning include the ability to synthesize millions of audiobooks without human labor. Another is that those who have lost their voice can regain a sense of individuality by creating a voice clone from recordings made before they lost the ability to speak. On the other hand, voice cloning is also susceptible to misuse: for example, the voices of celebrities and public officials can be cloned and made to say something that provokes conflict, even though the actual person has no association with what their voice said.[13]

Constructive uses

Education

Digital cloning can be useful in an educational setting to create a more immersive experience for students. Some students learn better through interactive experiences, and creating deepfakes can enhance their learning. One example is creating a digital clone of a historical figure, such as Abraham Lincoln, to show what problems he faced during his life and how he overcame them. Another example is having speakers create digital clones of themselves. Various advocacy groups may have scheduling difficulties as they tour schools during the year; by creating digital clones of themselves, they can present at places the group could not physically reach. These educational benefits offer students a new way of learning and give access to those who previously could not reach such resources due to environmental conditions.[10]

Arts

Although digital cloning has been used in the entertainment and arts industries for a while, artificial intelligence can greatly expand its uses. The movie industry can create ever more hyper-realistic versions of actors and actresses who have passed away, and it can also create digital clones for scenes that require extras, which can cut the cost of production immensely. Digital cloning and related technologies can also benefit non-commercial purposes: artists can be more expressive by synthesizing avatars for their video productions, and they can create digital avatars to draft their work and help formulate their ideas before moving on to the final piece.[10]

Digital immortality

Although digital immortality has existed for a while, in the sense that social media accounts of the deceased remain in cyberspace, creating an immortal virtual clone takes on a new meaning. A digital clone captures not only a person's visual presence but also their mannerisms, including personality and cognition. With digital immortality, one can continue to interact with loved ones after they pass away, potentially ending the barrier of physical death. Furthermore, families can connect across multiple generations, forming a family tree of sorts that passes the family legacy and its history on to future generations.[6]

Concerns

Fake news

With a lack of regulations for deepfakes, several concerns arise. Deepfake videos that could cause harm include depictions of political officials displaying inappropriate behavior, police officers shooting unarmed black men, and soldiers murdering innocent civilians, even though none of these events ever occurred in real life.[14] With such hyper-realistic videos circulating on the Internet, it becomes very easy for the public to be misinformed and to act on false information, feeding a vicious cycle of unnecessary harm. Additionally, given the recent rise of fake news, deepfakes and fake news could be combined, making it even harder to distinguish what is real from what is fake. Visual information is very convincing to the human eye, so the combination of deepfakes and fake news can have a detrimental effect on society.[10] Strict regulation of news content by social media companies and other platforms has been proposed.[15]

Personal use

Deepfakes can also be used maliciously to sabotage another person on a personal level. With the increased accessibility of technologies for creating deepfakes, blackmailers and thieves can easily extract personal information for financial gain and other purposes by creating videos of a victim's loved ones asking for help.[10] Furthermore, voice cloning can be used by criminals to make fake phone calls to victims. The calls can reproduce the exact voice and mannerisms of an individual, tricking the victim into unknowingly giving private information to the criminal.[16]

Deepfakes and voice clones created for personal use can be extremely difficult to litigate because there is no commercial harm; rather, the damage often takes the form of psychological and emotional injury, which is difficult for a court to remedy.[4]

Ethical implications

Although numerous legal problems arise with the development of such technology, there are also ethical problems that may not be covered under current legislation. One of the biggest problems with the use of deepfakes and voice cloning is the potential for identity theft. However, identity theft involving deepfakes is difficult to prosecute because no laws specific to deepfakes currently exist. Furthermore, the damage that malicious use of deepfakes can cause is more psychological and emotional than financial, which makes it harder to provide a remedy for. Allen argues that one's privacy should be treated in a manner similar to Kant's categorical imperative.[4]

Another ethical implication is the private and personal information one must give up to use the technology. Because digital cloning, deepfakes, and voice cloning all rely on deep-learning algorithms, the more information an algorithm receives, the better its results.[17] However, every platform carries a risk of data breach, which could lead to highly personal information being accessed by groups the users never consented to. Furthermore, post-mortem privacy comes into question when family members try to gather as much information as possible to create a digital clone of the deceased, without the deceased's permission as to how much of their information they were willing to give up.[18]

Existing laws in the United States

In the United States, copyright law requires some degree of originality and creativity in order to protect an author's individuality. However, creating a digital clone simply means taking personal data, such as photos, voice recordings, and other information, to create a virtual person as close to the actual person as possible. In the Supreme Court case Feist Publications, Inc. v. Rural Telephone Service Co., Justice O'Connor emphasized the importance of originality and some degree of creativity. However, the extent of originality and creativity required is not clearly defined, creating a gray area in copyright law.[19] Creating a digital clone requires not only the person's data but also the creator's input on how the clone should act or move. In Meshwerks v. Toyota, this question was raised, and the court stated that the copyright rules developed for photography should apply to digital clones.[19]

Right of publicity

With the current lack of legislation protecting individuals against potential malicious use of digital cloning, the right of publicity may be the best way to protect oneself in a legal setting.[3] The right of publicity, also referred to as personality rights, gives individuals autonomy in controlling their own voice, appearance, and other aspects that essentially make up their personality in a commercial setting.[20] If a deepfake video or digital clone of a person arises without their consent, depicting the individual taking actions or making statements that are out of character, they can take legal action by claiming that it violates their right of publicity. Although the right of publicity specifically protects the image of an individual in a commercial setting, which requires some type of profit, some argue that the legislation could be updated to protect virtually anyone's image and personality.[21] Notably, the right of publicity is implemented only in certain states, so interpretations of the right may vary from state to state.

Preventative measures

Technology

One way to avoid becoming a victim of the technologies mentioned above is to develop artificial intelligence that works against these algorithms. Several companies have already developed artificial intelligence that can detect manipulated images by looking at patterns in each pixel.[22] Applying similar logic, they are trying to create software that takes each frame of a given video and analyzes it pixel by pixel in order to find the pattern of the original video and determine whether it has been manipulated.[23]
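The frame-by-frame analysis described above can be sketched as a toy heuristic. This is a hypothetical illustration, not any company's actual algorithm; real detectors use trained neural networks rather than a simple statistic like this one. The idea shown is only the general principle: frames whose pixel-level change departs sharply from the video's normal frame-to-frame variation are flagged as possible splice points.

```python
# Toy illustration of frame-level manipulation detection (hypothetical):
# flag frames whose pixel-level change from the previous frame deviates
# sharply from the video's typical frame-to-frame variation.

def frame_diff(a, b):
    """Mean absolute per-pixel difference between two equal-sized frames,
    where each frame is a 2-D list of grayscale pixel values."""
    flat_a = [p for row in a for p in row]
    flat_b = [p for row in b for p in row]
    return sum(abs(x - y) for x, y in zip(flat_a, flat_b)) / len(flat_a)

def flag_anomalous_frames(frames, threshold=3.0):
    """Return indices of frames whose change score exceeds `threshold`
    times the median frame-to-frame change across the whole video."""
    diffs = [frame_diff(frames[i - 1], frames[i]) for i in range(1, len(frames))]
    median = sorted(diffs)[len(diffs) // 2]
    # Offset by 1 because diffs[i] compares frame i and frame i + 1.
    return [i + 1 for i, d in enumerate(diffs) if median > 0 and d > threshold * median]
```

For example, in a clip of near-identical frames, a single spliced-in frame produces a large jump in the difference score and is flagged, while ordinary sensor noise stays below the threshold.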

In addition to developing new technology that can detect video manipulation, many researchers are emphasizing the importance of private corporations creating stricter guidelines to protect individual privacy.[12] With the development of artificial intelligence, it is necessary to ask how it impacts society as it begins to appear in virtually every aspect of life, including medicine, education, politics, and the economy, and it becomes important to have laws that protect human rights as the technology advances. As the private sector gains more digital power over the public, strict regulations and laws are needed to prevent private corporations from using personal data maliciously. The history of data breaches and privacy-policy violations should also serve as a warning of how personal information can be accessed and used without a person's consent.[7]

Digital literacy

Another way to avoid being harmed by these technologies is to educate people on the pros and cons of digital cloning. Doing so empowers individuals to make rational decisions based on their own circumstances.[24] It is also important to educate people on how to protect the information they put on the Internet. By increasing the digital literacy of the public, people have a greater chance of determining whether a given video has been manipulated, as they can be more skeptical of the information they find online.[12]


See also

Deepfake

Deep learning

Virtual human

Artificial intelligence

Digital media

Post-mortem privacy


References

  1. ^ Floridi, Luciano (2018). "Artificial Intelligence, Deepfakes and a Future of Ectypes". Philosophy & Technology. 31(3): 317–321.
  2. ^ Borel, Brooke (2018). "Clicks, Lies and Videotape". Scientific American. 319(4): 38–43.
  3. ^ a b Beard, Joseph (2001). "CLONES, BONES AND TWILIGHT ZONES: Protecting the Digital Persona of the Quick, the Dead and the Imaginary". Berkeley Technology Law Journal. 16(3): 1165–1271.
  4. ^ a b c Allen, Anita (2016). "Protecting One's Own Privacy In a Big Data Economy". Harvard Law Review. 130(2): 71–86.
  5. ^ a b Silverman, Craig (April 2018). "How To Spot A Deepfake Like The Barack Obama–Jordan Peele Video". Buzzfeed.
  6. ^ a b c d e Meese, James (2015). "Posthumous Personhood and the Affordances of Digital Media". Mortality. 20(4): 408–420.
  7. ^ a b Nemitz, Paul Friedrich (2018). "Constitutional Democracy and Technology in the Age of Artificial Intelligence". SSRN Electronic Journal. 59(9): 1–14.
  8. ^ a b Michalik, Lyndsay (2013). "'Haunting Fragments': Digital Mourning and Intermedia Performance". Theatre Annual. 66(1): 41–64.
  9. ^ Ursache, Marius. "Eternime".
  10. ^ a b c d e Chesney, Robert (2018). "Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security" (PDF). SSRN Electronic Journal. 26(1): 1–58.
  11. ^ Suwajanakorn, Supasorn (2017). "Synthesizing Obama" (PDF). ACM Transactions on Graphics. 36(4): 1–13.
  12. ^ a b c Brayne, Sarah (2018). "Visual Data and the Law". Law & Social Inquiry. 43(4): 1149–1163.
  13. ^ Fletcher, John (2018). "Deepfakes, Artificial Intelligence, and Some Kind of Dystopia: The New Faces of Online Post-Fact Performance". Theatre Journal. 70(4): 455–71.
  14. ^ Chesney, Robert (2019). "Deepfakes and the New Disinformation War: The Coming Age of Post-Truth Geopolitics". Foreign Affairs. 98(1): 147–55.
  15. ^ Hall, Kathleen (2018). "Deepfake Videos: When Seeing Isn't Believing". Catholic University Journal of Law and Technology. 27(1): 51–75.
  16. ^ Poudel, Sawrpool (2016). "Internet of Things: Underlying Technologies, Interoperability, and Threats to Privacy and Security". Berkeley Technology Law Review. 31(2): 997–1021.
  17. ^ Dang, L. Ming (2018). "Deep Learning Based Computer Generated Face Identification Using Convolutional Neural Network". Applied Sciences. 8(12): 1–19.
  18. ^ Savin-Baden, Maggi (2018). "Digital Immortality and Virtual Humans" (PDF). Postdigital Science and Education. 1(1): 87–103.
  19. ^ a b Newell, Bryce Clayton (2010). "Independent Creation and Originality in the Age of Imitated Reality: A Comparative Analysis of Copyright and Database Protection for Digital Models of Real People". Brigham Young University International Law & Management. 6(2): 93–126.
  20. ^ Goering, Kevin (2018). "New York Right of Publicity: Reimagining Privacy and the First Amendment in the Digital Age - AELJ Spring Symposium 2". SSRN Electronic Journal. 36(3): 601–635.
  21. ^ Harris, Douglas (2019). "Deepfakes: False Pornography Is Here and the Law Cannot Protect You". Duke Law and Technology Review. 17: 99–127.
  22. ^ Bass, Harvey (1998). "A Paradigm for the Authentication of Photographic Evidence in the Digital Age". Thomas Jefferson Law Review. 20(2): 303–322.
  23. ^ Wen, Jie (2012). "A Malicious Behavior Analysis Based Cyber-I Birth". Journal of Intelligent Manufacturing. 25(1): 147–55.
  24. ^ Maras, Marie-Helen (2018). "Determining Authenticity of Video Evidence in the Age of Artificial Intelligence and in the Wake of Deepfake Videos". The International Journal of Evidence & Proof. 23(1): 1–8.