
User:Reagle/QICs

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by BrazilSean (talk | contribs) at 17:00, 10 November 2015 (Nov 10 Tue - Gratitude). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

Questions, Insights, Connections

Leave your question, insight, and/or connection for each class here. I don't expect this to be more than 3 or 4 sentences. Make sure it's unique to you if you can. For example:

  • Here is my unique question (or insight or connection).... And it is signed. -Reagle (talk) 19:54, 6 January 2015 (UTC)


Sep 15 Tue - Persuasion

So I just read The Science of Persuasion, and I wanted to respond to that before I started the textbook chapter, because my memory is trash. The article honestly kind of makes me nervous. Like, the information about the "6 tendencies of human behavior" is very important to understanding how advertisers and the like try to take advantage of common folk like us, but not everyone has access to this information, and they will continue to be bamboozled and tricked into doing/buying things that they don't want/need. And it's not just an issue with goods or services, really--I mean, aren't these the ways dictators get into power? They could be spewing nonsense, but as long as they are authoritative and somehow incorporate some of the 6 tendencies, they could gain some real power! And that's terrifying (*cough* Donald Trump *cough*)! But it also makes me think... like... are we as humans always going to be able to be reduced to these evolutionary traits? Education is the only way to better ourselves as a whole, but not everyone has access to it! Why???!!!! The world is stressful. We need to do better!!!! And another thing--the article says that people generally tend to favor/listen to people who are good looking, despite claiming that they would never let something as superficial as that skew their views/decisions. This is something I think about a lot, because I'm all for fighting against the ridiculous beauty standards/sexualization of people in the media, but then I read something like this and I'm like ????. But sex sells! Can we ever move past that as a society, or is it just too ingrained in us as a species, what with our animalistic mating needs? I guess my main question is--where is the line drawn between accepting ourselves for what we are evolutionarily and trying to better ourselves despite our animalistic tendencies? This QIC has derailed, I'm sorry--is this too much like a stream of consciousness? It just happened. I'm sorry, it's my first one.
I also realize I've written, like, way too much, way too casually. So, sorry. -Kev.w.pri (talk) 20:35, 13 September 2015 (UTC)

Kev.w.pri, your response is a bit "stream of consciousness," but you identify an important question about abuse of this knowledge. -Reagle (talk) 16:06, 15 September 2015 (UTC)

"If you would persuade, you must appeal to interest rather than intellect," said Benjamin Franklin.

In his article "The Science of Persuasion," Cialdini carefully introduces and shares six basic tendencies of human behavior in generating a positive response: reciprocation, consistency, social validation, liking, authority, and scarcity. These principles seem quite applicable and, in fact, extremely practical to adopt and employ in our lives because they allow us to wisely "govern our societal involvements and our personal relationships."[1] Instead of aiming to deceive or victimize others, we, as responsible human beings, can utilize these methods as a vehicle to optimize our decision making and destress our lives. These techniques are not some sort of secret weapons with limited access; they are helpful reminders and strategies for everyone to adopt and practice. We cannot necessarily disregard or change our natural psychological instincts and tendencies; however, we do have the ability to understand our own tendencies and to maximize our advantages and understandings. Isn't knowledge power, and don't we all have the free will to utilize these techniques or not? In addition to the six tendencies of human behavior, personalization can also be an effective means of generating positive responses. As the advertising industry has thrived immensely through the lens of targeting consumers' needs and wants, personalization could also motivate and encourage people to join the world of editing Wikipedia. As narcissistic as we already are, maybe the application of personalization on Wikipedia will be the next generation of attracting people by tracking and acknowledging their interests and values, because, as Cialdini emphasizes, "people prefer to say yes to those they like." User20159 (talk) 15:51, 15 September 2015 (UTC) [1]

Good job in summarizing, and you're already using a reference! (There are some typos/misspellings.) -Reagle (talk) 16:22, 15 September 2015 (UTC)

Although the article mentions that the six key factors in the social influence process operate similarly across different countries/cultures, do you believe there are other external influences that affect a person's tendency to be persuaded? For example, in my opinion, Hofstede's cultural dimensions play an important role in how people from different cultures respond to persuasion. Chinese employees, as mentioned in "The Science of Persuasion," responded primarily to authority. This can be linked to China's higher ranking on power distance. Similarly, German employees' willingness to offer assistance in order to follow the rules of the organization connects heavily with the country's high uncertainty avoidance scores. Cultures scoring high on uncertainty avoidance, such as Germany, are less comfortable with change, risk taking, and breaking rules. Can you spot any other connections using Hofstede's cultural dimensions? Andrea guerrerov (talk) 23:25, 14 September 2015 (UTC)

Nice connection! -Reagle (talk) 16:22, 15 September 2015 (UTC)

I found it really interesting to read the Design Claims by Kraut and compare them to the tendencies listed in Cialdini. Firstly, the tendencies in Cialdini and the effect that international cultures (p. 81) have on them were really interesting, particularly the fact that Spanish Citibank employees were more likely to comply if they had a strong friendship with or liking of a person, which also made me think of Design Claims 10 and 11 specifically from Kraut. Separate from that, I also found it interesting that Design Claims 7 and 8 were about how fear campaigns lead to more contributions and evaluations of quality, as I had read before that fear claims do not usually make a great difference. I would have called the examples that followed more authority based than fear based, but can we call respect for authority and action due to fear the same thing?

I also wonder if the international differences in the tendencies listed by Cialdini would translate into online communities--are people from different countries more likely to respond to requests based on things such as authority, consistency, or friendship online as well? Finally, in response to Kevin--I think you brought up a great point by saying there is a line between accepting ourselves and our animalistic tendencies and trying to better ourselves. Since this culture (to use your example of beauty standards) was built by our own ideas and beliefs, I think we could absolutely better ourselves and this society we've created, but I think it's going to be a lot harder to break down than it was to build up. Smfredd (talk) QIC #1

Good question about fear. See this link for your corrected typos. -Reagle (talk) 16:22, 15 September 2015 (UTC)

Kevin raises some pretty good points in his paragraph. The tendencies listed in Cialdini and the Design Claims listed in Kraut are a lot like super powers; they can be used for good or evil. Therefore, education on interpreting and using media should become more prevalent in primary and secondary schools nationwide. Additionally, if students are exposed to persuasive tactics from an early age, they may have an increased ability to utilize them in positive ways.

Of course, the term 'positive' is subjective; communities can have controversial goals, such as hate groups, or use controversial tactics to spread their message, such as hacktivist groups. I'm specifically thinking of Design Claims 7 and 8, which point out the power of fear campaigns. Is it ethical to use fear campaigns, especially if they are exaggerated or untrue? Even if the intentions of the community are good, do the ends justify the means? - Hayden.L (talk) 23:19, 14 September 2015 (UTC) QIC #1

Let's discuss! -Reagle (talk) 16:22, 15 September 2015 (UTC)

After reading the chapter by Kraut and the persuasion article by Cialdini, I saw how things began to connect. One of the more interesting points, which everyone has consistently raised so far, is the interest in how people will follow someone based on their appearance or the similarities between them. From the chapter, Kraut mentions how we tend to like people who are similar to us. I know many of us would like to say that we are open to many types of people and personalities, but when reflecting on the people who are around you most, wouldn't you say you are similar to them? Because you more often than not get to choose your friends, you find something about them that attracts you. That relates to the 'liking' point that Cialdini raises. We are able to create rapport with people who end up becoming our own friends. Ltruk22 (talk) 13:29, 15 September 2015 (UTC) QIC #1

Good point. -Reagle (talk) 16:22, 15 September 2015 (UTC)

While I enjoyed both readings, I found Cialdini's article particularly intriguing based on a recent TED Talk I watched. One of the six tendencies of human behavior that are associated with persuasion is liking. One phrase in particular that stood out to me was "familiar faces sell products." In 2013, Alessandro Acquisti filmed a talk entitled "What will a future without secrets look like?" in which he discusses ongoing research into what implications online privacy (or a lack thereof) has on the future of digital marketing. Based on findings that show that people do not recognize themselves in facial composites, but respond positively to those images, new technology is being created that will use a composite image of your two top Facebook friends to form a customized spokesperson for an ad directed at you, without you being able to detect that the manipulation has occurred. My question would then be, can this principle be applied to enact the opposite effect: a consumer being dissuaded from a product, such as in anti-smoking ads? With so much information that we volunteer on these social networking sites, what's to say a composite of two Republican presidential candidates wouldn't dissuade a staunch Democrat from buying a particular product? The more we expose ourselves to the digital world, the more advanced technologies will be able to sell to us with the same success as, if not more than, today's leading salesmen. Wikibicki (talk) 16:14, 15 September 2015 (UTC)

Interesting question! -Reagle (talk) 16:22, 15 September 2015 (UTC)

I found Cialdini's "The Science of Persuasion" extremely interesting to read, especially his 6 factors that influence persuasion. First, I completely agree with his idea of Reciprocation. Whenever I am going anywhere and I see a booth or table set up with free gifts or samples, I almost certainly am going to stop. I thought Cialdini was extremely insightful when he said these free items are almost always "unsolicited and perhaps even unwanted gifts". The idea that a company can create a few branded accessories and double its sales is extremely effective and interesting. I also found it extremely interesting that the Authority factor does not just apply to people in actual positions of authority but also to people who convey or portray an image of authority. They mention that people are much more likely to follow a jaywalking man if he is in a suit and tie as opposed to casual wear. It seems to me like this could be a very easy way for people to get manipulated into following a fake authority figure. Finally, with respect to social validation, Cialdini mentions that a huge problem with this factor is that it can easily persuade many people to follow a bad or unhealthy trend. He gives the example of smoking and underage drinking and how many campaigns actually propel this stereotype by running ads that note how large a problem this is. My question is, is there any way to effectively run a campaign against products like this while still conveying the whole scope of the problem? Johnmdaigneault (talk) 16:46, 15 September 2015 (UTC)

Sep 18 Fri - A/B testing

1) Christian's article "The A/B Test: Inside the Technology That's Changing the Rules of Business" really challenged me to consider my personal opinion on the matters of testing and using data. I haven't fully developed my opinion yet, but my initial reaction is that, as a creative professional, I am skeptical of what the consequences might be if this type of testing were to become a prominent part of my job. For the past year, I was a Junior Copywriter at an ad agency, working on Samsung and Starbucks. My job was to work with a designer to create campaigns. What is the perfect combination of words to make that sixteen-year-old girl walk into a Starbucks and try the new Coconut Frappuccino? The process is creative, collaborative, and fascinating. Many of my creative directors are amazingly talented; they are able to look at a headline and judge how successful it will be. Their brains work in amazing ways, and I have the utmost respect for them because they are able to make their decisions based on years of studying, experimenting, and experiencing the world of advertising. While my creative directors fit the description of a "HiPPO," in the end they make the final call because they are creative geniuses. It forces me to ask: If A/B testing became a part of my everyday life in advertising, what would happen to the hierarchical structure? To quote Christian, "The person at the top doesn't make the call, data does." How much less creative would I force myself to be? Would it eliminate jobs? The meetings where I sit with my creative director arguing about which wording would be the most effective, the debates about whether red font is too jarring or just jarring enough--these are my favorite parts of the job. This is where creativity and having a trained visual eye actually matter. Maybe using Caitlyn Jenner in an ad will offend people, or maybe they'll love it! There's just no telling until it's out in the world.
That's the exciting part, waiting to see the world's reaction. This is eliminated as soon as the answers can be given by a simple test. As an artist, writer, and professional who values creativity above just about everything else in this world, I admit I am skeptical of where we draw the line.

Which moves me to reflect on the extreme possibility: What if the line isn't drawn clearly, and A/B testing moves out of the confines of marketing and is integrated into our personal lives? If I could test which Instagram filter would get more likes, if I could test which profile picture would be best for a dating app, if I could test whether I should put an ottoman on the left side or the right side of the room, what would that do to our society? Sure, everything would be safe, and I could go to bed at night peacefully knowing that my Instagram was widely appreciated. But where's the fun in that? At what point do we care way too much about what other people think? Personally, I believe we're not supposed to have these options. Some things are meant to be left to personal taste, creative self-expression, and good old experimentation.

I'm also aware that I sound like a grumpy grandmother, shaking her cane at the kids in her yard. This is a relatively consistent trend in my life. --Nataliewarther (talk) 14:31, 17 September 2015 (UTC)

Good questions. And in *Communication in a Digital Age* we do discuss folks who do A/B test their dating profile pics! -Reagle (talk) 16:28, 18 September 2015 (UTC)

2) The article "The A/B Test: Inside the Technology That's Changing the Rules of Business" was very informative in expressing what A/B testing is, what it can determine, and how it is and will be used; however, I feel that the author takes an angled approach that is clearly in support of A/B testing. Surely this approach has led many people, like Natalie, to question the realm of creativity as it applies to marketing and advertising, and to question whether the future of creative services will require creative individuals or whether it will be sustainable through computer testing that uses data manipulation to find the winning variable. He refers to the way in which A/B testing will change the way companies approach web development and, in turn, change some fundamental aspects of business in general. Some of his claims tend towards a future of business practices that are detached from the human process, but going off of Natalie's discussion of Instagram, these practices would take the personal connection out of expression. If everything fell behind the line of A/B testing and decisions were made based on quantifiable findings, then the appreciation for quality, for art, and for self-expression would disappear.

I think that what is missing in this conversation is the part where we remember the ways in which individuals interact online, and the way that people are more connected to companies, individuals, and brands because of commonalities that are believed to be shared. Brands begin to gain traction with large audiences when they are able to fully understand and connect with their audience and customers on a personal level; without the insights of these individuals, the beginning stages of A/B testing would not come to fruition. Thus, rather than assuming that A/B testing will become the holy grail of web development, advertising, and styling, I think it is important that we understand the value that the content creators bring to the table. If the hierarchical system within creative development teams did not exist and the lower-end developers were all pushing their designs forward in hopes of falling in favor of the data, then the company might eventually fall into a branding convention that is not consistent or true to brand, but rather is wholly focused on pleasing data. In turn, brands will lose followings and appreciation. - Alexisvictoria93 (talk) 00:06, 18 September 2015 (UTC)


QIC #2

"I am a big believer in reason and facts and evidence and science and feedback…"(Christian, 2014), President Obama was quoted saying this as he was campaigning for president at Google. In the article, The A/B Test: Inside the technology that's changing the rules of business by Brian Christian, he explains how much testing is done on the internet for the sake of the business. From his experience at Google he brought a lot to the table in Obama's campaign by testing out the best way to rake in supporters and donations. But it goes further. As he sold this to other businesses, test and try different advertising without the public knowing, he stumbled on ways to improve websites and boost people's sales.

Since there was data behind this concept and it did help the campaign in the long run, is it hard to say this is wrong? Following up on the quote from Obama mentioned above, I completely agree with it. I believe in research to help find a better cure, and I believe in feedback: for companies and people to be able to be the best they can be. I am just a little unsure of this process. I understand this is the way the world works and there is no one who is going to stop it, because it is making those advertisers'/coders' lives simpler. The world is now about simplifying things, and by sneaking test groups onto the internet, many people will not know if they are being tested. That raises another point: "is it ethically correct?" In my mind, it's not. Relating back to Natalie's point, advertisers take pride in their work, especially after they have been on a project for a long time, tweaking it and changing the fonts and colors, and the most exciting time is to see that ad in a magazine or on a billboard. That will all be taken away when everything is tested online without us knowing, and the ad that works better will possibly be the one printed. Ltruk22 (talk) 12:37, 18 September 2015 (UTC)

If you use Facebook, you are probably A/B tested many times a day, but probably didn't realize it! -Reagle (talk) 16:28, 18 September 2015 (UTC)

Sara Fredd, QIC #2 The article on A/B testing went into detail about how there are really no lessons when it comes to A/B testing--or rather, the lessons are still unclear, but there is not enough time between data collection and implementing changes to understand these lessons. The article gave IGN and its changing buzzwords as an example of how it was not clear why the opposite method had worked on the homepage in years previous; however, after A/B testing, they saw a shift in success to a different method. This "listening to data" is another way for the producer to know what the customer wants before the customer does, which is something Steve Jobs designed Apple products around.

The data can be an incredibly useful tool in things such as campaigns and marketing; however, I have to question the ethics. Is it ethical, or responsible, for a company or product marketing team to be selling an object or idea, whatever it may be, without really understanding why consumers chose it? If it is really not the consumer's job to know what they want, shouldn't those selling products to us know not only what we want, but also why? Smfredd (talk)

Isn't it unethical to deny consumers their desires just because you don't understand them? -Reagle (talk) 16:28, 18 September 2015 (UTC)

Natalie Wheeler QIC #1 Although I am a communication studies major, I also love the sciences and find concrete answers through data just as compelling as creativity; because of this, I was enthralled by the article "The A/B Test: Inside the Technology That's Changing the Rules of Business". I live for efficiency, and this testing tool takes the guesswork out of decisions and assumptions, which, Siroker states, tend to be incorrect. I do see the problem with this as well, though, because it can lead to companies being so obsessed with tiny little changes that they miss the bigger picture completely. Another issue I see with this is the morality of it. Although these tests are usually minor and subtle and are only used for internal purposes, I do not like the idea that I am a constant test subject, and I would postulate that many other people have the same sentiment. Big data does not seem as intrusive as real-time data and does not bother me in the same way as constant small testing does. The real-time aspect adds a non-stop dimension, which seems to give it a slightly more intrusive undertone.

Even with the growing use and popularity of A/B testing, I do not think that it has the possibility to completely take over the advertising industry. Television still continues to be the number one place where overall advertising money is spent. Since advertising on TV remains number one, and it seems that it would be nearly impossible to conduct A/B testing there, I am not concerned with a complete loss of creativity and big ideas. Jobs and Ford made incredible points that for big changes you cannot go to the consumer, because they do not know what they want or need until a radically new concept or idea is presented to them. Even with a large number of companies using A/B testing, the only big changes I see are increases in advertising efficiency, but no decreases in creativity or radically new ideas. Natawhee7 (talk) 15:28, 18 September 2015 (UTC)

Good point about Jobs; I fixed a typo, can you find it? -Reagle (talk) 16:28, 18 September 2015 (UTC)


Andreas Nussbaumer QIC #1 In his article "The A/B Test: Inside the Technology That's Changing the Rules of Business," Brian Christian of Wired discusses online A/B testing and its increasingly ubiquitous use on the web. What A/B testing allows businesses to do is take a web page, break it down into its component parts, tweak those parts, and then publish two separate versions of said webpage (the original and the modified), whereby users are diverted to one or the other so as to (often unwittingly) provide the businesses with statistical feedback. The feedback is then used to compare the two (or more) pages to assess the effectiveness of an alteration. Changes can be as minor as the font type of a single word and still increase traffic by a sizable percentage. So, essentially, A/B testing is a trial-and-error tool that enables businesses to consistently fine-tune their webpages to the unconscious preferences of web users through a kind of evolution of mutations trending towards digital ergonomics.

Christian points out that this technology was used during the 2008 Obama campaign and how small changes increased donor percentages by up to 40%. When I first read this, I felt a pang of deception; that I as an American was being quantified, enumerated, and thereby taken advantage of. But then I began thinking about it in terms of mutuality. People are generally autonomous and determined--especially web-surfers--and so will usually not visit a website without a certain intention--though here you could throw in search engine adverts or content aggregators that act like virtual markets of discourse and information, and suggest that the hidden hand of influence and data collection is for some reason unethical. I won't do that, namely because we live in a free market where people can advertise however they like according to the laws--which raises the question: how can old laws cope with ever-evolving new technologies? Well, that's a heavy question that I can't answer. But I do feel free to say that most people would be more welcoming of involuntary trial methods such as A/B testing if they understood that by participating in such processes, they help businesses to create more user-friendly and intuitive products (i.e., webpages) that people may not only want to use, but need to use. It is a shaping tool for businesses' online efficacy, one that I see few reasons to condemn, though I might suggest the implementation of an option to check whether you're being A/B tested. After all, in a mostly democratic world where computers are becoming more and more pertinent to the sustaining and development of our lives, I think it's only fair that those with the biggest and baddest computers provide full transparency, or at least translucency, to the heaping mass of people they graze off of. -Anussbaumer (talk) 20:19, 18 September 2015 (UTC)
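[Editor's note] The mechanism described above--divert each visitor to one of two page versions and compare the statistical feedback--can be sketched in a few lines of Python. This is an illustrative toy, not the API of any real testing platform; the function names, the experiment name, and the sample numbers are all invented for the example. Users are hashed deterministically into bucket A or B, and a simple two-proportion z-test judges whether the variant's conversion rate differs meaningfully from the original's.

```python
import hashlib
import math

def assign_bucket(user_id: str, experiment: str) -> str:
    """Deterministically assign a user to variant A or B of an experiment."""
    digest = hashlib.md5(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """z-statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# The same user always lands in the same bucket, so their experience is stable.
assert assign_bucket("user42", "donate-button") == assign_bucket("user42", "donate-button")

# Hypothetical result: 40 vs. 60 conversions out of 1000 visitors each.
# z exceeds 1.96, so the difference is significant at the 5% level.
z = two_proportion_z(40, 1000, 60, 1000)
print(round(z, 2))  # → 2.05
```

Deterministic hashing matters here: if the same visitor saw version A on one visit and version B on the next, their behavior would be split across both buckets and the comparison would be muddied.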

Anussbaumer, do note that this is late, QICs are due before class. -Reagle (talk) 21:49, 18 September 2015 (UTC)

Sep 22 Tue - Gaming motivation

After reading the Reagle and Kraut pieces, I think there are two principles that are important in regards to rating systems. The first is that there is no system that won't be gamed. If the reviewers themselves are not manipulating the ratings, then the system designer most likely created the scheme in a way that would prompt certain responses (and gamed the system themselves). Second, all ratings systems have a bias. Reagle explains several different ways the photo.net system was biased, including the instability of a seven-item scale and the mate-rate and revenge-rating biases. Ratings can be biased in a human and numerical way. Therefore, I think meta-quantification (as described by Reagle) and status (like Kraut's Design Claim 28) are the most significant ways online communities can prevent manipulation.

One website that uses meta-quantification very well is Yelp. The popular reviews website requires each user to make a profile, which shows their recent reviews along with other user information. Some basic statistics are listed here, such as the number of reviews written, the date joined, and whether or not they have Elite status. (Elite is earned by leaving a certain number of quality reviews, and the user is rewarded with invitations to exclusive parties and Yelp-branded swag; I can't speak to the success of the Elite status feature, but how it conforms to and differs from Kraut's design claims is probably worth a paper in itself.)

I am more interested in the addition of a histogram showing the distribution of ratings a user awarded on the site. Readers can have a greater understanding of the reviewer's disposition and evaluate the review itself more fairly. Therefore, the reader becomes responsible for interpreting reviews which may help avoid some of the issues described on photo.net. - Hayden.L (talk) 20:54, 21 September 2015 (UTC) QIC #2
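[Editor's note] The histogram described above is easy to picture in code. A minimal sketch (invented for illustration, not Yelp's actual implementation): given the star ratings a reviewer has awarded, tally how many fall at each level, so a reader can see at a glance whether the reviewer skews harsh or generous before weighing an individual review.

```python
from collections import Counter

def rating_histogram(ratings, scale=range(1, 6)):
    """Count how many of a reviewer's ratings fall at each star level (1-5)."""
    counts = Counter(ratings)
    return {star: counts.get(star, 0) for star in scale}

# A reviewer who hands out mostly 4s and 5s reads very differently
# from one whose rare 4-star review is high praise.
generous = rating_histogram([5, 4, 5, 5, 4, 3, 5])
print(generous)  # → {1: 0, 2: 0, 3: 1, 4: 2, 5: 4}
```

This is the meta-quantification idea in miniature: the histogram is a rating of the ratings, shifting interpretive responsibility to the reader.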


At the beginning of Reagle's piece, he talks about photo.net, which was created the year I was born. Reagle describes how this application used ratings, which were a relatively new concept on the internet and in the art world. I really started to think about this. It means that my whole life, my interactions with media and the internet have incorporated rating behavior. So, I started reflecting on the earliest form of "social rating" I was exposed to. I've decided I think the first (and arguably one of the more poisonous?) forms of rating was in middle school: my MySpace page had a "Top 8". The idea is to show who your top 8 friends are, in order of importance, on your page. While it seems trivial, for a middle school girl this was an extremely important social structure. I still remember how upsetting it was when my very best friend Lauren moved me from #1 to #2, bumping Chelsea Obeidy up in the rankings. I was crushed. The ranking held some serious weight: it was like Lauren was establishing a new social order. This digital ranking in an online world actually dictated the way we interacted in the real world. I now understood that I was not Lauren's best friend, and this changed our relationship.

The concept of revenge ranking is interesting and slightly anxiety-provoking. Obviously the main difference between the initial scale ranking system on Photo.net and the types of ranks we have now is that the original scale allowed you to rank a photo negatively. I don't think I've ever experienced this type of scale, and I can imagine how harmful it can be to one's ego. How horrible would it be to upload a piece of your art, only to find out the world thinks it's a 3 when you think it's a 10? Personally, I'm thankful that Photo.net decided against this system. As Reagle states, "People are seduced by numbers" (Reagle, p.1), and I imagine it was difficult for artists to ignore the rankings even if they didn't agree with them.

I'm pretty interested in how I've never even stopped to consider that a lot of our online interactions involve a "ranking culture". I've never known the internet or social media that doesn't incorporate some sort of liking or ranking. I wonder if I would still want to interact with certain things like Instagram if there wasn't a liking component. If you upload an Instagram, and you have no idea how the world perceived it and how widely liked or disliked it was, is it still satisfying? If we're being honest with ourselves, how much of our desire for "rank" is just a desire to satisfy our own egos? I'm having fun playing with these questions in my head.

--Nataliewarther (talk) 22:18, 21 September 2015 (UTC) QIC #2

Nataliewarther, if you are interested, this is discussed more extensively in chapter 6 ("Shaped") in my book Reading the Comments, though we don't read that in this class. Also, give some thought to your first few sentences. Could you make it more concise and snappy? -Reagle (talk) 16:56, 22 September 2015 (UTC)

The content analyzed by both Reagle and Kraut reminded me of Instagram's digital evaluations, which include comments and/or numerical ratings. Just as several Photo.net users questioned the motivation of the system, I constantly ask myself, "What is the point of the rating system in this social media? Why is the number 11 so important to users?" While I assume the rule was created merely for design purposes (to take up less space on your feed), reaching 11 likes has become a powerful motivator for social validation. As described in Reagle's piece, fewer and fewer people are focusing on the beauty of the photographs and are instead seduced by numbers.

The number of "likes" someone receives on Instagram could be connected to the ratings and comments received by Photo.net users. Both are perceived as feedback, and if it is positive, it appeals to "people's desire for self-enhancement, in other words, to feel good about themselves and maintain positive self-esteem" (Kraut & Resnick, 2011). That is the case for many Instagram users, who prefer to take a photo off Instagram if it did not reach 11 likes, in order to avoid the "embarrassment." If the user does not want to delete the photograph, he/she might even ask friends to like it. Once it crosses 11 likes, people's motivation to upload more photos and receive nonverbal feedback, such as quantitative performance measures, increases. As described on The Daily Dot, the number 11 is "a visible marker of a post's relative success or failure." Andrea guerrerov (talk) 22:20, 21 September 2015 (UTC)

Andrea guerrerov, good quote from K&R, you might want to include the page number. -Reagle (talk) 16:56, 22 September 2015 (UTC)

I'm worried that my style of QIC writing is too casual. Is it too casual? I just have an easier time expressing my ideas in this tone.

You can always write casually and then edit for more formality, that is, try removing/editing those bits that are more appropriate to speech. -Reagle (talk) 16:56, 22 September 2015 (UTC)

I'm a millennial. We're millennials. We grew up as the internet grew up. And as the internet grew up, so did social media. User:Nataliewarther makes a good point about MySpace, which I had totally forgotten about! Top 8 was a HUGE deal. I would constantly check my friends' pages to make sure I was still on theirs. And I remember feeling SO overwhelmed when I felt it time to rearrange my own top 8. It was a big deal. Although it's fun to go back now and see who was on your top 8 way back when. But I digress.

It's funny that we read this section of your book (do I address you, Professor? Or do I refer to you as a random author. What's the etiquette for this situation?) so soon after Facebook revealed that it will be adding a Dislike button.

You can speak of "Reagle writes ..." -Reagle (talk) 16:56, 22 September 2015 (UTC)

I mean--why does that seem like a good idea? I know we've all probably said how badly we want to "dislike" something on facebook, or even more likely, have commented on something saying something along the lines of "I hate this" or "I don't like this" (hopefully on a friend's post and as a joke). But it just seems like another way for people to hurt other people. It's another way for people who feel the need to share their opinion with everyone, especially with those who didn't ask for it. I don't think it's Facebook's best move--but that's what we all said about Timeline too, so who knows?

I've heard it's more of an "empathy" button actually. -Reagle (talk) 16:56, 22 September 2015 (UTC)

But I also couldn't stop thinking about Instagram during this read. I recently started a separate Instagram account for myself where I could post my photography (does that make me pretentious?). Photography has always been something I've loved, and I wanted to kind of make myself get back into it, so I made this Instagram. Also, I've read a lot of articles about people who have started real photography careers through Insta, so yeah. But it's made me realize just how much importance I place on the number of likes a picture gets. It hasn't been about my photography yet--it's been about posting at the most opportune times so that the most people will see it and like it, and using the best, most efficient hashtags. And I'm not saying that all isn't important! It is! Especially when trying to actually build a following, because people trust accounts with high like counts. But it's not doing anything for my photography! People aren't commenting to tell me what they like or dislike, how I can improve or where I shine. They're my friends throwing me a like because they think the picture is pretty, or because they like the fact it was taken in New Orleans, or during one of my adventures from the past year. I'm usually SO pro-social media, but like, really... why the **** do we care so much about likes? Yeah yeah, social validation and feeling liked and important is all well and good. But has it gotten to an unhealthy point of obsession? (Hint: probably). I've been adjusting and allowing myself to care less about that, but dammit it's tough. What would the internet be if we had never introduced ANY sort of judgement-passing buttons? Would people post anything? Sure, the likes are a great way to know that people are seeing our things and appreciating them, hopefully. But do we post things so that we can share them with the world, or just so that the world knows we're here?
Are we just trying to prove to ourselves that we matter, when in reality, in the vast nothingness that is the internet (and I guess, y'know, the universe), we really don't. But nah that's garbage--I truly believe that everyone matters in one way or another!!!!

In Reading the Comments I speak of how this goes back to the origins of the Web's popularity with HotOrNot and Youtube. -Reagle (talk) 16:56, 22 September 2015 (UTC)

Why do we post things? The world may never know (or care, honestly).

That all being said, my instagram is Kevin.Priolo

--Kev.w.pri (talk) 23:09, 21 September 2015 (UTC)


"Don't use Wikipedia, kids!"

This is something we have all been told in high school (or are still told in college), but it is also something we let go in one ear and out the other. I clearly remember my old high school days, when Wikipedia was my ultimate resource for all my research homework despite our teachers' concerns about our dependence on it. I still relied on Wikipedia, not because my teachers never provided us with understandable, justifiable reasons to avoid it, but because I simply found Wikipedia useful and helpful, as Wikipedia itself claims to be.

In chapter 2 of Encouraging Contribution to Online Communities, Kraut and Resnick explore various ways in which intrinsic and extrinsic motivations can enhance people's participation and performance in online communities. Interestingly, Kraut and Resnick suggest that "the designers and managers don't care if the contributions are the result of intrinsic or extrinsic motivations" (p. 61) because both types of motivation can eventually lead people to perform activities. I agree with the designers and managers, not because any type of motivation can lead to better online performance, but because these types of motivation wouldn't even matter if we were already part of the Wikipedia community. Ever since kids start attending school, they are told, or almost brainwashed into believing, that Wikipedia is simply bad, without being given reasonable, acceptable reasons; however, Wikipedia, as I am learning in this class, is a constructive, knowledgeable community where students can wisely begin enriching their knowledge. I'll admit it -- I didn't know that we could freely edit and contribute our knowledge on Wikipedia until I decided to take this class. I honestly wish I had learned about Wikipedia a lot earlier so that it could be deeply rooted in my online activities. We all know how useful Google is because we've been using it ever since we were first introduced to the world of the Internet. If kids were taught the appropriate use of Wikipedia in school, maybe we wouldn't need to worry so much about the lack of editing, or about types of motivation that may fail to work, because kids would be knowledgeable enough to use it appropriately as well as edit it responsibly. Instead of focusing on discovering psychological motivations, we first need to find ways to educate people about Wikipedia. User20159 (talk) 03:13, 22 September 2015 (UTC)

User20159, I agree -Reagle (talk) 16:56, 22 September 2015 (UTC)

After reading the rest of chapter 2 from Kraut alongside Reagle's article, I found that they both draw upon points that relate to our current state of social media. At first I thought Kraut's chapter was about how we motivate people to participate on the internet, but it turned out to be about the different ways people manipulate and manage the internet and sites to their own personal advantage. Although people can be genuine on the internet, that can be few and far between. People can receive positive feedback on posts or comments to allow self-improvement; there can always be online 'cheerleaders'. These people can also be your friends. Instagram is where I see this happen. Relating it back to Kraut, people may throw others a like as a form of false praise, and for whatever odd reason that will motivate us into possibly posting more photos. For our generation it is rewarding to receive likes on our pictures, but there is also an element of reciprocity that engages us: we will like other people's photos because they liked ours. It has also become a joke for us that you know you have friends on Instagram who are "a guaranteed like," and we count on those likes to push us to the 11-like mark.

When reading about photo.net and the process they went through to create a somewhat early form of Instagram, I kept thinking about what it would be like if Instagram had not just likes but a rating system. And I think that could get very ugly. How would you rate someone's selfie? And would people be manipulated more if they received a 1 as a rating? Would that encourage more cyberbullying? After thinking about those questions, I believe Instagram would be a harmful place if that same rating system were in place. Ltruk22 (talk) 13:10, 22 September 2015 (UTC) QIC#3

Coincidentally, I recently blogged about selfie-shaming! -Reagle (talk) 16:56, 22 September 2015 (UTC)

Professor Reagle's article "Revenge rating and tweak critique at photo.net" is particularly relevant to a recent announcement that Facebook will be adding a dislike button. Zuckerberg and his team of developers envision the dislike button as being used to express sympathy, particularly for photos or posts that have to do with tributes to deceased loved ones for example. However, we know from learning about the professor's six characteristics of evaluation learned from photo.net that quantitative mechanisms beget their manipulation. The use of karma whoring is something that we may see on our Facebook feeds with the release of this new update, similarly to what Reddit users have encountered with mass downvoting. Another principle from the professor's reading, that it is difficult to quantify the qualitative, is likely to cause a lively discussion within the Facebook community as well. Dissatisfaction has already been documented in blogs and Twitter posts positing the potential problems a dislike button will cause. So what can Facebook developers learn from the successes and failures of previous online rating systems? How to best allow users to give feedback was largely debated on the forums of photo.net, motivating developers to consider a variety of different evaluation systems. Perhaps to lessen the harshness of either "liking" or "disliking," Facebook can consider implementing a system of emoji ratings to express sentiments towards photos or posts. For example, a smiley face could be used to express general approval; a crying face could be used to express sympathy; perhaps a fire emoji could allow users to express attraction, etc. Zuckerberg and his peers need to decide where the line is for Facebook between freedom of expression and a culture of perpetuating thoughtless negativity. Wikibicki (talk) 14:01, 22 September 2015 (UTC)

Wikibicki, interesting idea! -Reagle (talk) 16:56, 22 September 2015 (UTC)

Reading the rest of chapter 2 from Kraut has gotten me thinking about the way in which my parents raised me. It wasn't until I was reading about intrinsic and extrinsic motivation in chapter 2 that I realized how closely these concepts resemble the ways my parents used to motivate me to do chores or run errands. For instance, while I was growing up my mother would always tell me that if I was doing a task or chore I didn't like, I should try to make it fun so the time would go by faster; and in deciding what I wanted to choose as a career path, she would always remind me that doing something I enjoy for a living makes it feel less like work. And if I enjoy doing the work or task, it is motivation in itself, which is the definition of intrinsic motivation. Although I didn't know it, and I doubt she did either, she was already teaching me how to motivate myself. Another example, this time of extrinsic motivation, is how my parents would convince me to read. As a kid I wasn't an avid reader until I was offered $5 every time I finished a book (an example of a monetary reward). However, this ended after I read 13 books in a summer. After reading the end of the chapter, it's incredible how much of this content applies not only to the business world or online communities, but to everyday life.

This is my first QIC and I'm not sure if this was a little too personal, if so I shall strive for improvements! BrazilSean (talk)

BrazilSean, I'm not as strict as in my lower level class, but your beginning could be snappier: have a look at Writing Responses -Reagle (talk) 16:56, 22 September 2015 (UTC)

Reading chapter 2 and Professor Reagle's article "Revenge rating and tweak critique at photo.net," I was led to consider the ways different types of motivation can influence how we interact online in a world that requires users to find quantifiable means of evaluating content. According to chapter 2, extrinsic motivation is when the motivating party is outside of the self, while intrinsic motivation is influenced more strongly by the self. I relate this conversation to the way people communicate on social media. No longer is social media simply a means of communicating with friends and relatives; it has also become a way of establishing 'you' online. For some (many), the way they are portrayed online is the product of extrinsic motivation, meaning they are influenced to post on social media because their friends post similar things, or because their friends expect them to post in a certain way. Whether or not an individual is succeeding in portraying themselves online is often determined by how much traffic they get on a post, from the number of likes to the comments and attention a post garners. This can be, and has been, influenced very strongly by the people behind each platform. However, there are others who simply prefer to share their posts exactly as they want to, without any consideration of how they will be perceived. The developers of social media platforms and other online communities keep this in mind when creating the functions of the platform. Twitter, for example, has positioned itself as a medium for people to share the sometimes random thoughts they have through the day. As more individuals began to use Twitter, the higher the stakes became for what content you post and how your audience receives it.

This is interesting to me because I find that people are often portrayed differently online than in person, and this is because we can edit our online selves. We have witnessed the dramatic effect this can have on individuals, but what does it say about the future of expression? - Alexisvictoria93 (talk) 16:09, 22 September 2015 (UTC)

Alexisvictoria93, good question!

The line between objectivity and subjectivity is thin–to say the least–and appears ever thinner when caught up in the massive moving web of the internet. Using the online photography sharing website Photo.net as an example, Prof. Reagle traces and outlines the effects of the introduction and evolution of rating and ranking systems in an online community. Photography being an artistic discipline, one might already ask how (or why) one would place faith in a quantitative measure of a qualitative art form. Well, the given reason for why is that rating and ranking brings the most popular, most discussed, or democratically deemed best photos to the fore for all users to see and talk about. This model brings to mind that of Reddit, which promotes links and comments based on average score, number of ratings, and comments; but also other sites that revolve around reviews, around quantifying the qualitative, namely IMDB, Rotten Tomatoes, and the more palpable Yelp. (In fact, 'mate-rating' is a severe problem for Yelp, as users will have friends and family artificially inflate their ratings; as is 'revenge-rating,' when people feel the need to pan a whole establishment to appease themselves for one insulting employee.) So, while aggregated scores don't equate to a subject's quality, general consensus is highly persuasive and at the very heart of the value of group rating and ranking; "people are seduced by numbers" [2], specifically numbers conjured up by the 'wisdom of the crowd.'

It's hard to think about rating and ranking in online communities and not be reminded of the politically democratic process, where 'mate-rating' and petty slander and/or libel are rampant in the form of corporate contributions and 'anything Donald Trump says,' respectively. The similarities build up in the form of the six final conclusions, mentioning manipulation as a necessary corollary of measurement, loop-holes begetting loop-holes, as well as the funny fact that sex does indeed sell. It seems that the minute something is given a determinate value, it becomes currency in a synthetic but relevant system, currency that can be subjected to price fixing. But is quantitative sorting necessary for subjective things? I'm not sure, but it is an ever-present phenomenon, one that I think arises from a single characteristic: volume. To swim in a sea of photos (or anything else for that matter) that are all equal is to drown. Thus whenever quantity reaches an upper limit or threshold, the content must be sorted according to some quantitative metric–no matter how qualitative said metric might be at the core–in order to maximize usability. Whether it's Instagram, Goodreads.com, or Pinterest, feedback has become a necessary aspect of digital communication and content sharing. Anussbaumer (talk) 17:09, 22 September 2015 (UTC)

  1. ^ a b Cialdini, Robert. "The Science of Persuasion" (PDF). Scientific American.
  2. ^ Reagle, Joseph. "Revenge rating and tweak critique at photo.net". http://reagle.org/joseph/2013/photo/photo-net.html

Sep 25 Fri - NEU Special Collections

NO QICs -Reagle (talk) 17:15, 24 September 2015 (UTC)


Sep 29 Tue - Kohn on motivation

Gratipay, also known as Gittip, is a service for donating money anonymously to people for their work contributions. Despite Chad Whitacre's desire and intention to create a sustainable crowd-funding platform, criticism and harassment have arisen, addressing issues of privilege: "every person on the front page is male, and all the [ones] with photos are white." Setting aside this controversial topic, Gratipay is an innovative, practical tool for inviting and optimizing creative, talented people's work and knowledge contributions. Many find Gratipay obnoxiously disturbing because it directly deals with money, but Gratipay has noticeably developed because of that money, the reciprocal gift for free labor. As I mentioned in my previous QIC, the type of motivation, in my opinion, does not necessarily matter because any type of motivation can eventually lead to better online performance. However, I do believe that external motives such as money can strengthen intrinsic motivation, as has been suggested by motivation crowding theory. How incredible and rewarding is it for people to work on what they love and also get paid? Maybe it is a "popularity contest," but those criticisms, to me, sound like hatred and jealousy.

Although I may already sound like a huge advocate of Gratipay, I don't necessarily agree with Whitacre's belief that "Gittip is designed to reward people who act out of intrinsic motivation, not out of extrinsic motivation." I understand his pure intentions, but as long as money remains "the second gift" on Gratipay, those criticisms will never disappear. I do, however, appreciate Whitacre's recognition of Gratipay's current emphasis on the second gift, as well as his efforts to "bring out the first gift more." I wonder how these efforts will help Gratipay receive the respect and attention it deserves. User20159 (talk) 23:57, 26 September 2015 (UTC)

User20159, be careful about giving the appearance of only doing the easiest/shortest reading. -Reagle (talk) 16:37, 29 September 2015 (UTC)

This article makes a great point about how money can change the effort and motivation in a relationship. It is true that introducing cash competition creates "unresolved resentment" and "plants seeds of discontent," but Gratipay simply embodies the idea that intrinsic motivation, not extrinsic, should be rewarded. When teams or members feel resentment towards others for making more money than they do, is it really Gratipay's fault? Their system is built so that people can feel rewarded for the work they love to contribute to society; therefore, any external emotions that come attached should be considered your problem, not Gratipay's. The leaderboard acts as a symbol of hope: you can be number one if you work hard enough. Members have to know whether or not the website can bring in enough income to sustain their work; otherwise, how would they know if their hard work and dedication is all for nothing? One could argue that if you are truly passionate you will be intrinsically motivated regardless of "how much money you make," but the cold reality is that we do what we love to do in order to sustain our own well-being, financially and mentally. We are intrinsically motivated to sustain our psyche by doing what we are passionate about, while being extrinsically motivated to make ends meet by being rewarded. There is no either/or. Gratipay helps people fulfill the promise that doing what you love can also satisfy your extrinsic needs. As people feel resentment and a lack of motivation seeing others succeed while they fail while putting in the "same amount" of work, it is really up to them to realize that the issue is on their end. No one is to blame for your failures other than yourself. No one is to thank for your success other than yourself. Ahn.cha (talk) 18:31, 28 September 2015 (UTC)

Ahn.cha, be careful about giving the appearance of only doing the easiest/shortest reading. I think you could've brought Kohn to bear on this.

The part of the resentment article that got to me was the line "But my point is that my resentment of David (or Subbable, or Patreon, or …) is my problem. Why let it distract me from building Gittip?". This is something I feel I've really been realizing the past few months/years. It's probably something I've been dealing with my whole life. As someone who loves to think of things in terms of the Seven Deadly Sins (Pride, Greed, Wrath, Envy, Sloth, Gluttony, Lust), I've always thought of myself as someone whose "sin" was envy. "Why can you do this cool thing and I can't?!" or "Why are you succeeding at that thing you're passionate about and I'm not?!". It's all just unnecessary resentment born from envy. I started really realizing this when I actually started making moves to ~follow my dreams~. In 2013, while I was taking a semester off from school, I started taking improv classes at UCB in NYC. And there, when I saw the crazy number of talented people I would be competing with in the comedy world, I would occasionally get pissed off and demotivated (that's a word, right? I could google it but I think it's more fun to reference the fact that I'm not sure in parentheses. You're welcome, everyone reading this. I'm a delight). That just is not healthy! It's such a weird thing. It really is just a whole lot easier to be annoyed at someone else for being awesome than it is to be annoyed at yourself for being lazy.

But why is this such a natural reaction for people? To feel this sort of envy/loathing/anger towards people who are better than us? Is it something natural/animalistic? Or has it been bred in us through the competitive nature of our culture to constantly do more, be more, and make more money?

There are always going to be people doing better work than you. That can't be a reason to stop--that's got to be a reason to keep fighting to better yourself. Keep going, do what you love, and write ONLY using cliches when doing QICs.

-Kev.w.pri (talk) 20:16, 28 September 2015 (UTC)

Kev.w.pri, be careful about giving the appearance of only doing the easiest/shortest reading. I think you could've brought Kohn to bear on this.

Alfie Kohn's piece reminded me of a 2009 TED Talk by Dan Pink titled "The Puzzle of Motivation". The authors have similar stances regarding money as a form of motivation. Kohn states, "Those who had been paid, it turned out, now spent less time on [the puzzle] than those who hadn't been paid. It appeared that working for a reward made people less interested in the task" (Kohn, 1999, p. 70). Similarly, Pink mentions that the group in his particular experiment that was given $20.00 to solve the puzzle took 3 1/2 minutes longer than those who weren't rewarded. He mentions, "if you want people to perform better, you reward them right? Bonuses, commissions...incentivise them. But that's not happening here!" He concluded that instead of motivating people, rewarding them with money was actually dulling their thinking and blocking their creativity. Is it true, then, that rewards have the power to make a task less interesting? Are we blocking our minds from performing well by focusing only on the reward? Andrea guerrerov (talk) 21:28, 28 September 2015 (UTC)

Andrea guerrerov, Good engagement with Kohn -Reagle (talk) 16:51, 29 September 2015 (UTC)

"Rewards offer a 'how' answer to what is really a 'why' question" (Kohn p. 90).

This quote stayed in my mind throughout the reading because I feel that it frames the problem particularly well. We shouldn't think, "How can we get someone to (eat their vegetables, buy our product, contribute to our forum)?" but instead "Why would someone want to do this?" It's difficult to grasp the answer to that question in the case of many menial tasks, or when the individual is a small child who doesn't understand the value of eating vegetables (and that the only reason someone would ever eat spinach is to have dessert). Rewards support a consequentialist viewpoint where the ultimate goal is a positive outcome; they create a system where quantity is valued over quality. Is reading 25 books in one summer 'more educational' than reading a few?

Kohn highlights the tension between quantity, quality, and reward systems. Performance-contingent rewards can be successful, but "the studies showing any advantage to basing a reward on the quality of performance are in the minority" (Kohn p. 86). So if not rewards, what can foster intrinsic motivation?

I don't believe it's an easy answer, and it often depends on the task. I believe education and attempts to answer the "'why' question" are one of the ways teachers, parents, marketers, and product designers can motivate and engage their audiences. I took Business Ethics last fall, in which we talked about different ethical theories. Actions can be shifted from a consequentialist to a deontological point of view, where the moral goodness or positivity is in the performance of the action itself and not the tangible reward. Of course, this solution could be philosophically sound but psychological factors like the human desire for rewards and praise greatly affect its success. - Hayden.L (talk) 00:48, 29 September 2015 (UTC) QIC #3

I like that you connected with consequentialist and deontological! -Reagle (talk) 16:51, 29 September 2015 (UTC)

"Introducing money into a relationship changes how we approach the relationship. I'll work as hard for a candy bar as for a box of chocolates, until you tell me that the candy bar is worth 50¢ and the chocolates $5. Then I'll resent you for wanting me to work so hard for so little, and I'll slack off."

This line is what really stuck with me from the Resentment article, because it reminds me of a frequent argument I had with my parents when I was younger. I used to argue that if I forgot to do my chores for one day, and lost all of my allowance that week as a result, I had no reason to continue doing my chores for that week. If I was going to be paid for 1 week of chores if I did 1 week of chores, but no money at all if I only did 6 days of chores, what was my motivation to do any chores if I forgot on Monday? Needless to say, the system was changed pretty shortly after I'd reasoned that out. Whitacre argues that Gittip doesn't fall under the "Effort for Payment" model because "in a transactional system like "Effort for Payment" describes, the promise of the reward comes first, and the effort comes in response to the promise of the reward." What I find interesting is that Whitacre, and possibly my parents as well, are having an issue with communication.

Whitacre was operating under the assumption that his audience was acting on intrinsic motivation - a personal drive to complete a project, much like my parents were operating under the assumption that the promise of money would make me want to do my chores. However, Whitacre's audience was operating under the assumption that they would be rewarded - extrinsic motivation - to encourage them to work, much like I was operating under the assumption that the allowance was my money already that I just had to complete boring tasks to collect.

It all boils down to this difference: Whitacre, and Dr. and Mrs. Torma, believed that paying upon completion was the correct way to do things. Whitacre's audience, and myself, believed that we should be paid up front for work that we are expected to do. I think the resentment Whitacre's audience felt towards him and his platform (at least along the lines of the "Effort for Payment" model), and the resentment I felt towards my parents when I lost my allowance for the week, could be resolved by clearer communication of what the service/allowance provider expects. - Torma616 (talk) 05:31, 29 September 2015 (UTC)

Relatedly, Whitacre makes a distinction between the prosocial behaviour coming before or after the reward (i.e., the "first gift"). -Reagle (talk) 16:51, 29 September 2015 (UTC)

I found myself fully engrossed in Kohn's chapter of Punished by Rewards, and kept thinking about how these concepts apply to my own personal life. As he discussed the disadvantages of setting up a punishment and rewards system between two individuals, I couldn't help but think of my childhood. My dad often used punishments and rewards to discipline or motivate my sister and me to complete a task. Once he'd announced the punishment or reward, he was extremely disciplined in following through. I remember specific instances so strongly. For example, my dad got tickets for my sister and me to see N'sync when I was in the third grade. He was so excited to tell us, and I know that he purchased these tickets out of a genuine desire to bond with us and give us a gift. But he used the tickets as a motivator for good behavior. I was told that if I missed the bus before the concert, I wouldn't be allowed to go. And then the day came that I missed the bus. It was probably the first time I felt honest, crushing disappointment; watching that bus pull away could very well have been the saddest moment of my short life. I think my dad was truly disappointed as well, because it meant he had to deny me something he didn't want to deny me. In this way, no one won. My sister took a friend, and I resented her for being better at time management than me. I certainly always felt like my sister and I were competing for better rewards and punishments from my dad. As Kohn states, "The central message of all competition, in fact, is that everyone else is a potential obstacle to one's own success" (p. 55). This was certainly the case with my sister and me. We weren't able to have a relationship that wasn't competitive until adulthood, because my dad typically set us against each other (whoever mows the lawn can have a friend over). My dad didn't listen to my pleas to forgive me, or give me any other punishment (PLEASE).
My sister and I certainly felt resentment towards him in these situations, which is sad, because I know he was doing his best to implement rules that would teach us certain lessons. All of these ideas were reinforced in reading what Kohn had to say about rewards and their effect on relationships. Interestingly enough, now that the days of punishment and rewards are over between my dad and I, he is my best friend and we have become infinitely closer. I no longer see him as the scary disciplinary figure, which allows us to have a more honest relationship. When I do things for him, it is out of a genuine desire to do so because I love him. I thought about this when Kohn addressed unexpected rewards; for example, "for helping me out yesterday, here's a banana". This type of reward feels so much more genuine and free of selfish motivation.

I find it fascinating how this all applies to online systems such as Gratipay. A lot of the other students seem to think Gratipay is harmless, but I absolutely can understand how some might criticize it. The introduction of an extrinsic motivator will always limit the intrinsic ones. For example, I might be motivated to contribute to Wikipedia because I care about that community, or about the topic at hand. That is a motivation that benefits the entire community. Additionally, I may water the plants in my family's flower box because I love my family and I want our home to be beautiful. But if Wikipedia started paying the top contributors, or if I started receiving an allowance for watering flowers, this changes things. I would come to expect these rewards, and not complete the task unless the reward was delivered. Personally, I know I would not be motivated to contribute to an online community even once if I knew I wasn't going to contribute enough to get paid but others were. As Kohn states, "Some people do not get the rewards they were hoping to get, and the effect of this is, in practice, indistinguishable from punishment" (p. 53). In my opinion, the most enjoyable online communities are the ones that emphasize the community part.

--Nataliewarther (talk) 14:05, 29 September 2015 (UTC)

Nataliewarther, excellent engagement! BTW: I think in a number of instances it would be "my sister and me" as objects. -Reagle (talk) 16:51, 29 September 2015 (UTC)

I understand the idea of payment as gratitude for a service, be it a skill or product, which is what Gittip or Gratipay is doing. However, I can also understand the issues that come up with complete transparency in the form of a "leaderboard" on the front page of the website. I get that it allows new or returning users to see that people can make money and survive with Gratipay, but I cannot completely buy into Whitacre's idea that it is not focused on monetary payments when that is the main focus here. In fact, I think this could even foster more resentment in the community. While Whitacre warns against resentment by using his own examples in his article, why then does he focus on money even thought he knows he is "playing with fire"? If he wants Gratipay to focus on the "first gift" of free services that require users to take a risk, why does he not then promote the first gifts these people have been giving otherwise? In Kohn's chapters we read that people will not always put their best effort in if they are being rewarded with money. In a community that values hard work and well-developed "products," wouldn't it be best to highlight the most active producers to motivate others to do more high-quality work as well? Smfredd (talk) 15:17, 29 September 2015 (UTC)

That's the question! :-) BTW, there's some typos: "money even thought he knows" -Reagle (talk) 16:51, 29 September 2015 (UTC)

In the resentment article, Whitacre says, "Gittip is designed to reward people who act out of intrinsic motivation, not out of extrinsic motivation." But at what point does the model change, once people become familiar with Gratipay and no longer do something or help with a project as a result of intrinsic motivation? He identifies two gifts of Gratipay: one is the work done, and the other is the compensation received. So this brings me to reciprocity in persuasion. What kind of effect does this type of transaction have on the way that givers and receivers react to and anticipate the work and compensation as gifts, rather than as a way of trying to ensure that both parties are giving and receiving in a comparable manner?

Alexisvictoria93 (talk) 16:06, 29 September 2015 (UTC)

Alexisvictoria93, nice connection with persuasion, but be careful about giving the appearance of only doing the easiest/shortest reading. -Reagle (talk) 16:51, 29 September 2015 (UTC)


Oct 02 Fri - Relational commitment

As I read Reagle's chapter on comment, I found myself only being able to personally relate to one side of this complex story. Reagle addresses multiple instances where comments have negative consequences. As Winer suggested, comments can "interfere with the natural expression of the unedited voice of an individual" (p.2). Blogs were also identified as "a target for those who wish to exploit it via spam and manipulation" (p.7). There is clearly a side of the web that is under attack from trolls and haters and houses bully battles, but this is far from what my personal experience with comment has been.

Lucky for me, I related much more closely to the positive effects of comment on social media platforms. I've experienced comment as informative, social, and helpful. As is stated on page 16, comment can affect status, help decisions, and alter behavior. It certainly adds a whole new element to the type of intimate sharing we see on social media platforms. Introducing input from your peers when you post something can change the lens through which you post. For example, I'm much less inclined to include profanity in my posts ever since I accepted my conservative stepmother's friend request. But giving people the ability to react adds a whole new dynamic. It makes web interactions communal, rather than individualistic. If I post a song on Facebook it is for the benefit of everyone who sees my feed, as well as myself. It is a way for me to keep a "timeline" of what I'm doing, thinking, and feeling, and it allows the people closest to me to watch those progressions happen with me. If they are so inclined, they can comment about what I'm doing, which often makes the interaction even more positive.

In assessing my own personal experience, I thought a lot about how I primarily use web services that force me to use my own name. I think this is an important factor, as Reagle also addresses. I'd like to talk about Facebook in general, as I think it implements the most successful comment structure. Reagle's chapter discussed how platforms often transform from intimate serendipity to filtered sludge. This made me think about how Facebook used to be for young users (you had to have a college or high school email), so it was a safe space to share things deemed inappropriate by people like your parents or your boss. This would be the intimate serendipity. The way I use Facebook today is very different. I am friends with my parents, my aunts, my boss, all of my work colleagues, and many people from my past. I don't carefully filter what I upload, and I don't censor myself much based on who sees my page. I understand that not everyone has a boss or work environment as liberal as mine, but I think many young people are starting to lean in this direction. Personally, I love it. Zuckerberg has addressed this directly; when people wanted different pages for different audiences, he said no. You get one page, you are one person, and everyone is going to see it. It is a relief to not have to put on an act with my parents or with my boss. Everyone knows exactly who I am, where my politics lie, what my social life is like, and who I associate with. It's a transparency that society is not used to, but I strongly believe it can do incredible things. I think this improves relationships by making professional colleagues seem more human and relatable. It is much easier to be nice to John from IT after finding out that he and his wife just had a baby, and that is why he's tired and grumpy today. It is more fun to brainstorm with Kelly after noticing that she went to a concert that I'd wanted to go to. These things build bridges, and are tools at our disposal.
Comments in these worlds are usually interactive and positive, solidifying the micro-communities. Another interesting observation is that these communities on the web are successful because they also exist in real life. I can understand how comments might not be so positive within a micro web community if they were built solely within the confines of the internet.

Perhaps Facebook is filtered sludge. But the networks inside the filtered sludge are still strong and highly functional. Just as there is a supportive community on Twitter for cancer victims, there are micro communities all over social media that can bring us together. So yes, comments may be nasty and alienating, but I'll continue to choose to avoid the platforms where that type of behavior is generated. Here's to hoping I can keep this up forever.

--Nataliewarther (talk) 19:12, 1 October 2015 (UTC)

Nataliewarther, Interesting, you seem to be arguing you get more empathy/connection through FB. BTW: you need a space between the "p." and number. -Reagle (talk) 17:08, 2 October 2015 (UTC)

Reagle's introduction on comment in the age of the Web reminded me of Howard Rheingold and his book Net Smart: How to Thrive Online. As Reagle points out, comments can be made anonymously and asynchronously, thus permitting a giddy sense of freedom, and this freedom, according to Rheingold, is especially attractive to those who "are not ready to blog or that form of publishing." Instead, they "participate by reading, tagging, subscribing, and commenting." Although this "power law of participation" provides many online consumers opportunities to engage in various forms, many have abused that freedom by creating unsupportive, insulting comments.

Reagle suggests that "the easiest way to avoid comments is not to have them (p. 3)," and it is clearly not the best way to wisely deal with issues of commenting. Rheingold, in his book, strongly emphasizes the importance of our intentions to achieve our created goals with "mindfulness." Although I did learn in class that mindful use of digital media means "thinking about what we are doing, cultivating an ongoing inner inquiry into how we want to spend our time," I didn't really realize or absorb the meaning of mindfulness until recently, when I read a beauty blogger's angry post about an anonymous user's hateful comment. The comment said: "You're not as skinny as you think you are. Your fashion style is hideous, but why are you trying to be a fashion blogger? You're a beauty blogger. Post beauty blogs, not fashion blogs!" The blogger, in response, vented about how stupid and immature the comment was by posting a picture of it as a blog post. The blogger's frustration was indeed comforted by numerous supportive comments, but the blog post also backfired on her. Surprisingly, many of her readers expressed disappointment, stating that her blog post about the comment was "not the best way to address the issue," and those comments were neither supportive nor insulting but genuinely critical. The blogger hasn't posted or said anything since.

BTW, User20159 citation is outside quotation marks. -Reagle (talk) 17:08, 2 October 2015 (UTC)

I'm blaming neither the anonymous commenter nor the blogger, but I do believe that both lack the online mindfulness that Rheingold values. Instead of verbally attacking the blogger, the commenter had a chance to consider the purpose of his or her comment and to critically address his or her perspective. Conversely, the blogger could have approached this issue in a less aggressive way, because well-crafted posts are more likely to receive constructive feedback and criticism.

Although I support Reagle's hope to find ways to "develop a robust self-esteem that can handle ubiquitous comment (2015, para. 59)," I strongly dislike the idea of anonymous commenting to begin with. Kraut and Resnick, on the other hand, claim that "anonymity of individual group members fosters community identity and strong group norms (2011, p. 87)." I'm not quite sure if Kraut and Resnick would consider the anonymous commenter a group member, because this commenter did not make any "notable" contribution, but I would consider the commenter a group member because he or she still actively participated by commenting. It's highly possible that the anonymity of commenting allowed the commenter to leave the insulting note. Why do comments have to be anonymous? I don't deny the benefits of being anonymous online, but being identifiable and visible would encourage many online users to thoughtfully consider the purposes of their comments, to contribute constructive feedback, and to continuously show commitment in online communities. I find it quite funny that Winer himself disabled comments on his own blog. If haters will stay hating, why not eliminate anonymity? We'll no longer need to fearfully avoid comment. Instead, we'll have opportunities to hear and understand constructive feedback. User20159 (talk) 00:50, 2 October 2015 (UTC)

Good questions. -Reagle (talk) 17:08, 2 October 2015 (UTC)

Reagle discusses the value of commenting (and of interacting with content in general, such as in the form of likes) and how users create an identity around their interactions. Kraut analyzes the ways in which users develop commitment, which is either bond-based or identity-based.

This week, I have a question. I have worked in several digital strategy departments on different co-ops, and the key objective for our work is to "increase engagement". Inevitably, discussion will turn to how much user engagement is worth. Is a Facebook like worth more than a comment? What about a share? What if the comment is actually just tagging another person's profile; is that a share or a comment? Or is it neither, because that user obviously doesn't understand the features of Facebook?

The conversation repeats for every social media channel: Twitter, Instagram, Tumblr, Pinterest, Snapchat... Brands are constantly fighting for space online. So my question is: do these bonding tactics work for brands, acting as individuals, in online communities? Are people more attached to products that come up on their newsfeed? Why do people interact with brands online? Do they want that brand to be part of their identity as much as the brand wants their business? And how much are those likes really worth?

I know Nataliewarther mentioned the excitement of teenage girls after getting replies from the Starbucks Frappuccino account. My own experience shows that users love to respond to questions posed by companies in social media posts (such as a recent post by a home decor brand: "Which rug do you like better? Like for A, Share for B!"). I feel more emotionally connected to the brands I follow on Snapchat, because their stories show up next to my friends'.

Hopefully we can gather some thoughts on this, because then we could probably write an industry white paper on it and sell it for a lot of money!--- Hayden.L (talk) 04:20, 2 October 2015 (UTC) QIC #4

Hayden.L, Great question, we should figure out if there's any quantification of these various types of comments. -Reagle (talk) 17:08, 2 October 2015 (UTC)

Kraut and Resnick wrote about how self-disclosure, along with interpersonal similarities, fosters closeness and commitment, as do things such as posting photos and recent activity. However bonding this disclosure may be, Kraut and Resnick also wrote that using a pseudonym can increase self-disclosure and thus bonding. While I am aware of the difference between being anonymous and talking under a pseudonym, I came to wonder whether the bonds formed under a pseudonym can be compared to, or called the same as, those made under true identity disclosure. I also wonder if the use of a pseudonym versus a real name can sway how much information a user is willing to disclose. For example, I have seen online blog users willing to give away their real names and birthdays, etc., but not what they do for a living. Vice versa, I have seen people not wanting to give their full names who therefore use pseudonyms, yet they will happily talk about their jobs, etc.

In reading the Reagle introduction, I found myself wondering whether we could even call some of these "online communities" communities in the first place. While 150 members is considered the perfect amount, I cannot recall a community that has that few members, if it is deemed a popular community. But then again, maybe commitment to the community doesn't take popularity into account, but rather the bonds, which would also provide examples for Kraut and Resnick under examples of disclosure. I also find the use of Twitter as an example of community and comment a rather complex idea. Although users all belong to the community of Twitter, does it count as a community if your voice is not being heard? For example, a verified Twitter user with somewhere north of 2 million followers couldn't possibly see all comments, simply due to the number of them and the rate at which they are tweeted. While Reagle quoted that "intimacy doesn't scale," where then do we find the cutoff for actual online intimacy and bonding versus a facade? (QIC #4) Smfredd (talk) 03:55, 2 October 2015 (UTC)

Smfredd, Great questions. -Reagle (talk) 17:08, 2 October 2015 (UTC)

Ren, Kraut, Kiesler, and Resnick talked about the different ways in which community attachment drives community commitment. All communities, including online communities, want members that are very committed to the community they are a part of because of their increased involvement and dedication. They highlighted three different theories, which can be applied to committed involvement in online communities. The three theories are affective commitment, normative commitment, and continuance commitment. I found all of these theories and their individual design claims interesting as I related them back to my own life and experiences. Although I am not a member of an overwhelming number of online communities, I was able to connect my reasons for being in online groups back to the different theories, particularly affective commitment theory.

Affective commitment was understandably discussed much more extensively than the other two theories, since it has many varying factors stemming from identity-based and bond-based affective commitment. Although it can be broken down extensively, it boils down to the fact that members are part of a community because they want to be. For the first time since joining this class, I realized that a lot of the online communities that I am part of do not reside on only one medium. I would say that I am a part of several environmental communities online, but my involvement is not through one medium; rather, it is scattered throughout many. With the activist organization Greenpeace, for example, I follow and retweet them on Twitter, and I also respond to and favorite their posts. On Facebook I have liked their page and share posts that they provide to inform others and influence people to join the group as well. I receive emails and sign petitions from Greenpeace via email and also have gone to DC as a protestor to support Greenpeace in their environmental positioning. This is profound because it has used my deep involvement within other communities to propel my involvement and commitment to their organization and community. This is profound because it has used my deep involvement within other communities to propel my involvement and commitment to their organization and community.

Natawhee7, not sure who Ren is; also there's a repeated sentence here. -Reagle (talk) 17:08, 2 October 2015 (UTC)

I had always thought of Greenpeace as a community, but never as an online community, because when activism started there was no Internet and there were no online communities to fuel change. Interestingly, these communities have now morphed into massive online communities. I am part of these environmental communities because I want to help fuel their mission, and I am also close with other members of the organization. Through the reading I could also credit my involvement with the organization to social identity theory, which caused me to "categorize [myself] as a rightful member of the group" due to sharing social categories with group members (Ren 2011). Before this reading I had never given much thought to why I joined certain environmental groups, or why I stayed in some, participating heavily, and left others, but the detailed breakdown of all the different claims provided insight into my own involvement and motives for being in certain online communities. Natawhee7 (talk) 05:40, 2 October 2015 (UTC)

Nice application of the reading's concepts! -Reagle (talk) 17:08, 2 October 2015 (UTC)

While reading Kraut's chapter on encouraging commitment in online communities, I immediately thought about the dynamics of Tumblr and its members. Initially, many of the identity-based principles of commitment seemed to define the Tumblr community. For example, design claim three states that "recruiting or clustering those who are similar to each other into homogeneous groups fosters identity-based commitment to a community." Many members of Tumblr choose to create their blogs as theme-based. For example, there are blogs dedicated to band fandoms, or celebrity fandoms, or a certain aesthetic, such as a "rosy" blog. Members of these types of blogs are proud of their commitment to their respective fandoms, and use the reblog and favoriting features to perpetuate the popularity of their group (or of what their group stands for). In addition, Design Claim 12, which suggests that making group members anonymous will foster identity-based commitment, is also present in the Tumblsphere. For the most part, Tumblr users choose a URL that resonates with their interests rather than one that identifies them, so their blog is not searchable by their birth name. Tumblr serves as a community fiercely loyal to each other, but essentially unrecognizable to one another. I would imagine this is similar to the WoW gamers mentioned in the chapter, whose true identity is hidden behind a username. However, there are definitely elements of bonds-based commitment in these anonymous communities as well. Design claim 19 states that "providing user profile pages and flexibility in personalizing them increases self-disclosure and interpersonal liking and thus bonds-based commitment." A lot of the Tumblr community uses their blogs as personal diary entries, often expressing feelings about hardships with interpersonal relationships or mental illness. In accordance with this design claim, users reveal an extraordinary amount of personal history.
My question would then be, what is the likelihood that an online user who has formed an intense bond-based commitment would be willing to meet with another user in person? I would be interested to see how participants in a study would feel in terms of whether their commitment to a particular online community increased, decreased, or stayed the same following an in-person meeting. Wikibicki (talk) 15:19, 2 October 2015 (UTC)

Wikibicki, your question reminds me of this thread on The WELL, one of the oldest extant online communities. -Reagle (talk) 17:08, 2 October 2015 (UTC)

Reddit is a great example of a committed online community. Though it might have started through the attractiveness of individual group members (particularly since it was founded in a college environment), I identify the website as an affective-commitment community. "Redditors" experience "a feeling of being part of the community and helping to fulfill its mission" (Kraut, 2011, p. 79) through submitting content and engaging in discussion. Though most of the Design Claims describe Redditors' commitment, I found 5, 6, 7, 8, and 11 the most prominent.

Design Claim 5 mentions that "people are attracted to the community to the extent that they identify with the domain, topics, or causes on which the community is based and find them meaningful" (Kraut, 2011, p. 83). Those who subscribe to Reddit share a common interest (although they might have different opinions/views), and through the page's tagline, "the front page of the internet," they understand that it is a collection of user-generated news links in which everyone's votes organize submissions' positions.

Design Claims 6 and 7 characterize Reddit's system of organizing the entire community into "subreddits" based on areas of interest. "Subgroup identity can be as powerful as whole-community identity in eliciting commitment in its own right and enhance commitment to the whole community" (Zaccaro & Dobbins, 1989; Kraut, 2011, p. 83). When Reddit organized content by areas of interest, it gave members a more intimate experience, letting them search news based on what they wish to know or contribute. Moreover, the creation of IAmA furthered this process; members now had the option of engaging in a question-and-answer forum. The creation of this named group within the larger community should have given "Redditors" the same experience that Ren and colleagues' participants had when given arbitrary names such as Eagles or Gorillas.

Design Claims 8 and 11 connect to IAmA's latest challenge. After Victoria Taylor's termination, the "subreddit" shut down for 24 hours, and Ellen Pao was targeted with horrible messages (including sexual harassment). As Kraut (2011) mentions, "…if a community is in danger of closing because its servers cost too much to run or it is in danger of being overwhelmed by spam messages, everyone will be affected." Redditors' and moderators' commitment was evident throughout the incident; not only did they share their opinions through the website, but they also wrote articles expressing their disappointment with the company and explaining to the community why they decided to shut the page down. The temporary shutdown only proved that they were all affected by the unfortunate termination and preferred to come together for a purpose rather than exit the online community. Although I disagree with the horrible messages Pao received, the community joined together to protest the termination of Ms. Taylor. "Redditors" are committed to the website and did not want to abandon the community they have built, leaving Pao no option but to resign. After all, that is what a committed online community is all about: maintaining a strong bond despite any threats. Andrea guerrerov (talk) 15:25, 2 October 2015 (UTC)

Andrea guerrerov, Nathan Matias is studying the Reddit shutdown and I think you might like his work if you wanted to pursue this further. He should be visiting us later in the semester to discuss gratitude too. -Reagle (talk) 17:08, 2 October 2015 (UTC)

As soon as I started reading this article, I immediately thought of two things: the infamous "Dear Fat People" video (along with its responses), and the news that came out last year about how Facebook was manipulating what we saw, be it happy, sad, angry, or what have you (is that the right use of that phrase?).

The main thing I want to talk about is the Dear Fat People video, but quickly I'll just say that the line in Reagle's chapter "How does the nonstop stream of our own and others' photographs and status updates affect self-esteem and well-being?" is literally what Facebook was trying to figure out, BUT they didn't tell any of us and were just toying with our emotions. Straight up.

What I want to talk about in regards to Dear Fat People (which I'm going to assume everyone knows about), is the response that kick-ass YouTuber [Grace Helbig] gave to the video. She wasn't mean or nasty in response to this horrid video full of fat-shaming and general rudeness--she voiced her opinion and brought up a great point. The girl responsible for the video claims that it was supposed to be satire or comedy--that she didn't actually mean any of the awful things that she said literally. Helbig points out, though, that the comments section of the video was disabled shortly after the video went live. Helbig argues that, if whats-her-name really wanted to be satirical and make something that people would talk about and bring light to a genuine health issue in America, then she would have left the comments open. The comments can be such an amazingly useful part of a community to discuss the good, the bad, and the ugly. You just have to sift through all the "ur gay"s and the "COME TO BRAZIL!"s. A comments section is a dangerous place, as I'm sure we've all come to know, but they can foster a community of discussion and growth and learning, imho.

Going off on another tangent, I think comments sections are so important for creators to interact with their fans, and I also fully believe that Dunbar's number is hugely important in comments sections. I remember when I first found the blog "[Hyperbole and a Half]" by Allie Brosh (now a hilarious book). It was back before it got wildly popular, and I could talk directly with Allie and other people who loved her hilarious cartoon-articles. I'm even friends with her on Facebook! That's how small the community was (and how awesome Allie was (is?)). I'm not sure how this ties in with my previous thought, but I'll wrap it up nicely in the next few sentences. It's been a long week, y'all.

Comments are an incredibly vital part to all online communities, whether we like it or not. They allow us to learn and communicate and make friends. They build us up and tear us down, but the internet would be such a different place without them. They're simultaneously the worst and kind of the best, but they're family. And they're here to stay (maybe).

-Kev.w.pri (talk) 15:28, 2 October 2015 (UTC)

I'm a big fan of boogie and liked his response. -Reagle (talk) 17:08, 2 October 2015 (UTC)

Chapter 3 of Kraut, focusing on relational commitment, opened my eyes to the different types of commitment that arise in all kinds of communities, not only online communities but any physical group of people, and how these concepts can be applied both ways. One concept I found interesting was how teams can be formed. Kraut mentions how people become part of a group because of their own interests, not necessarily based on individual friendships. This comment brought me to think about the different teams I have been part of, especially coming to college and joining a team where I didn't know anyone. I came to Northeastern because of a common interest and immediately became part of a sports community where I eventually formed relationships with teammates. I had never really analyzed this in the context of online communities, though. People continue to be part of communities because they feel they have a name attached to their group and they have established relationships with people through the process.

When thinking about this in terms of subgroups and how people comment on things or give feedback, this is where the Reagle chapter integrated with Kraut. Kraut mentions that subgroups can sometimes cause conflict within a larger community, and that is where it relates to commenting. People who leave negative comments on articles create a negative area for others to go to. That may deter them, or it may have a strong enough impact that people decide to punch back with more comments, which eventually get taken down. Reagle talks about Slate and the implementation of Facebook's comment box (p. 8). The idea of having a comment section that links to your Facebook page would immediately reduce the unwanted comments on an article or page. This concept makes more sense to me than anything else, and why wouldn't all sites want to use this to prevent the negative attitudes people possess when they can be a ghost? Ltruk22 (talk) 15:36, 2 October 2015 (UTC)

Ltruk22, I wonder if you are asking: (1) why don't all sites require an identifiable login or (2) why don't they all use the Facebook login? Both are interesting questions. -Reagle (talk) 17:08, 2 October 2015 (UTC)

Oct 06 Tue - Ethics (interlude)

In her article, "Teaching Students to Study Online Communities Ethically," Amy Bruckman suggests and discusses appropriate ways in which educators can teach students how to "study online communities in an ethical fashion" (p. 82), including subject recruitment, choice of site, disguising, and self-presentation. As much as I think these approaches are rudimentary yet extremely crucial for students to adopt as savvy or amateur online community members, it is also unfortunate to realize that this is only a graduate class.

This past summer I took an Interpersonal Communication class where I conducted numerous face-to-face and phone interviews, gathering qualitative data. Dr. Speed Wiley, at the beginning of the semester, required us to get the NIH (National Institutes of Health) certificate. By completing the NIH web-based training course "Protecting Human Research Participants," not only did I receive the official certificate for human research, but I also learned how to ethically conduct human subjects research prior to Dr. Speed Wiley's lectures. The training course was simple, straightforward, and informative; it provided reading materials and videos, and tested our knowledge through quizzes.

The NIH certificate may just be one of the many requirements of the Interpersonal Communication course, but I thought it was such a brilliant plan for Dr. Speed Wiley to have us take the training course at the beginning of the semester. I wish Northeastern, in fact all universities, provided actual classes on ethical human subjects research, whether online or offline. I'm aware that not all majors involve human subjects research or strive to produce committed members of online communities, but ethics is something that everyone who is online encounters in everyday life, whether through conducting human research, blogging, or just commenting.

Korean schools require students to take online or computer-related classes starting in preschool. I still clearly remember my third grade teacher emphasizing "how to become a respectful, wise online user." I've disliked YouTube videos, but I've never done anything "unethical" by the norms of either Korean or American contexts. Maybe I was just born to be an ethical person, but I believe it's highly possible that it's because I've been well-educated on such topics as online ethics.

Korean schools don't provide students the flexible course options that American schools freely do, but Korean schools don't fail to educate students on how to become ethical, wise online community members. "Norms of what is ethical" (p. 84), of course, vary from nation to nation, but I've always thought that American schools are behind on cultivating ethical, respectful online users. So yes, I do believe that academics should be held to a higher standard, and I'm sure the U.S. Department of Education values online ethics as much as other countries like Korea do. Can they not afford online classes, or do they choose not to offer them? Or do they just believe it's something that can be learned naturally? User20159 (talk) 21:43, 3 October 2015 (UTC)


After reading the articles about Facebook experimenting on humans, I find these invasions disturbing. It is one thing to experiment to increase web hits, but to give out personal information so that research companies could see how newsfeeds influence emotion is very disturbing. In this regard, I do not believe that Facebook was ethical. They should not be allowed to invade such an intimate level of the mind, especially when the users were left under the pretense that their information was safe with a community they are heavily invested in. The mass public is not realistically going to stop using Facebook because of these experiments, so companies like these feel they have the ability to do what they want. Even the fact that a complaint was issued by EPIC shows that there was at least cause for concern. We should not have our personal information shared with companies, or be subjected to such invasive experiments. If Facebook were to judge, say, how a bigger like button would influence the number of times we like, so be it. That's for the website to function better and be more rewarding to the community. But something like sharing our personal information for profit and research is a clear violation of our moral rights. It makes me wonder: is there any way we can actually understand the terms and conditions without having to read them in full? Critics attack the efficiency of privacy policies all the time, and a study shows that only 3% of users actually read the terms carefully.[1] How can we better understand our legal rights?

Ahn.cha, interesting point about Terms of Service; be careful about giving the appearance of only doing the easiest/shortest reading -Reagle (talk) 19:47, 5 October 2015 (UTC)

Christian Rudder says, "if you use the Internet, you're the subject of hundreds of experiments at any given time, on every site. That's how websites work." Based on what the article said, I am okay with such experiments being held. First of all, OkCupid is nowhere near the expansive mammoth of an online community that Facebook is. People are on Facebook for numerous reasons: networking, friends, family, music, etc. Not to mention it's free. OkCupid is a paid-for, dating-only website. That is far smaller and more specific than Facebook could ever be. The website can tweak its code all it wants because, in the end, its goal is always the same for every person. They might want to see how many people date when the match is not right but they are told it is, but in the end no amount of influence can make them send messages, arrange meetings, and fall in love. That is a conscious decision. OkCupid simply uses said methods to initiate the interactions, while recording data that helps them continue their business.

Of course, we are all subject to some sort of experimentation online, but to what degree companies are willing to skin us is the real concern. How do we know if we can trust ourselves when we are being influenced? Dating is one thing; if you don't like a person, you don't like them. The ability to manipulate our personal psyche, however, is a legitimate concern. Ahn.cha (talk) 00:52, 5 October 2015 (UTC)


If there's one thing I've decided from when I sat down at my computer to read this week's readings until now, it's that I'm conflicted. My strategy with most controversial issues is to never state an opinion until I feel like I have a firm grasp on what I believe, but I'll do my best to voice the conversation currently taking place in my head.

I'll admit that when I first began to read Bruckman's article on educating students to be ethical online users, I thought the class seemed a little overkill. It seemed like a lot of hoops to jump through just to gather mundane information. But as I continued to read, it became clear that the information gathered is not always mundane, and that the regulations set by the IRB are there for good reason. As Bruckman addresses, not all of online testing is as casual as "what are the emotions of our users" or "how hot is this OK Cupid user". There is a lot more sensitive information being gathered, and that's where the line gets blurry for me.

It's interesting to me that the OK Cupid article "We Experiment on Human Beings" was prompted by the Facebook controversy. To me, these two situations feel different. On a personal level, I think I would be okay with Facebook tracking how many times I used words like "Happy" or "Sad", but I don't think I would be okay with a dating site actually misinforming me about my compatibility with another user. If I have come willingly to your site based on your claim that you will help me accomplish a goal, you have told me that it will work and that you are invested in me achieving said goal; if you then literally misconstrue information in a way that directly interferes with the accomplishment of that goal, I think that makes you a lying jerk. That 100% interferes with my trust and commitment levels. But, I also must be sensitive to the fact that my argument isn't quantifiable. You cannot make regulations that suit every individual's personal opinion. We all end up on different points on the ethical scale, which is why I'm sympathetic to the idea that this is a very tricky subject.

That being said, I do think academics should be held to a higher standard. The Georgia Institute of Technology honors class may take it to the extreme, but I believe exposing students to both ends of the extreme is the best way to send them out into the world to make the most intelligent and ethical decisions. As my Dad always says, "You can never be overdressed or overeducated". I believe this to be especially true in these types of situations, where sensitivity and ethics are the largest players in the conversation.

--Nataliewarther (talk) 13:40, 5 October 2015 (UTC)

Nataliewarther, I think that your point about IRB regulations being set for a good reason is valid, but I can't help feeling that some of the regulations are actually preventing researchers from studying what could be extremely important topics in the online community. Bruckman talks about the exemption from documentation of consent (which is also noted as being different from exemption from obtaining consent). The point is made that requesting that users "click to accept" for consent is not appropriate for children who are not able to give consent themselves. One of the most sensitive topics concerning online communities is the membership of its younger members, particularly in the world of cyber-bullying. Restrictions put in place by the IRB are meant to protect children from unethical testing, but if so many barriers are put in place, will the needs of our youngest online users truly be addressed? Perhaps the greater problem then is that while children are not able to "click to consent," they are allowed to "click to participate." In the online universe, particularly one with such anonymity, it is important to understand that while we don't like to think of children being exposed to or involved in the negative spaces of online communities, they certainly need to be accounted for.
As for your comments on the Ok Cupid article, I completely agree with you! I think that the experiment with "Love is Blind Day" was acceptable because the community was aware for that day that there would be no pictures, and could choose to opt out. This is completely different than the power of suggestion experiment, which knowingly gave users a false match percentage in order to test the site's matching algorithms. We tend not to put too much stock into Tinder and similar "match" apps because they don't boast the same formulas for matching that sites like Ok Cupid do. People are genuinely looking for authentic partnership on these sites, and taking advantage of the trust their users have that they're getting an optimized experience is extremely unethical. Wikibicki (talk) 15:41, 6 October 2015 (UTC)
Wikibicki, good point, I have some examples actually of why I think IRB can be overkill and unethical even. -Reagle (talk) 16:44, 6 October 2015 (UTC)

I always find myself debating whether or not I agree with online sites' experiments. While "Teaching Students to Study Online Communities Ethically" gave me peace of mind and complete trust in scholars who "conduct research online in an ethical fashion" (Bruckman, 2006, p. 83), both "We Experiment on Human Beings" and "Facebook: User influence experiments" challenged me to rethink my opinions. "We Experiment on Human Beings" specifically shattered all my hopes about ethics, precisely when the author emphasized that if I'm an Internet user, then I am "the subject of hundreds of experiments, at any given time, on every site" (2014).

A couple of years ago, when Target used historical buying data to deduce that one of the company's customers was pregnant (and sent coupons for baby clothes and cribs to her house), I was shocked at how much of our personal information is out there. Though the Target example did not happen in an online community, Facebook and OkCupid are two sites that have even more control over our information. We have handed these companies everything there is to know about ourselves on a silver platter.

Andrea guerrerov, I mention this in my CDA class, but there's now some question if it might be apocryphal.

When we all decided to open a Facebook account, many of us did not know what we were trading in return for a free networking site. By agreeing to their terms and conditions, we gave them complete access to our information—including photographs and everything we decide to post. I seem to be okay with Facebook performing studies such as "Emotional Contagion Through Social Networks" because we "agreed" to give them all the resources to do so. As ignorant as this may sound (I am not as active on the website as many of my peers), I have always had in mind that when Facebook conducts online experiments, they do so with a purpose in mind. Nonetheless, this doesn't mean it is not controversial. Would I prefer it if Facebook created some sort of pop-up every time they conducted a study? Of course! But would the results be the same? That, I am not so sure.

On the other hand, I find OkCupid's "We Experiment on Human Beings" completely unethical. I was freaked out from the moment I read the title. Perhaps it was the sincere acceptance of their acts, or the wording of the sentence, but from the beginning, I had a negative opinion about the article. The core of dating sites lies in the trust a person has in the website's accuracy in matching you with the perfect significant other. Misinforming users about their compatibility with others is a complete lie; it is unprofessional and breaks the trust and commitment users have given the site for so long. I am not a member of the site, but I am sure no one "accepted" being misinformed about their compatibility...or did they?

"Teaching Students to Study Online Communities Ethically" is a perfect example supporting the idea that academics should indeed be held to a higher standard. These students had the opportunity to immerse themselves in the complexities of conducting research online—to work with the IRB, learn from regulations, participate in interviews, and most importantly, to strive to have their papers published and become one more researcher whose work was performed in an ethical manner. Andrea guerrerov (talk) 22:19, 5 October 2015 (UTC)


Like Natalie and Andrea, I'm conflicted too. As is made apparent in Bruckman's article, there are what seems like a million regulations and rules and protocols to follow when conducting research. Being upfront about your research intentions is stressed multiple times throughout the article--"We do not want to deceive site members about who we are or what we are doing. We tell students that the fact that they are doing a study should appear prominently in their user profile... We ask students to be as open as possible about the project they are doing" (Bruckman 89).

I believe this transparency is SO important when doing research, especially when you are actually reaching out to people and interviewing them. We wander into the grey area, though, with instances such as the Facebook kerfuffle and OkCupid testing.

-Regarding Facebook- On the one hand, I understand that the results that Facebook would have gotten had they informed people of their study would have been completely skewed. If a person knows what they are saying is being monitored and, in a way, judged, then they are going to be much more selective and conscious of what they are posting. This seems like the kind of study where, in the future, we would say "Yeah, it wasn't ethical, but we learned a lot from it!". Think of how much doctors, scientists, and the like have learned from doing unethical, dangerous, and downright f***ed up sh*t. Yes, it is AWFUL that they thought it okay to do that, but we as a population know so much more about how certain things work now because of them.

But on the other hand, NO! These are people! These are real, living people with real, potentially dangerous emotions. Imagine the number of people battling depression who happened to be on Facebook during the study. That could have been genuinely dangerous and harmful to their mental (and, in turn, physical) health. It's wrong, and I think Facebook handled it all very poorly. It was manipulative, shady, and inconsiderate.

-Regarding OkCupid- This is wild. I've never really taken OkCupid very seriously, but in reality it IS a dating site and people go there to find ~~tru luv~~. People could have been missing some real chances for romance during their experiments! I feel less strongly about this one for some reason, but I should feel stronger. I think it's wrong to mess with peoples' lives in that way. I know I have ignored people on that godforsaken site if we had too low of a match or if they had no photo or something like that (I think I was on the site during that "Blind Date Day" and it was so dumb. But also it just proved how shallow I am, I guess). I don't know...

These studies are a hot mess, and I think we're going to have some good discussions tomorrow and I couldn't be more excited.

-Kev.w.pri (talk) 23:50, 5 October 2015 (UTC)


I understand many of the stipulations enforced in Bruckman's class, but I would like to play devil's advocate for a moment. How does transparency affect the value of the data?

In the OkCupid study, the results are most likely very accurate because people are unaware of the tests. In some online communities, transparency may not change the behaviors of the participants. However, there are many that would be wary of outsiders or resist outsiders altogether. Could a hacker group like Anonymous open itself to research? Even if the researcher followed protocol to gain access to the group, the group members may still be wary of the newcomer.

I find this question particularly interesting in terms of Bruckman's decision on page 87; students are not allowed to use the "ephemeral interactions" that they may encounter in their participant-research. Obviously, you can't ask every person you encounter online to agree to your study. But I think these one-off, fleeting interactions are part of the appeal of online communities, and that most research would not be complete by excluding them. - Hayden.L (talk) 02:54, 6 October 2015 (UTC) QIC #5


It was interesting to learn from Bruckman's article how many hoops one needs to jump through when conducting research in online communities. Because you sometimes don't know who or what you are dealing with in online communities, children become a huge factor and worry. Children can't give consent, and Bruckman makes a good point that you can't put a button on a website's homepage that asks for a parent's permission (Bruckman p. 83). Who's to say a child can't get into a site they shouldn't be on? Even with child locks on television stations and websites, there can still be ways they figure that out.

Bruckman's article is extremely thorough in the steps she and her students take in order to be ethical researchers. After reviewing the Facebook article and Rudder's OkCupid article, it almost seems that what Bruckman is doing is unnecessary. These other sites are doing this A/B type testing and collecting data from it without any warning to their users. In the case of OkCupid, I find it strange and odd that they are experimenting this way with people's love lives. It gives users false hope when they aren't making connections and the site is telling them they are compatible, but it is a test. On the other hand, some people who use the website might be okay with these tests because they are trying to find love and want whatever the site deems necessary to make that happen. Then again, it can be argued that you signed up for the website and are allowing the people who run it to test things, because the website itself is a data collector. This is especially true of Facebook, a site that is always evolving and trying to get people to participate in the community. It doesn't seem ethical to allow these sites to conduct their research this way; they should have to go through a process similar to Bruckman's so there is more formality in the research. Ltruk22 (talk) 13:37, 6 October 2015 (UTC)


I agree with the near majority of the class who are posting and saying that they are conflicted, because so am I. I understand the desire for people to test and research, because we are all always on a constant quest for knowledge and have a desire to discover and find out more, but at what cost? Everybody wants to find out more, but then no one wants to be the test subject; or people don't mind being the test subject, but want to know that they are being tested, which would possibly alter test results. I remember when it came out that Facebook had been conducting tests on its users, and at first I was angry, but when I found out how minor the tests were it really didn't bother me. The tests that OkCupid and Facebook were running were not all that intrusive and invasive, but if I had to pick which was worse I would say that what OkCupid did was slightly less ethical because they were manipulating match results and distributing them to clients.

What I am really curious about is why OkCupid chose to stand up for Facebook and make themselves the new target of media and public backlash. Facebook was receiving an overwhelming amount of disapproval from the public, and then OkCupid steps in and says, stop yelling at Facebook, we have been testing on all of our users too? Maybe there is something here that I missed, but from what I read this is how it appears. I can't deny that I find the results from their study interesting, so maybe these sites were hoping that people wouldn't care about being tested on if the results were intriguing and furthered research about what we know about the human brain and the messages we receive.

The readings made me wonder how many other sites are conducting tests on me as I use my computer. I know that companies are continuously conducting A/B tests, but that is for their own marketing purposes, and their only real goal there is to sell more and provide a better customer experience, not to learn about the human brain, motives, and activities. It also makes me wonder how many sites are conducting research experiments but don't want to come out and announce it due to all of the negative feedback Facebook constantly receives. We live in an interesting world where we all want to learn and know more, especially about ourselves, but always want someone else to be the test subject. Natawhee7 (talk) 14:02, 6 October 2015 (UTC)

Natawhee7, I think Rudder, of OkCupid felt strongly about this and he loves attention anyway! -Reagle (talk) 17:03, 6 October 2015 (UTC)

I am similarly conflicted when it comes to passing judgement on Facebook and OKCupid. On the one hand, I understand why websites, specifically ones that are very much user driven like Facebook and OKCupid, would want to use user information to research how they can better serve their customer base, but on the other hand, I understand why users would like to know when their information is being used for research. I think that part of it is the user's responsibility - without knowing whether or not my data actually was being used for research, I could safely assume that my data on Facebook or OKCupid is more prone to being used for research than my data elsewhere. However, part of being ethical also lands on the service provider - even though people usually don't read the Terms of Service/Data Use Policy, the service provider must still provide all necessary information there. When EPIC filed its complaint against Facebook, they learned that the Data Use Policy did not suggest that user data would be used for research. So while I don't necessarily think Facebook/OKCupid were "ethical," I don't think they acted outside of the realm of what a user could reasonably expect (aside from the failure to self-report).

I do think academics should be held to a higher standard, and I think Bruckman's article clearly shows that academics, in most cases, are. Bruckman describes in detail how many things the students who are conducting research are required to do so that their research is made ethically legitimate. She explains how taking shortcuts sometimes has negative side effects - in one example, taking a shortcut on getting a consent form/release meant that the research could not be officially published. I think that academic works - which researchers tend to hold in higher regard than research based on the random user data of a site's users - should absolutely be held to a higher standard, and I think Bruckman shows how we're already doing a good job of that. Torma616 (talk) 15:09, 6 October 2015 (UTC)


Prior to reading Bruckman's article, I had never quite realized all the different aspects one has to consider in order for an online study to be conducted ethically (although it does make me wonder how often companies stick to all these components). In my opinion, the biggest components of ethical research are not deceiving the subjects involved in your study and making sure your subjects have consented to being part of your research. I bring up these two points in particular because they are the two main points that OkCupid failed to follow. Not only were they in an ethically blurry area by not informing their users they would be conducting research, but they were playing with people's love lives, which is the entire reason participants signed up for the site.

That being said, I found the "Love is Blind" experiment to be extremely interesting. In today's society, people are most interested in what they can have and see right away; a prime example of this is dating apps such as Tinder. With Tinder, when a new person's profile pops up, the first thing you see is their main picture and age, thus suggesting that physical appearance is most important. However, in OkCupid's (unethical) study they found that with profile pictures removed, "people responded to first messages 44% more often and conversations went deeper," which helps promote someone's personality rather than their appearance, which, for an online dating site, should be the focus. This way, when someone is attracted to another person, they are attracted to their personality, which, unlike appearance, is less likely to change. After reading this, what has me curious is how the results of OkCupid's study may have varied if the participants had been aware of their participation in the experiment. BrazilSean (talk) 15:01, 6 October 2015 (UTC)

BrazilSean, In some experiments, you don't want to let people know what you are testing -- that would bias the results -- but it's usually for innocuous things and then you debrief them at the end. Even so, what about placebos in health trials? -Reagle (talk) 17:03, 6 October 2015 (UTC)

I feel like I may be the only person who feels this way, but I approach online communities, especially those strongly connected to identity (i.e. Facebook, Twitter, OkCupid), with the knowledge that most of my information is accessible by uninvited parties because of the structure of the internet. Big Brother has been a common topic in these sorts of conversations, and whereas the idea of having virtually no privacy on the internet sucks, it has become something that I cater to. The Bruckman article elaborated on all the ways that researchers can use online communities for research in an ethical way, and the regulations are very extensive. I agree with User:Kev.w.pri that researchers should be as transparent as possible while utilizing online communities in order to avoid deceiving subjects and members of the group which the researcher is studying.

With the Facebook example, I find it silly that people would willingly share personal information on a public platform and then be enraged that the information was used for a purpose other than their original intent, but that's more a factor of social responsibility than research. Thus I would not find the Emotional Contagion Through Social Networks example to be as problematic if they hadn't manipulated the data by showing different users different interfaces and exposing them to different emotional cues. Additionally, I question the accuracy of the researchers' work because of the methods and reporting discussed on the Facebook wiki page. I also find it difficult to justify Sheryl Sandberg's claim that it was research for product information, given the nature of the research and the methods used.

For OKCupid, I would be more comfortable knowing that I, or my data, was used for a study in which the company was testing the product and the way users interacted with the product and other members of the community, i.e., matches. I also found that data to be useful and quite interesting because it actually told a story of how people were using the site and what changes were most effective. Alexisvictoria93 (talk) 16:02, 6 October 2015 (UTC)

Alexisvictoria93, One of the wrinkles about the FB study was that the data collection was FB but the analysis by academics under IRB. -Reagle (talk) 17:03, 6 October 2015 (UTC)

Oct 09 Fri - Needs-based and lock-in

According to Kraut's chapter Encouraging Commitment in Online Communities, a normative community is one that relies on its users feeling that they have an obligation to the community (102). These communities have an explicit purpose that drives users to sign up and remain committed. An example would be an advocacy community. A needs-based community is one that depends on the net benefits that people experience in that community. Users feel like they are gaining something from the group: information, social support, companionship, or reputation (105). For example, an online gaming community relies on users feeling like they are gaining a fun experience through playing the game, as well as making social connections with other users.

After reading the articles on Facebook's microformats, I must say I don't have a strong opinion. This is probably because I don't fully understand the consequence or reward of being able to download my Facebook data. Perez mentioned that this is an important step for Facebook in loosening their tight control on our data. But do we want them to loosen their tight control? Does this mean they can use my data without my permission? It seems that this feature has no benefit to mainstream users, so I'm curious what purpose it serves. I'm looking forward to our class discussion and hope it will give me some clarity as to whether this is a positive or a negative adjustment to our digital lives.

--Nataliewarther (talk) 13:23, 8 October 2015 (UTC)


The fact that stood out to me from this chapter of Kraut (I had sauerkraut for lunch and dinner yesterday!) is in regards to community-specific investments, and how "people like groups more if they have to endure a severe initiation process to join them than if they undergo a milder initiation" (p. 111). Now, of course I initially thought of the idiocy that is hazing in Greek life. I could go on a rant about how I think a lot of Greek life is toxic for probably, like, 6 weeks, but I won't because that's not what this class nor this QIC is about. Instead, I'm going to talk about the MMORPG City of Heroes.

City of Heroes was a HUGE part of my middle school/early high school life. Like... probably unhealthily so. But that's neither here nor there. The reason I couldn't stop playing this game was because of exactly what was discussed in the Kraut reading and that I mentioned above, along with a multitude of other reasons discussed previously in class/readings. First of all, I would spend HOURS just customizing my character when I would initially create one. There were so many choices--superpower type, secondary power type, color scheme, cape or no cape, facial features, hair, physique, EVERYTHING. That alone would keep me playing for ages. But then, as the game goes on and you grow your character to a high level and you have all these new powers that beginners could only dream of having, you might start to get bored. But you bet your patootie I kept playing, because how could I stop after all that time and commitment! Thankfully, the game would update and there would be more challenges and you would meet new people to form teams with and try different missions and things would go on their merry way, until it was 4 in the morning and your dad would come in and yell at you to go to bed. Then you'd start the whole thing over the next day.

Anyway, this is some unhealthy behavior. But it's become so normalized in our day-to-day lives. It's low-key an abuser-abusee relationship. Something takes up all your time, hurts your relationships, physically hurts you (have you experienced post binge-playing eye and headaches? They're rough), but we keep going back to it. Greek life, video games, junk food, so much! Why are we as humans so susceptible to this? We gotta better ourselves.

Signing off, Kev.w.pri (talk) 20:31, 8 October 2015 (UTC)

Kev.w.pri, BTW: We'll be discussing this more on Nov 03 Tue. -Reagle (talk) 16:54, 9 October 2015 (UTC)

At the end of chapter 3, Kraut et al. talk about user commitment in online communities, differentiating between normative and needs-based commitment. The former involves perceived obligations that users have towards the community, such as for a cause, other users, or internally generated reciprocity, i.e. 'paying it forward.' The latter involves net benefits acquired from participating in an online community, such as the sharing of information or companionship. Throughout the section, I couldn't help but think that nearly all of the characteristics they outline are applicable to communities found IRL. For instance, they explain that a common normative reason that one commits to a community is for a cause, be it breast cancer, charity, or some petition. I think the majority of civil movements begin this way, but whether or not they maintain momentum depends on the level of perceived obligations (social proofing) and the direct net benefits received through continued commitment, as Kraut suggests. #OccupyWallstreet seems a good example of a community that failed because of its lack of internal mutual necessity: its goal was vague, access was too easy, it required little skill or active contribution, and in the end it accomplished little to nothing. As for needs-based communal commitment, I see this as the birth of the tribe, and subsequently societies of civilization; formation just happens quicker in cyberspace. Kraut also mentions that the difficulty or learning curve needed to enter a community is directly proportional to the likelihood that users stay; essentially, people like challenges. People will perform any amount of mental gymnastics necessary in order to convince themselves that the sunk cost of time invested in, say, reaching level 42 in World of Warcraft, was not a complete waste.
It reminds me of those who insist that their favorite book is some megalith like Atlas Shrugged, as if that titular badge is a legitimate justification for throwing 80 hours of life away. If something's hard to get into, it must be worth it, right? I've noticed this psychological trend in people who go through 'hell-week' to join a frat, or those who wait 45 minutes in line for a movie, club, restaurant, etc., or even (not to denigrate any institution) people who join the military: hurdling umpteen obstacles must prove that the finish line is praiseworthy.

Kraut then discusses how communities can ensure continued commitment, noting that scarcity directly determines the likelihood of returning. If there is a huge supply of model train enthusiast websites, there will be a very dispersed population of specific community members. Yet, if there is only one of a certain communal service, such as World of Warcraft, demand will be high for that community. Just like in economics, supply determines demand and vice versa. Also, the content in a community must be non-transferable, kind of like how you can only use Dave & Buster's credit at one of their locations, or any other gift card for that matter. Similarly, as mentioned in the other three readings, Facebook has taken several measures to ensure that users can't export their info to competitors. This exploits sunk user investment and severs alternatives. It makes me wonder what will happen when, in the future, the majority of Facebook profiles are those of dead people. Who will own the data? Will family members gain the right? Soon, it'll be one massive digital graveyard.

Oh, Kraut also talks about how large communities lose committed users due to excessive volume drowning content or user visibility. He proposes that to combat this, communities should divide and subdivide based on given criteria. Reddit does a great job of this with its many subreddits. Also, nations usually do this well (and unwittingly) by creating fragmented states, cities, neighborhoods, families, etc. while maintaining national solidarity, or patriotism. If only all of humanity could do this!

Anussbaumer (talk) 01:18, 9 October 2015 (UTC)

Anussbaumer, Let's talk about sunk cost in class today. -Reagle (talk) 16:54, 9 October 2015 (UTC)

In the latter part of chapter 3 Kraut focuses on normative commitment and needs-based commitment, but if I was going to relate one to my life and involvement I would say that the online communities I am a part of are due to needs-based commitment. Bringing the additional three readings into Kraut's idea of needs-based commitment is interesting because it appears that the people who run Facebook are aware that this is the dynamic or audience platform that they are fulfilling. Facebook seems to be VERY wary about allowing ANY information out of their database because they know once they let it out they can't get it back. It was interesting to me that Google sees no harm in allowing users to export data and use it elsewhere, while Facebook could not be more opposed. I was thinking this is because Google feels that they are foundational and less needs-based than Facebook is. In my life there have been times where for months at a time I rarely went on and used Facebook because other social platforms spring up and dominate the online world, and I wonder if this is why Facebook refuses to let any data out: because it knows that it needs to lock us in to their community. Google, on the other hand, is something that I could not live without. Google is incorporated into so many aspects of my life because they have so many diverse platforms.

Natawhee7, interesting theory! -Reagle (talk) 16:54, 9 October 2015 (UTC)

I noticed that the articles we read about Facebook were from a few years ago, so I am curious if they still maintain their same beliefs on exportation of data (I could Google the answer but I am feeling a bit lazy). I have not noticed any new ways to export data from Facebook, but I have noticed new ways in which they are trying to lock in users by adding new features that keep users on the site, such as pulling up old photos and memories, like timehop does, and allowing you to post a temporary profile pic which gives off more of an Instagram vibe. It seems to me that Facebook is attempting to keep all of the users' needs within the walls of Facebook in order to lock them in because it knows that it is a needs-based community.

Natawhee7 (talk) 03:40, 9 October 2015 (UTC)


Kraut outlines some design claims that show how communities "lock in" their users, and the Facebook articles highlight how the network has negotiated these concepts over the past few years.

One of Kraut's claims really got me thinking: "showing information about other communities in the same ecological niche reduces needs-based commitment" (p. 108). So I started thinking about social networks because they are the most relevant online communities to my daily life. I see cross-channel content online all the time. In fact, I often take advantage of the built-in features which allow me to "push" my post across networks. These features seem to go against Kraut's claim, but as I thought more, I understand how this capability might just be a small concession in exchange for the inability to export data.

Hayden.L, can you give a specific example? -Reagle (talk) 16:54, 9 October 2015 (UTC)

Neither Twitter nor Facebook nor Instagram allows the user to export data easily, and each manages internal data sharing in unique ways. Instagram does not have the internal ability to re-post other users' content. Facebook files messages into a user's "other" folder when they are not connected (and for a while, tested a paid feature where messages could go into the inbox for $1). Twitter requires users to be fully public or fully private, and direct messages can only be sent between users that mutually follow each other. These networks negotiate these two claims, which I believe is why they are successful. As with other chapters, Kraut's claims can be enacted in varying degrees to find the perfect balance for a successful community. - Hayden.L (talk) 04:22, 9 October 2015 (UTC)


After reading Kraut's design claims on normative and needs-based commitment, my idea of an exemplar community became clearer. Perhaps I am involved and committed to more than three sites, but the only examples I could think of while I was reading the chapter were Facebook, Instagram, Wikipedia and a health support group. I had never taken into consideration the abundance of thought that goes into designing an online community, and although I agree with almost all of Kraut's claims, I am still trying to figure out if I support claim 34: "making it difficult for members to export assets or transfer them to other members increases needs-based commitment" (Kraut, 2011, p. 110).

The concept of lock-in would make more sense to me if only external communities could not export members' assets, as seen in the example of Facebook blocking Google from exporting data. I understand what Kraut tries to convey through that claim, and perhaps my lack of involvement in online communities makes me disagree with him, but the idea that it is tedious for a member to export their own personal information should be reason enough to leave that community.

Both articles narrating how Facebook "locks in" their members made me disagree with Kraut even more. I looked up how to use the "Download Your Information" tool and there were a lot of precautions expressed throughout the site. Are these communities trying to block us from getting our own information? Despite the fact that "it's data, but it's essentially useless" (Perez, 2011), at the end of the day it is my data and these assets shouldn't be difficult to export. The extreme measures would not increase my needs-based commitment but rather make me want to look for an alternative community. Will there be a time when you cannot create an account with both Facebook and Twitter because two different people own them? I know Instagram and Facebook have some sort of interconnectivity (Zuckerberg paid $1 billion to acquire the platform) and therefore Instagram lets you upload your picture to Facebook as well. What would have happened if Zuckerberg was never involved in Instagram? Your Instagram photo would have never reached Facebook.

My opinion could be completely wrong, but the idea of being "locked in" to a community with little ability to export my assets makes me doubt the whole design process rather than commit to it. Andrea guerrerov (talk) 12:28, 9 October 2015 (UTC)

Andrea guerrerov, I agree, but I think we are not typical. -Reagle (talk) 16:54, 9 October 2015 (UTC)

In Chapter 3 of Kraut, one of his design claims is, "showing people what they have received from the community increases their normative commitment" (p. 104). This topic was about people posting and how there was feedback based on those posts. The site would let the people know if they didn't think the post was that great, or if they posted more often their own scores would increase. It became a motivator to post if there was more recognition involved. That also rings true with Wikipedia, where a similar concept was used and people began to post more. There were certain self-fulfilling benefits to posting and editing more. Similarly, social media websites promote this same behavior. I have definitely seen or gotten emails from websites saying, "we haven't seen you in awhile" or "your last post was on..." This is an attempt to bring more context to your own page.

Ltruk22, LinkedIn got sued for all their stupid reminders! -Reagle (talk) 16:54, 9 October 2015 (UTC)

In response to the other articles regarding taking your information from Facebook, I don't remember hearing about that. It is an interesting concept to 'back up' your Facebook and have that information with you. I do not find that personally useful because I don't feel the need to have all of my previous posts from years ago stored somewhere else. This concept may be more beneficial to people who use their Facebook page as a business because they can take the data and have records of what happened on the page. For personal reasons, if I lost all the information on my computer and wanted to get old pictures back, I would just save them right from there to my desktop. Having another space online where it is backed up does not seem necessary to me. And Facebook is always there on any computer you sign into. I don't find it necessary for people to export my information and take my email address with them; that seems like an invasion of privacy, especially if people's phone numbers are on there too. Ltruk22 (talk) 12:31, 9 October 2015 (UTC)


When reading Kraut et al. and the design claims about needs-based commitment and exporting assets, I suddenly really understood the reason for the other two short articles. After reading those two first, I had an incredibly difficult time understanding why Facebook and other social platforms would want to design this downloadable data, since the reading by Sarah Perez made it seem like this was almost useless, due to the minimal information, identity-wise, that was being collected and compressed. However, the idea of having a small zip file that basically catalogues your life on Facebook (if I'm understanding how this works correctly) can completely explain the design claims about needs-based commitment: if you make it more difficult to export information to another platform, the user will see how much they need the platform, e.g. Facebook, because of the extensive information on the site and how they can interact with this information using only Facebook.

In response to a few of the QICs above, I have to agree that I still don't really see the need for this feature, at least from a user-standpoint. Maybe it is my commitment, or lack thereof to Facebook and these seemingly useless past posts, but anything without more valuable information for me seems futile. Not to say that these zip files should be including things such as phone numbers or e-mails, because that does sound like an invasion of privacy, I just still struggle to find solid reasoning for this feature to begin with, but that may also stem from my very limited knowledge of the subject matter in the first place. Smfredd (talk) 15:29, 9 October 2015 (UTC)


While reading Kraut chapter three, what really stuck with me was the term "indirect reciprocity," which is where "people feel obliged to 'pay it forward' to somebody, even if it's not the specific person who helped them" (Kraut, p. 103). But what is not mentioned, and I think it should be, is that one has to be a certain age to feel that kind of obligation to give back, especially if it's someone who didn't directly help them. The reason I say this is because when I was little I was diagnosed with Leukemia and went through chemotherapy for 3 years. But when I was in remission, I was approached and asked to give a speech in front of five hundred people (I was 10) and I didn't want to do it. I didn't want to give the speech because at that age, I never really knew just how close I had come to losing my life because I was so young. However, the people asking me to do this changed approaches and decided to use extrinsic motivation to convince me to give the speech by offering me Laker Tickets (I agreed before they finished their sentence).

But now, since I am old enough to fully appreciate just how lucky I was to have survived, I do feel a sense of obligation to give back, which is why I've been a participant in a long-term follow-up cancer-survivorship clinic for the past 8 years to help those who are currently going through what I did.

In summary, as a kid, I only cared about what was given to me (extrinsic motivation), not giving back. One has to be older to feel an obligation to give back, but when they do, it's an extremely strong commitment. BrazilSean (talk) 15:30, 9 October 2015 (UTC)


After reading the two articles about Facebook, I was curious about the dichotomy between Facebook and Google in their willingness to allow users to export their data for use on alternate platforms. While Facebook was extremely protective of their data and was criticized for making their file formats incompatible for use with Google, Google was happy to allow their users full control over their data for exporting purposes. When combined with the Kraut reading, I think this is particularly interesting if we frame it as what type of bond each site offers. At first, I thought that Facebook operated on bonds-based community attachment whereas Google operated as needs-based. However, after reading this article on Life Hacker, I realized that Google Plus boasts its own form of bonds-based attachment as well. While Facebook capitalizes on the fact that you'll want to maintain the bonds you already have, Google Plus encourages new bonds to be formed. Rarely does someone add another person on Facebook without meeting them in person first, whereas many of the connections that take place on Google Plus are made online. My question then would be: what type of bond-based platform is stronger, a community that encourages the maintenance of bonds, or a community that encourages the constant formation of new bonds? Wikibicki (talk) 15:44, 9 October 2015 (UTC)


The section of the chapter on Encouraging Commitment in Online Communities that we read for today's class covered the factors that contribute to enhancing the normative commitment of a community. Normative commitment is based on the individual's feelings of obligation towards the community. The factors affecting commitment level were whether or not an individual was committed to the cause of the organization, the level of commitment present in other community participants, and reciprocity. "Normative commitment can be enhanced through highlighting the importance of the community's purpose, testimonials about others' commitments, priming the norm of reciprocity, and showing people how they have benefited."

Reading the article [Facebook policy now clearly bans exporting user data to competing social networks], it occurred to me that the Facebook developers were trying to find the balance of the double-edged sword mentioned in the reading about presenting similar communities: "On the positive side, it may enhance identity-based commitment. On the negative side, it can reduce needs-based commitment as members become more aware of an alternative community that they could explore and possibly switch to" (Kraut). Seeing that the article was written a few years ago, I would find it very interesting to see the ways that Facebook went about sharing information with other networks so as to maintain the community of Facebook and allow users to explore other subgroups of a social network, like Twitter and Instagram. Now we have become so accustomed to signing in or signing up with Facebook because of the microformats that allow us to fill out information without manually inputting it. This is another way that alternative networks connect us and make us feel connected... definitely going to look further into this!

Alexisvictoria93 (talk) 16:02, 9 October 2015 (UTC)

Oct 13 Tue - Internet rules & CoC

To effectively regulate behavior in online communities, Kraut and Resnick suggest three ways of learning the norms of a community: "observing other people and the consequences of their behavior, seeing instructive generalizations or codes of conduct, or behaving and directly receiving feedback" (2011, p. 141). Although I do understand such purposes of a code of conduct as promoting "normative behaviors that can help the communities achieve their missions" (Kraut and Resnick, 2011, p. 126), I think a code of conduct is no longer powerful enough to generate appropriate, healthy online behavior. As we have already discussed in class, reading a code of conduct is, for many people, especially nowadays, merely a long, tedious process, and oftentimes it seems like a waste of time to read a 60-page code of conduct.

User20159, Code of conduct can be different than Terms of Service -Reagle (talk) 17:03, 13 October 2015 (UTC)

One time I was mindlessly liking pictures on Instagram, and Instagram blocked me from liking pictures for about 10-15 minutes. Ever since then I've been pretty careful about what I like. Although I believe that coerced compliance, like "gags and bans" (Kraut and Resnick, 2011, p. 137), is an effective way to limit and reduce bad online behavior, I think we need a mechanism that is more practical, like publicly displaying the history of a member's inappropriate online behavior on his or her profile. Techniques like gags and bans are only temporary, and don't seem to fully "fix" bad online behavior. Publicizing one's inappropriate behavior may backfire in that the community may lose the member, but if the community is well-constructed enough to fulfill members' interests and needs, this method could successfully attract more members with good online behavior and further motivate members to behave their best. It sounds scary and humiliating, but "observing consequences of others' behavior" (Kraut and Resnick, 2011, p. 143) could potentially eliminate the need for a code of conduct and become the best alarm for trolls to acknowledge "consequences for violating [their] code of conduct" (2014, para. 7). User20159 (talk) 00:09, 13 October 2015 (UTC)
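The temporary "gag" that Instagram seems to have applied above can be sketched as a timed block that expires on its own, which is exactly what makes it feel less permanent than publicizing a member's history. The following is a minimal hypothetical illustration (the class and method names are invented for this sketch, not any real platform's API):

```python
import time

class GagList:
    """Hypothetical sketch of a temporary 'gag': a timed posting block.

    Illustrates the 'gags and bans' moderation tool Kraut and Resnick
    describe (p. 137): unlike a permanent ban, the restriction expires
    on its own after a fixed duration.
    """

    def __init__(self, duration_seconds=900):  # e.g., a 15-minute block
        self.duration = duration_seconds
        self._gagged_until = {}  # user id -> expiry timestamp

    def gag(self, user_id, now=None):
        """Block a user starting at `now` (defaults to the current time)."""
        now = time.time() if now is None else now
        self._gagged_until[user_id] = now + self.duration

    def may_post(self, user_id, now=None):
        """True once the user's gag (if any) has expired."""
        now = time.time() if now is None else now
        return now >= self._gagged_until.get(user_id, 0)
```

The design point is that the penalty is self-expiring: nothing about the violation is shown to other members, which is why a gag alone may not produce the "observing consequences of others' behavior" effect the comment above is after.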


There are two parts of this week's reading that I am thinking about in particular.

1. The subjectivity of "good" behavior based on the community.

Kraut and Resnick generally describe mainstream online communities, and how they enforce rules. On the other hand, the "Rules of the Internet" piece applies to niche groups like 4chan and Anonymous. The rules listed set a specific tone for the communities, so it's interesting to consider how these groups encourage behavior that is seen as negative offline. Additionally, how do rules encourage contributions and commitment? Is it different for unusual communities like 4chan?

2. Thinking on a large scale, there are very few actual rules for the internet. This has led to numerous information and data scandals as users take advantage of the lack of international laws that govern cyberspace.

I'm specifically thinking about WikiLeaks. Some people agree with the group's existence, while others (and most governments) are staunchly against it. In the future, will there be more rules for the internet? I believe there will be. Some will be for the better while others may limit the power of the web. I think that as digital natives become older, they will have more informed ideas about how the internet should operate and legislation will adapt accordingly. Many young people have the mentality of "why should I pay, when I can get it for free online?", and often use illegal torrents or streaming websites to access media. While many parts of the internet support this thinking, governments do not. Whose ideals will win in the future? - Hayden.L (talk) 01:34, 13 October 2015 (UTC)


In this week's Kraut reading, what first caught my eye were the design principles for a successful online community: "community participation in rule making, monitoring, graduated sanctions and conflict-resolution mechanisms" (Kraut, p. 130), specifically community participation in monitoring. One online community which does this particularly well is Tumblr. I have seen hundreds of examples where people will report/flag posts not because their own photo was taken and claimed by somebody else, but because they recognized the picture as belonging to someone else and reported the post on that person's behalf.

One other way I noticed Kraut's ideals prevalent in the Tumblr community was through one of the design claims: "Design claim 16: Displaying feedback of members to others increase[s] members' knowledge of community norms and compliance." The biggest example is when a user either slut shames or body shames another, and the result? Massive amounts of comments on the post coming to the victim's defense, which is also a form of community monitoring.

On a side note (not relevant to what I was previously talking about), I didn't quite understand the article on the rules of the internet. After reading all the rules, nearly all of which were completely ridiculous (especially "Tits or GTFO"), at the very end of the article it basically says that all the rules didn't exist. It confused me, to say the least. BrazilSean (talk) 09:45, 13 October 2015 (UTC)


I believe that I would love to be a part of a community that has a code of conduct because I have been in some communities that got tarnished by a few loud and obnoxious people that ruined it for everyone. I understand people wanting to be able to freely express their opinions, but at the same time there are some comments that are unneeded and unnecessary. I can see how it would be very challenging to create a code of conduct because you would be forced to assume the worst in everybody, which I would not like to do. If I was going to create a code of conduct for an online community I was in, I would either follow a how-to guide, like the Django Code of Conduct guide, or more likely I would just try to copy one that someone else already had created.

I think that if I was actively and frequently participating in online forum communities I would love to have a code of conduct that helped keep people focused, because there is a time and a place to be silly and inappropriate and I feel like I wouldn't need or want that in certain groups. I would hope that the code of conduct would not hinder any and all out-of-the-box discussions out of fear of being reported, but would just hope that it kept people appropriate; but maybe I have a different opinion since I have never had my speech hindered online in any way. Maybe if I was told that I was not allowed to say certain things it would piss me off and I would be opposed to a code of conduct; anything is possible. I understand the ethics behind a code of conduct, but I wonder how well it works in active online communities, whether they are big forums with lots of people who don't know each other, or smaller communities where relationships develop over time.

- Natawhee7 (talk) 10:44, 13 October 2015 (UTC)


Kraut and Resnick say that conflicts are inevitable in online communities, but there are certain ways to regulate them, such as with a code of conduct. Although there are always going to be people attempting to disrupt a community by trolling, there are ways to manage it, as stated in the HOWTO article. Kraut and Resnick talk about how to limit bad behavior and having moderators screen for inappropriate behavior (p. 131). It does become difficult to monitor all posts or emails without having a few problems along the way. People can always manage to find a way around the system. The system does seem a little skewed in the sense that these moderators get to decide what is rated higher than other things. Unless it is completely inappropriate, who's to say someone doesn't have a valid opinion or point? Why should someone's work be penalized because of another's opinion?

I do agree that having a code of conduct is beneficial because of some nasty people out there. I'm not sure if there is a better way to regulate it than by the people that attend the conferences as mentioned in the HOWTO article. The people who attend the conference should be exemplifying the most appropriate behavior and guide members that don't understand what is correct. But also as mentioned people will find a way around it, by creating a new username etc. if they really do want to troll a certain page. Although I'm sure not too many people do actually read a code of conduct it will save a website if there is an issue at hand. It becomes easier to point out an issue based on the code of conduct rather than take steps backwards to recreate a code that will help change the issue at hand. Ltruk22 (talk) 14:03, 13 October 2015 (UTC)

Ltruk22, yes, once a problem has arisen, it's hard to do CoC's in hindsight... As Nataliewarther quotes K&R: "it is the argument over whether something is harassment that makes people leave, not the harassment itself". -Reagle (talk) 17:12, 13 October 2015 (UTC)

After reading Kraut and Resnick's chapter on regulating behavior online, I have to say I'm a bit surprised that these things even need to be discussed to this length. I think I'm very fortunate in that I've never encountered an online community that was crawling with people who wanted to sabotage the experience or opinions of others, but I suspect this is because like any other busy female millennial, I pretty much stick to your basic Facebook and Instagram. Kraut refers to these people as "outsiders who have no vested interest in the community functioning well" (128). It makes sense that these people would use chatrooms, since those forums are easy to join. But again, the last time I saw a chatroom was probably on Neopets when I was ten. Let me tell you, that's where the party was at.

A few of the design claims in this chapter confused me, as they seem different from what I've experienced. In particular, I question design claim #3: "Consistently applied moderation criteria, a chance to argue one's case, and appeal procedures increase the legitimacy and thus the effectiveness of moderation decisions" (133). In applying this to the online community I'm most familiar with, Facebook, I'm pretty sure if your post or comment gets reported you don't have the chance to argue your case. I believe this is an affective strategy, because if your online content seems inappropriate to anyone within your online circle, it probably doesn't have a place on the internet. I would like to only be part of online communities that allow offended parties to anonymously voice that they think content is harmful, and don't even allow the conversation to publicly take place of whether or not it is appropriate. After all, as is stated in How to Create a Code of Conduct for your Online Community, "it is the argument over whether something is harassment that makes people leave, not the harassment itself".

--Nataliewarther (talk) 14:21, 13 October 2015 (UTC)

Nataliewarther, Excellent critical thinking in constrasting the two sources! BTW: That's effective -Reagle (talk) 17:03, 13 October 2015 (UTC)

Sometimes, I think I've spoiled myself online in regards to who I surround myself with. Once I was in middle school, I became very active online. I played MMORPGs and was a part of a few forums on Newgrounds.com. Back then, I didn't really concern myself with trolls or just genuinely negative people, because I was just trying to have fun. Now that I'm older, though, a lot of my time online is spent on Twitter, YouTube, Facebook, and the like, and a lot of that time is dedicated to being socially conscious and responsible with my words. I only follow non-assholes on my social media. On Tumblr, I follow a lot of feminists and people who post things surrounding gay rights, body positivity, and generally uplifting things. Because of this, I'm sometimes lulled into this false sense of security that people are changing for the better! People are becoming more kind, more open-minded! Woohoo!

But then something comes around and punches me square in the throat. Something like the "Dear Fat People" video, or some dumbass sexist BS.

I love the internet. I really do. I grew as it grew and we've been through a lot together (Myspace....). Unfortunately, it can be a really hurtful place. I would LOVE if the internet actually had Rules, but only if those rules were centered around equality and sunshine and rainbows. The rules that exist now, if I read them correctly/understand them well enough from previous exposure, are the kinds of rules that were made up by what I'm assuming to be the stereotype of lowly nerds in their parents' basement afraid of real human contact. I realize that's mean, and I was just talking about sunshine and rainbows, but I just wanted to get my point across so everyone could have a similar idea of who I was talking about. These rules were made on 4chan. These rules seem very much like the type of things that little kids make up in order to keep other people out. They have all their inside jokes and if you don't understand them, then you're a n00b and you're only useful if you show ur t!ts. I think the deep dark corners of the internet that have existed since the beginning of time that are filled with this type of person are really causing a lot of problems. They are typically very sexist, homophobic, and nasty to newcomers. That's not what the internet is about. We should totally just stab Caesar!!.... Got caught up in Mean Girls, sorry.

I don't know if I've really made any sense in this QIC. Have I made any solid points, or have I just been vomiting nonsense onto the page? I'll try to explain my ideas better in person. Sigh.

-Kev.w.pri (talk) 15:11, 13 October 2015 (UTC)


Our generation had a Facebook experience unlike any other before or after us. Facebook got its start in 2004, just 3 years before most of us were going into the most influential, experimental, and judgmental time of our lives…high school. At the time we were all just beginning to figure out who we were, just as this online community (Facebook) was also coming along to shape who we were becoming. Since Facebook and the Internet were extremely new to us, we were fascinated by Facebook and began connecting and posting regularly. One of the initial rules of Facebook was that you had to at least be in high school; however, with a basic knowledge of the Internet, we were all able to circumvent that. As Facebook gained popularity, we began to see how these "rules" came into effect. Once Facebook had been around for a little while, people began to test the limits of how much they could post. Originally, it was just statuses and wall posts, but eventually it became embarrassing/harmful pictures. This was my first experience with an online community overstepping its behavioral norms. I knew immediately that this seemed wrong, as did the rest of the Facebook community, and it wasn't long until Facebook introduced the "report" button and the behavior of the community was once again in order. Finally, I want to look forward to now, when Facebook's Code of Conduct is too long to even begin to read. I found it interesting that in the "HOWTO design a code of conduct for your community" article, the author mentioned, "a code of conduct that isn't (or can't be) enforced is worse than no code of conduct at all". I believe that to some extent this has become an issue for sites like Facebook, where many of the basic rules are known and can be enforced but the vast majority of the Code of Conduct is lost to the community. Johnmdaigneault (talk) 15:29, 13 October 2015 (UTC)

Johnmdaigneault, what is the FB Code of Conduct folks are referring to? -Reagle (talk) 17:03, 13 October 2015 (UTC)

Reddit is a fascinating site that constantly comes to mind while reading Kraut and Resnick (2011), especially when it comes to normative behavior. Since it is an open-content platform, balancing free expression with privacy and the protection of community members can be very challenging. Reddit has a reddiquette, "an informal expression of the values of many redditors, as written by redditors themselves" (Reddit.com).

The reddiquette abides by many of the arguments Kraut and Resnick (2011) propose in chapter 4, making it believable that redditors "agree about behaviors that are acceptable and those that are not" (p. 126). The page serves as a code of conduct, where behavioral norms for the online community are displayed. Nonetheless, I find the norms quite surprising, especially if it is true that redditors themselves came up with these. Several points mentioned under "Please don't…" are constantly a "Please do" on Reddit. For example, "insult others," "be (intentionally) rude," and "conduct personal attacks on other commenters" are categorized under "Please don't," although those are salient examples seen throughout Reddit. Trying to regulate every redditor's behavior online must be a challenging task for moderators, especially if there are so many members (or trolls) who do not comply with the consensus standards of normative behavior.

Reading over the reddiquette and finding examples on the site that contradicted their norms made me question: what content do Reddit's moderators delete? Which ones are okay to keep? If comments that insult others (such as those left for Ellen Pao) are still on the site despite the fact that this is mentioned in the reddiquette as a "Please don't," does that mean that members of the online community tend to disobey the code of conduct? Andrea guerrerov (talk) 15:31, 13 October 2015 (UTC)

Andrea guerrerov, Good question! Let's discuss in class. -Reagle (talk) 17:03, 13 October 2015 (UTC)

As I have grown up using the internet, I have experienced its many stages of development. I tend to fall into the same space as Kevin in that I like to believe that most people are good or have good intent; however, the 14-year-old girl in me is screaming that I'm full of shit. As I have matured, I have been pretty selective about who I surround myself with both online and in real life, and because I am not active in many online communities outside of the ones that connect me with people I actually know... I am not often exposed to the malignant trolls of the internet. My biggest concern lies with the generations to follow, and so I feel that if "Rules of the Internet" were strictly applied to all forms of online community, it would produce a safer, more inviting medium for people to interact and engage with the people and content of their choosing.

Kraut and Resnick identified three ways in which people learn the norms of a community; as many people have already noted, these are observation, clear instructions/rules, and receiving feedback. They noted that formal feedback is far more effective than informal feedback. So my reaction is: with such high traffic on the internet, I imagine it is very difficult for moderators to evaluate posts and users and give proper feedback in a constructive learning environment without the use of temporary and frustrating "gags or bans". What is the most effective way for an online community to monitor the content being posted and ensure that the community members are performing well together?

Alexisvictoria93 (talk) 15:37, 13 October 2015 (UTC)

Alexisvictoria93, K&R's "observe" reminds me of Rule 33. Lurk Moar. -Reagle (talk) 17:03, 13 October 2015 (UTC)

In reading the Kraut & Resnick chapter and the other two articles about rules and regulating behavior, the first thought that came to my mind was why someone would invest so much time and effort, along with knowledge about a community (or at least computer/hacking skills), to hack in the first place. Call me an idealist or a "glass half full" type, but I still cannot wrap my mind around hacking for no reason (i.e., "disemvoweling" and such); it seems to be people disrupting the status quo for their own enjoyment, to watch people scramble, and I just don't see the meaning of it. That being said:

I feel like there is always a way around the rules and regulations, or a "hacker" is always trying to come up with something new, which has led many online communities to be skeptical of high use among users and to impose, for example, an activity quota: you can only participate this much over this period of time until you are cut off. Even if a high rate of participation is deemed to be coming from a "bad actor", there is only so much a site and its moderators can do to ban someone before that person finds a way around it. I also now understand why some sites use "payment"; as Kraut & Resnick (2011) stated in design claim 11, paying to be a part of the site reduces "trolls and manipulators" (p. 139). It seems like these people are trying to cause the most damage in the shortest amount of time. That also being said, I still find it just a waste of time to put effort into attempting to ruin a community people have put many hours into. However, the type of rules we read about in "Know Your Meme" might also give an idea as to how some of these moderators operate: it seems as if there is a "good guy/bad guy" thing going on, and both are very good at their jobs. Smfredd (talk) 15:42, 13 October 2015 (UTC)

Smfredd, this makes me think of this article I just read: Pregnant Woman Discovers Husband Is Vile Reddit Troll Who Won't Stop

Oct 16 Fri - Compliance and norms

Wow I haven't been the first QIC in so long!!! This is my moment! I'm back, baby.

Anyway, this chapter of Kraut and Resnick was by far my favorite, because it pointed out so many things that I see all the time online (except for anything having to do with money online, like charging for pseudonym switching and those things from toward the end of the chapter).

When I was in middle school, I spent a lot of time on newgrounds.com. A lot of that time was spent on the forums. Seeing all of these rules about how to keep trolls from trolling and how to keep people acting politely online was really interesting to me. What really fascinated me when reading this was the reiteration of how much people care about their reputation. Even online, when you may not even be revealing much information about yourself, you want to make sure people like you and don't think you're a total d*ck. I know for a fact that when people would be inappropriate/annoying on the forums I was a part of, I would immediately write them off and never communicate with them. I don't know where I'm going with this, but it's just interesting. Why do we care so much about the way others perceive us? Is it because we're pack animals? Are we pack animals? I think so, right? Someone call Bill Nye and double check.

But yeah, being a good, contributing part of the community was very important to me. On this one thread, we had a rules list posted and breaking it made people feel like total schmucks.

Another thing I found really interesting was the ways in which websites prevent spammers from successfully spamming their comments sections. I don't know why, but I find the field of SEO fascinating (Search Engine Optimization, for those of you who don't know. Ha ha ha! Look how smart I are). I never fully realized that the reason spammers were posting things wasn't even for the people that might see that shit, but rather for the search engines.

ANOTHER thing I found funny was the term "cheap pseudonyms", because so many of those follow me on tumblr and Instagram and it's so annoying. And also just spam accounts in general. And a lot of random porn accounts. Very off-putting.

I realize I haven't posed anything all that deep or meaningful in this QIC. I'm disappointed in myself.

Full of shame, Kev.w.pri (talk)


In reading Kraut and Resnick, the term that stood out to me the most was "cheap pseudonyms", which was a term I'd never heard before even though the thing itself was something I've not only known about but seen happen before. The first thing that came to mind here was the idea of whether to disclose personal information online or not, and how, depending on how much information you choose to disclose, you can form bonds with other members of the community. This cheap pseudonym idea makes sense, since someone repeatedly violating the community guidelines is also someone who is probably not trying to put time and effort into adding to the community (similar to the ideas from this past Tuesday's readings). Therefore this person would also most likely not disclose a large amount of personal information, since their goal isn't to bond or add to the community, but rather to mess around (I'm still struggling to see why someone would, in my opinion, waste their time there). The design claims also talk about how to prevent cheap pseudonyms; requiring real-life identification upon signing up could help combat this (design claim 29). However, would this remove the idea of anonymity on the internet? I feel like the reasons many people do not give out personal information are 1. security but also 2. the idea of an internet persona. I mean, let's be honest, do you live your life every day in reality the way you post about it on the internet? This would, in my opinion, remove the idea of creating your own identity online as a member of a community separate from your reality, but would also be the best way to combat repeat community-guideline violators. In terms of social breaching, I look at this as an experiment (p. 49, Garfinkel, quote about family not wanting to be experiment "rats"), which also makes me think: is it entirely ethical? While I know there are certain levels of social breaching, aren't we (as in those not actually performing the breaching) entitled to know?
Or is this one of those "for the greater good" situations where the harm is low enough that it doesn't matter?

Smfredd (talk) 06:12, 16 October 2015 (UTC)

Smfredd, It's the latter, plus (because it's a class exercise) it's not formally considered "research." You do have a choice with which breach you wish to employ. -Reagle (talk) 16:59, 16 October 2015 (UTC)

After reading Kraut and Resnick, I was interested in the first few design claims that mention how people in online communities have rules and try to 'save face' on the internet. It's kind of odd to think that even over the internet people are trying to maintain the peace and will not come out directly to let someone know they have done something wrong. People would rather beat around the bush and be a little nicer. The example was used of a situation where someone said, "You may not be aware of this guideline... No big deal, but please stick to this in the future." (p. 153) Using these filler words lessens the situation, and the person who is 'being the bad guy' seems less intimidating. It seems odd that people become more timid and don't want to step on any toes when they are working online, similar to the real world. Kraut and Resnick in a way relate to social breaching in the examples of the experiments and how people respond. After some of the experiments, people are asking, 'what is wrong with you?' because of the odd behavior being maintained. And in some ways the breachers are just not complying with any normalcy that they and everyone around them are used to. Now that I'm thinking of it, I experienced, as a third party, a social breaching moment last night at a restaurant. My friends and I had patiently waited for a table for an hour and twenty minutes, and while we were sitting down at our table, we noticed two girls lingering in the dining area who eventually just sat down at a two-top without any menus and pretended they had just been seated. My friends and I were astonished, and we watched them lie to the waitress when they said they were told they could sit there. Another hostess came over and, like Kraut and Resnick mentioned, "saved face" by saying, "It's okay because you are eating dinner, but this isn't the bar area we were talking about as a seat-yourself area."
(Just an odd anecdote that seems to tie in *hopefully*) Ltruk22 (talk) 09:53, 16 October 2015 (UTC)

Ltruk22, if it's self-serving, is it a "breach"?




On page 152, Kraut and Resnick address punishment systems and how MIT successfully deterred unwanted behavior on its networks by including "saving face" methods. As they describe punishment systems in more detail, I started thinking about Kohn again. Kohn argued that reward systems encourage certain behaviors, which is ultimately limiting; they stifle desires to work hard and take risks. In some online communities, that may not be an entirely bad thing. A large number of people can access online communities, so there is less need for individuals to perform a high number of tasks. More people can work less efficiently rather than a few people working hard. While the outcomes may not be identical, the volume of tasks performed could be the same. Therefore, reward systems do not work against an online community but instead act as a way to regulate behavior. The problem here is that people eventually attempt to game the system, which is an unwanted behavior in itself; so what happens when your behavior regulation system causes more detrimental behaviors?

Hayden.L, I think you're right: "'Fixes' to manipulation have their own, often unintended, consequences and are also susceptible to manipulation" (Revenge rating and tweak critique at photo.net). -Reagle (talk) 16:59, 16 October 2015 (UTC)

Kraut and Resnick address the use of reputation systems as a way to encourage and deter certain kinds of behavior in a community (p. 157), but in a somewhat limited way. I'd like to discuss more the relationship between systems that encourage positive behavior and those that punish negative ones. If either is too aggressive in a community, it could stifle any contribution altogether. - Hayden.L (talk) 13:46, 16 October 2015 (UTC)

(P.S. - If an online community does not formally create a status or reputation system, will users create one amongst themselves given the information the community does track? For example, there isn't a status system on Instagram, other than being verified, but users often consider the number of followers or the follower to following ratio a sign of status.)

Again, yes! -Reagle (talk) 16:59, 16 October 2015 (UTC)

In today's reading, Kraut and Resnick touch on a lot of seemingly basic concepts of how people save face online. I say that they seem like basic concepts because I think they can so easily be translated into the ways that people act and react in real life. I found the chapter to read a lot like a how-to book on raising children, because it talked about the ways that you can reward and punish, and the many alternatives to punishment. In my opinion, the design claim best suited to a successful online community is claim 31, based on Ostrom's fifth principle of graduated sanctions. Wikipedia utilizes this when appropriate by sending personal messages to users who violate the rules of the site, with a good-faith message assuming the individual is not purposefully tarnishing the community. If the behavior continues, then they receive a last-warning message notifying the user that they will be blocked from editing if they misbehave again.

Again, this all seems like child-rearing techniques used on adults across an online medium. I can see where frustrations might fall on both sides of the situation, but I feel that graduated sanctions are a good means of communicating about bad behavior. I was unaware that on many sites people have to pay for their anonymity. This is interesting because it clarifies that individuals are indeed very concerned with saving face; many of the design claims touched on that. It does not seem to be a very novel idea that individuals are concerned with saving face online in front of a majority of strangers, even saving the face of their anonymous account. Alexisvictoria93 (talk) 14:00, 16 October 2015 (UTC)


Kraut and Resnick (2011) discuss the antinormative behavior anonymity can unleash in (large) online communities, as well as why it is clear some members prefer anonymous or pseudonymous participation. As they addressed both sides of the argument, I immediately thought of Twitter's identification system. The small blue checkmark next to an account's username once had a practical purpose: you want to follow Adam Levine? There's a blue checkmark next to his username, so it is actually him, not someone running a phony account. Nonetheless, I disagree that Twitter's main objective was to discourage potential harm by community members. The idea that only 'important' users in "music, acting, fashion, government, politics, journalism, media, sports, business, and other key interest areas" (FAQs about verified accounts) can be certified creates a hierarchy of sorts. I think Twitter's attention should be redirected from celebrities to common users. Would 'important' users create harm to the community in the first place? Isn't it more common for anonymous accounts from regular users like you and me to cause harm?

Online communities such as Twitter, which is not a support group or an activist political group, should allow anonymous usernames (it does not have to be your full name) but require identification in the process of signing up. In my opinion, an email and full name are not enough, especially in a mediated world where so many harmful events have developed through anonymity. Andrea guerrerov (talk) 14:36, 16 October 2015 (UTC)


I would like to bring up Kraut's claim that anonymity leads to more anti-normative behavior. The first thing I thought of when reading this was the app Yik Yak. For those of you who don't know, Yik Yak was an anonymous, Twitter-like app which allowed people in the same area to anonymously post and chat. For a while in the beginning stages, Yik Yak was extremely popular in Boston (I would be curious to know if it still is, since I haven't been on in quite some time). The reason I bring this up is that, according to Kraut, anonymity should discourage normative behavior, but I feel that the anonymity actually created the normative behavior. I was curious whether this was an exception to Kraut's rule, or whether, since the Yik Yak "feed" was confined to a local area, there was enough identification for people to act normatively. I think that in Yik Yak's case, the normative behavior may be more open than others because of its anonymity. In addition, since there was enough anonymity, people weren't afraid to call out those who violated the norms or vote their posts down to reduce their effect. Also, I found it interesting that once Yik Yak started giving people points for their normative use (e.g., commenting and posts that are voted up), people became much more concerned with their points level. Overall, I'm not positive whether this is an exception to Kraut's claim, but I just found it very interesting that in Yik Yak's situation, the anonymity actually contributed to the culture and norms of the group. Johnmdaigneault (talk) 15:35, 16 October 2015 (UTC)

Johnmdaigneault, I added


Two of the concepts I found most interesting from this week's reading were the ideas of face-saving and cheap pseudonyms. Design claim 23 states that face-saving ways to correct norm violations increase compliance. As a psychology minor, I wasn't surprised that those who had been "caught" chose to falsely claim that they had in fact been hacked and changed their password in order to save face. People often violate rules on the assumption that they won't get caught, but in reality, unless our goal is to be a troll, we don't like attention for doing "bad" things. What I found particularly interesting about this part of the chapter was that it highlighted two ways to allow users to save face: telling them that it was believed that their account had been hacked, and telling them that they may not have been aware of this guideline stated in the website's policy. I wonder, if asked the question did you A. get hacked, B. not know the rule, or C. make the infraction knowingly, whether more people would choose to say that they had been hacked rather than that they didn't know the rule. In addition to wanting to be perceived as good, people generally don't like to admit that they don't know something, especially in communities where they are active and invested. The second concept I'd like to talk about this week is the idea of cheap pseudonyms. It makes sense to me that different communities would have different incentives for retaining a long-term identifier, such as the financial consequences of maintaining or losing feedback on eBay. I think that even children learn the lesson that attempting to game an online community's system can be done but has consequences or limitations. For me, I distinctly remember Neopets.com limiting the number of neopets I could create on any given account. In order to get around this, I would make another username, but the website only allowed one username to be created per email.
My question would then be: how effective is a one-username-per-verified-email system for incentivizing compliance under a single username? In a survey, how willing would someone be to create a new email in order to gain a new username if there were no long-term username incentive? Wikibicki (talk) 15:43, 16 October 2015 (UTC)


Similar to others, I also found myself reflecting on Kraut and Resnick's 23rd design claim: Face-saving ways to correct norm violations increases compliance. I've never really thought about this tactic in terms of confrontation and conflict resolution, but I think it can be applied in more ways than just in online communities. Kraut and Resnick explained this phenomenon: "When we accuse perpetrators directly, they often assert that their misbehavior was within their rights. They then repeat the misbehavior to make their point and challenge our authority. When we let them save face by pretending that they did not do what they did, they tend to become more responsible citizens with their pride intact" (153). I wish someone had told me this years ago, as I imagine it's an effective strategy in relationships as well as online. Just imagine: "Hey honey, I'm sure you had no idea that it really bothers me when you leave the seat up. No big deal at all, I'm sure it wasn't even you. It was definitely probably Johnny. You should probably tell him to leave it down. Love you xoxo". Or "Hey roommate, isn't it strange that someone came over and ate all of my chocolate? That's so not cool. So glad you never ate my chocolate. You're the best let's watch Netflix". I can only imagine how many hours of arguments I could have saved. Maybe Wikibicki has the right idea with the whole psychology minor thing. I wonder what else I would know if I went to MIT. --Nataliewarther (talk) 16:27, 16 October 2015 (UTC)

Oct 20 Tue - Community and collaboration

I was drawn to a specific element in Reagle's article: his mention of humanity and the real-life application of conflict resolution, which he learned through Wikipedia's community. In discussing the collaborative culture of Wikipedia and the five pillars on which the community relies, Reagle reflected that these norms have proven to be "a great way to end an argument in real life". He also mentioned that this virtue may increase people's motivation to participate in the community.

I attended a Quaker school for most of my life and lived on the campus for my last two years of high school. Quakerism is a sect of Christianity which focuses on the virtues of community, collaboration, charity, simplicity, and honesty, to name a few. My high school experience was essentially the same as everyone else's except for the emphasis there was on community. Everything we did was for the greater good of contributing to the community. We grew a lot of our own food, students ate "family style" at tables of 8 in a dining room, and students had rotating "jobs" such as washing the dishes after meals, cleaning classrooms, and setting tables. Our classrooms were always set up in a large circle to facilitate class discussion and collaboration. And when it comes to Quaker religious events, there is no "leader". Essentially, there is no hierarchy in the community at all, except for those who volunteer to take on a leadership role. The congregation sits in a circle, meditating, reflecting, and sharing whatever is on their mind with the community for 45 minutes every week. To those who have never heard of Quakerism, it sounds like a cult. After ten years of explaining it, I'm very used to people asking me if we were allowed to use electricity and if I've ever driven a car. I promise you it's not as strange as it sounds.

I found myself thinking about Quakerism and community a lot while reading Reagle's article. The truth is, I really undervalued the community aspect and emphasis on collaboration that was instilled into my childhood. I wasn't Quaker (most of the students weren't), and we found every excuse to complain every time we had to pick vegetables or "give back to the community". I picked Northeastern because I was desperate to get out of the bubble. But after a year or so in the city at our huge school, it became painfully obvious how much those community-building elements contributed to my level of happiness. Being part of a collaborative space where everyone has shared investment and contributes what they can is really special, and I wish I'd valued it more. As Reagle mentions in his article, collaboration is "the process of shared creation: two or more individuals with complementary skills interacting to create a shared understanding". I never understood that this could be achieved within an online space before this class, and I'm quite fascinated by it. It makes much more sense now that people would be committed to the outcomes and eager to make Wikipedia great.

I really like the description of Wikipedia used in Reagle's conclusion. He quotes Leuf and Cunningham (2001): "Wiki culture, like many other social experiments, is interesting, exciting, involving, evolving, and ultimately not always very well understood". I think this is a transparent and honest description of a community that is working together for a shared outcome, experimenting together, and willing to fail together in order to figure out what works. Much respect.

--Nataliewarther (talk) 18:50, 19 October 2015 (UTC)

Nataliewarther, a book about Quakers inspired my research on Wikipedia, actually, which then inspired WP's executive director to give the similarities more thought. -Reagle (talk) 19:23, 19 October 2015 (UTC)

Zhu's study on the impact of feedback on contribution brought about some very interesting points. First, she notes that there are four types of feedback (positive, negative, directive, and social). Positive feedback is intended to energize through acknowledgement while providing rewards, negative feedback is intended to regulate people through negative messages, directive feedback is intended to direct people through issuing instructions, and social feedback is intended to maintain close social relationships. Previous research indicates that positive and social feedback can motivate more contribution while negative feedback decreases member contribution. However, Zhu's experiment aims to show how feedback can influence not only motivation, but also specific task performance. In her study she finds that negative and directive feedback increases task performance while positive and social have less effect. In terms of motivation, positive and social increase motivation, while directive has little effect, and negative feedback decreases motivation.

This made me consider our course, Online Communities, and how the professor uses feedback to regulate and motivate the students. For example, in my previous QIC, you left the comment "try to not leave the impression you only read the easiest reading." By no means was this a negative comment; it was more directive, in the sense that you wanted me to show better understanding of the course concepts. By leaving directive feedback, you pushed me to bridge the gap in understanding. Since you have taken notice of my progress, I have an increased focus on completing my work with more effort (for lack of a better term). As a student, I am overly concerned with speaking out in courses due to a fear of criticism. Although this "fear" does not dictate the way I conduct myself as a student, it does limit my responses in class time. In my last QIC, you wrote "interesting point" about one of my insights, leaving me positive feedback. Although it was not the most stimulating or academically interesting point I had made, the positive feedback certainly validated my thought process and had a positive effect on the way I contributed in the course and the way I hope to conduct myself in future QICs. These two personal experiences directly correlate with Zhu's study on how types of feedback can affect contribution. So I wonder, since I assume you are well versed in this study already and have been for some time, whether the way you leave feedback in this course is based on what you and I have learned from it. And thank you for limiting the negative comments, even though research shows they cause newcomers to work harder. Ahn.cha (talk) 22:46, 19 October 2015 (UTC)

Ahn.cha, excellent summary of the article. I do try to employ best practices, as I understand them, in giving feedback, but as in anything, it takes time to do things well and it's easy to make mistakes. This article was relatively new to me actually. The best summary of the research on how to give feedback is from Shute and Nicol & Macfarlane-Dick. -Reagle (talk) 17:25, 20 October 2015 (UTC)

In this week's readings, the biggest takeaway I had was the emphasis on the importance of newcomers in online communities. In Reagle's book, we learn about the ideas of "Assume Good Faith" and patience, and how frustrating behavior is often the result of ignorance rather than malice. I thought that the "please don't bite the newcomers" guideline was particularly important because it acknowledges that new members are the most valuable resource in Wikipedia's online community. This certainly applies to other online communities that come to mind because of what we've learned about membership in these spaces. New members have not yet had the time to form personal bonds within the community, nor do they know enough about the community to have formed an identity-based commitment. Therefore, the potential for user retention is most vulnerable during this initial member stage. In practicing patience and assuming good faith, current members give new users the time to form those attachments to the community that will result in long-lasting, contributing members. I thought it was interesting that the results of the case study we read conflicted with Zhu and Halfaker's results regarding negative feedback and newcomer motivation. The experiment admittedly designed negative feedback that was milder than the messages actually sent between Wikipedia users, as was done in Zhu's study (which claimed negative feedback decreased motivation). I think that because the field experiment results showed that mild negative feedback led newcomers to work harder on the target article without reducing general motivation, it may be advantageous for Wikipedia to create a page with suggested phrases of mild negative feedback modeled after those in the experiment.
It is understandably difficult for frustrated long-time users to navigate how to inform new users of what they're doing wrong; a guide could provide them with an accepted form of negative feedback that neither scares off new users nor decreases motivation. Wikibicki (talk) 02:49, 20 October 2015 (UTC)

Wikibicki, templates can serve this role, but the question then is are they too "bitey"? -Reagle (talk) 17:25, 20 October 2015 (UTC)

As I read through Reagle's chapter, I discovered the unknown world behind online communities. To me, Wikipedia was merely an online encyclopedia that anyone could edit; I never expected to learn there is such a strong and collaborative community within. It fascinated me that Wikipedians have been building a culture around assuming the best of others, "If you expect people to 'Assume Good Faith' from you, make sure you demonstrate it;" patience, "Please Don't Bite the Newcomers;" civility, "Treating others with respect is key to collaborating effectively in building an encyclopedia;" and humor, Reagle's own guideline that "serves as an instrument of anxiety-releasing self-reflection."

I would have assumed that the offline world's regulations inspired Wikipedians to come up with the five pillars and Reagle's four "virtues" or behaviors. Nonetheless, I am amazed at how much the offline world and its communities could actually learn and benefit from an online community like Wikipedia. If the NPOV "Writing for the Enemy" could somehow be developed in the offline world, almost every argument would be avoided. The same idea applies to AGF and the notion that yelling [insert word here] "at people does not excuse you from explaining your actions, and making a habit of it will convince people that you are acting in bad faith." How would our communities change if participants stopped being uneasy and defensive? If being civil and respecting others was one of our main guidelines?

I'm looking forward to discussing with my peers what they felt while reading the chapter. It is difficult to explain all the positive things I am taking from this reading, but I can say that despite the controversies behind Wikipedia (gaming, trolls, trustworthiness), it proves that a community's collaborative culture, as Reagle mentions, plays a major role in determining what its future holds. Andrea guerrerov (talk) 13:30, 20 October 2015 (UTC)


Joining Wikipedia may be a difficult step for some people, but they want to try to make a mark for themselves and add to the overall community. They may face some difficulty when doing that with their own writing style. In Reagle's chapter there was a lot of emphasis on the "Neutral Point of View". This may be difficult for some people to fully understand and put into action. The best example that Reagle uses is the "Evolution" and "Creationism" articles, where some people can become too invested in their own beliefs, even when there may not be evidence to back those beliefs up. Since those are touchy subjects as it is, anything toward which someone has a specific bias can also become extremely difficult to write about. A person may believe they have all the tools to write a certain article because they know a lot about it, but their bias will speak through in the article. Then there will be feedback offered to them by other users. This is where Reagle's and Zhu's articles come into play. As Reagle mentions with "don't bite the newcomers," there is a fear of feedback that some may not be comfortable receiving, and Zhu explains there are different types of feedback that can affect people's behavior. Feedback is always difficult to receive, and people go about it in different ways. I had a boss who would always talk about giving a sandwich of feedback: some good, some things that need to be worked on, and some more good. When working online there are different tactics because you aren't speaking face-to-face with someone. The example of negative feedback which stated that the person could potentially be banned for what they had done would probably have been more threatening toward newer users. Since the researchers used a milder form of negative feedback, people became more motivated through the process. Ltruk22 (talk) 13:47, 20 October 2015 (UTC)


In the Wiki, Practice, and Policy section of Reagle's chapter, I was interested by its treatment of documentation as almost a fundamental need for people. As I started reading I thought it was going a little overboard, but as I finished the section I realized that documentation is more than just writing things down for your own personal use later; it also serves a massive purpose for the community and world, who now don't have to make the same mistakes you did.

While considering the importance of documentation, I also thought back to online forums, platforms, and communities and realized that all of these things are as strong and crucial as they are in large part because of documentation. Because things are saved on almost all sites, and the user has the ability to go back in time, an irreplaceable aspect is added to the platform. This made me think back to my involvement in online platforms such as Instagram and Facebook. In both of these, the ability to go back and see previously documented points of my life is what keeps me loving the platform. Facebook really matters to me because it is the best documentation of my life that I have; nothing else allows me to see what kind of person I was, or lets me see what I wrote to someone when I was in 8th grade. The section highlighted the importance of documentation in regard to learning from past mistakes, and there have been many times, while reading through old Facebook messages, that I promised myself I would never say something like that ever again; if it wasn't for Facebook I would never remember that conversation with Alex from high school. This section really interested me because it made me realize that involvement in online platforms, and the documentation of that involvement, is not only interesting and important to outsiders, but actually most important to the initial contributor, because it provides them with a valuable point of reference years later.

Natawhee7 (talk) 13:54, 20 October 2015 (UTC)



I liked the points Andrea raised in her response. I agree that the reading made me feel generally positive toward members of the Wikipedia community, and that they genuinely want to see their peers and the community succeed.

However, I think that these behaviors are specific to online communities for a few reasons. First, we've talked in class about anonymity and pseudonymity. I think it's easier to accept or reject feedback online because it does not reflect on you as a person; it's easy to save face when there is the barrier of the Internet. Second, those that give feedback and participate in online communities are already working toward a shared goal, whether it's writing an encyclopedia or sharing the cutest animal pictures on the Internet. I think users actively want feedback so they can make better contributions.

Of course, this all changes when we think about why users are motivated to troll sites. In certain cases such as controversial websites, I can understand why trolls would want to halt community collaboration. However, on harmless or generally beneficial communities like Wikipedia, trolls still exist and take advantage of NPOV and the good faith principle. I think that the ability to freely share feedback and create a culture that represents Reagle's four principles of behavior, in spite of trolls, directly shows the strength of a community. -Hayden.L (talk) 15:54, 20 October 2015 (UTC)


One of the first things that stuck out to me regarding Zhu's research was the model for a message containing positive feedback, negative feedback, and directive feedback. I've definitely seen this before in my own life, especially on school assignments (you know, those longer comments on the last page of your paper), and my friends and I refer to this as the "comment sandwich". Broken down more simply than Reagle and Zhu's definitions, it's two things you did awesomely sandwiching one thing you really sucked at. And I do admit, it works for motivation. I'm much more likely to keep working on something and continue improving if I have this type of feedback. I also thought the connection between feedback and experienced users was something to look at, since my original thought was that if you are a more experienced user and therefore have shown high commitment to the cause, wouldn't you then take the feedback seriously and act upon it? Or maybe it's more of a "I know what I'm doing" situation, or as Zhu showed, that some experienced users took negative and directive feedback as a challenge to their knowledge and skill. It's almost as if there is some type of hierarchy: those who have "senior membership" are more likely to take advice from other senior members based solely on their time committed to the community. One last question: could we take "Wikilove" as a type of positive feedback? Is there ever Wikilove that is more like negative feedback, or is that reserved for talk page conversations? Smfredd (talk) 15:56, 20 October 2015 (UTC)


Essentially, Zhu's study found that negative and directive feedback increase people's efforts toward a task, while positive feedback increases motivation to work in general, and that newcomers are more responsive to feedback, so it tends to show stronger effects on them than on experienced users, who were largely unaffected. I find this interesting because it seems counterintuitive to the success of a large online community like Wikipedia. As a student studying TV Production, I am very often in a room full of students critiquing my work, and the expectation is that I incorporate the critiques into the next edit of the project. Usually, I think I am pretty receptive to the feedback of other students; however, I do see the side of discounting feedback from less experienced people compared to that of students I know have a few years in production. This is purely ego driven.

When I apply the same situation to an online community, with consideration of the conversations that we had last week about pseudonyms, I can understand how an ego may play into the way that an individual receives and responds to feedback. In online communities that "assume good faith," one hopes that all those involved have the sole motive of producing the best content for the site. With consideration of the conversations from two weeks ago regarding trolls, I can see where motives might be misplaced and how what could be positive feedback could be construed negatively depending on who is delivering it. What is the best way for a community to maximize the feedback response?

Alexisvictoria93 (talk) 18:12, 20 October 2015 (UTC)



I want to base this QIC off of a quote from Reagle: "In any Wiki, you discover a sense of growing community that expresses itself through its archived writing." I'm talking about this, because I just moved my article to the main space yesterday! And today I saw that two people have already made some edits! And I couldn't be more excited!!

The first edit was fixing some typos I made--I was writing "Youtube" rather than "YouTube". Someone went through and fixed all of those for me! And the second one was someone, at the very end of the article, adding something that gave my article the "Uncategorized" banner, meaning that I (or someone) should categorize it with similar articles on Wikipedia! This is the first time we're all really being exposed to the community of Wikipedia, and it's so exciting.

And after reading through the pillars in Reagle's chapter, it's no wonder I'm so excited. There's a lot of genuinely good life advice in there! I especially loved, while not a pillar, the half-joke of "Assume Stupidity". Honestly, that's something I should be doing a lot more of in real life. I also really appreciate how Reagle/Wikipedia handles the discussion of "writing for the enemy", because it really forces people to be more open-minded. Also, I love the concept of red links. They're like saying "there's a whole wide world out there to explore!"

However, one thing that I wanted to discuss was the pillar that says "anyone may edit". This might be me being a bit of an SJW, but is that really true? If you think about it, it seems a little classist, honestly. You can edit, but only if you have access to a computer and the internet. You can edit, but only if you have a good enough education; if you never learned proper grammar, how can you edit? I'd really like to see what the demographics (class, education, race, etc.) of the Wikipedia culture are, at least in the US. Maybe I'm reading too much into this, but it's definitely something I'm interested in looking into.

-Kev.w.pri (talk) 16:09, 20 October 2015 (UTC) (I posted this at about 12:03, and while it was posting I went to the bathroom. I came back and it said that there was an edit conflict because people had posted since I started writing. That's why this is late. Please still count it)

Oct 23 Fri - Moderation

Disclaimer: I've only read half of the reading so far, but I wanted to respond tonight (Wednesday) because I'm not sure I'll have time tomorrow/Friday morning to thoughtfully respond (after I finish reading the rest of the sections).

All this talk about moderation made me think immediately of HONY. For those of you unfamiliar with the work, HONY stands for Humans of New York, and is a photography blog (mainly on Facebook) run by Brandon Stanton, who photographs a wide array of people throughout New York City (and, occasionally, the world). A few months or years ago, I read an article about how HONY censors its commenters (I don't remember what article this was, SRY), but I remember being fine with it. HONY wasn't censoring people with different opinions from him or anything like that; he was censoring assholes, people who were straight up being mean to the subjects of his photos (which he comments on here). HONY manually and secretly moderates his blog (secretly because you can't tell when he's deleted a comment, I believe). He uses centralized moderation techniques, be it him or a few assistants, but the community also filters itself, and it is always ex post, unless Facebook itself prevents someone from posting something. And I think this is the perfect way for this style of community to be moderated. It's moderated basically by one power, but the community also has the power to moderate. It's not necessary for us common folk to see the negative comments, because that's not what the experience/art is about. It's about these specific moments of shared human experiences, and people's reactions to them: genuine, honest reactions, not rude remarks.

In my search for the original article I read a few years ago, I stumbled upon this article which I think does a great job of explaining the community/controversy around HONY and comments. HONY is basically an art exhibit (on Facebook, yes), and HONY has the right to decide what he wants included in his piece. Hey User:Nataliewarther you're an artist/someone who has experience with the art world. What are your thoughts on this?

I feel like we could spend hours discussing HONY in our class, because it is such an expansive online community. It goes back to our whole discussion of "Is Facebook itself a community, or are there just communities within Facebook?" HONY has international success and millions of followers. We should have a class dedicated to talking about this, because it could be really interesting to discuss the ways he stays relevant (TBT to motivation/persuasion techniques). I also read an article about how HONY uses sentimentality in a really f***ed up, manipulative way, but I didn't agree with that article. Maybe some of you do?

On an unrelated-to-HONY note, another thought I had during the reading was this: if we, as a class, found some sort of online community in its early stages, be it a blog, an app, or something of the like, could we all join it and subtly shape its norms to be whatever we want? Norms are created as communities grow, so why not? If we were active enough, I think we could make an impact on how a community develops. This would be an interesting social experiment, I think.

Anyway, that's all for now. If I have time tomorrow or Friday morning, I'll add some more to this QIC once I finish the reading. Although maybe I shouldn't because this is already a novel. I need to be more concise, BUT I JUST HAVE SO MANY THOUGHTS AND FEELINGS AND I WISH I COULD BAKE A CAKE FILLED WITH RAINBOWS AND SMILES AND EVERYONE WOULD EAT IT AND BE HAPPY. But that's not the world we live in, folks, because this isn't Mean Girls (but how fetch would that be?) (I don't know what's going on right now, either. I'm leaving now. It's time for bed. I want to get up early tomorrow and go running, but I'm not sure if that will happen. I'll send an email to the list with an update) (I won't actually, but what a fun social breaching experiment that would be) (maybe I will--is it inappropriate to socially breach a class? It'll also be a test to see if any of my classmates read my QICs, because if they're confused it means they didn't read this. Although I can't blame them, this is long as can be)

TTFN! (tbt to that phrase, amirite?) -Kev.w.pri (talk) 03:22, 22 October 2015 (UTC)

Kev.w.pri, sounds like HONY would be a good subject for your community analysis! -Reagle (talk) 16:52, 23 October 2015 (UTC)

I really enjoyed the way Grimmelmann organized the first half of the article; it touched on several themes we have discussed from Kraut and Resnick, including how to encourage compliance, set norms, and encourage community and collaboration. However, Grimmelmann takes a slightly different approach in his explanation by highlighting how a few variations on different moderation tactics actually achieve various ends. Kraut and Resnick start with the desired outcome and then describe how to achieve it; Grimmelmann starts with moderation practice and then describes the outcomes.

Hayden.L, Astute! -Reagle (talk) 16:52, 23 October 2015 (UTC)

At the end of the article, in the case study about Wikipedia, Grimmelmann says that the site "has an extensive parallel architecture of talk pages devoted to conversations about Wikipedia and its norms" (p. 82). I then realized that Wikipedia is fairly self-aware in this respect; users are actively questioning standards and regulations in the community. In the other examples Grimmelmann uses, this characteristic is absent. It's true that MetaFilter and Reddit can moderate their own content, but to what extent can users moderate the moderation? Can community members change the way these things are enacted? Although grand changes would be difficult to make in any community, Wikipedia empowers users to create the space in which their content lives (which I think ties into Reagle's points on reification and Kelty's definition of a recursive public).

Therefore, are communities that encourage meta-moderation or discussion on moderation tactics more successful than those that do not? - Hayden.L (talk) 15:03, 22 October 2015 (UTC)


When I was reading Grimmelmann's "The Virtues of Moderation", I was immediately taken back to this week's episode of South Park. I am a huge South Park fan, not only for the comedy but also for its social commentary. You can watch any given South Park episode and know generally when it is from because of its ability to use relevant social mockery. This week's episode addresses the idea of moderation. In the episode, Cartman reports that he has been harassed over his social media platforms after posting a revealing picture of himself. The school then makes one of the other students, Butters, moderate all of Cartman's social media platforms so that Cartman exclusively sees positive content. There is a stigma that goes along with online moderation, where people would prefer an almost masked sense of acceptance rather than facing the critiques of the entire Internet. I believe this provides incredible social commentary because online moderation is a huge topic of debate. Many different websites use a variety of methods to moderate their communities, and this moderation has come with wavering levels of success. Wikipedia and Reddit are probably the largest well-moderated communities, and both achieve this in extremely different ways. Reddit employs actual people to act as moderators for the community and control what is and isn't seen on users' feeds; they were actually recently highly criticized over some of the decisions made by moderators. Wikipedia, meanwhile, promotes a collaborative and public culture that is able to self-moderate with great success. Other sites, like DraftKings, require a physical payment as a form of moderation to keep their customer base dedicated and loyal. Going off of my QIC from last week, is there some level of moderation that Yik Yak could adopt to help appeal to the customers who like the idea of the anonymous feed but don't want to deal with the few, usually vulgar, users that are currently in need of moderation?
Johnmdaigneault (talk) 13:59, 23 October 2015 (UTC)

Johnmdaigneault, Oh man, I got to watch that this weekend then! -Reagle (talk) 16:52, 23 October 2015 (UTC)

I appreciated Grimmelmann's detailed descriptions throughout the first two parts (Introduction and The Grammar of Moderation) of the article, since they provide the reader with the necessary tools (imagine puzzle pieces) to construct and understand the bigger idea of moderation (imagine the overall puzzle). As Hayden.L mentioned, Kraut & Resnick's (2011) ideas and design claims are echoed in Grimmelmann's article, further proving that collaboration, norm-setting, organization, community size, identity, among other themes, are prevalent throughout online communities. Nonetheless, after reading the case studies on successful communities such as Wikipedia, MetaFilter, and Reddit, I realized how different moderation, with different techniques, can still produce the same (efficient) result.

Andrea guerrerov, nice detail. -Reagle (talk) 16:52, 23 October 2015 (UTC)

Wikipedia's moderation is "human, automatic, transparent, opaque, ex ante, ex post, centralized and distributed" (Grimmelmann, 2015, p. 87), MetaFilter is human, ex post, centralized, emphasizing norm-setting, and loving, while Reddit's moderation is "ex post, distributed, human annotation, used as an input to centralize automatic filtration" (Grimmelmann, 2015, p. 94). Although many of the characteristics overlap, moderation is so diverse that it might be challenging for a new designer to know what he/she needs to build a successful community. In other words, if I wanted to design a successful online community, which verbs of moderation should I prioritize? Both Wikipedia and MetaFilter emphasize centralized moderation, but Reddit does not (even though Reddit is more similar to MetaFilter than to Wikipedia). If we had to come up with the basic structure of a moderated community, would it be easier to promote one that contradicts everything the Los Angeles Times tried to create, or to come up with a flexible structure of what an online community should include? Andrea guerrerov (talk) 14:24, 23 October 2015 (UTC)


"Everything in moderation, including moderation." This classic Wilde witticism pretty much sums up the suggested amount of moderation given in The Article we read for today. In his conclusion, Grimmelmann fairly notes that "no community is ever perfectly open or perfectly closed; moderation always takes place somewhere in between." Now, this isn't a surprising last note, especially considering how common certain phrases are such as happy medium, golden mean, and The Goldilocks Zone, but it is a slightly funny one because 'being moderate' is implicit in being a moderator, and so the definition seems to be self-supporting, at least when viewing the full spectrum of moderators across the web.

Something I also find interesting whenever reading about online communities, particularly in this article, is the vocabulary they use to describe the parts and functions within online communities. Now I'm just speculating based on my limited experience with online communities, but it seems to me that since the study is relatively nascent, it hasn't had the time (or necessity) to acquire much of its own unique jargon (other than for coding, programming, and other technical techno-stuff). Instead, I've noticed it borrows from a very wide variety of other disciplines such as infrastructure, architecture, taxonomy, and even moderation itself, including its many relevant sub-terms such as governance. Despite being totally digital and virtual and interconnected but intangible, online communities are still described in terms grounded in real, brick and mortar life, such as with infrastructure: being electronic in form doesn't change the fact that over-congestion can occur on a site, just as traffic can occur in a city; or that a crowd of people can become overly cacophonous, as with mobs or rabbles.

The similarities between online communities and live ones don't stop there. Democratic moderation as online self-governance reflects democracy in the political sphere. In fact, a community with no moderation is aptly called an anarchy, and reminds one of The Warriors; a community with hyper-strict, near-totalitarian moderation is dictatorial or communist: it saps the life from users/citizens and looks like 1984. In the first part of the second section, about the verbs of moderation, the author mentions that excluding, pricing, and organizing, while crucial actions in their own right, are essentially indirect means of achieving some form of self-reinforcing norms. Norm-setting is social conditioning that can arise organically or intentionally by a governing force, and just as in societies spanning the globe, once a norm is set, it becomes difficult to change; and the larger the group, the more ossified the norm. Just as with impoverished places around the globe: "If you've been ignoring all of the uncivility on your site for the past 2 years, it's going to be difficult to clean it up." The government, in this sense, are moderators. As are parents. Parents moderate a child's interaction with the world by organizing, excluding, and pricing (through, say, chores) different things until the child has had enough pertinent norms instilled/installed into it that it becomes autonomous. Such is the goal for nations as well as online communities: self-reinforced, efficient governance. But, as previously stated, "no community is ever perfectly open." Anussbaumer (talk) 15:57, 23 October 2015 (UTC)

Anussbaumer Good point, I think at the start, everything was "cyber" this or that, but we do now appreciate that, as stated in the syllabus online communities are "real." -Reagle (talk) 16:52, 23 October 2015 (UTC)

When reading this article, the online community of SoundCloud came straight to mind. Although the function of SoundCloud isn't exactly the same as websites like Wikipedia or Reddit, SoundCloud is an open community where people can share music they like and post music they create. The design and overall layout of the website is simple and focuses on intrinsically motivating users through the gratification of finding good music and sharing good music knowledge amongst peers. In terms of moderation, illegal music copying is a huge issue that plagues the music industry. Given the popularity of SoundCloud, it was a problem that had to be approached. In order to moderate illegal music use, if you are uploading material you don't own, you must always give credit to the owner, and it usually must be uploaded in a lower-quality format. If users are uploading cover songs, SoundCloud has music-analyzing systems that can detect whether or not you are the owner of the copyrighted song. Any violations result in immediate deletion, and warnings are sent. This system prevents congestion. In terms of being a well-moderated community, SoundCloud offers productivity, openness, and is free/low-cost: productivity in the sense that musicians can network and share their originals; openness in that every profile has a biography or identity with music they like or shared. Most profiles are free, but "pro" membership has a relatively low cost of $4 a month, all indications that SoundCloud is a well-moderated community. Ahn.cha (talk) 15:54, 23 October 2015 (UTC)

Ahn.cha, how many violations before a user is banned? -Reagle (talk) 16:52, 23 October 2015 (UTC)

I agree with User:Andrea guerrerov in that I really liked the way Grimmelmann introduced what he was going to talk about, as well as how he broke things up into sections to make them easier to understand. I especially appreciated how he would refer to different aspects as verbs, adverbs, or nouns to give us a better understanding of how they are all interrelated. While reading, naturally I started comparing all the different types of moderation and which work better in different settings.

Initially, I thought the best course of moderation would be to moderate the community manually instead of automatically. This way, instead of relying on algorithms, there are people deciding what is appropriate in a certain context, whereas algorithms do not have that capability, so everything they deem inappropriate would result in some type of action. Although I think this is the best-case scenario, I do realize it is entirely unrealistic for every online community to function this way, not only because there are so many contributions daily but also because of the cost required to hire that much staff for a singular purpose. Also, after reading about centralized and decentralized moderation, I thought decentralized would be the best course of action, since then there are multiple people helping to moderate the community. However, after reading User:Kev.w.pri's QIC entry, I can see how centralized moderation has its merits, especially if there is single ownership of the blog. One comment I did have for you, User:Kev.w.pri: if Brandon Stanton did advertise when he deleted comments, do you think there would be fewer asshole comments posted, since participants know they would get deleted? Just some food for thought. See you all in class! BrazilSean (talk) 15:59, 23 October 2015 (UTC)

BrazilSean, Humans are expensive; MetaFilter actually had to let go some of its moderators because of changes in Google's search engine. -Reagle (talk) 16:52, 23 October 2015 (UTC)

What I found particularly interesting about this week's reading was the section on abuses. Congestion, in particular, was one form of abuse that I wouldn't have realized belonged in this category. Congestion refers to the idea of overuse, which "makes it harder for any information to get through and can cause the infrastructure to stagger and fall" (Grimmelmann 53). I think that as users, most of us think of the Internet in abstract terms, and only realize that systems can become overwhelmed in extraordinary circumstances (like when a national event "breaks" Twitter temporarily). I didn't consider the idea of cacophony, which refers to overuse at the content level, as being a serious problem for communities as well. The idea of "search costs" is particularly interesting, because upon reflection, I realized that I had experienced this problem on many of the sites that I use. I find that sites traditionally marked by a fast-paced creation rate are more difficult to navigate than others. Both Tumblr and Pinterest encourage users to tag content with hashtags or categories upon posting, but because of the fast-paced nature of blogging (e.g., I see this, I put it on my blog/board, I see the next item, etc.) many posts go unidentified. I don't find that it bothers me when I'm generally browsing content, but this can be frustrating when trying to search for something specific. In an act of automated moderation, Pinterest has started to mandate that users include a description of a Pin before it is allowed on the site. However, users have abused this new system as well, with many Pinners simply putting a period where the description should go so that the system will register a description and allow them to post their content. In addition, I didn't realize that I have encountered extreme forms of abuse, which "involves an entire community uniting to share content in a way that harms the rest of society" (Grimmelmann 53).
I think that many people wouldn't consider themselves "abusive" when they post copyrighted music on applications such as Spotify or copyrighted films on sites like Putlocker. This brings up an interesting point: without moderation, an entire community may not only be abusive, but see its so-called abusive acts as bettering the greater good. Many users who post and engage with content operate on the idea that all content should be free and shareable. Without a moderator presence, the outside consequences of copyright infringement aren't made known to them. Perhaps those who feel strongly about online forms of abuse will one day (or maybe they already exist, I'm not sure) act as a sort of "troll" to spread the message of how copyright infringement can be destructive. Wikibicki (talk) 16:00, 23 October 2015 (UTC)

Wikibicki, In a later chapter, K&R talk about some of these things explicitly in terms of costs too. And period instead of a description? Love it! Do you have a URL to an example? -Reagle (talk) 16:52, 23 October 2015 (UTC)

The Virtues of Moderation article was very thorough in describing the many aspects of moderation, almost too many to digest at once. There are exclusion, pricing, organization, and norm-setting as the basic verbs of moderation, and there are different levels at which these moderation verbs are effective. An effective online community utilizes moderation by balancing pricing, organization, and norm-setting with transparency, ex ante and ex post moderation, and centrality of the moderator, which may employ either automatic or manual moderation. The examples of Wikipedia, Nupedia, the LA Times wikitorial, reddit, and MetaFilter show that there are many ways to organize moderation techniques into successful tools in various communities. This made me think about the ways social networking communities are moderated.

In past classes we have talked about Facebook and other social networks as possibly not being examples of online communities; however, I believe that Instagram is a social network that is an exception. The Instagram platform is intended to be a medium for creative individuals to share their aesthetic identity with their 'community', or followers. Granted, many Instagram users do not identify as creative individuals, and the overall population of the community has been diluted because of its popularity. However, there are still certain moderation techniques in place for hashtag and geotag feeds and for brands. For instance, if a brand starts a hashtag, it can moderate the feed by receiving notifications as new posts are tagged; it can then select whether a post will appear on the main feed for that hashtag, decide to blacklist a user that is abusing the tag or using it maliciously, and set up automated filtering systems to moderate posts in bulk for high-traffic tags.

Alexisvictoria93, good point, but I wonder if we can say that moderation doesn't necessarily imply a community? Brands, platforms, and networks need moderation, but we still might not say the people there are a community. -Reagle (talk) 16:52, 23 October 2015 (UTC)

It reminds me of the article that Dami shared last week about the no bra day campaign, and the way that Instagram's gags and bans acted as the first level of defense in filtering inappropriate content. I wonder if the breast cancer awareness campaign added other layers of moderation to the hashtag so as to create a feed that is in line with the mission of the hashtag. I'm going to keep looking into these tactics; I wonder if individual users have the ability to moderate the traffic on their profiles. Maybe there is a minimum community-size requirement in place to be able to moderate in such a way.

Additionally, how can brands moderate the content on their sites without operating with a bias?

Alexisvictoria93 (talk) 16:01, 23 October 2015 (UTC)

Oct 27 Tue - Governance and banning

After reading Reagle's chapter, I think it's appropriate yet again to apply the concepts to my experience with Quakerism. I was relieved after the first few paragraphs to see that Reagle was going to address the religion directly, because again, there are such a high number of parallels in terms of governance strategies. The institutions of Quakerism and Wikipedia both rely on consensus, which on paper sounds like a beautiful idea. In reality, achieving consensus is often like pulling teeth, and even when you think you've achieved it, you haven't. There's a reason my Dad always says "you can't make everyone happy," but the spirit of Quakerism revolves around trying desperately to do so.

I found myself reflecting a lot on the following sentence from Reagle's chapter: "While the progress and the outcome of consensus are rarely assured, the focus is on the potential benefits of deliberation rather than the speed of the decision." This reminds me of something in my high school called Discipline Council. Essentially, when a student broke the rules in any way that would merit suspension or expulsion, the student attended a discipline council hearing. These meetings consisted of three faculty members, the student, the student's advisor, and two or three student leaders. The student had to prepare and read a statement explaining what had happened. The council could then ask questions about the details of what had happened. The student would then leave so that the council could decide what the appropriate punishment was. Once the punishment was decided, it would be posted on the "Discipline Board," a bulletin board outside of the main office in the school. It is the single most dreaded room anyone could ever dream of being in, but it is hard to argue that decisions are unfair when they are given that much time and dedication.

But, just because it makes sense doesn't always mean it is easy. As someone who sat in on Discipline Council as a student leader, I can say that achieving that consensus is messy and exhausting. There will always be extrinsic motivators that sway your opinion, and there will always be someone who dominates the debate and makes others feel like they can't contribute. Even when consensus is achieved, there are almost always community members who are unhappy about the decision. Not only does this bring debate into the community, but it has brought lawsuits from parents. In a few cases, decisions have been revoked after an appeal process.

I suppose my point is that I'm very sympathetic towards the good people of Wikipedia and what their process of achieving consensus is like. While it sounds like a great concept, it takes patience, commitment, more patience, and a lot of energy. I can't imagine how frustrating the process must be with people constantly coming and going. Wikipedia keeps only a loose guideline of who can contribute to the decision-making process, which must become a nightmare at times. At least in my experience there was a small designated group of people who were elected by the community, not a fluctuating sea of people coming in and out. Kudos to Wikipedia for attempting to maintain organized order in the sea of madness that is the Internet.

--Nataliewarther (talk) 20:15, 26 October 2015 (UTC)


Considering that the openness of Wikipedia seems to be problematic in every aspect, I now wonder if it would ever be possible to limit people's participation in discussions and to establish periodic terms for those disputes. Openness becomes especially troublesome because it is "susceptible to trolling and forum shopping" (Reagle, 2010). This happens because there are those who abuse and fail to appreciate the given privilege called "openness" to participate and engage. If Wikipedia limited participation to, say, three comments within one discussion, it would not only be able to regulate trolling but also encourage people to contribute more thoughtful, richer comments, because through that regulation people would hopefully learn to appreciate and make the most of their participation.

As it is "occasionally appropriate to revisit a topic and to reevaluate alternatives," setting periodic terms for discussions may help avoid "unnecessary repeated discussions" and generate more constructive arguments and ideas (Reagle, 2010). Dr. Speed Wiley has a rule called the 24-72 Hour Policy, in which students must "wait more than 24 hours before inquiring about a grade, but no more than 72 hours." This rule exists to give students a night to sleep on the grade and to absorb the feedback while it's fresh in both the professor's and the student's minds. People often become blinded by their own thoughts in heated discussions, but giving them a break to reflect and reorganize their ideas may lead to "the potential benefits of deliberation" (Reagle, 2010).

These methods could potentially ease decision-making processes, as openness makes Wikipedia and its discussions strong yet so vulnerable. Are these regulations too restrictive and simply impossible to implement? User20159 (talk)

User20159, interesting idea, the question is what other (possibly detrimental) effects would your participation rule have? They've tried to do something like this with the WP:The three-revert rule, but it can be gamed too. -Reagle (talk) 10:21, 27 October 2015 (UTC)



In paragraph 53, Reagle mentions that "consensus presumes good faith and sometimes sustains it; voting can operate without good faith and sometimes depletes it altogether." As I read the chapter, I found myself debating whether or not consensus is the right decision-making approach for Wikipedia.

The idea that consensus promotes online collaboration and an egalitarian value is fascinating; it accurately reflects Wikipedia's culture of passionate discussions and debates. Nonetheless, I find it hard to believe consensus is so accepted in a large community populated by different-minded individuals where difficulties arise daily. I immediately thought voting would be a better option because it avoids the ongoing discussions with no solutions, groupthink, and the uncertainty over who decides that a dispute is no longer open. The General Assembly (United Nations), for example, works on a voting system where resolutions are passed by a two-thirds (0.67) vote. Similarly, when support reaches 70% (0.7) or higher in Wikipedia, it means a consensus has been reached. Clearly, the General Assembly example is a bit off—it takes months to create an agenda and vote on resolutions, but it is indeed true that their voting system is (mostly) efficient for 193 members.

My biggest issue with consensus in Wikipedia is its effectiveness in such a large and interdependent community. Reagle mentioned that as "W3C matured, it [was] characterized as overly slow because of growing bureaucracy and the difficulty of achieving consensus in a large group" (Reagle, 2010). I understand voting is considered "evil," unfair, and misleading in Wikipedia's culture, but will members keep that mentality if Wikipedia keeps growing? As more people join the community, facilitators won't be able to control the ongoing discussions and it will become extremely difficult to please its members. Perhaps a voting system is too far from Wikipedians' values, but polling should be used more often, such as in article development. If Wikipedia keeps growing, challenges for consensus will increase as well. What other methods, if not voting, could Wikipedia use that represent its values and effectively resolve conflicts? Andrea guerrerov (talk) 00:16, 27 October 2015 (UTC)

Andrea guerrerov, BTW: One of the UN's governance mechanisms has been recently criticized: Why Is Saudi Arabia Heading a UN Human Rights Council Panel?

According to the article, consensus needs a group of interested and dedicated members, a trusted facilitator of the conversation, and an accepted method of finding and sharing opinions. I was thinking about how different online communities might encourage consensus, whether they use the term or not. Which of the design choices we have talked about in class would encourage consensus?

Communities need a space to discuss and the tools to collect opinions from members. The codes of conduct for the forums would need to be open for editing based on consensus; a community that does not allow rules to be changed would have less of a need for consensus. Additionally, moderators would need to be mostly distributed and transparent. Community members must be able to speak their (relevant!) opinions freely without an administrator removing their posts. Can anyone think of any other or more specific ways Kraut's design claims play into consensus? -Hayden.L (talk) 13:30, 27 October 2015 (UTC)


As we know, reaching consensus can be like pulling teeth. It can be frustrating to get everyone on the same page, and not everyone is willing to comply with the same outcome. While I was reading the chapter, I was thinking about different ways to generate a successful discussion without having people flip-flopping to a different side when others expressed their ideas. The topic that Reagle speaks on is polling and voting. I found this section to be informative about the different definitions of these two terms. In paragraph 45 he mentions, "…people may confuse polling with voting" (Reagle, 2010). He goes on to describe the differences between the two and says, "polling should prompt and shape discussion, rather than terminate it" (Reagle, 2010). This was striking to me because I had never thought of polling in that light before. I always think of polling as a survey-esque type thing that is a quick way to get insight into what people are thinking, not as something that will shape a discussion. But a poll is still not a way to get to a consensus. It will help guide the conversation and allow people to debate the different sides of a conflict. I think having people poll on topics and leave a briefly stated opinion would make determining the outcome slightly easier.

In regards to the Wikipedia: Banning policy, I had never thought about people being banned from specific areas of a website, especially one like Wikipedia. If people are banned from an article, they could begin to troll other parts of the website and jeopardize others' work. The interaction ban confuses me slightly: people may be unable to converse with others over a topic? Wouldn't it be better to eliminate them from the topic? And what if both parties begin working on another topic, who gets to stay and who has to leave? Ltruk22 (talk) 14:05, 27 October 2015 (UTC)

Ltruk22, sometimes the animosity starting on one topic gets to the point that folks wikistalk. In those cases, you might be banned from interacting with someone no matter where you are. -Reagle (talk) 15:34, 27 October 2015 (UTC)



I absolutely loved the line that Professor Reagle quoted from the Wikipedia "Consensus" policy, which stated polling is "often more likely to be the start of a discussion than it is to be the end of one." I believe we can actually apply this line to much more than just the process by which Wikipedia comes to a consensus. It actually allows us to look into the governing method of basically any community. It can be anything from a group of friends polling, "Who's hungry?" and then starting the discussion on dinner, to how the American system appoints its leaders. I believe the U.S. has ingrained this principle into the majority of our electoral system (I am not going to get into the electoral college because that is a totally different topic and this QIC would be like 10 pages). When it comes to presidential candidates and "hot" political topics, the United States uses polling on ascending scales to determine what different parts of the country and the general public as a whole are concerned about and who appeals to them. We use the polls to determine the consensus of Americans and then start the election road and corresponding discussions. Tying this back to Wikipedia, it seems like they only employ polling at small or large scales, either among a few members or the entire community as a whole, so I wonder if offering a method of intermediary polling could lead to an improved method of dispute resolution. Also, are there any circumstances where polling a community up front could actually discourage a conversation? Johnmdaigneault (talk) 14:58, 27 October 2015 (UTC)


My basic idea about consensus very much echoes some from above, in that while I see how consensus is working for Wikipedia now, I wonder if it can work for the community as it gets larger, and also for communities that do not have as much commitment and as strong community guidelines as Wikipedia does. I found this echoed particularly in Reagle's chapter, paragraph 49, about how the CommunityMayNotScale, and how more members means the likelihood of disagreement grows as well. I am interested to learn more about how consensus applies to bans, especially the levels of bans, which I thought could sometimes cause problems for consensus. Firstly, I think a vote (and yes, I know VotingIsEvil) would get bans done more quickly, as sometimes punishments are not quick to be implemented (as seen in Riot Games' Tribunal FAQ). However, I am interested to see if there is ever difficulty in coming to a consensus about how long someone should be banned, and if the levels of banning in Wikipedia help to eliminate negative users. Are people more likely to keep up bad behavior after being punished at a "lower level," or does it work almost as a warning? Smfredd (talk) 15:48, 27 October 2015 (UTC)

Johnmdaigneault, interesting questions. There are lots of ways of making decisions, each with merits and demerits. There's dozens of voting systems as well. -Reagle (talk)

The biggest takeaway for me from the three readings that we had today is that a tribunal method may be the lesser of all evils in the VotingIsEvil conflict. In Professor Reagle's chapter we learn about why consensus is not optimal for large groups such as Wikipedia, where "the community may not scale." It's easy to see why it would be difficult for a community of more than 70,000 users to all agree on (and even participate in) every single case for review. Unlike small groups that can probably manage discussion among their members, larger communities with a variety of opinions are more difficult to keep productive. In addition, groupthink is likely to occur because of possibly negative consequences against those with minority opinions. It doesn't seem realistic to have every member of an online community voting or discussing every issue. This is why I really enjoyed reading about the tribunal that Riot Games employs for conflict resolution. The tribunal really mirrors the American legal system, in that each case gets a jury, where peers are expected to review the case according to the current laws (the Summoner's Code in this case) as well as all of the facts in the reports and either vote guilty or not guilty (voting to punish vs. voting to pardon). Having a small number of peers decide each case should lead to speedier decisions about case outcomes, as well as a level of comfort in knowing each member in good community standing is afforded the opportunity to monitor the behavior taking place within that community. Wikibicki (talk) 15:49, 27 October 2015 (UTC)



Reading Reagle's chapter about consensus was enlightening because I have never really thought about the benefits or limitations of coming to a consensus. I am sure there have been times in my life when a consensus was reached, but I feel like people do not refer to it as such. More commonly, people worry about what the majority wants and not what the overall consensus is, because appeasing the majority is much easier, though still usually not easy. I do like the idea of coming to a consensus as a group, but I also understand that it is very difficult to do this with a small group and next to impossible in a larger group. I agree with Nataliewarther and her father that it is close to impossible to please everybody, and this is probably the reason why Quaker ideals were left behind and a majority vote became the method for making decisions within a democratic society.

Although it is not possible in the society we live in, or within any of the larger online communities we are part of, I really like the utopian ideal of being able to talk out the problem and come to a better outcome than previously thought of. In this chapter Reagle pointed out that "if consensus is achieved, the legitimacy of the decision will likely exceed that of a coin toss or vote," which I am taking to mean that when a discussion occurs and a new outcome or plan is agreed upon, it is better and more thorough than just a majority vote. I am going to try to take this idea of consensus with me into future discussions, because after unpacking what consensus really means, I feel as though it could help create more robust and elaborate plans for societies.

Natawhee7 (talk) 15:50, 27 October 2015 (UTC)


I know this QIC is late so it won't count, but I wanted to post something anyway, mostly to help my own memory/discussion in class. I'm still reading the reading, because I've officially become one of those people who say things like "there just aren't enough hours in the day" and "I'm dead inside", but it's cool. But what I did want to ask was this: Can Dunbar's number be applied to groups making decisions? Is there a better or ideal number for a governing body of equal people, like the Quakers, or something of that nature? I realize the reading mentions Dunbar's number, but is it really the perfect number still when it comes to debates and arguments and consensus? I don't know. I'm not Dunbar.

I'm also curious about the idea of consensus being reached because of the people who happen to show up to the debate. I never really thought about this online--whenever there is a majority decision made, it's just because one group had more people show up to defend a certain topic/view. The internet is MASSIVE, and trying to get everyone on one side is unreasonable. But what should be done in cases like this, where it's basically just a luck of the draw kind of thing? Hm.

(Is this my shortest QIC ever?) -Kev.w.pri (talk) 16:34, 27 October 2015 (UTC)


Whenever I read about consensus and governance and making decisions for a whole group of people and all that good stuff, I can't help but think about the literal government. I know that, for instance, the US House of Representatives has 435 members, whereas the Senate has 100. Considering the Senate has a larger impact on decision making, and that the House essentially just votes on legislation, I'd say that the Senate at least roughly follows Dunbar's number. However, how much Congress gets done is up for debate, so whether or not Dunbar's number proves efficacious here is unclear; though I would say that the US probably didn't pick 100 arbitrarily, especially considering other governing bodies such as the British Parliament have similar numbers. Whether it's 50 or 250, I think that a smaller number of people making decisions for a larger one is the most efficient way to do so because, as the theory states, you can only maintain efficient discourse with a limited group of individuals. So let's say the whole US population represents Reddit, for example: then I think Congress would equate to the administrators and state governments would equate to subreddit moderators, to put it in an online context.
As for what should be done in huge groups like the internet for making decisions, well, I don't know. But, like the world's population, there is no umbrella governance of the entire internet, so as of now there would never be the need, or even the possibility, of swaying all online users toward one thing (unless it's deciding whether a dress is blue and black or gold and white). I also think a lot of what goes into 'democratic decision making' boils down to groupthink, a herd/mob mentality type of thing, where people just do and think what everyone else around them does and thinks. There's an interesting theory called the 11% rule, which states that if 11% of people in a given group adopt a certain thought or behavior, the rest will follow; it's basically critical mass in sociology. Anyway, I think this relates to consensus and decision making in governing groups as much as larger ones; it also explains why oftentimes juries and the like will vote on trials unanimously. I know I didn't really explicitly answer any of your questions, but there it is.

Anussbaumer (talk) 17:14, 27 October 2015 (UTC)

Oct 30 Fri - Newcomer gateways

In regards to Problem 1: Recruiting Newcomers, I have a few comments. The first relates to the very popular "invite your friends" feature of recruiting new members that games like FarmVille and Candy Crush use. I just find it funny, because I feel like this type of sharing method might not be the most effective, really. I think they make games blow up too fast and they aren't prepared to deal with the popularity. Or maybe they are. But it seems that, because of the nature of exponential growth through sharing things on Facebook, these games go from lil' unknown games, to HUGE BEHEMOTHS THAT ARE EVERYWHERE (like, to the point where everyone on your friends list feels the need to post some snarky status about all the invites they're getting) in the course of, say, a week. I feel like this can't be at all effective. Granted, these games still have loads of devoted fans playing them, I'm sure (lord knows I genuinely believe I have a severe addiction to Candy Crush/Soda Crush), but I feel like they sort of become niche games almost. But maybe that's what they wanted in the first place. Or maybe they don't actually care, as long as they can come up with ways to extend the popularity of the game and make more and more money.

I also couldn't stop thinking about Pinterest. Back in the days of high school (I think), Pinterest used to be an invite-only social networking site. I would really love to find out why they did this in the first place, because now they're everywhere and widely open to the public. Was it to create interest through scarcity? Was it simply to distinguish the platform from other social media sites that were open to everyone, like common whores? It made me really want to use it. Maybe because I had to wait for it? And because it meant I was part of some elite group?

Huh, I just got to design claim 12 - Forcing potential new members to pay or wait makes people who value the community more likely to join and weeds out undesirables. Good on you, Pinterest.

I read a little further into the next section, but stopped myself. I'm very excited to discuss newcomer initiations and the idea of basically hazing people, whether it be online or in real life. I think it's all sorts of messed up. Sigh.

-Kev.w.pri (talk) 00:21, 29 October 2015 (UTC)


In Chapter 5: The Challenges of Dealing With Newcomers Kraut and Resnick present ways in which communities can be designed to recruit newcomers through interpersonal recruiting, impersonal advertising, self-selection, and screening. Although I don't deny the power of heuristic processing, I've always been skeptical about celebrity endorsement as an advertising strategy or a recruitment aid. A couple years ago when I was looking for a new hair shampoo for my damaged thin hair, I decided to give this brand a try over 935324 other brands only because one of my favorite celebrities advertised this particular product. Interestingly, despite the fact that he was bald at the time, I still purchased the product because his "attractiveness and perceived trustworthiness [did] spill over to the product" (Kraut & Resnick, 2011, p. 190). And guess what? The shampoo did absolutely nothing. In fact, I think it made my hair worse than before. I felt not only disappointed but also angry and deceived. Ever since the incident, I don't buy anything from that company, and I've been telling my friends and family not to fall for their false, misleading advertising.

Now you got me wondering what product and celeb!? -Reagle (talk) 12:18, 30 October 2015 (UTC)

I understand that celebrity endorsements could potentially implant a "substantially positive impact" (Kraut & Resnick, 2011, p. 191), but what if the community fails to satisfy newcomers' needs and expectations? Wouldn't that backfire and lose more potential newcomers, especially when those celebrity endorsements fail? Using attractiveness and likeability is a clever, effective strategy, but it could also be a shortcut to the downfall of the community. User20159 (talk)

User20159, interesting, can we think of any backfiring fails? -Reagle (talk) 12:18, 30 October 2015 (UTC)

Kraut and Resnick (2011) offer a variety of challenges communities may face when attracting newcomers in Chapter 5. In the first part of the chapter, the authors state that the problems are recruiting newcomers and selecting the right newcomers for a community (p. 194). As a former Rush Chair for my fraternity, I had to consider both of these issues when organizing our rush process last spring. It's actually interesting to me to see several parallels between that process and Kraut and Resnick's design claims, specifically because the authors' focus is on online communities and the rush process is for the most part offline. That got me thinking, not necessarily about newcomers, but about the nature of communities and online communities: besides the obvious delineator of one being on the internet, because of the nature of life in 2015, online and offline communities are bleeding into each other. My fraternity is a community that is based in real life; however, we are all in contact with each other in a group on Facebook, and through email, messaging, texting, etc.; we have countless more options to communicate online with each other than we do offline. Rather than the online and offline communities being distinct, they are both part of a larger hybrid community. Very often, brothers will be in the same chapter room as each other, but still using various media to communicate with other people in the room rather than face to face. The community would have to severely readjust if either the online or the offline community ceased to be.

Torma616, I think you are right.

Tying this back to newcomers, or pledges in this case, this plays into what we'll be discussing next class, Problem 3: "Keeping Newcomers Around" (p. 205). Because so much of our life is lived online today, either actively or passively, one of the benefits of this type of community to newcomers is a sense of consistency. My relationship and interaction with my brothers can very quickly and easily be transferred from medium to medium. If I were to use media switching on them, they would be annoyed... but if they played along with it there would be very little latency in terms of our capacity to communicate with each other. When everybody is always only a few taps away, it makes staying in communication easier and more constant. The online interaction only furthers a sense of community in real life. - Torma616 (talk) 08:40, 30 October 2015 (UTC)


The design claims in today's chapter reminded me of some relevant things I'm writing about for my final paper in my Youth Communications and Technology class. I'm exploring how the internet and user-generated content have changed how we receive and understand medical information, and their potential negative effect on new parents. Unfortunately, Web 2.0 has created a place where false and unsupported medical theories are easily spread and repeated, as is the case with the anti-vaccination movement. I'm fascinated by this topic and have learned a lot about what attracts people to seek information on the web, what factors they take into account when they evaluate a source's trustworthiness, and what the long-term societal effects might be. I see a lot of crossover in my research with the design claims about how to recruit new members.

I'll talk specifically about design claim 8: "Recruiting materials that present attractive surface features and endorsements by celebrities attract people who are casually assessing communities" (Kraut, p. 191). This reminds me of how the anti-vaccination movement gained many followers by using Jenny McCarthy and Jim Carrey as the face of their movement. When a celebrity speaks out as an advocate for an issue, they are likely to have extreme persuasive ability due to their pre-existing following and their relatability to their audience. The anti-vaccine movement used the Internet to gain users in ways very similar to the online communities we are reading about. They used social media, YouTube videos of personal testimonials from ideologically similar people, forums, etc. Recruiting members into the online anti-vaccination forums and conversations contributed to a widely misinformed public and to dangerous amounts of distrust in America's medical system. I found it interesting that while recruiting newcomers by using celebrities, recruiting from social networks, and making it easy to share content is a good thing if we are talking about gaming communities, it is a potentially dangerous thing when we look at the long-term harm certain online communities can do to our society.

--Nataliewarther (talk) 12:57, 30 October 2015 (UTC)


I found this chapter in Kraut and Resnick to be interesting because I have been talking to my roommate a lot about an online music company that he and his friend are heading up. They have been running into issues not only getting newcomers but retaining them. They have pulled in a few thousand members, but those members are not very active, and they aren't sure if these are members who will remain or even produce engaging content. Essentially, what they are trying to create is a community of musicians who collaborate with each other to create new music from current music that has been broken up by track. So Justin Bieber's new song could be stripped of everything but the bass line, and then people can take the brass instruments out of a Robin Thicke song. This gives people who don't have DJ experience or equipment the opportunity to create music. The problem they are running into is that they don't think the newcomers are of the quality they are looking for, the kind who will grow with the community and be influential.

I think the design claim that would work best in their current situation is Design Claim 4. All the guys working on the site should be successful and respected members of the community and should be using their social networks outside of this new one to shape how the community is seen. However, this made me think: each of the boys has a couple thousand friends, and if everyone signed up it might dilute the pool of the community and continue the pattern of pulling in low-quality users who have lower potential to become the influential members the community needs for structure. So what would be a way to avoid that while pulling in more new members?

Alexisvictoria93 (talk) 14:36, 30 October 2015 (UTC)


The claims covered by Kraut and Resnick made me remember the claims we read previously about maintaining user motivation and commitment, particularly in terms of relationships within the community. For example, claim 5 uses not only the most prominent members but also their personal connections to recruit users. Going back to the idea that you are more likely to stick around in a community if you have relationships or similar interests, this covers the relationships part. The other design claim that stuck out to me most was claim 11, which states that giving accurate information upfront to new members will let them preview the user experience, and therefore have better experiences and "fit" in a community. What originally came to mind after reading this, and from seeing a pretty good example of it on the philosophy/newcomers page for Debian, is what happens when these might change. For example, online communities are almost inevitably bound to alter somewhat, especially with growing numbers of users and new technology, which means that sometimes these new guidelines might not fit for an experienced user. It makes me wonder whether experienced users who eventually leave communities due to differences leave on good terms, but also how a community can change while still keeping its experienced users, since I think an argument can be made for the importance of experienced users in addition to new users. Smfredd (talk) 15:31, 30 October 2015 (UTC)

Smfredd, we'll touch on this a bit on our last day. -Reagle (talk) 16:36, 30 October 2015 (UTC)

For my QIC, I want to look into how this idea of newcomers applies to the communities of paid fantasy football sites like DraftKings and FanDuel, since I wrote about those communities for my first paper. I found the idea of recruitment very interesting because in most circumstances adding a larger population benefits the community, while in fantasy football, the larger population actually hurts the community but benefits the administrators. Statistically speaking, the more members on a fantasy football site, the less likely each member is to win. In addition, fantasy football sites also ensure retention by keeping the money invested within the site. They don't just automatically deposit it into your bank account; they leave the winnings in your account on their site so you are more likely to reinvest before withdrawing money. Next, I noticed within the past few years that Internet websites have begun to use more traditional advertising. I believe this ties into Kraut & Resnick's Design Claim #2 (p. 184), which says word-of-mouth recruiting is better than impersonal advertising. I believe that advertisements online began to seem somewhat deceptive and hard to distinguish amid the constant bombardment online. Employing more traditional methods creates more concrete feelings toward the companies and ensures more word-of-mouth recruiting. This is definitely the case with online paid fantasy football sites like DraftKings and FanDuel, which have combined to run 30,000 commercials since the start of the NFL preseason. Finally, these sites also use celebrities to promote their large pool leagues. For example, when on ESPN I saw an advertisement for a $400,000 league against famous NBA player Dwyane Wade. This falls in line with Kraut & Resnick's Design Claim #5 (p. 188), which says to identify the most influential members in the community. When discussing fantasy sports, the most influential members are definitely the sports players themselves.
Overall, I think fantasy sports employ very successful methods of recruiting newcomers, but these newcomers only seem to benefit the company itself.

-Johnmdaigneault (talk) 15:30, 30 October 2015 (UTC)

Johnmdaigneault, how did you first come upon these sites? -Reagle (talk) 16:36, 30 October 2015 (UTC)

Kraut and Resnick's design claims in Chapter 5 seem to be interrelated with the idea of interpersonal relationships. The first few design claims mention how useful interpersonal relationships are for getting people on board with a new program or gaining traction for a company. We trust the people closest to us and can be turned off at times by mass advertising. Thinking back to when people were joining different social media platforms, the only way I was going to join was if one of my friends did too and told me all about it. The trusting relationship makes you want to be on the same platforms as other people so you can continue your relationship while online. An example of a platform I used was GroupMe. I started using this while playing field hockey because it was the best way to communicate with everyone on the team, including international players. It all started with one person using it with their friends, then our whole team using it, and then other teams starting to use it as well. Even though this is a chat medium, the cycle of who was using it continued, and one relationship with a person made other people willing to use it.

Although we are talking about gaining newcomers and enticing people to join networks or web groups, it makes me wonder about retention. A member in the group will feel obligated to stay depending on how much they have contributed to the group, but I wonder how often there is a huge decline and we see sites or games shut down because people just stop sharing and lose interest. Ltruk22 (talk) 15:31, 30 October 2015 (UTC)

Ltruk22, yes, this is like Smfredd's question. This is typically referred to as "exit" and will talk a bit about it. -Reagle (talk) 16:36, 30 October 2015 (UTC)

This chapter in Kraut and Resnick was all about the recruitment of new members. In the beginning, it focuses on recruiting members through word of mouth and through the social networks of members already active in the community, especially since this form of recruitment is more effective than generic/impersonal advertisements. Ideally one would want a member of influence in the community recruiting new members because other members (who are being influenced by him/her) will follow their lead by recruiting members as well. The chapter then went on to describe different methods of recruiting new members, most of which revolved around the principles of RSACLC (Reciprocity, Scarcity, Authority, Consistency, Liking, and Consensus). The book encouraged ads to present "endorsements by credible sources" (p. 191) (an example of Authority being used), or "present attractive surface features and endorsements by celebrities" (p. 191) (an example of Liking), and even suggested "Emphasizing the number of people already participating in the community" (p. 192) (an example of Consensus or Social Proof). The final section of this chapter focused on how to weed out members who would not be helpful members of the community. Most of these methods required new members to perform a task (p. 200), complete a diagnostic test (p. 202), provide external credentials (p. 203), provide referrals from current members (p. 204), or even pay a fee to join the community (p. 200). All of these would help determine who actually wants to be involved in the community.

When I first began reading the section and got through the first couple of design claims, that was the first time I realized why social media sites such as Instagram and Tumblr have options to post to other forums such as Facebook and Twitter. It not only increases the visibility of the community but also increases word-of-mouth advertising, with the added benefit that those who see it are in the same social circle as other members. This helps the other community in two ways instead of just one.

One question I wondered while reading is how much of a difference in participation there is between members who have been personally recruited and those who sought it out via impersonal/generic ads. Meaning, do those who were recruited feel a higher obligation to participate in the community because they were chosen to be part of the community instead of requesting to be? BrazilSean (talk) 15:47, 30 October 2015 (UTC)

I don't know how or why my first paragraph is formatted the way it is.. BrazilSean (talk) 15:49, 30 October 2015 (UTC)
BrazilSean, it began with a space, which I removed. Otherwise, it would be an interesting experiment to test what you ask about. -Reagle (talk) 16:36, 30 October 2015 (UTC)

One of the most interesting takeaways from this week's reading was this idea of celebrity endorsements and their potential to aid recruitment in terms of heuristic processing. I would be interested in seeing the effects of celebrity endorsements that are paid for versus celebrity endorsements that are genuine, and whether there is a statistically significant difference in how effective an enticing factor each is for newcomers. For example, the reading uses the example of William Shatner and the Blizzard advertisements that said "I'm William Shatner, and I'm a Shaman." As a potential community member, I am not likely to believe that if I join the World of Warcraft community, I'll run into William Shatner during my travels. There's not likely to be engagement between the celebrity and the newcomer. However, many celebrities have joined Pinterest or Tumblr of their own volition (most likely for career-based motives, but nonetheless independent of company incentives) and this may be more enticing to users. Not only is there more engagement between the celebrities and the potential users, but there's also the idea that the celebrity endorsements may be more genuine. I've also had experience with Design Claim 12 of Kraut's chapter, which states that forcing potential new members to pay or wait makes people who value the community more likely to join and weeds out undesirables. Over the summer I signed up to be a part of the Ipsy online community. Ipsy is a community where users pay to have small glam bags shipped to them, after which the users review those products for additional rewards. I had to both pay and wait to be accepted into the community, a way to manage the volume of the community or perhaps elicit ideas of exclusivity. The wait time is typically a month, and you are given the option to opt out before you receive the bag. I may have chosen this option if I hadn't heard about the company through word-of-mouth recruiting, but I decided to stick around. 
Wikibicki (talk) 16:05, 30 October 2015 (UTC)

"Joanne - tell @DelTaco I will accept $12,000 to p lug their shitty food. Thanks, Rainn" -- Rainn Wilson and Del Taco: major Twitter mistake or brilliant marketing scheme?



When dealing with newcomers, there are five problems that need to be solved: recruitment, selection, retention, socialization, and protection. With recruitment, the community first needs to advertise itself to ensure a supply of new members for replenishment and security for growth. The communities are tasked with selecting potential members who fit their group. Selection occurs through screening, where the community selects the members, or through self-selection, where potential members decide whether or not it is a good fit. After members are selected, it is important to retain the new members. Theory and experience suggest that newcomer ties are fragile; the community must engage to keep valuable members around until they can develop stronger ties to the community. They create these stronger bonds by socializing, teaching new members how to adapt and adjust to the "appropriate mannerisms" of the group. Finally, the community must protect itself from those who have little knowledge of or motivation to follow community norms.

Ahn.cha, excellent summary! -Reagle (talk) 16:36, 30 October 2015 (UTC)

In terms of the Debian reading, I have no clue what Debian is. Either they have done a poor job of marketing, or it is advertised only through a specific niche. However, upon reviewing their "New Members Corner," it is easy to see that they select members through a self-selection process. Those who believe they are a good fit can apply to become a developer and contribute to their site, which I think is a great way to increase contribution since the developers feel like they are a "good fit." The most important takeaway from this page is the description of what being a member is like. They post what Debian provides for a non-developer, and what you can gain from being a developer. With being a Debian member comes a lot of responsibility and power, so Debian states that the New Member process takes time because they require trust and commitment. By placing an emphasis on the process of becoming a member, more meaning is added to the moment when a new member is confirmed.

This entire process reminds me of college applications. We apply to the schools that are best advertised to us; for me, Northeastern was advertised as one of the best campuses in America, something I was very interested in. Through a self-selection and screening process, we choose and are chosen by the school we want to attend based on factors like location, major, and opportunity. And as everyone knows, transferring out of a school is a common occurrence. It is up to the institution to retain its members for their academic prowess and tuition money. They do so by offering organizations and clubs that allow students to interact with one another and develop strong bonds and norms with the community. In terms of protection, there is NuPD, which regulates those who are unaware of campus policies. So even an institution that isn't online still follows the same principles Kraut explains for new members. Ahn.cha (talk) 16:23, 30 October 2015 (UTC)


Nov 03 Tue - Newcomer hazing

According to Leon Festinger's cognitive dissonance theory, when conflicting ideas emerge to be psychologically inconsistent, humans are motivated to "change one or both to make them consonant" (Kraut and Resnick, 2011, p. 205) in order to maintain or obtain their mental comfort. Applying this theory, Eliot Aronson properly demonstrates that individuals who "undergo a severe initiation to attain membership in a group increase their liking for the group" (1959, p. 180). In his experiment, participants who were requested to perform a challenging task perceived the initiation as "too painful for them to deny," and therefore, they chose to reduce their dissonance "by overestimating the attractiveness of the group" (Aronson, 1959, p. 180).

Interestingly, though, I have always thought of this behavior rather as self-evaluation or a challenge to myself. For instance, if I participated in Aronson's experiment, my liking for the group would increase not because the task was too painful for me to deny but because I would feel accomplished and qualified to become a member of the group by successfully completing the requirement. I would see "reading aloud some sexually oriented material" (Aronson, 1959, p. 178) as a challenge to myself instead. If I found this task too difficult, then I would simply believe that I'm not good enough or not suitable for this particular group. Now that I carefully think about it, this is how my mental mindset seems to work a lot of the time, and maybe this is why I get discouraged so easily but also encouraged easily at the same time. Is this also a part of cognitive dissonance, or is this a completely different psychological mechanism? How is cognitive dissonance also different from self-justification?

User20159, interesting counter-theory. You may be interpreting the same mechanism under a different name/motive, or perhaps it is a distinct mechanism. I think the article "Cognitive Dissonance: How Bullies Rationalize Their Behavior Toward Their Victims" indicates C.D. is distinct.

I also think this can widely vary depending on how manageable or difficult tasks appear to people. Some people may find reading aloud some sexually oriented materials easy, but for some individuals it could be extremely challenging and even shameful, thus "driving away potentially valuable contributors" (Kraut and Resnick, 2011, p. 206). When assigning such entry tasks, how do or can communities design them so that tasks are adequately challenging and manageable for newcomers? How do they choose the level of easiness or difficulty? User20159 (talk)

Good question, please raise it in class. -Reagle (talk) 13:58, 3 November 2015 (UTC)

Disclaimer: I'm writing this on my phone on a plane so I apologize for my disconnected thoughts/writing style. My ears keep popping and I'm getting sick and having an all around A+ time *100 emoji x3*

This is the section of this book that I've been so excited to get to! Oh happy day! Let's get to it.

How long or how severe of an initiation process is too long? Is there some sort of equation/methodology for determining this, or is it simply based on peoples' different opinions of themselves and the community they're trying to join? For example, is there a general rule of thumb about how hard a tutorial phase of a video game should be, how many minutes/hours it should take a user to complete (and if they don't, then that's good because that means they weren't right for the game), or anything like that?

Kev.w.pri, this is like User20159's question. I suspect the answer is that it emerges rather than being designed. Are extant members talking about not having enough newcomers, or being overwhelmed by them? -Reagle (talk) 13:58, 3 November 2015 (UTC)

Moving quickly to real life communities, it constantly confuses me that those Greek life communities that harshly haze newcomers still think this is a good idea and actively practice this bullying. They don't do it so the pledges can bond over it and become a tighter knit community--they do it because they're vindictive and think "I had to go through this, so now that I'm in power I'm going to make someone else go through this". The sentence from the chapter in Kraut and Resnick (2011) that states "Initial positive interactions help retain new members" (p. 207), in my opinion, should really be applied to every community. If someone thinks that being an asshole to me and my peers is going to make me appreciate their group more, then they clearly have got some learning to do.

I suspect what is happening as well is some degree of cohort parochialization. You bond with your cohort/pledges more through that process. -Reagle (talk) 13:58, 3 November 2015 (UTC)

After reading this section on welcoming newcomers, Instagram's whole thing with "Your friend abc just joined Instagram" or "Your friend xyz just posted their first photo" makes a lot of sense.

What I gather from this chapter is basically just the golden rule of "don't be a dick"/"do unto others as you would have done unto yourself". I'm not saying that newcomers to communities deserve gold medals and pats on the back for simply existing, but if they are treated with respect, given time and opportunities to learn, and have a safe space available to them where they will not damage the existing community, then things should be fine! Better than fine--the community should thrive! Positivity is always the way to go, imo!!!

See y'all (I was in Georgia this weekend) tomorrow

-Kev.w.pri (talk) 00:12, 3 November 2015 (UTC)


To start off this post, I do have to reveal that I was not looking forward to the section entitled "Newcomer Hazing." As a member of a sorority, one of the most devastating stereotypes of FSL is that we haze. The word "hazing" has quickly become my least favorite word. For me, joining a sisterhood has been nothing but love, acceptance, and support, and it breaks my heart that some women and men do not experience the same from the organizations that they join. As a psychology minor, I know that the cognitive dissonance theory is often used to justify why people allow themselves to be hazed. But as a member of Greek life I can tell you that just like the members of an online community, if someone hazes you there's always the option to leave rather than stick around and justify it later, especially if the "entry barriers" are unnecessarily sophomoric (like replacing "First Post" with "boobies") or harmful. Now that my semi-rant is out of the way…I'd like to talk about why Design Claim 19 was particularly salient to me. In this claim, Kraut discusses the ways in which sandboxes give newcomers a safe space to learn how to use Wikipedia tools. Rather than having flawed, first-round articles appear on/crowd Wikipedia's main space, newcomers can actually use the Wikipedia interface to work on and preview their article. I experienced a similar type of set up with the site edublogs.com, which I was required to use for a high school journalism course. Both Wikipedia and Edublogs recognize that it's important for users to learn how to do more than just draft an article and paste it into the workspace. Newcomers must learn how to link within pages and externally, add photos and adjust their size and placement, as well as other features unique to that site. 
By allowing newcomers to create, learn, and edit within the community itself, I feel that newcomers feel more engaged with the community from the start, and are more confident when they can perfect their article using a preview function before posting publicly. Wikibicki (talk) 02:47, 3 November 2015 (UTC)


Entering a new group can always be difficult and scary because you don't know how you fit in, and you may not know how the group will respond to you. Kraut and Resnick (2011) talk about people joining different online communities and how some members may have an issue with holding back their emotions and being nice to newcomers. But as we read in Aronson, newcomers don't mind being treated in a certain way to become part of a group. In the World of Warcraft example, groups almost hold a try-out for a period of time before newcomers are fully accepted into the group. It might not make sense for them to admit someone who isn't willing to add anything beneficial to the group. Another of Kraut's design claims is illustrated by the example of the swing dancing group, where members write more personal messages to people. This makes people more willing to disclose information to each other. People always feel more comfortable disclosing information to a person who has disclosed a similar amount of information. It also makes them feel more a part of the group.

When I was a freshman playing field hockey we had a senior reach out to us to talk and tell us about themselves. They wanted us to feel comfortable when we came into preseason and it helped unify the team before we went through all the running tests and tough days of practice. Having the seniors encourage you and want to get to know you makes a difference in your play and willingness to be part of the team. I have been on other teams where seniors won't talk to freshman and it can really hinder the group climate and make people miserable being there.

My only question is why Wikipedia uses a type of template rather than trying to personalize a message to someone and does that influence the retention of new members? Ltruk22 (talk) 02:57, 3 November 2015 (UTC)

Ltruk22, this is something WPians often talk about; I think it's a matter of effort/time and organizational bureaucratization. -Reagle (talk) 13:58, 3 November 2015 (UTC)

I see the initiation period of joining a group as going a few possible ways. Given the claims made by Kraut and Resnick about how entry barriers for newcomers make those who complete them more committed to the group, my thought went to those who do not complete them. Instead of adopting an "I'm not good enough" view of the group, do some instead come out with a view that the group is simply not good enough for them, or not worthy of their time and effort? I would wonder if this has an impact on people who might view these communities negatively, if any at all.

Smfredd, interesting. Cognitive dissonance theory would predict this I think -Reagle (talk) 13:58, 3 November 2015 (UTC)

Secondly, since newcomers are sometimes put through rigorous, or at least time-consuming, tasks to prove commitment, I also wonder if this has anything to do with positive or negative interactions with more long-term members. Would this then decrease the number of bad interactions between new and old members (RTFM, for example, among them) along with banning and general misconduct? I feel like if all of these commitment-proving tasks are done, there will be less negative interaction because the newcomer has more help and encouragement to learn (e.g., Wikipedia's sandbox) and an early start on relationships with other community members. Smfredd (talk) 04:14, 3 November 2015 (UTC)


The textbook describes the value in having a 'welcoming committee' and how "formal, sequential, and collective socialization tactics" (p. 215) create more committed members. Based on my own experience, I have a different perspective. When I joined Wikipedia, someone wrote on my talk page but the message seemed insincere. I knew that the welcome message was a template, and that the admin who said hello received some e-mail notification that a new member joined Wikipedia. I think the message worked in the opposite way and actually made me feel disconnected from the larger community of Wikipedians. However, I see that the call to action in the letter was for me to comment and participate in a specific new user discussion board. Maybe if I had actually written something there, I would have found authentic interaction with old and new members.

The second piece, about formal socialization tactics, confuses me because a formal introduction to the community would make it easier to join and begin contributing. Based on Aronson and Mills, I therefore would have been a less dedicated member to Wikipedia. I like figuring out how to perform tasks on Wikipedia, and see it as a small victory every time I make an edit. Since I had to teach myself how to do these things (or inquire in class about them), it is similar to an initiation in its own way; it was difficult to learn but now I am dedicated to keep learning and working on Wikipedia.

-Hayden.L (talk) 05:00, 3 November 2015 (UTC)


Honestly, I've been looking forward to this discussion. The first question I get asked when I tell a person I'm in a fraternity usually is along the lines of "what is the hazing like?" Such is the case when Greek Life has a stereotype of hazing (and to be fair, there is some truth to most stereotypes). What I'd like to address is Aronson and Mills, who say, regarding Festinger's Theory of Cognitive Dissonance, that a person who has undergone hazing to enter a group will rationalize away the negatives about the hazing/the group - either convincing oneself that the initiation wasn't bad or exaggerating the positives of the group. I don't necessarily agree with this. As a former pledge, I can honestly (and maybe luckily) say that in my experience, I never once felt like I was being hazed. Yes, I had a pledge process, and no, I wouldn't say that everything I had to do is my idea of a fun, relaxing night, but I never once felt like I was being hazed. I think that to write off that statement by saying that I'm rationalizing the experience by downplaying the negatives or playing up the positives is bullshit. It's like when one person says "I'm not an addict," and then another person says "That's exactly what an addict would say!" You know who else would claim to not be an addict? A non-addict. For me, pledging is still the greatest time that I never want to repeat. To me, both then and now, the pledge process I went through felt more like Kraut and Resnick's Design Claim 23, where old-timers provided us newcomers with formal mentorship, rather than needless hazing at any point.

I look forward to discussing my experience in a similar vague obscurity in class -Torma616 (talk) 08:20, 3 November 2015 (UTC)


Just like many others in the class, I have really been looking forward to this section because hazing has always been something that has puzzled me. I remember growing up and reading stories about people who had died from hazing and never understanding it. I was also easily able to think negatively of it because I knew that I never wanted to be a part of Greek life, so I assumed that I wouldn't have to take part in any hazing, at least not in college. I have always thought it was so stupid and never understood why people would go to such great lengths just to join a group; the article doesn't address this, and it is still something I am left wondering. Why do so many people stick it out and endure sometimes torturous hazing? That is a hazing study part two that I would be very interested in reading!

Something else that is interesting but not addressed by the study was the component of exclusivity. Did participants also like the group more because they felt that they had a real chance of not getting into it? I know that the mild group also had to be tested before getting in, but the test was not hard and they may have assumed nearly anyone could complete and pass it, while the group that had to read the embarrassing words knew that not everyone would be able to read it aloud, making their membership in the group more obviously exclusive. I really liked the study, but it did leave me with a plethora of other questions because I find the human mind so interesting, especially when it comes to how it deals with and copes with negative issues and situations.

Natawhee7, another good research question. How would you design a study/experiment to test this? -Reagle (talk) 14:06, 3 November 2015 (UTC)

What the article and study did address was very interesting and something else I have always wondered: why do people stay in a group after they have sometimes been tortured as a way of initiation? I always figured that since they made it through, they want to be able to do to others what got done to them as a way of reasserting their power and dominance and, since many times the worst hazing is done by males, their masculinity. Although I think that may still be part of the reason, it was cool to have an "ah-ha" moment while reading through the study. It does make complete sense to me that you would force yourself to like the group and its members more as a way of justifying and affirming that what you had gone through was worth it, because what you are a part of now is amazing, and exclusive.

Natawhee7 (talk) 13:59, 3 November 2015 (UTC)


This QIC is particularly personal for me as I am a member of a fraternity on campus. As a freshman, the main reason I looked to join a fraternity was to be a member of a community that I was close to. My high school was a relatively small school and everyone was extremely close to each other, so when I came to a university as large as Northeastern I found it hard to find my niche. The only thing I was worried about was what I had heard about pledging in general. Luckily for me, the pledging process was not as bad as I was expecting. However, even the bit of drinking, joking around and bonding we did made the members closer to the group and more committed. Four years later, we see the real effect of the entry barriers and an example of Kraut & Resnick’s Design Claim #17. In addition, the pledging process also epitomizes Design Claims #18–20. Those who felt as if they overcame some barrier to join the Fraternity are much more involved as a result of Normative Commitment. Those like myself who mostly joined for the members are Affectively Committed and have become much less involved because they didn’t feel the same entry barriers the others did. This brings me to the question of whether this is just a skewed opinion, since northeast colleges are typically much more mild than southern ones. I believe that the entry barriers felt, even if minimal, increase commitment to the community for a particular amount of time. The larger the barrier feels for the newcomer, the longer they will end up staying in the community. This effect is extremely present within fraternities.

Johnmdaigneault (talk) 14:45, 3 November 2015 (UTC)


Contrary to Aronson’s (1959) hypothesis that “persons who go through a great deal of trouble or pain to attain something tend to value it more highly than persons who attain the same thing with a minimum of effort,” I believe it is far more important to discourage hostility, foster friendly interactions, and encourage self-disclosure. A couple of months ago, I joined FindSpark, an online community dedicated to setting up young professionals for career success. The core of the community is connecting with other students who are going through the same stressful process of finding a job, with the help and resources FindSpark provides. On a monthly basis, the community hosts events and panels, and even organizes in-person and virtual programs to facilitate the job search.

One of the characteristics that convinced me to stay and contribute to the community was the initial positive interactions from existing members. The first couple of introductory emails were extremely welcoming, including inclusive language, such as “we know how stressful this can be,” and an informal tone. These contributed to a more personal relationship with the founder of the community. As a new member, one of the first assignments is to introduce yourself on both the main site and the private Facebook page. As I read through other people’s backgrounds and experiences, I too felt the need to “reciprocate and reveal information in exchange” (Kraut et al., p. 208) in hopes of receiving positive comments on my post. The responses were indeed positive; members from Boston reached out and existing members shared links to helpful sites and upcoming events around the area. Fortunately, to date, I have not experienced any hostility in the community.

Although FindSpark does not provide formal mentorship to newcomers, its resources can easily substitute for it. The community provides emails of existing members who have joined FindSpark and now have a successful job, so that new members can contact them if any questions arise. I remember that my first time in the community, I shared a link to an event in New York, particularly addressing marketing positions. An existing member, with whom I still remain in contact, reached out and taught me that there were different forums for different industries, and that if I wanted to post something related to marketing, I should do so in the appropriate forum. Without any severe initiation ritual, but with positive and friendly interactions, I have stayed committed and continued contributing to the FindSpark community. Andrea guerrerov (talk) 15:37, 3 November 2015 (UTC)


Kraut & Resnick roughly state that 'entry barriers for newcomers increase commitment,' a phenomenon that is later explained in the second reading as a form of cognitive dissonance–justifying something after the fact to save emotional face. The first thing that popped into my head was fraternity/sorority hazing and the strange, grueling initiation rites of other clubs like the Freemasons, but since those aren't online communities, I also thought of MMOs and some of the networking sites found here.

For MMOs, such as World of Warcraft, there are several hoops that newcomers must jump through. Firstly, you have to actually buy the game with a subscription. Then, you have to spend a substantial amount of time making your character and learning the dynamics of the gameplay. Now, in the virtual space of WoW, there are many subcommunities, starting from Alliance and Horde, going down to specific guilds and raid-teams. As I understand it, one must be of the highest level to join a raid-team; within the world of WoW, the raid-teams represent the choicest and most exclusive subgroup, and so once members reach that point, their commitment is ironclad (usually)–not to mention the fact that it takes months to get there. Anyway, one reason I think strenuous entry contributes to user commitment is that people often think and feel that if something is difficult to attain, or is encircled by leagues of red tape, then it is therefore valuable by nature–the bigger the diamond, the riskier the heist. Humans naturally correlate risk and reward directly. - Anussbaumer (talk) 16:17, 3 November 2015 (UTC)


I was not at all surprised by the results of Aronson’s study The Effect of Severity of Initiation on Liking for a Group. I found myself thinking a lot about college acceptance rates, and how if you have to work hard to be accepted to a certain college, you are going to feel more loyalty and pride about your presence in that community. I also thought about the Olympics, and how the process of becoming an Olympian is difficult, but the end result is far more rewarding than being a swimmer at your neighborhood’s all-inclusive country club. You could make this argument over and over again: prestigious gentlemen’s clubs, varsity-level sports teams, high-paying jobs…. if you work hard, the results are rewarding.

I will say, however, that I’m not sure this applies very smoothly to online communities. As I use them, online communities are a place where I expect to be respected, and their benefits and features serve a recreational and relaxing purpose. I personally do not think I would be motivated to join an online community that required extensive amounts of work. Kraut and Resnick state that “a severe initiation process or entry barrier is likely to drive away potentially valuable contributors at the same time that it increases the commitment of those who endure the initiation or overcome the barrier.” This makes sense to me in terms of real-life communities. Of course the Olympics should be selective and make it challenging for athletes to join the Olympian community. But should online communities have this sort of challenging initiation, especially if their purpose is primarily entertainment-oriented? Is driving away potentially valuable contributors ever a positive thing for online communities? What are the benefits of being exclusive on the Web?

--Nataliewarther (talk) 01:50, 4 November 2015 (UTC)

Nov 06 Fri - Debrief: Social breaching

Nov 10 Tue - Gratitude

Something I found interesting in the presentation we read for this week is slide 14, which reads "Individuals tend to accept support from those of a kind they could themselves return on occasion" (Homans, 1958). I started looking for places in my own life that would help prove or disprove this point, and I definitely agree with the idea. I think the reason some favors don't feel so big is because they are easily returnable. Buying milk for you and your roommate, for example, is more easily returnable than lending someone $500. Sure, the loan is more financially extreme, but that is exactly what makes it harder to return.

The next slide states that community members contribute more when they know the unique impact of their contributions. This made me think about Google Analytics, and how the site Medium uses it to tell writers their stories' weekly stats. I love this feature, as it helps me understand which creative writing pieces are getting to large audiences and are widely appreciated. I definitely started contributing to Medium more and becoming a more confident poster once I realized the benefits of this feature.

Lastly, I'd like to touch on gratitude, as discussed in "Gratitude and its dangers to social technologies". I can't speak much to the second part of this article, which talks about "The dangers of thanks", but I can speak to the power of gratitude in everyday life. I realize that I'm now "The girl who talks about Quakerism in every QIC", but it's just so relevant here. When I was in high school I went through some challenging things, and my advisor would sit with me twice a week to support me. Each time he made me bring a gratitude list. It could be anything at all; I could write that I was thankful for my family, for my favorite cereal, for a movie I liked, it didn't matter. He would read off each item and make me really visualize what it was I'd written. If I wrote "I'm thankful for my dog", I had to close my eyes and actually really channel my dog and what it is about him that makes me thankful, and try to fill myself with that happiness. This was all part of a bigger idea called the "Law of Attraction", which is basically a belief that positive thinking and visualization can help you attract the things and the life that you want. Basically, all thoughts are energy, and positive energy attracts more positive energy. Long story short, this process of making gratitude lists really impacted me in a positive way, and it became a part of my daily routine. I constantly make gratitude lists in my head and tell people when I am thankful for them. It has impacted my life in such a positive way, and I think it deserves a big part in everyone's life! Nataliewarther (talk) 02:07, 9 November 2015 (UTC)

Nataliewarther, it is a great practice and one I do myself! -Reagle (talk) 15:27, 9 November 2015 (UTC)



Couchsurfing was brought up in two of the different readings and is something I connected many of the points being made back to, since I participated in couchsurfing during my Eurotrip two summers ago. Couchsurfing was a unique experience, and I can definitely agree with and relate to many of the feelings of indirect reciprocity that surfaced during a three-day, two-night stay at a stranger's house in Amsterdam. My friend Janis and I couchsurfed at the house of Joakim, whom we had never met before. It was a very strange experience, and both Janis and I were thinking of ways to repay the favor days before we even stepped foot in his house. The hostels in Amsterdam were not very expensive, but we thought it would be a fun experience for a few nights. Joakim mentioned that he liked to have couchsurfers so that he could do the touristy things in Amsterdam, like go to museums or on tours. Janis and I felt bad because our plan was mainly to rent bikes and wander around the city for two days, but when Joakim mentioned this to us we felt the need to go to a few attractions and to invite him along. Although we essentially repaid the favor of having him host us at his house by inviting him out on a tour with us and paying for it, we still felt the need to do more. We asked him what kind of food he liked, cooked a dinner for all of us, and bought a plethora of wine and desserts. After this we went out for drinks at local pubs, where we also bought a few rounds.

This weekend of couchsurfing is a wonderful representation of the feeling of needing to repay, and overpay, what was provided to us. Even though we did all of these things for Joakim, and probably spent more on this than a hostel would have cost, we still left feeling bad that we had not gone to any museums or on more tours, which he wanted to do. Janis and I spent the entire weekend feeling indebted to Joakim even after we had presumably repaid the favor. Couchsurfing is meant to provide a cheap way for people to travel and see the world, which both parties are aware of, but our sentiment about the arrangement remained. Additionally, Janis and I both agreed that when we had our own places, we would consider hosting couchsurfers to repay the favor.

Reciprocity is interesting to me, especially indirect reciprocity, because the absence of a fair equivalent of payment leaves so much up to personal interpretation. All of the readings and my memories of couchsurfing made me wonder whether people on average pay more than they should when there is not a direct amount known (indirect reciprocity), or whether people pay more when there is an agreed-upon amount, as in direct reciprocity. If there is not a clear-cut answer and it wavers, what are the variables? Does the good or service determine the amount of payment, is it based on the person, or could it even be a regional difference?

Natawhee7 (talk) 03:18, 10 November 2015 (UTC)


Nathan Matias’ article and the WikiLove experiment reminded me a lot of the reciprocation factor in Cialdini’s “The Science of Persuasion,” specifically when associating gratitude with “paying it forward,” “repaying,” and “reciprocal exchange.” Matias alludes to Adam Grant and Francesca Gino, researchers who tried to untangle the interaction between “thanks, motivations based on a sense of self-efficacy, and motivations rooted in a desire for community belonging” (p. 3). The experiment’s results showed that “expressions of gratitude doubled the likelihood that people would help someone a second time, increased the time spent helping (by 15%), and increased a person’s rate of work (by 50%)” (p. 3). Similarly, the authors mentioned that participants engaged in gratitude to be socially valued and not because it gives them a sense of accomplishment. The findings align closely with Cialdini’s (2001) idea that “all societies subscribe to a norm that obligates individuals to repay in kind what they have received” (p. 76). Although Cialdini (2001) uses donations to veterans groups as an example, the same notion, that one is exchanging things with others for mutual benefit, applies to expressions of gratitude online.

The findings discussed in Fung’s (2011) article made me realize I have unconsciously expressed gratitude as a result of reciprocity several times. He mentions that “having others compliment you on your edits/articles is the most likely to cause people to say they will edit more frequently (78% agreement)” (p. 1). A couple of weeks ago, Lauren mentioned she liked the points I had raised in my QIC, and as a way to say thanks and express gratitude, I did the same thing; I mentioned her in my QIC and agreed with the ideas she had discussed. Although my actions surprised me at first, I then realized the online space is not safe from persuasion factors such as reciprocation. Although giving thanks and expressing gratitude are different terms, both are notions that should come from a “good faith” perspective rather than be used as a tactic that will most likely cause reciprocity. Andrea guerrerov (talk) 15:10, 10 November 2015 (UTC)


Matias differentiates between thanks and gratitude in his article; thanks is often a one-time action directed toward a person for a specific task, while gratitude is a general act of goodwill or appreciation. Matias also describes the benefits of expressing gratitude, and it seems like something online communities would like to foster. I tried to think about how online communities encourage gratitude, but I found only systems of showing thanks. In fact, I thought about Hope’s story during class last week about randomly getting a positive message on Facebook from someone she didn’t know. She thought it was weird and even saw it as a form of social breaching! That doesn’t sound like any of the positive outcomes Matias mentions.

One example I came up with was the Reddit community, Random Acts of Pizza. People ask the community to send them a pizza by posting in the forum, and other community members can then respond to the post by ordering them a pizza to the given address. There’s no real reason why anyone would send a random stranger online a pizza other than for the feeling of positivity. Of course, this entire community is based on acts of gratitude so it’s a bit different than implementing design changes to encourage it. There is a thanking system on RAOP, and I would be interested in seeing how the thanks given by the recipient affects the generosity and sentiment of the community as a whole. - Hayden.L (talk) 15:12, 10 November 2015 (UTC)


Hayden.L, I’m so glad you mentioned my story in your QIC this week! That’s actually not the first time I’ve been a “victim” of a kind act that I don’t know how or if I should reciprocate. This led me to pay particular attention to Matias’ idea of gratitude as well. While the #365grateful movement pays homage to gratitude, giving thanks signals an understanding that there has been a reciprocal exchange. I think that this line often gets blurred, especially in exchanges that take place over the web rather than in person. In addition to the anecdote I shared about the Facebook message in class, I once received $15 on Venmo with the tag “Because you totally weren’t expecting random money.” A former high school teacher, who has since become a life coach and has dedicated her life to living gratefully, sent this money to me. I have a very strong appreciation for what she does, but I had that same uncomfortable, awkward, unpleasant feeling of indebtedness discussed in the slides because I had not given anything in return. I was unsure if I should just return the money (and potentially come across as rude) or send her a message (which seemed awkward because we hadn’t talked in years). I think that the #365grateful movement creates much less anxiety about reciprocation, as does the #100HappyDays campaign. I remember seeing quite a few of my Instagram friends participating in the “100 Happy Days Challenge” and considering starting one myself. Like the woman in the #365grateful video, many of their photos were of nature or things they were grateful for. People did tag people in the posts sometimes, but it usually had a more overarching theme of “thank you for being in my life” rather than “thank you for mailing this letter for me.” People didn’t feel obligated to reply with a similarly tagged post because it felt like appreciation rather than favor reciprocation.
Something I found interesting when I looked into the 100 Happy Days challenge was that if you complete the challenge, you can request to have a printout of your 100 happy moments made. The request is added to the queue and a stranger sponsors your prints to be made. I think that these double blind acts of kindness reduce our desire to reciprocate because we are left grateful rather than indebted. Oddly enough, the 100 Happy Days campaign has started an IndieGoGo page to initiate a subscription program to “happiness boxes.” As of right now, you pay to have a box sent to you by the company. I think that if they were staying within their current framework of being grateful rather than thankful, it would be a good idea to have a similar sponsorship setup that allows you to send a happiness box to a complete stranger. Wikibicki (talk) 15:46, 10 November 2015 (UTC)

I really liked Lampinen, Lehtinen, Cheshire and Suhonen’s “Presentation of Indebtedness and Reciprocity in Local Online Exchange” slides, and in particular slides #9 and #11. These slides cover ways in which people can lessen the discomforts of indebtedness. I have definitely looked for a solution like this many a time. It always feels uncomfortable for me when someone gives me something or does something for me without my being able to reciprocate. The feeling of owing someone something can eat you up just as much as guilt. So, why do people feel this guilt from just receiving? I am going to look into how some of these methods apply to my personal life. First, slide #9 talks about how “offering small tokens of appreciation” will lead to less indebtedness. I have seen and done this firsthand. A friend of mine had just gotten a new iPad and, knowing I had wanted an iPad for quite some time, gave me his old one. It was still almost new, but he didn’t need it; however, instead of solely taking it, I felt the need to slowly offer some reimbursement. For the next several weeks, maybe even months, I would grab him a drink or snack before heading to his house or ask if he wanted anything, just because I continued to feel indebted to him. Even though it isn’t exactly guilt, indebtedness has that heavy, persistent feeling that comes with guilt and can cause action just the same. Just as most don’t like to remain feeling guilty, most will attempt some means of lessening indebtedness.

Next, slide #11 states that “managing expectations, framing offers and requests carefully” will help with the feeling of indebtedness as well. This was extremely relevant for me, as I was the only one of my friends who had a car up at college all of last year. Just as the picture says, discussing costs for a trip is never a fun thing to do, but as the driver of the car you quickly notice the differences. My roommates would ask me to take them to campus or the grocery store, or for rides home for breaks, and I was receptive to all of it, but in the back of my mind I always wanted to ask for some compensation and didn’t really want to make the trip home awkward. Eventually, it got to the point where if I was planning on going somewhere and they were tagging along, then I felt that they didn’t have to pay, since I would have incurred the same costs either way. However, if I was going to take them somewhere for their benefit, then either some gas money or a small token would have been much appreciated. I believe this falls back on me “managing expectations, framing offers and requests carefully.” Johnmdaigneault (talk) 16:00, 10 November 2015 (UTC)


The part of the "Indebtedness and Reciprocity" study that stuck out to me the most was the end, where they wrote about leveraging feedback, because I can relate to feeling nervous about giving direct user-to-user feedback. Lampinen, Lehtinen, Cheshire, and Suhonen (2013) wrote that instead of focusing on direct user feedback, we should focus on the process overall rather than the specific member we are relaying it to. They also state that this gives newer and more uncertain members a way to give feedback on helpful and hurtful behaviors. By viewing good and bad examples of positive additions to the community, users are able to learn the community better and therefore participate accurately (p. 10). I happen to think this is a really good way to think about reciprocity, since I think indebtedness and reciprocity can make some people uncomfortable. For example, in the presentation on indebtedness and reciprocity, there were quite a few slides on lessening the discomfort that comes with indebtedness, and these mostly had to do with reciprocating in some way for whatever you were given. I found this to be a very unique set of social norms – for example, we price a kind act, such that it leaves us "in debt" depending on what we view as an appropriate "price" for it. This also makes me think about the "WikiLove" option and how there was originally a mixed reaction to it, if I remember correctly – could that bad reaction have come from Wikipedia users not recognizing their contributions as something that needs more reciprocation than they already received through talk pages and general feedback? It's almost as if reciprocity and indebtedness have their own norms, and we don't recognize what is ok and not ok until we are experienced in repaying and receiving repayment. Smfredd (talk) 16:34, 10 November 2015 (UTC)


In Lampinen, Lehtinen, Cheshire and Suhonen's "Presentation of Indebtedness and Reciprocity in Local Online Exchange," the authors discuss indebtedness and the discomforts that come along with either feeling like you're in debt to somebody else or like somebody is in debt to you. Slides 9 through 13 make mention of this, offering examples of ways to lessen the discomforts of indebtedness and making suggestions about things a person can do in order to resolve the feeling that they are in debt to somebody else that they know. I've noticed this in particular in my personal life with birthday presents. As I've gotten older, I've had the feeling more and more that "this year I'm finally too old to get a birthday present from extended family." However, if they continue to get me a gift, regardless of how small it is, I will feel like I am in debt to get them a gift in return. This happens even if it's something like a small gift card to a website; I still feel the need to reciprocate in the gift-giving process when it comes to their birthday. Slide 10 states that one way to lessen the discomforts of indebtedness is "Understanding and accepting the indirect nature of generalized exchange." In a way, by getting me a present, my relatives (or friends) are indirectly communicating to me that I'm not too old to participate in the birthday gift exchange. Furthermore, if my aunt and uncle send me a card for my birthday, they are in a sense "paying it forward"; because I got a card from them remembering my birthday, I'm now more likely to remember to send a card for their birthdays, or my cousins' birthdays. This is the "generalized exchange" that Lampinen et al. discuss in their presentation. By accepting favors or gifts, I am also accepting my role in a generalized exchange - in order to receive favors/gifts/etc., one must reciprocate and participate in the full "exchange," both giving and receiving tokens of appreciation -Torma616 (talk) 16:37, 10 November 2015 (UTC)


One thing that I'm curious about after reading Matias' article is this platform, Kudos. This idea of thanking someone you work with to get rewards is disconcerting. My initial thought ties back to Kohn's essay on rewards--if you're giving people rewards and points and an ego boost for thanking someone, then will that make them less appreciative of people's small deeds and gestures in the future?

Now, I realize that, technically, people are never really thanking people for the sake of thanking them or doing good deeds for the sake of doing them, but rather they enjoy the feeling they get from doing something positive like that for the world. I'm very much an optimist and like to ignore this fact, or at least keep in mind that this self-satisfaction does not discredit the deeds that were done.

But getting back to Kudos... I really am not a fan of this platform (based on the little that I know of it from the reading/their website). I believe that if you want to thank someone, you should just go thank them! Do it face to face. Call them! And I understand that some people have some anxieties about in-person or phone conversations, so send them an email or a text! It feels like this platform is just feeding our negative relationship with technology--are we so dependent upon it that we cannot even express gratitude without it? I think gratitude is based in love, and that's one of the most beautiful things in the universe! We should freely express it in person! (I'm not a hippie, I promise). Also, do we really need the constant validation? As a millennial, I know how great "likes" feel, but do we need this scoring system for us to be decent humans? Can't we just love each other, show it respectfully, and not be a piece of trash?

Maybe this is a good system to use in huge companies, but still... I think we should keep some things from being digitized.

A quick mini-QIC in addition to this QIC: I really like how much emphasis the Indebtedness and Reciprocity in Local Online Exchange paper puts on being a receiver of good deeds. Sometimes, it's important to let people help you, even if you don't think you need the help. Maybe they just really need to feel useful and helpful sometimes. It's just something I find really important, and I'm happy this research spoke about that.

-Kev.w.pri (talk) 16:54, 10 November 2015 (UTC)


An eye for an eye, the Golden Rule, paying it forward, quid pro quo, karma, the foot in the door, monetary exchange, charity and vengeance: reciprocity knows a variety of names but always manages to maintain the principal principle that 'whatever goes around, comes around.' In his slideshow, Airilmpnn notes that reciprocity as a means for online exchange can take on two basic forms: indirect and direct. Indirect reciprocity (or exchange) implies that social exchange exists primarily between an individual member and the community-at-large; it is social proofing: if some community member receives, say, a huge donation to pay for some life-saving surgery, that person is much more likely to donate a sizable sum themselves to another needy medical patient, for moral equilibrium. Direct reciprocity involves immediate exchange between individuals, as facilitated by a third party, and often constitutes a product-based exchange; barter and buy! Both types of reciprocity create senses of indebtedness in the recipient; I have felt inexplicably indebted to my bank for a loan they gave me to open up a hoagie stand in South Boston, though the feeling's passed since I've changed my name to avoid their calls. The feelings of frustration and discomfort created by unreciprocated indebtedness can be perfectly exemplified by an episode of The Wire: Michael, an inner-city teen, refuses to take money given out by a prominent Baltimore drug dealer as an attempt to win neighborhood praise, because he "don't wanna owe anyone shit." Like many, he would rather remain empty-handed than receive a favor he can't reciprocate–a strong source of feeling dependent and inadequate.

According to Nathan Matias, the exchange of gratitude is not only important in terms of generating user participation in online (or offline) communities, it's effective. At the end of the day, people just want to be appreciated, loved, lauded, and acknowledged as essential; it molds a sense of importance that even the sense of belonging doesn't. I think about how sometimes a person will wholeheartedly belong to a community, and even be a consistent and necessary part of it (such as a doctor), while still feeling completely underappreciated and therefore demotivated for not having received any directly expressed gratitude; no matter how good you know you are at something, it still hurts to never hear it from others. It's probably bona fide pseudo-science, but this experiment by Japanese 'scientist' Dr. Emoto illustrates the power of gratitude well, if inaccurately. The experiment tests the effect that certain words have on the formation of ice crystals, and shows that when cursed at, water forms hideous patterns, but when told 'thank you,' the water crystallized into the most magnificent kaleidoscopic snowflake formations. Like I said, probably balderdash, but a nice illustration of the power of gratitude.

At another point in his essay, Matias states that gratitude can inadvertently misguide us into paternalism, the restrictive force of authority upon its subjects. He believes that expecting a thanks can lead to people wrongly justifying anything from the institutionalization of a friend to sexual assault, even if the desired 'subsequent consent' never arrives. I think that this is not so much a problem with gratitude itself, but with people finding creative ways to perform mental gymnastics in order to justify doing something that, for whatever reason, they want or feel the need to do. In this sense, gratitude is a currency, a stamp of approval; it is no more different as a desired end for dubious behavior than money is for thieves.

Also, as Kraut and Resnick mention throughout the book, personalized feedback motivates user contribution, and even more so when it is qualitative. We all know that tiny tickle we get in our stomachs whenever someone gives us a well-deserved pat on the back, whether it be a like on Instagram, a good grade on an essay, or a literal pat on the back from a coach. Appreciation can even be a great buffer for criticism, such as in the Sandwich Rule that suggests wedging constructive criticism between two compliments so as to minimize damage, as well as maximize gratitude or applause. Anyway, thank you very much for reading this, you are a wonderful, beautiful, and smart person whom I am grateful to have in my life. Anussbaumer (talk) 16:59, 10 November 2015 (UTC)


Prior to reading Matias' "Gratitude and its Dangers in Social Technologies", I had always thought of expressing thanks and gratitude as something positive without a dark side. It wasn't until he went in depth about how "Gratitude or its absence can influence relationships in harmful ways by encouraging paternalism, supporting favoritism, or papering over structural injustices" that I was made aware of the negative side effects. Unfortunately, he doesn't seem to go into detail about how to avoid these harmful patterns, but rather just points them out.

Matias also points out that it is more likely for one's performance to increase or continue when someone else expresses gratitude for the services they are performing. This is more motivating than successfully accomplishing a task because people seek to be socially valued, and when a coworker or colleague expresses thanks, that's exactly what is accomplished. For communities to be successful, showing gratitude to their members is crucial.

However, one aspect of the article I did not like was the idea behind 'Kudos', specifically the use of a leaderboard. I believe the use of a leaderboard for keeping track of thanks, as well as competing for them, leads to the gratitude coming off as insincere. As stated in Kraut's book, when thanks or gratitude comes off as insincere, it does not have the desired effect. It is no longer a healthy form of motivation. BrazilSean (talk) 17:00, 10 November 2015 (UTC)

Nov 13 Fri - RTFM

Nov 17 Tue - Bootstrapping a niche

Nov 20 Fri - NO CLASS

Nov 24 Tue - Debrief: Wikipedia

Nov 27 Fri - NO CLASS

Dec 01 Tue - Bootstrapping and critical mass

Dec 04 Fri - Infocide

Dec 08 Tue - Debrief: community