The General Data Protection Regulation (GDPR) is the core of Europe’s digital privacy legislation, returning some control over personal data to users across the European Union (EU). It has significant implications for businesses, requiring them to protect personal data from misuse and exploitation. Organizations in other countries that offer goods or services to EU residents are also subject to its penalties. The GDPR covers more forms of personal identification than previous legislation and requires that individuals be notified when their data has been breached. Despite cracking down on business data practices like never before, it is expected to save Europe billions by making the legal landscape simpler and cheaper for businesses to navigate. Ultimately, the GDPR will transform what data businesses collect, store, and transfer, and how they do so. It also requires them to be more transparent with customers about how their data is used and, in many cases, to ask for consent. In this case analysis, I will argue that consequentialism shows us that the United States should follow Europe’s lead in enacting stricter data privacy laws because doing so will maximize happiness and minimize suffering.

Zimmer’s article, “‘But the data is already public’: on the ethics of research in Facebook,” covers data taken from the Facebook accounts of a cohort of undergraduate students. These students attended Harvard University in 2008, and the extensive data collection was approved by Harvard’s institutional review board. The article’s title quotes one of the researchers defending the project. However, this information was available only to people within the Harvard network on Facebook and would never have been accessible without that access; the collection was possible only with the help of research assistants (RAs) who were Facebook friends with members of the cohort. The researchers claimed to have removed or encoded all identifying information, but they were overly optimistic. Within a short period, the “anonymous, northeastern American university” was identified from the released information, which included unique ethnicities, the number of subjects, and particular majors. Compounding the breach of privacy, housing records were also collected to “connect Internet space to real space.” The T3 researchers had steps in place to “ensure” anonymity, but those steps were hardly adequate. The very information that makes this data valuable to sociologists, such as the fact that it covers a single school cohort, is also the reason the individuals included could be so easily identified. This shows that even the people entrusted with your data can lack a basic understanding of privacy and anonymity. The researchers had a great deal to lose from being careless, which shows just how easy it is to mishandle identifiable information. The identification of the cohort’s college resulted in the dataset being kept offline for a year longer than planned. In any European Union member state today, this scenario would have resulted in steep penalties for Harvard, the research institution, and possibly the RAs.
The GDPR’s data protection framework would have required the students to consent to the collection of their data. I agree with the penalties and actions that would have fallen on these entities had this occurred in the European Union today. The students’ information should never have been collected without their knowledge and consent. The researchers would never have had access to the students’ Facebook profiles without the RAs downloading them and the unwarranted approval of Harvard’s institutional review board. Furthermore, this dataset was to be made available to anyone the researchers’ institution granted access. Granting access in this way makes it seem as though the researchers are the data’s owners simply because they collected it. However, the data is not about them, and that is what truly matters. From a consequentialist perspective, this data would have maximized happiness for only a few people in the world while harming the individuals it was about. A basic portion of the data was released publicly, but full access required a request, and I imagine those wanting the information would mainly be sociologists. The researchers and Harvard should have recognized the unethical nature of this data collection. They should not have involved the RAs, who probably felt obligated to participate. Most importantly, they should have informed the students and collected data only on those who consented. I think our government should have punished Harvard and the T3 researchers for their disregard for students’ privacy and anonymity.

Buchanan’s article, “Considering the ethics of big data research: A case of Twitter and ISIS/ISIL,” covers large-scale data mining and analytics for national security. We live in an online era, so it makes sense that law enforcement is improving its technological ability to identify and disrupt communications faster. Twenty-first-century terrorist groups like ISIS/ISIL use social media sites, namely Twitter, to recruit, promote their cause, and increase participation; there is no better way for them to spread their beliefs today. However, danger lies in the many other ways the technology developed to track this terrorist group could be used. With minor changes, it could be set to collect data on susceptible groups, such as people struggling with finances, mental health, or beliefs. If this technology fell into the wrong hands, it could be a golden ticket for scammers, instantly finding vulnerabilities in people to exploit. That said, I suspect similar technologies are already used by major tech and social media companies for personalized marketing and product improvement. I believe that if such technologies are used for legitimate goals like tracking terrorist group activity, there is no problem. Law enforcement needs to track what terrorist groups are communicating in real time, so there is no time to ask for consent from individuals who interact with such pages. It is unfortunate that bystanders’ data has to be collected and analyzed, and I don’t know how much of the data is sifted through algorithms and discarded before a human ever sees it. Still, mass surveillance is a slippery slope toward a breach of everyone’s privacy and anonymity. Mass surveillance and data farming might make our country safer, but at what cost? It is not feasible to obtain consent for these mass data collection projects from everyone whose data the system happens to collect.
Are we simply supposed to accept such activities because they are carried out by an arm of the government? Under the current GDPR legislation that protects European Union citizens and businesses, I’m not sure such actions would be legal. They would most likely require consent from the individuals whose data is collected, or an exception would have to be made for the government. It might be different in a state of emergency, but in Buchanan’s case study, the researchers are testing an algorithm to find new information on ISIS/ISIL. Ultimately, citizens must extend some level of trust and allowance to their government so it can protect them. One must trust that it won’t use technologies like this one for the wrong reasons. If it’s for the welfare of the country and its people, I see no problem with it. From a consequentialist perspective, I think the mass surveillance and data farming of extremist groups, and of those who interact with them, maximizes happiness with little suffering. It helps us understand our enemies’ motives and beliefs, which is invaluable. Terrorist attacks happen, and we need every possible edge to predict when and where they will occur in order to protect lives. Some bystanders’ data might get collected by accident, but I consider that a small price to pay, and I think most people wouldn’t even mind if they knew the reason.

In conclusion, the GDPR applies to the European Union and any organizations that conduct business there, and it gives individuals more control over their data. I believe this regulation should be applied across the globe to ensure the fair use and protection of personal data. Doing so would ease the flow of global business by making data regulation practices universal. It would ensure that individuals are asked for consent before their data is used for purposes not explicitly agreed upon, and it would give them the chance to protect themselves more proactively when their data has been leaked. However, there may be cases, like monitoring terrorist organizations, when asking for consent is unfeasible. I think the vast majority of people would accept their data being collected and analyzed by a governmental body in such scenarios. A government could, of course, take advantage of that leeway and push the limits of what its citizens find acceptable. That possibility always exists, but hiding such activities is harder than ever because of the vast improvement in digital literacy and connectedness.